---
author:
- Gabriel Istrate
title: The phase transition in random Horn satisfiability and its algorithmic implications
---

Introduction {#sec:intro}
============

[*Phase transitions in combinatorial problems*]{} were first displayed in the seminal work of Erdős and Rényi [@erdos:renyi] on random graphs. Working with the constant probability model $G(n,p)$ they showed that the probability that the graph has a “large” connected component exhibits a sharp increase at some “threshold” value of $p$. The empirical observation from [@cheeseman-kanefsky-taylor], that for a number of NP-complete problems the “hardest on the average” instances are located near such threshold points, has attracted considerable interest in such threshold phenomena from several communities, such as Theory of Computing, Artificial Intelligence and Statistical Mechanics. Recent studies [@2+p:rsa; @2+p:nature] have provided further evidence that (at least some) phase transitions have indeed an impact on algorithmic complexity, and have offered additional insight into the cases when this happens. It turns out that there are two different notions of phase transition in a combinatorial problem $P$. One definition applies to optimization problems and directly parallels the approach from Statistical Mechanics. Potential solutions for an instance of $P$ are viewed as “states” of a system. One defines an abstract [*Hamiltonian (energy) function*]{} that measures the “quality” of a given solution, and applies methods from the theory of spin glasses [@virasoro-parisi-mezard] to make predictions on the typical structure of optimal solutions. In this setting a phase transition is defined as non-analytical behavior of a certain “order parameter” called free energy, and a discontinuity in this parameter, manifested by the sudden emergence of a [*backbone*]{} of constrained “degrees of freedom” [@2+p:rsa], is responsible for the exponential slow-down of many natural algorithms. 
The second definition is combinatorial and pertains to decision problems. It is the concept of [*threshold property*]{} from random graph theory, more precisely a restricted version of this notion, called [*sharp satisfiability threshold*]{}. A satisfiability threshold always exists for monotone problems [@bollob-thomasson], but may or may not be sharp (we speak of a [*coarse threshold*]{} in the latter case). It is this notion of phase transition that we are concerned with in this paper. From the practical perspective of [@cheeseman-kanefsky-taylor], phase transitions are most appealing in problems that are thought to be “hard”, in particular, in NP-complete problems. Therefore a lot of recent work has been directed towards locating phase transitions in such problems. In some cases, the most prominent of which is Hamiltonian cycle [@hamcyclerand], a complete analysis has been obtained. In others (e.g., 3-SAT [@frieze-suen; @kranakis-3sat; @achlioptas:3sat:pie; @janson-et-al:3sat] and graph coloring [@chvatal-color; @achlioptas-molloy]), obtaining such an analysis is hard, and indeed a task not yet accomplished: for these problems there exists a fairly large gap between the best rigorous lower and upper bounds, and the methods that were used to obtain these bounds do not seem to be capable of yielding a tight analysis. Understanding the reasons that make problems with similar computational complexity differ so much with respect to their “mathematical tractability” is clearly a topic worth investigating. A natural intuitive explanation of this discrepancy is that problems that are easy to analyze “coincide with high probability” with problems with a simple “local” structure, while problems that are “hard to analyze” lack such an approximation. Such is the case, for instance, of the above-mentioned Hamiltonian cycle problem, which “coincides with high probability” with the graph property “having minimum degree two” [@aks:hamcycle-hitting]. 
Support in favor of this intuition also comes from Friedgut’s result on the existence of a sharp threshold for 3-SAT [@friedgut:k:sat]: his proof relies on showing that problems with coarse thresholds can be well approximated by some simple “local” property, and then proving that 3-SAT lacks such an approximation. While his result sheds no light on the “mathematical tractability” of Hamiltonian cycle, it is tempting to speculate that there might be a suitable generalization of the concept of “coarse threshold”, one that 3-SAT still lacks, and that encompasses all known “mathematically tractable” cases. A natural testbed for the above intuition is the case of polynomial time solvable problems. In these cases the hypothesis predicts that one should be able to obtain a complete analysis: often tractability arises from the existence of a “local” characterization, which circumvents the need for exhaustively searching the exponentially large space of potential solutions. Another reason is methodological: studying tractable problems usually amounts to probabilistic analyses of decision algorithms for these problems using a methodology based on Markov chains, a task that can often be accomplished. Such an approach was successful for some tractable versions of propositional satisfiability: out of the six maximally tractable cases of SAT that Schaefer identified in his celebrated Dichotomy Theorem [@schaefer-dich], two are trivially satisfiable and two have completely analyzed phase transitions. The transition for 2-SAT, the satisfiability problem for CNF formulas with clauses of size two, has been studied in [@mickgetssome; @goerdt:2cnf], and that for XOR-SAT, the satisfiability problem for linear systems of equations with boolean variables, has been studied in [@xorsat]. The remaining two cases are the Horn formulas and the negative Horn formulas (which are, of course, dual). In this paper we deal with these two cases. 
Unlike the other two nontrivial cases, we show that Horn satisfiability has a [*coarse threshold*]{}. In the “critical region” the number of clauses is exponential in the number of variables; hence, from a practical perspective, our results show that if we do not restrict clause length, random Horn formulas of practical interest are almost certainly satisfiable (we have subsequently analyzed the bounded clause length case in [@istrate:stoc99]). Also, we obtain our result by modeling $\PUR$, a natural implementation of [*positive unit resolution*]{}, by a Markov chain, and our method yields as a byproduct an average-case analysis of this algorithm.

Results
=======

A [*Horn clause*]{} is a disjunction of literals containing [*at most one positive literal*]{}. It will be called [*positive*]{} if it contains a positive literal and [*negative*]{} otherwise. A Horn formula is a conjunction of Horn clauses. [*Horn satisfiability*]{} (denoted by $\HSAT$) is the problem of deciding whether a given Horn formula has a satisfying assignment. Since our main interest is in phase transitions in decision problems in the class NP, we will discuss the notion of satisfiability threshold in the framework of [*NP-decision problems*]{}. Our definition is slightly different from the standard one (e.g. [@papad:b:complexity]), and accommodates the fact that legal encodings of instances of a problem have in general lengths from a restricted set of values. An [*NP-decision problem*]{} is a quadruple $P=(\Sigma,D,f,g)$ such that 1. $\Sigma$ is a finite alphabet. 2. $f,g:{\bf N}\goesto {\bf N}$ are polynomial time computable, polynomially bounded functions. In addition $f$ has range $\{0,1\}$. A length $n$ is called [*admissible*]{} if $f(n)=1$. 3. $D\subset \Sigma^{*}\times \Sigma^{*}$ is a polynomial time computable relation. 4. for every pair $(x,y)\in \Sigma^{*}\times \Sigma^{*}$, if $(x,y)\in D$ then the length of $x$ is admissible and $|y|\leq g(|x|)$. 
A string $x$ having an admissible length will be called [*an instance of $P$*]{}. A string $y$ such that $(x,y)\in D$ is called [*a witness for $x$*]{}, and we write $x\in P$ to state the fact that there exists a witness for the instance $x$. Finally, problem $P$ is [*monotonically decreasing*]{} if for every instance $x$ of $P$ and every witness $y$ for $x$, $y$ is a witness for every instance $z$ obtained by turning some bits of $x$ from 1 to 0. Monotonically increasing problems can be similarly defined. The three standard probabilistic models from random graph theory [@bol:b:random-graphs] (the constant probability model, the counting model, and the multiset model) extend directly to any NP-decision problem, and are equivalent under fairly liberal conditions. For the purposes of this paper we recall the definition of the multiset model: Let $P$ be an NP-decision problem. The [*random multiset model*]{} $\overline{\Omega}(n,m)$ has two parameters, an [*admissible length $n$*]{} and an [*instance density*]{} $1\leq m \leq n$. A random sample $x$ from $\overline{\Omega}(n,m)$ is an instance of $P$ obtained by first setting $x=0^{n}$, then choosing, uniformly at random and with repetition, $m$ bits of $x$ and switching them to $1$. Next we define our threshold properties for monotonically decreasing problems under the multiset model. Similar definitions can be given for monotonically increasing problems, or when using one of the two other random models. Let $P$ be any monotonically decreasing decision problem under the multiset random model $\overline{\Omega}(n,m)$. A function $\overline{\theta}$ is a [*threshold function for $P$*]{} if for every function $m$, defined on the set of admissible instances and taking integer values, we have 1. if $m(n)=o(\overline{\theta}(n))$ then $\lim_{n\goesto \infty} \PR_{x\in \overline{\Omega}(n,m)}[x\in P]=1$, and 2. 
if $m(n)=\omega(\overline{\theta}(n))$ then $\lim_{n\goesto \infty} \PR_{x\in \overline{\Omega}(n,m)}[x\in P]=0$. $\overline{\theta}$ is called [*a sharp threshold*]{} if in addition the following property holds: 3. For every $\epsilon >0$ define the two functions $\mu_{1}(n), \mu_{2}(n)$ by $$\mu_{1}(n)=\min\{m\in {\bf N}: \PR_{x\in \overline{\Omega}(n,m)}[x\in P]\leq 1-\epsilon\},$$ $$\mu_{2}(n)=\min\{m\in {\bf N}: \PR_{x\in \overline{\Omega}(n,m)}[x\in P]\leq \epsilon\}.$$ Then we have $$\lim_{n\goesto \infty}\frac{\mu_{2}(n)-\mu_{1}(n)}{\overline{\theta}(n)}=0.$$ If, on the other hand, for some $\epsilon >0$ the quantity $\frac{\mu_{2}(n)-\mu_{1}(n)}{\overline{\theta}(n)}$ is bounded away from 0 as $n\goesto \infty$, $\overline{\theta}$ is called a [*coarse threshold*]{}. These two cases are not exhaustive, as the above quantity could in principle oscillate with $n$; nevertheless, they cover most “natural” problems. A useful modification of the above framework has the set of admissible lengths specified by an increasing function $N:{\bf N}\goesto {\bf N}$. We correspondingly redefine the random model as $\Omega(n,m)=\overline{\Omega}(N(n),m)$ and the threshold function by $\theta(n)=\overline{\theta}(N(n))$. Such is the case of random Horn satisfiability, for which a random formula from $\Omega(n,m)$ is obtained by choosing $m$ clauses independently, uniformly at random and with repetition from the set of all $N(n) = (n+2)\cdot 2^{n}-1$ Horn clauses over variables $x_{1}, \ldots, x_{n}$. The following is our main result: \[maintheorem\] $\theta(n)= 2^{n}$ is a threshold function for random Horn satisfiability. Moreover, for every constant $c>0$ $$\label{formula} \lim_{n\goesto \infty} \PR_{\Phi \in \Omega(n,c\cdot2^{n})}[ \Phi \mbox{ is satisfiable} ] = 1-F(e^{-c}),$$ where $$F(x) = (1-x)(1-x^2)(1-x^4)\cdots (1-x^{2^{k}})\cdots.$$ The result makes clear that random Horn satisfiability has a [*coarse threshold*]{}. 
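To make the limiting probability concrete, here is a small numerical sketch (ours, not part of the paper): the factors $1-x^{2^{k}}$ of $F$ approach $1$ doubly exponentially fast, so a few dozen factors evaluate $1-F(e^{-c})$ to machine precision.

```python
import math

def F(x, terms=60):
    """Partial product prod_{k=0}^{terms-1} (1 - x^(2^k)), approximating F(x)."""
    prod, power = 1.0, x
    for _ in range(terms):
        prod *= 1.0 - power
        power *= power          # x^(2^k) -> x^(2^(k+1))
        if power == 0.0:        # underflow: all remaining factors are 1
            break
    return prod

def limiting_sat_prob(c):
    """Limiting satisfiability probability 1 - F(e^{-c}) at density c * 2^n."""
    return 1.0 - F(math.exp(-c))
```

For every constant $c>0$ the value lies strictly between 0 and 1 and decreases with $c$, which is exactly why the threshold is coarse rather than sharp.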
\[pur\]

    $\PUR(\Phi)$:
        if $\Phi$ contains no positive unit clauses
            return $TRUE$
        else
            choose a random positive unit clause $x$
            if $\Phi$ contains the clause $\overline{x}$
                return $FALSE$
            else
                let $\Phi^{'}$ be the formula obtained by setting $x$ to 1 in $\Phi$
                return $\PUR(\Phi^{'})$

The algorithm $\PUR$, employed in the proof of Theorem \[maintheorem\], is displayed in Fig. 1. $\PUR$ is a natural implementation of positive unit resolution, which is complete for HORN-SAT [@unit-resolution]. As a byproduct, our analysis yields the following two results, which provide an average-case analysis of $\PUR$: \[sat\] Let $X_{n}\in [0,n]$ be the r.v. denoting the number of iterations of $\PUR$ on a random [*satisfiable*]{} formula $\Phi\in \Omega(n,c\cdot 2^{n})$. Then $X_{n}$ converges in distribution to a distribution $\rho=(\rho_{k})_{k\geq 0}$, supported on the nonnegative integers, where $\rho_{k}$, the probability of the value $k$, is given by $$\rho_{k}=\frac{e^{-2^{k}c}}{1-F(e^{-c})}\cdot \prod_{i=1}^{k-1} (1-e^{-2^{i}c}).$$ The case of unsatisfiable formulas displays one feature not present in the previous result: fluctuations due to the nature of the binary expansion of $n$, [*wobbles*]{} in the terminology of P. Flajolet [@flajolet:aofa]. \[unsat\] Let $Y_{n}$ be the r.v. denoting the number of iterations of $\PUR$ on a random formula $\Phi\in \Omega(n,c \cdot 2^{n})$, and, for $k\in [0,n]$, possibly a function of $n$, let $\eta_{n,k}$ be the probability that $Y_{n}=\lfloor \log_{2} n \rfloor +k$, conditional on $\Phi$ being unsatisfiable. 
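For concreteness, here is a direct implementation sketch of positive unit resolution (the encoding is ours, not from the paper): a Horn clause is stored as a pair `(head, body)`, where `head` is the positive literal (or `None` for a negative clause) and `body` is the set of negated variables.

```python
def pur(clauses):
    """Positive unit resolution on Horn clauses given as (head, body) pairs.

    Returns True iff the formula is satisfiable (unit resolution is
    complete for HORN-SAT).
    """
    clauses = [(h, frozenset(b)) for h, b in clauses]
    while True:
        units = [h for h, b in clauses if h is not None and not b]
        if not units:
            # No positive unit clause: setting every remaining variable
            # to 0 satisfies all remaining clauses.
            return True
        x = units[0]  # any choice of positive unit clause works
        if any(h is None and b == frozenset([x]) for h, b in clauses):
            return False  # the clauses x and (not x) are both present
        # Set x := 1: drop clauses with head x, delete the literal (not x).
        clauses = [(h, b - {x}) for h, b in clauses if h != x]
```

For example, `pur([(1, set()), (2, {1}), (None, {2})])`, i.e. the formula $x_{1}\wedge(\overline{x_{1}}\vee x_{2})\wedge\overline{x_{2}}$, returns `False`.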
Then - $\lim_{n\goesto \infty}|k-\log_{2}(n)|=\infty$ implies that $\lim_{n\goesto \infty}\eta_{n,k}=0$ - for every $k\in {\bf Z}$ $$\eta_{n,k} = G(k-1,c_{n})-G(k,c_{n})+o(1),$$ where $$G(k,c)= e^{-c (\sum_{j=-\infty}^{k}2^{j})} = e^{-c\cdot 2^{k+1}},$$ $$c_{n}=\frac{c}{2^{\{\log_{2} (\sqrt n)\}}}.$$

Notation and useful results
===========================

For $n\in \N$ and $0\leq p\leq 1$, we denote by $B(n,p)$ a random variable having a binomial distribution with parameters $n,p$. For $\lambda\in {\bf R}$, $Po(\lambda)$ will denote a Poisson distribution with expected value $\lambda$. We will use “with high probability” (w.h.p.) as a substitute for “with probability $1-o(1)$.” We also say that a sequence $(p_{n})_{n\in \N}$ of real numbers is [*exponentially small*]{} (written $o(1/poly)$) if for every polynomial $Q$, $p_{n}=o(1/Q(n))$. We will measure, as usual, the distance between two integer-valued probability distributions $P=(p_{i})$ and $Q=(q_{i})$ by their [*total variation distance*]{} $d_{TV}(P,Q)= \frac{1}{2}\cdot \sum_{i}|p_{i}-q_{i}|$, and recall the following inequalities from [@sheu:poisson] and [@barbour:holst:janson] (page 2 and Remark 1.4): \[b:h:j\] If $n,p,\lambda, \mu >0$ then $$d_{TV}(B(n,p),Po(np))\leq \min\left\{np^{2},\frac{3p}{2}\right\}$$ $$d_{TV}(Po(\lambda), Po(\mu))\leq |\mu - \lambda|.$$ Given two probability distributions $D$ and $D^{\prime}$, we say that [*$D^{\prime}$ stochastically dominates $D$*]{} if for every $x$, $\Pr[D\geq x] \leq \Pr[D^{\prime}\geq x]$, and write $D\prec D^{\prime}$ when this holds. The following are two conditional probability tricks. \[trick-approx\] Let $A_{n}, B_{n}$, and $C_{n}$ be events such that $\PR[ C_{n}|B_{n} ]=1-o(1)$. 
Then $$|\PR[A_{n}|B_{n}]-\PR[A_{n}|B_{n}\AND C_{n}]|=o(1).$$ Applying the chain rule for conditional probability we get $$\begin{aligned} |\PR[A_{n}|B_{n}]-\PR[A_{n}|B_{n}\AND C_{n}]| & = & \\ | \PR[A_{n}|B_{n}\AND C_{n}]\cdot \Pr[C_{n}|B_{n}]+\PR[A_{n}|B_{n}\AND \overline{C_{n}}]\cdot \Pr[\overline{C_{n}}|B_{n}]-\PR[A_{n}|B_{n}\AND C_{n}]| & = & \\ | \PR[A_{n}|B_{n}\AND C_{n}]\cdot(1-o(1))+ \PR[A_{n}|B_{n}\AND \overline{C_{n}}]\cdot o(1)-\PR[A_{n}|B_{n}\AND C_{n}]| = o(1). \end{aligned}$$ \[trick-max\] If $B$ is a random variable taking integer values in the interval $I$, then for every event $A$, $$\min_{\lambda \in I}\{ \PR[ A|(B=\lambda) ] \} \leq \PR[A] \leq \max_{\lambda \in I}\{ \PR[ A|(B=\lambda) ] \}.$$ Several “concentration of measure” results will be used in the sequel. They include: (Chernoff bound)\[chernoff\] Let $X_{1}, \ldots , X_{n}$ be independent 0/1 random variables with $Pr(X_{i}=1)=p$. Let $X=X_{1}+\ldots +X_{n}$, $\mu = E[X]$ and $\delta >0$. Then $$Pr[|X-\mu|\geq \delta \cdot \mu]\leq \left[\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right]^{\mu}.$$ A related inequality from [@probabilistic-method] is: \[chernoff:poisson\] Let $P$ have Poisson distribution with mean $\mu$. For $\epsilon >0$, $$\Pr[P\leq \mu \cdot (1-\epsilon)] \leq e^{-\epsilon^{2}\cdot \mu /2},$$ $$\Pr[P\geq \mu \cdot (1+\epsilon)] \leq [e^{\epsilon}(1+\epsilon)^{-(1+\epsilon)}]^{\mu}.$$ We regard the algorithm $\PUR$ as working in stages, indexed by the number of variables still left unassigned; thus the stage number decreases as $\PUR$ moves on. Let $\Phi$ denote an input formula over $n$ variables. For $i, 1\leq i\leq n$, $A_i$, $R_i$, and $S_i$ respectively denote the event that $\PUR$ accepts at stage $i$, the event that $\PUR$ rejects at stage $i$, and the event that $\PUR$ reaches stage $i-1$ (“survives stage $i$”). 
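The binomial-to-Poisson inequality of Lemma \[b:h:j\] can be checked numerically; the following sketch (ours, purely illustrative) computes $d_{TV}(B(n,p),Po(np))$ directly from the definition.

```python
import math

def binom_pmf(n, p, k):
    """P[B(n,p) = k], zero outside the support {0, ..., n}."""
    if not 0 <= k <= n:
        return 0.0
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def d_tv_binom_poisson(n, p, cutoff=300):
    """d_TV(B(n,p), Po(np)), with the Poisson sum truncated at `cutoff`."""
    lam = n * p
    po = math.exp(-lam)           # P[Po(lam) = 0], then updated iteratively
    total = abs(binom_pmf(n, p, 0) - po)
    for k in range(1, cutoff + 1):
        po *= lam / k             # P[Po(lam) = k] from P[Po(lam) = k-1]
        total += abs(binom_pmf(n, p, k) - po)
    return 0.5 * total

# For n = 100, p = 0.05 the lemma promises a distance of at most
# min(n p^2, 3p/2) = min(0.25, 0.075) = 0.075.
```

The Poisson probabilities are computed by the recurrence $\Pr[Po(\lambda)=k]=\frac{\lambda}{k}\Pr[Po(\lambda)=k-1]$ to avoid overflowing factorials.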
Also, $\Phi_i$ denotes the formula $\Phi$ as it stands at the beginning of stage $i$, $N_i$ denotes the number of clauses of $\Phi_i$, $HP_{1,i}$ the number of positive unit clauses of $\Phi_{i}$, $HP_{2,i}$ the number of positive [*non-unit*]{} clauses, $HN_{1,i}$ the number of negative unit clauses and $HN_{2,i}$ the number of negative non-unit clauses. Finally, for simplicity, define $\Pi=F(e^{-c})$ and $\Pi_{i}$ to be the product of the first $i$ factors of $\Pi$. We will assert stochastic domination via [*couplings of Markov chains*]{} (for an extensive treatment see [@lindvall:coupling]). The framework needed for our coupling result is made precise in the following definitions (which are tailored for the context of this paper, rather than being standard). Let $(X_{n})$ be a Markov chain having state space $S$ and transition matrix $X$. A [*stopping rule $H$ for $(X_{n})$*]{} is a set $H$ of [*transitions of $(X_{n})$*]{} (i.e. pairs of states $(i,j)\in S\times S$ such that $X_{i,j}>0$). We will use stopping rules $H$ to talk about the probability (denoted $\Pr[A|H]$) of properties $A$ of the Markov chain that only hold conditional on $(X_{n})$ making only transitions from $H$. Let $X_{t}=(X_{0,t},\overline{X}_{t})$ and $Y_{t}=(Y_{0,t},\overline{Y}_{t})$ be two Markov chains on ${\bf Z}\times {\bf Z}^{d}$ having transition matrices $X$, $Y$, respectively. Let $H_{1}$, $H_{2}$ be two stopping rules for $(X_{n})$, $(Y_{n})$, respectively. Let $0\in B\subset\{0,\ldots, d\}$. 
A [*$(B,H_{1},H_{2})$-majorizing (Markovian) coupling of $X$ and $Y$*]{} is a Markov chain $Z=(Z_{t,1},Z_{t,2})$ on $({\bf Z}\times {\bf Z}^{d})^{2}$, $Z_{t,1}=(Z_{t,01},\ldots, Z_{t,d1})$, $Z_{t,2}=(Z_{t,02},\ldots, Z_{t,d2})$, having transition matrix $(Z_{(i,j),(k,l)})_{i,j,k,l\in {\bf Z}^{d+1}}$ such that: - for every $i,j\in {\bf Z}^{d+1}$, $\Pr[Z_{t+1,1}=j|Z_{t,1}=i]=X_{i,j}$, - for every $i,j\in {\bf Z}^{d+1}$, $\Pr[Z_{t+1,2}=j|Z_{t,2}=i]=Y_{i,j}$, - for every $i,j,k,l\in {\bf Z}^{d+1}$, if $Z_{(i,j),(k,l)}>0$ and $(i,k)\in H_{1}$ then $(j,l)\in H_{2}$. - for every $t\geq 0$ and every state $(Z_{t,1},Z_{t,2})$ of $Z_{t}$ reachable through moves in $H_{1}\times ({\bf Z}^{d+1})^{2}$ only, we have $$Z_{t,i1}=Z_{t,i2}\mbox{ for all } i\in B\setminus \{0\},$$ and $$Z_{t,01}\leq Z_{t,02}.$$ The first two conditions express the fact that the coupling is [*Markovian*]{}. The third condition (denoted symbolically $H_{1}\leq H_{2}$) relates the two stopping rules. Finally, the last condition allows us to compare two quantities of interest for the Markov chains $(X_{n})$ and $(Y_{n})$, namely $\sum_{i\in B}X_{i,t}$ and $\sum_{i\in B}Y_{i,t}$. Let us now formally state this comparison result. \[maj:coupling\] Let $(X_{t})$, $(Y_{t})$, $H_{1}$, $H_{2}$, $B$ be as in the previous definition, and suppose it is possible to construct a $(B,H_{1},H_{2})$-majorizing coupling of $(X_{t})$ and $(Y_{t})$. 
Then, for every $a\in {\bf Z}$, $$\Pr[\sum_{i\in B}X_{i,t}\geq a | H_{1}]\leq \Pr[\sum_{i\in B}Y_{i,t}\geq a | H_{2}]$$ Define $$H_{B,a}=\{\lambda=(\lambda_{0},\ldots, \lambda_{d}): \sum_{i\in B}\lambda_{i}\geq a\}.$$ Then $$\begin{aligned} \Pr[X_{t}\in H_{B,a}| H_{1}] & = & \sum_{x\in H_{B,a}}\Pr[X_{t}=x|H_{1}]\label{e0}\\ & = & \sum_{x\in H_{B,a}}\Pr[Z_{t,1}=x|H_{1}\times S^{2}]\label{e1}\\ & = & \sum_{x\in H_{B,a}}\sum_{y\in S} \Pr[(Z_{t,1}=x) \AND (Z_{t,2}=y)|H_{1}\times S^{2}]\label{e2}\\ & = & \sum_{x\in H_{B,a}}\sum_{y\in S} \Pr[(Z_{t,1}=x) \AND (Z_{t,2}=y)|H_{1}\times H_{2}]\label{e3}\\ & = & \sum_{x\in H_{B,a}}\sum_{y\in H_{B,a}} \Pr[(Z_{t,1}=x) \AND (Z_{t,2}=y)|H_{1}\times H_{2}]\label{e4}\\ & = & \sum_{y\in H_{B,a}}\sum_{x\in H_{B,a}} \Pr[(Z_{t,1}=x) \AND (Z_{t,2}=y)|H_{1}\times H_{2}]\label{e5}\\ & \leq & \sum_{y\in H_{B,a}}\sum_{x\in S} \Pr[(Z_{t,1}=x) \AND (Z_{t,2}=y)|H_{1}\times H_{2}]\label{e6}\\ & \leq & \sum_{y\in H_{B,a}}\sum_{x\in S} \Pr[(Z_{t,1}=x) \AND (Z_{t,2}=y)|S^{2}\times H_{2}]\label{e7}\\ & = & \sum_{y\in H_{B,a}}\Pr[Z_{t,2}=y|S^{2}\times H_{2}]\label{e8}\\ & = & \Pr[Y_{t}\in H_{B,a}| H_{2}]\label{e9}.\end{aligned}$$ Lines \[e1\], \[e9\] follow from the Markovian character of the coupling. Line \[e3\] follows from $H_{1}\leq H_{2}$. The rest are simple arithmetical calculations. The couplings we need are very simple, and employ the following idea: suppose the recurrences describing $X_{t+1}-X_{t}$ and $Y_{t+1}-Y_{t}$ are identical, except for one term, which is $B(m_{1},\tau)$ for $(X_{t})$ and $B(m_{2},\tau)$ for $(Y_{t})$, where $m_{1}\leq m_{2}$ are positive integers and $\tau \in (0,1)$. Obtain a coupling by identifying $B(m_{1},\tau)$ with the outcome of the first $m_{1}$ Bernoulli experiments in $B(m_{2},\tau)$. The Uniformity Lemma ==================== The crux of our analysis relies on the observation that the behavior of $\PUR$ on a random Horn instance can be described by a stochastic recurrence (Markov chain). 
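The Bernoulli identification just described can be sketched in a few lines (our illustration): realizing $B(m_{1},\tau)$ as the number of successes among the first $m_{1}$ of the $m_{2}$ trials that realize $B(m_{2},\tau)$ makes the dominated variable pointwise smaller on every sample point.

```python
import random

def coupled_binomials(m1, m2, tau, rng):
    """Sample (X, Y) with X ~ B(m1, tau), Y ~ B(m2, tau), and X <= Y pointwise."""
    assert m1 <= m2
    trials = [rng.random() < tau for _ in range(m2)]
    x = sum(trials[:m1])   # B(m1, tau): successes among the first m1 trials
    y = sum(trials)        # B(m2, tau): successes among all m2 trials
    return x, y

rng = random.Random(0)
samples = [coupled_binomials(10, 25, 0.3, rng) for _ in range(2000)]
assert all(x <= y for x, y in samples)   # domination holds sample by sample
```

Since $X\leq Y$ on every sample point, $\Pr[X\geq a]\leq \Pr[Y\geq a]$ for all $a$, i.e. $B(m_{1},\tau)\prec B(m_{2},\tau)$.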
\[pur:2\]

    $PUR_{2}(\Phi)$:
        if $\Phi$ contains no positive unit clauses
            first eliminate a random clause
            then, independently with probability $1/t$, eliminate every remaining clause
            and continue recursively
        else
            choose a random positive unit clause $x$
            set $x$ to 1 in $\Phi$ and continue recursively

\[uniformity\] ([**“The Uniformity Lemma”:**]{}) 1. Suppose $\PUR$ does not halt before stage $t$. Then, conditional on $N_{t}$, the clauses of $\Phi_{t}$ are random and independent. 2. Consider $PUR_{2}$, the modified version of the algorithm $\PUR$ from Figure 2 (which does not check for accepting/rejecting, but may produce empty clauses). Let $E_{i}$ represent the number of empty clauses at stage $i$. Then for every stage $t$, conditional on $\Gamma_{t}=(HN_{1,t},HN_{2,t},HP_{1,t}, HP_{2,t}, E_{t})$, the clauses of $\Phi_{t}$ are chosen uniformly at random and are independent. 3. Consider again the original version of $\PUR$. Suppose now that we condition on $\Gamma_{t}$ and on the fact that $\Phi$ survives Stage $t$ as well. Then we have $$\label{eq:markovchain} N_{t-1}=N_{t}-\Delta_{1,P}(t)-\Delta_{2,P}(t),$$ where - $\Delta_{1,P}(t)$, the number of positive unit clauses that are satisfied at stage $t$, has the distribution $1+B\left(HP_{1,t}-1,\frac{1}{t}\right)$. - $\Delta_{2,P}(t)$, the number of positive non-unit clauses that are satisfied at stage $t$, has the binomial distribution $B\left(HP_{2,t},\frac{1}{t}\right)$. The proof is based on the [*method of deferred decisions*]{} [@deferred:decisions]. The crux of this method is to consider the random formula $\Phi$ as being disclosed gradually as the algorithm proceeds, rather than as being completely determined at the very beginning of the algorithm. Following a suggestion of Achlioptas [@achlioptas:3sat:pie], the process can be conveniently imagined as having the occurrences of each literal in the formula represented by a card that has the literal as its value. 
The cards corresponding to each clause are arranged in separate piles, and are all initially face down (to reflect the fact that initially we don’t know anything about the formula). Part of the unveiling process will consist of [*dealing*]{} (turning face up) the cards from each pile that contain a specific literal. We also assume that (unless otherwise specified by the unveiling process) the still undealt parts of each pile are “hidden”, so that we don’t know their heights. 1. For the first part of the lemma (which conditions only on $N_{t}$) the disclosure process consists of first unveiling, at each stage greater than $t$, the location of a random positive unit clause of the current formula (guaranteed to exist). We fill it with a random variable among those left. The process continues by providing 1. all the occurrences of this variable, 2. the locations and complete contents of clauses that contain this variable in positive form, and 3. the locations of the clauses that have been completely filled. We refer to the clauses in the latter two cases as [*blocked*]{}, since we have complete information about them, and they will no longer be involved in the unveiling process. Suppose $\PUR$ arrives at stage $t$ on $\Phi$. Then in stages $i=n, n-1, \ldots, t+1$, $\Phi_i$ must have contained a unit clause consisting of a positive literal but not its complement. This information does not condition in any way the structure of the clauses of $\Phi_{t}$ that correspond to the non-blocked piles, counted by $N_{t}$. In fact, the only information we have at Stage $t$ about these piles is their number $N_{t}$. For each such pile all disclosed literals appear only in negative form, since otherwise the clause would have been satisfied and blocked. Hence the [*residual*]{} (hidden) part still obeys the Horn restriction. 
Given the uniformity in the choice of the initial clauses of $\Phi$, it follows that the clauses of $\Phi_{t}$ are chosen uniformly at random (and independently) among all nonempty Horn clauses in the remaining variables. 2. We will prove the result inductively, starting with Stage $n$ (where it certainly is true) and working downwards. At each stage, the disclosure process will offer some information on the type of the hidden portion of the clause, namely whether it is a positive unit, positive non-unit, negative or empty. For notational convenience define $p_{1}(t)=\frac{1}{t}$, $p_{2}(t)=\frac{1}{2^{t-1}-1}$, $p_{3}(t)=\frac{1}{2}$, $p_{4}(t)=\frac{t-1}{(2^{t}-t-1)}$. [**If $HP_{1,t}>0$, to carry on the disclosure process:**]{} 1. choose a random positive unit clause, fill it with a random variable $x$ among those left, and block. 2. independently with probability $1/t$ fill any of the remaining positive unit clauses with $x$ and block. 3. for any positive non-unit clause: 1. with probability $p_{1}(t)$ fill one entry of the clause with $x$, fill the rest of the clause with a random, non-empty, combination of negated remaining literals and block. 2. if the first case did not happen then, with probability $p_{2}(t)$, fill one entry with $\overline{x}$ and set the type of the remaining clause to “positive unit”. 3. if the first two cases did not happen then, with probability $p_{3}(t)$, fill one entry with $\overline{x}$ (but do nothing else). 4. otherwise do nothing. 4. for any negative unit clause: 1. with probability $p_{1}(t)$ fill one entry of the clause with $\overline{x}$, set the type of the remaining clause to “empty”. 2. otherwise do nothing. 5. for any negative non-unit clause: 1. with probability $p_{4}(t)$ fill one entry of the clause with $\overline{x}$ and set the type of the remaining clause to “negative unit”. 2. 
if the first case did not happen then, with probability $p_{3}(t)$, fill one entry of the clause with $\overline{x}$ (but do nothing else). 3. otherwise do nothing. [**In the opposite case, $HP_{1,t}=0$,**]{} the disclosure process consists of performing the procedure described in the algorithm, and additionally filling every eliminated clause with a random Horn clause in the remaining variables that is not a positive unit clause. By a tedious but straightforward case analysis it is easy to see that in both cases the uniformity property carries through to the next stage. The reason is that in all cases the only information we disclose about each remaining clause is its type, but not its content. Moreover, we get the following recurrences for the case $HP_{1,t}>0$: $$\left\{\begin{array}{l} HP_{1,t-1}= HP_{1,t}-1-\Delta_{1,P}(t)+\Delta_{12,P}(t),\\ HP_{2,t-1}= HP_{2,t}-\Delta_{2,P}(t)-\Delta_{12,P}(t),\\ HN_{1,t-1}=HN_{1,t}-\Delta_{E}(t)+\Delta_{12,N}(t),\\ HN_{2,t-1}=HN_{2,t}-\Delta_{12,N}(t),\\ E_{t-1}= E_{t}+\Delta_{E}(t), \end{array} \right.$$ where $$\left\{\begin{array}{l} \Delta_{1,P}(t)=B\left(HP_{1,t}-1, p_{1}(t)\right),\\ \Delta_{2,P}(t)=B\left(HP_{2,t},p_{1}(t)\right),\\ \Delta_{12,P}(t)=B\left(HP_{2,t}-\Delta_{2,P}(t),p_{2}(t)\right),\\ \Delta_{E}(t)=B\left( HN_{1,t}, p_{1}(t)\right),\\ \Delta_{12,N}(t)=B\left(HN_{2,t}, p_{4}(t)\right). \end{array} \right.$$ 3. The conditioning on $\PUR$ surviving Stage $t$ implies that up to Stage $t-1$ the algorithm $\PUR$ and its modified version $PUR_{2}$ work in the same way. With respect to $PUR_{2}$ it gives us one additional piece of information beyond merely conditioning on $\Gamma_{t}$: that $\Delta_{E}(t)=0$. The desired recurrence follows from the previous point.

Comments on the Uniformity Lemma
--------------------------------

A few comments on the contents of the uniformity lemma are in order. 
Although (as shown by Lemma \[uniformity\] (i)) it would seem that we can characterize the state of $\PUR$ at Stage $t$ by a single number, $N_{t}$, this is not so, for two reasons: - first, the above uniformity result is conditional (on $\PUR$ surviving Stage $t+1$) and does not hold throughout the whole evolution of the algorithm. For instance it is [*not*]{} true at stages before stage $t+1$, since unit clauses that are the negation of the variable being set cannot appear. An unconditional uniformity result is provided by Lemma \[uniformity\] (ii). However, it applies to a modified algorithm, which is no longer complete for HORN-SAT, and cannot be used to obtain an exact result (rather than just a lower bound on the threshold, as is done e.g. in [@frieze-suen] for $k$-SAT). - second, as shown by Lemma \[uniformity\] (iii), a stochastic recurrence for $N_{t-1}$ [*cannot*]{} be determined by only using the value of $N_{t}$; instead we need additional information on the structure of $\Phi_{t}$, captured by the five-tuple $\Gamma_{t}$. Fortunately it is possible to circumvent both these problems. On one hand it will turn out that all we need for the analysis is the conditional uniformity result (i), as long as we can “control” the value $N_{t}$. On the other hand, this value can be indirectly estimated throughout the “most interesting regime” of $\PUR$.

A coupling result
-----------------

The following result makes a first step towards estimating $N_{t}$, by showing that we can “approximate” this value by the value of a Markov chain with a simpler structure. The intuitive idea is simple: by Lemma \[uniformity\] (iii) the “net decrease” $N_{t}-N_{t-1}$ is approximately $1+B(HP_{1,t}+HP_{2,t}-1,\frac{1}{t})$, which is intuitively less than $1+B(N_{t}-1,\frac{1}{t})$. 
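To see this intuition at work, here is a small simulation (ours, purely illustrative) of the dominating recurrence $N_{t}-N_{t-1}=1+B(N_{t}-1,\frac{1}{t})$: starting from $N_{n}=c\cdot 2^{n}$ clauses, each stage removes roughly a $1/t$ fraction of the clauses, so over $s$ stages the clause count drops by about a fraction $s/n$ of its initial value, which is $o(1)$ when $s=O(\sqrt n)$.

```python
import random

def simulate_clause_count(n, c, steps, rng):
    """Iterate N_{t-1} = N_t - 1 - B(N_t - 1, 1/t) for `steps` stages from t = n."""
    N = int(c * 2**n)
    N0, t = N, n
    for _ in range(steps):
        # net decrease at stage t: one clause plus a B(N - 1, 1/t) batch
        N -= 1 + sum(rng.random() < 1.0 / t for _ in range(N - 1))
        t -= 1
    return N0, N

rng = random.Random(1)
N0, N = simulate_clause_count(20, 0.5, 4, rng)
# with n = 20 and s = 4 stages the expected loss is ~ s/n = 20% of N0,
# and it is tightly concentrated (binomial fluctuations are O(sqrt(N0/n)))
```

The relative loss $s/n$ vanishes only asymptotically; for a toy $n=20$ it is still a visible 20%, which is why the formal statement (Proposition \[first\]) is phrased for $t=n-O(\sqrt n)$ with $n\goesto\infty$.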
\[pur:3\]

    $PUR_{3}(\Phi)$:
        if $\Phi$ contains no positive unit clauses
            first eliminate a random clause
            then, independently with probability $1/t$, eliminate every remaining clause
            and continue recursively
        else
            first, independently with probability $1/t$, eliminate every negative non-unit clause
            then choose a random positive unit clause $x$
            set $x$ to 1 in $\Phi$ and continue recursively

\[major\] Consider the modified version of $\PUR$ from Figure 3. Then 1. Conditional on $\Gamma^{(2)}_{t}=(HN^{(2)}_{1,t},HN^{(2)}_{2,t},HP^{(2)}_{1,t}, HP^{(2)}_{2,t}, E^{(2)}_{t})$ (the same quantities as in Lemma \[uniformity\] (ii); we only use the superscript to indicate the fact that we are dealing with a different algorithm), the clauses of $\Phi_{t}$ (denote their number by $N^{(2)}_{t}$) are uniform and independent. 2. Define $S_{0}=\{[(a,b,c,d,e)\goesto (a_{1},b_{1},c_{1},d_{1},e_{1})]: (c>0)\wedge(e_{1}=0)\}$. Define the stopping rules $H_{2}$, $H_{3}$ for $\Gamma_{t}$, $\Gamma^{(2)}_{t}$ to be respectively the sets of legal transitions of $\Gamma_{t}$, $\Gamma^{(2)}_{t}$ that are in $S_{0}$. Finally, define $B=\{0,1,2,3\}$. Then it is possible to construct a $(B,H_{2},H_{3})$-majorizing coupling of the Markov chains $\Gamma_{t}$ and $\Gamma^{(2)}_{t}$. 3. If $HP^{(2)}_{1,t}>0$ then $N^{(2)}_{t-1}=N^{(2)}_{t}-1-\Delta_{1,P}(t)-\Delta_{2,P}(t)- \Delta_{1,N}(t)-\Delta_{2,N}(t)$, where $$\left\{\begin{array}{l} \Delta_{1,P}(t)=B\left(HP^{(2)}_{1,t}-1, \frac{1}{t}\right),\\ \Delta_{2,P}(t)=B\left(HP^{(2)}_{2,t},\frac{1}{t}\right),\\ \Delta_{1,N}(t)=B\left(HN^{(2)}_{1,t}, \frac{1}{t}\right),\\ \Delta_{2,N}(t)=B\left(HN^{(2)}_{2,t}, \frac{1}{t}\right). \end{array} \right.$$ Consequently, irrespective of the value of $HP^{(2)}_{1,t}$, $$N^{(2)}_{t}-N^{(2)}_{t-1}\stackrel{D}{=}1+B(N^{(2)}_{t}-1, \frac{1}{t}).$$ 1. The proof is identical to the one of Lemma \[uniformity\] (ii), and thus omitted. 2. 
The intuition behind the definition of the set $S_{0}$ is simple, and displays the connection with the desired analysis of the algorithm  : we restrict the set of legal transitions of $\Gamma_{t}$, $\Gamma^{(2)}_{t}$ to those for which $HP_{1,t}>0$ and $E_{t-1}=0$ (in other words, those for which  survives stage $t$, and thus works like $PUR_{2}$). The coupling can be described in a very intuitive way. Suppose that we carry on the disclosure process corresponding to the algorithm $PUR_{2}$, but the blocking of a clause is accomplished by placing a red pebble on the corresponding pile, rather than physically eliminating it. We modify this process to also place, at each stage $j$ such that $HP_{1,j}>0$, some blue pebbles on the piles corresponding to negative non-unit clauses, as follows: each such clause that has no pebble on it independently receives a blue pebble with probability $1/j$. It is easy to see that the new pebbling process (red and blue) simulates the algorithm $PUR_{3}$. The coupling easily follows.

3. The result follows from point 1, by separately considering the behavior of $PUR_{3}$ in the two cases $HP^{(2)}_{1,t}>0$ and $HP^{(2)}_{1,t}=0$.

The proof outline
=================

We will prove only the second part of the theorem, since the first part directly follows from it. By the proof of Lemma \[uniformity\] the behavior of the algorithm can be described (with the above mentioned caveats) by a stochastic recurrence involving $N_t$. Proposition \[first\] below proves the important fact that with high probability $N_{t}$ stays close to its expected value, which is $N_{n}(1-o(1))$ for $t=n-O(n^{1/2})$. So, intuitively, the number of clauses of $\Phi_{t}$ stays (almost) the same, while the number of variables decreases by one. The net effect of one iteration is thus to “double the constant $c$”. We build the proof on three technical lemmas, Lemmas \[second\], \[third\], and \[fourth\].
Intuitively, these lemmas show the following:

- Lemma \[second\] states that with probability $1-o(1)$ $\PUR$ rejects “in the first $\log n+\Theta(1)$ stages” (if at all; we will make this more precise in Theorem \[unsat\]).

- Lemma \[third\] states that with probability $1-o(1)$ $\PUR$ does not reject in any fixed number of steps.

- Lemma \[fourth\] obtains a coarse inequality for the satisfaction probability $$e^{-c}-o(1)\leq \PR[\Phi \in \mbox{HORN-SAT}] \leq \frac{e^{-c/4}}{1-e^{-c/4}}+o(1).$$ A consequence of this result is that a constant number, say $k$, of iterations “blows up” $c$ so that the resulting constant $2^{k}c$ is so large that $\Phi_{n-k}$ is unsatisfiable with probability arbitrarily close to 1.

Next we obtain a relation between the probability that $\PUR$ rejects $\Phi_n$ and the probability that $\PUR$ rejects $\Phi_{n-1}$ ($\Phi_{n-1}$ is defined with probability $1-o(1)$ in the case when $c=\Theta(1)$, due to Lemma \[third\]): the former is equal to the latter multiplied by the probability that PUR survives stage $n$. This latter term is one minus the probability that  accepts at stage $n$, which is asymptotically equal to $e^{-c}$, minus the probability that  rejects at step $n$, which is $o(1)$ and can be asymptotically neglected. Iterating this relation for a large enough (but constant) number of steps $k$ that makes $\Pr[\Phi_{n-k}\mbox{ is unsatisfiable}]$ “close enough to 1” and the partial product $\Pi_{k}$ “close enough to $\Pi$” allows us to argue that, for every $\epsilon >0$, the probability that  rejects is, for sufficiently large $n$, within $\epsilon$ of the value $\Pi$ prescribed by the theorem.

The key lemmas
==============

\[first\] For every $c>0$ and every $t$ with $n-c\sqrt n \leq t \leq n$, the conditional probability that the inequality $$\label{concentrate} N_{n}-(n-t)\left[1+\frac{2(N_{n}-1)}{t}\right]\leq N_{j}\leq N_{n}$$ holds for all $t\leq j \leq n$, in the event that $\PUR$ reaches stage $t$, is $1-o(1)$.
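Before the proof, the claimed concentration can be checked by simulating the majorized recurrence $N_{t-1}=N_{t}-1-B(N_{t}-1,\frac{1}{t})$ of Lemma \[major\], with illustrative parameters (far smaller than the regime $N_{n}=2^{n}c$ of the theorem):

```python
import random

def binom(k, p):
    """Binomial(k, p) sampled as a sum of Bernoulli trials."""
    return sum(random.random() < p for _ in range(k))

random.seed(1)
n, N_n, c = 20000, 3000, 2                 # illustrative sizes only
t_final = n - int(c * n ** 0.5)            # run for c*sqrt(n) stages
lower = N_n - (n - t_final) * (1 + 2 * (N_n - 1) / t_final)  # bound of the lemma
N, in_range = N_n, True
for t in range(n, t_final, -1):
    N = N - 1 - binom(N - 1, 1.0 / t)
    in_range = in_range and (lower <= N <= N_n)
print(in_range)  # the trajectory stays inside the claimed interval
```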
For ease of notation, define $E_{t}$ to be the event that Relation \[concentrate\] holds, and the sequences $y_{t}=N_{n}-(n-t)\left[1+\frac{2(N_{n}-1)}{t+1}\right]$ and $z_{t}=N_{n}$. By Lemma \[major\] (ii) and Lemma \[maj:coupling\] we have: $$\Pr[N^{(2)}_{t}\geq y_{t}| H_{3}]\leq \Pr[N_{t} \geq y_{t}|H_{2}].$$ But conditioning on $H_{3}$, respectively $H_{2}$, is the same as conditioning on the corresponding algorithm not remaining without unit clauses and not producing empty clauses, in other words on it working like  . So $$\Pr[E_{t}| S_{t+1}]\geq \Pr[N^{(2)}_{t}\geq y_{t}| H_{3}].$$ $H_{3}$ implies that $N^{(2)}_{j+1}-N^{(2)}_{j}\stackrel{D}{=}1+B(N^{(2)}_{j+1}-1,\frac{1}{j+1})$ for every $j\geq t$. So, defining the Markov chain $U_{t}$ by $U_{n}=N_{n}$ and $U_{t}-U_{t-1}\stackrel{D}{=}1+\eta_{t}$, where, conditional on $U_{j}$, $\eta_{j}$ has the binomial distribution $B(U_{j}-1,\frac{1}{j})$, it follows that $$\label{et} \Pr[U_{t}\geq y_{t}]=\Pr[N^{(2)}_{t}\geq y_{t}| H_{3}]\leq \Pr[E_{t}| S_{t+1}]$$ By the Chernoff bound, and reasoning inductively, we infer that with probability $1-o(1)$ we have $\eta_{j}\leq \frac{2(U_{j}-1)}{j}\leq \frac{2(N_{n}-1)}{t}$ for every $t\leq j \leq n$. Plugging this inequality into the definition of $U_{t}$ and using equation \[et\] proves the lemma.

\[second\] Let $p=p(n)$ be such that $\lim_{n\goesto \infty} [n-\log_{2} n -p(n)]=\infty$. Then $\PR[R_p|S_{p+1}]$, i.e., the conditional probability that $\PUR$ rejects at stage $p(n)$ in the event that $\PUR$ reaches stage $p(n)$, is $1-o(1)$.

To prove this lemma we need the following trivial combinatorial result:

\[ballsandbins\] Let $a(n)$ white balls and $b(n)$ black balls be thrown independently into $n$ bins. Pick a random bin among those containing a white ball, and let $X_{n}$ be the event that the chosen bin contains a black ball as well. Then $\Pr[X_{n}]=1-(1-\frac{1}{n})^{b(n)}$.
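A quick Monte Carlo check of Lemma \[ballsandbins\] (illustrative parameters):

```python
import random

def trial(n, a, b):
    """Throw a white and b black balls into n bins; pick a uniformly random bin
    among those holding a white ball; report whether it also holds a black ball."""
    white_bins = sorted({random.randrange(n) for _ in range(a)})
    black_bins = {random.randrange(n) for _ in range(b)}
    return random.choice(white_bins) in black_bins

random.seed(2)
n, a, b, trials = 20, 5, 7, 100000
estimate = sum(trial(n, a, b) for _ in range(trials)) / trials
exact = 1 - (1 - 1 / n) ** b
print(abs(estimate - exact))  # small: Monte Carlo error only
```

The agreement reflects the key point of the lemma: the black balls are placed independently of the white ones, so whichever bin is chosen misses all black balls with probability $(1-\frac{1}{n})^{b(n)}$.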
It is easy to see that, since the black balls are placed independently of the white balls, conditional on the placement of the white balls the chosen bin is a fixed bin, and it receives no black ball with probability $(1-\frac{1}{n})^{b(n)}$. So $\Pr[X_{n}]$ is simply the probability that a fixed bin gets a black ball, which is $1-(1-\frac{1}{n})^{b(n)}$.

[**Proof of Lemma \[second\]:**]{} Let $T$ denote the event $E_{n}\AND E_{n-1}\AND \cdots \AND E_{p}$. It follows from Proposition \[first\] that $\PR[T|S_{p+1}]=1-o(1)$. Then, by Fact \[trick-approx\], $\PR[R_p|S_{p+1}]=\PR[R_p|S_{p+1}\AND T]+o(1)$. Since $T$ implies $N_{p}\in I= [y_{p}, z_{p}]$, $$\PR[R_p|S_{p+1}\AND T]\geq \min_{\lambda \in I}\{ \PR[R_p|S_{p+1}\AND T\AND (N_{p}= \lambda)] \}.$$ Thus, the claim holds if we show that $\max_{\lambda \in I} \PR[\overline{R_p}|S_{p+1}\AND T\AND (N_{p}= \lambda)] = o(1)$. Suppose that $N_{p}=\lambda$, the events $T$, $S_{p+1}$ hold, and we further condition on the number of negative unit clauses. The event $R_{p}$ can be mapped into $X_{p}$ of the previous “balls into bins” experiment, with the positive unit clauses representing the white balls, the negative unit clauses being the black balls, and the remaining $p$ variables being the bins. From Lemma \[uniformity\] it follows that the number of negative unit clauses of $\Phi_{p}$ has a binomial distribution $B(\lambda,\frac{p}{N(p)})$, where $N(p)=(p+2)2^{p}-1$ is the number of Horn clauses on $p$ variables. Since $\lambda\frac{p}{N(p)}\geq y_{p}\frac{p}{N(p)}= (1+o(1))c\cdot 2^{n-p(n)}= \omega(n)$, it follows easily by the Chernoff bound that with probability $1-o(1)$ the number of both positive and negative unit clauses of $\Phi_{p}$ is larger than $\frac{py_{p}}{2N(p)}$. Since this amount is $\omega(n)$, the claim is a consequence of Lemma \[ballsandbins\].

\[reject1\] With probability $1-o(1)$ $\PUR$ does not reject $\Phi$ at stage $n$.

Let $U$ be the number of unit clauses in $\Phi$. The variable $U$ has a binomial distribution with parameters $2^{n}c$ and $\frac{2n}{(n+2)2^{n}-1}$, so it is asymptotically a Poisson distribution with parameter $2c$.
In fact Proposition \[b:h:j\] and Proposition \[chernoff:poisson\] together imply that with probability $1-o(1)$, $U\leq 2c(1+n^{1/3})\leq 4cn^{1/3}$. Consider the $U$ unit clauses of $\Phi$ as being balls to be tossed into $n$ bins. The probability that two of them end up in the same bin is at most ${{U}\choose{2}}\cdot \frac{1}{n}$, which, in view of the above upper bound on $U$, is $o(1)$. So with probability $1-o(1)$ no variable appears more than once in a unit clause of $\Phi$, and thus, $\PUR$ does not reject.

\[third\] For every $k>0$, with probability $1-o(1)$, $\PUR$ does [*not*]{} reject in any of the stages $n, n-1,\ldots, n-k+1$.

A simple induction on $k$, coupled with the fact that, conditioned on $N_{t}$, $\Phi_{t}$ is a random formula, and Proposition \[first\].

\[fourth\] For every positive constant $c$, $e^{-c}-o(1)\leq \PR[\Phi \in \HSAT] \leq \frac{e^{-c/4}}{1-e^{-c/4}}+o(1)$.

Let $c>0$ be a constant. $$\begin{aligned} \PR[\Phi \in \HSAT] &\geq& \PR[\mbox{$\PUR$ accepts at the first step}] \\ &=& \PR[\mbox{$\Phi$ contains no positive unit clauses}] \\ &=& \left(1-\frac{n}{(n+2)\cdot 2^{n}-1}\right)^{2^{n}c} \\ &=& e^{-\frac{n2^{n}\cdot c}{(n+2)\cdot 2^{n}-1}}-o(1) \\ &\geq& e^{-c}-o(1),\end{aligned}$$ since $\frac{n2^{n}}{(n+2)\cdot 2^{n}-1}\leq 1$. This proves the lower bound. In order to prove the upper bound, define $p=\log_{2} n +\log \log n$, let $Y$ be the event “$\PUR$ accepts,” and let $Z$ be the event “$\PUR$ stops in at most $p$ iterations.” By Lemma \[second\], $\PR[Z]=1-o(1)$, so $\PR[Y]\leq \PR[Y|Z]+o(1)$. However, given $Z$, $Y$ is equivalent to $A_{n}\OR (A_{n-1}\AND S_{n})\OR \cdots \OR (A_{n-p+1}\AND S_{n}\AND \cdots \AND S_{n-p+2})$. So, by the Bayes rule, $\PR[Y|Z]$ is at most $$\PR[A_{n}]+\PR[A_{n-1}|S_{n}]+\cdots +\PR[A_{n-p+1}|S_{n}\AND S_{n-1} \AND \cdots \AND S_{n-p+2}].$$ We cannot directly apply Fact \[trick-approx\], because this sum has an unbounded number of terms.
Instead, we will use the following simple consequence of Bayes conditioning: $$\PR[A_{i}|S_{n}\AND\cdots \AND S_{i+1}]\leq \PR[A_{i}|S_{n}\AND\cdots S_{i+1}\AND E_{i}] +\PR[\overline{E_i}|S_{n}\AND\cdots S_{i+1}].$$ From Proposition \[first\] the sum of all “second terms” is $o(1)$. As to the first term, the conditioning implies that the clauses of $\Phi_i$ are chosen uniformly at random and their number is between $y_i$ and $z_i$. Since $\PUR$ accepts $\Phi_i$ if and only if $\Phi_i$ contains no positive unit clauses, we have $$\begin{aligned} 1-\left(1-\frac{i}{(i+2)2^{i}-1}\right)^{y_i}-o(1) &\leq& \PR[\overline{A_i}|S_{n}\AND \cdots \AND S_{i+1}\AND E_{i}] \label{time:acc} \\ &\leq& 1-\left(1-\frac{i}{(i+2)2^{i}-1}\right)^{z_{i}}+o(1)\nonumber. \end{aligned}$$ In particular $$\PR[A_{i}|S_{n}\AND \cdots \AND S_{i+1}\AND E_{i}]\leq \left(1-\frac{i}{(i+2)2^{i}-1}\right)^{y_{i}}.$$ The right-hand side is at most $e^{-\frac{iy_{i}}{(i+2)2^{i}-1}}$. Since $\frac{i}{i+2}\geq \frac{1}{3}$ and $y_{i}\geq N_{n}\cdot (1-\frac{\log n + \log \log n}{n-\log n +\log \log n}) \geq \frac{3N_{n}}{4}$ for a sufficiently large $n$, we have (for such an $n$) $e^{-\frac{iy_{i}}{(i+2)2^{i}-1}}\leq e^{-\frac{iy_{i}}{(i+2)2^{i}}}\leq e^{-\frac{2^{n-i}c}{4}}$. Summing up all these upper bounds for $\PR[A_{i}|S_{n}\AND \cdots \AND S_{i+1}\AND E_{i}]$, and bounding the exponents $\frac{c}{4}\cdot 2^{n-i}$ from below by the terms of the arithmetic progression $\{\frac{c}{4} \cdot j\}$, we obtain the desired upper bound $\frac{e^{-c/4}}{1-e^{-c/4}}+o(1)$.

Putting it all together {#putting:together}
=======================

Now we complete the proof of Theorem \[maintheorem\] by proving equation (\[formula\]).
In order to prove this result it suffices to show that $$\label{limit} \lim_{n\goesto \infty} \PR_{\Phi \in \Omega(m,n)} [\mbox{$\PUR$ rejects $\Phi$}] = F(e^{-c}).$$ It is easy to see that $F$ is well-defined on $(0,1)$ and has the following Taylor series expansion $$F(x)=(-1)^{b_{0}}+(-1)^{b_{1}}x+ (-1)^{b_{2}}x^{2}+\cdots +(-1)^{b_{i}}x^{i}+ \cdots$$ with $b_{i}$ being the number of ones in the binary representation of $i$. Also $F$ is monotonically decreasing, positive on $(0,1)$, and has limit 1 at 0. Fix $\epsilon >0$. Let $R$ be the event “$\PUR$ rejects $\Phi$”. What we need to show is that for a sufficiently large $n$, $$\label{epsilonfinal} (1-\epsilon)\Pi \leq \PR[R]\leq (1+\epsilon)\Pi.$$ Since $\Pi$ converges and $\Pi>0$, there exists some $k_{0}$ such that for all $k\geq k_{0}$, $$\label{pi:k} \sqrt{1-\epsilon}<\frac{\Pi_{k}}{\Pi}<(1+\epsilon).$$ By Lemma \[fourth\], there exist some $n_{0}>0$ and $c_{0}>0$ such that for every $n>n_{0}$ and every $c>c_{0}$, $\PR_{\Phi\in \Omega(n,2^{n}c)}[\mbox{$\PUR$ rejects $\Phi$}] >\sqrt{1-\epsilon}$. Keeping in mind the fact that the events $A_{n}, A_{n-1}, \cdots, A_{n-k+1}$ are incompatible with $R$, we obtain the equality $$\PR[R] = \PR[R|\overline{A_n}\AND \cdots \AND \overline{A_{n-k}}] \cdot \PR[\overline{A_n}]\cdot \prod_{1\leq i\leq k} \PR[\overline{A_{n-i}}|\overline{A_n}\AND \cdots \AND \overline{A_{n-i+1}}]$$ for every fixed $k$. Although conceptually simple, the rest of the proof is somewhat cumbersome. We first consider the case $c>4\ln 2$ (so that the upper bound in Lemma \[fourth\] is strictly less than one). Choose $k$ so that, for large enough $n$, $y_{n-k}>c_{0}\cdot 2^{n-k}$. This is possible since $y_{n-k}\geq c\cdot 2^{n}[1-\frac{k}{n-k}]$.
We claim (and it is in the proof of these two relations where the assumption $c>4\ln 2$ will be used) that for every $j$, $n-k \leq j\leq n$, $$\label{final-one} \PR[\overline{A_j}|\overline{A_n}\AND \cdots \AND \overline{A_{j+1}}] = \PR[\overline{A_j}|S_{n}\AND \cdots \AND S_{j+1}]+o(1),$$ and $$\label{final-two} \PR[R|\overline{A_n}\AND \cdots \AND \overline{A_{j+1}}] = \PR[R|S_{n}\AND \cdots \AND S_{j+1}]+o(1).$$ We postpone the proof of these equations and first show how the theorem follows from them. From equations \[time:acc\] and \[final-one\] it follows that $$\begin{aligned} 1-\left(1-\frac{n-i}{(n-i+2)2^{n-i}-1}\right)^{y_{n-i}} & - & o(1) \\ &\leq & \PR[\overline{A_{n-i}}|\overline{A_n}\AND \cdots \AND \overline{A_{n-i+1}}] \\ &\leq& 1-\left(1-\frac{n-i}{(n-i+2)2^{n-i}-1}\right)^{z_{n-i}}+o(1)\nonumber. \end{aligned}$$ This proves that, for every $i=1, \ldots, k$, $$\lim_{n\goesto \infty} \PR[\overline{A_{n-i}}|\overline{A_n}\AND \cdots \AND \overline{A_{n-i+1}}] = (1-e^{-c\cdot 2^{i}}).$$ In a similar vein, we have, for large enough $n$, $$\sqrt{1-\epsilon} \leq \PR[R|\overline{A_n}\AND \cdots \AND \overline{A_{n-k+1}}] \leq 1.$$ Since the second part is asymptotically equal to $\Pi_{k}$, for a large enough $n$ relation (\[pi:k\]) yields (\[epsilonfinal\]). For a general $c>0$, define $c^{*}$ to be the infimum of all $c$’s for which relation \[limit\] holds for every $c^{\prime}>c$. Suppose $c^{*}>0$. The single-step version of (\[final-two\]) provides $\PR[R|\overline{A_n}] = \PR[R|S_{n}]+o(1)$, so $\PR[R]=\Pr[\overline{A_n}]\Pr[R|\overline{A_n}]+o(1)$. Let $c^{*}/2<c<c^{*}$ and let $n_{1}$ be such that for all $n\geq n_{1}$, $2c(1-\frac{1}{n})^{2}>c^{*}$. By Fact \[trick-approx\] and Proposition \[first\] we have $\PR[R|S_{n}] =\PR[R|S_{n}\AND E_{n-1}]+o(1)$.
Then by Fact \[trick-max\] we have $\min_{\lambda \in I}\{\PR[R|S_{n}\AND E_{n-1}\AND (N_{n-1}=\lambda)]\} \leq \PR[R|S_{n}\AND E_{n-1}] \leq \max_{\lambda \in I}\{\PR[R|S_{n}\AND E_{n-1}\AND (N_{n-1}=\lambda)]\}$. Conditioned on surviving stage $n$ and on the value of $N_{n-1}$, $\Phi_{n-1}$ is a random formula. Since both $y_{n-1}$ and $z_{n-1}$ are asymptotically equal to $2^{n}c$, for large $n$, $\Phi_{n-1}$ is a random formula with $n-1$ variables and $2^{n-1}\cdot (2c+o(1))$ clauses. Thus, $\lim_{n\goesto \infty}\PR[R|S_{n}]=\lim_{n\goesto \infty}\PR[R|S_{n}\AND E_{n-1}]= F(e^{-2c})$. Since $\PR[\overline{A_n}]$ is asymptotically equal to $1-e^{-c}$, and $F(e^{-c})=(1-e^{-c})F(e^{-2c})$, (\[epsilonfinal\]) holds for $c$. This shows that $c^{*}=0$, hence \[limit\] is true for every $c>0$. Now what remains is to prove (\[final-one\]) and (\[final-two\]). We will prove only (\[final-one\]); the proof of the other is quite similar. Let $T$ be the event that $\PUR$ rejects in one of the first $k$ stages. Note that $\PR[T]=o(1)$, by Lemma \[third\]. Note that $T=R_{n}\OR (S_{n}\AND R_{n-1})\OR \cdots \OR (S_{n}\AND \cdots \AND S_{n-k+2}\AND R_{n-k+1})$, so the probability of each of the $k$ terms in the disjunction is $o(1)$. Note that $$\PR[\overline{A_n}\AND \cdots \AND \overline{A}_{j}] =\sum_{\epsilon_{n},\ldots,\epsilon_{j+1}\in \{-1,+1\}}\, \PR[\overline{A_n}\AND \cdots \AND \overline{A_j}\AND R_{n}^{\epsilon_{n}}\AND \cdots \AND R_{j+1}^{\epsilon_{j+1}}],$$ where $X^{+1}=X$ and $X^{-1}$ denotes the complement of the event $X$. All terms in the sum, other than $\PR[\overline{A_n}\AND \cdots \AND \overline{A_j}\AND R_{n}^{-1}\AND \cdots \AND R_{j+1}^{-1}]$, are either inconsistent (the algorithm rejects twice) or imply one of the terms appearing in the disjunction of the decomposition of $T$.
Thus, $$\PR[\overline{A_n}\AND \cdots \AND \overline{A_j}] =\PR[\overline{A_n}\AND\overline{R_n}\AND \cdots \AND \overline{A_{j+1}}\AND\overline{R_{j+1}}\AND \overline{A_j}]+o(1),$$ that is, $$\PR[\overline{A_n}\AND \cdots \AND \overline{A_j}] = \PR[S_{n}\AND \cdots \AND S_{j+1}\AND \overline{A_j}]+o(1).$$ Similarly, $\PR[\overline{A_{n}}\AND \cdots \AND \overline{A_{j+1}}] = \PR[S_{n}\AND \cdots \AND S_{j+1}]+o(1)$. Note that for every sequence of events $A_{n}$ and $B_{n}$ with $\liminf_{n\goesto \infty}\PR[B_{n}]>0$, $|\frac{\PR[A_{n}]+o(1)}{\PR[B_{n}]+o(1)} -\frac{\PR[A_{n}]}{\PR[B_{n}]}|=o(1).$ So, it suffices to show that $\liminf_{n\goesto \infty}\PR[S_{n}\AND \cdots \AND S_{n-k}]>0$. This probability is $1- \PR[\PUR$ accepts in one of the first $k$ steps$] - \PR[\PUR$ rejects in one of the first $k$ steps$]$, and thus is at least $1- \frac{e^{-c/4}}{1-e^{-c/4}}-o(1)- \PR[T]$. Since $\frac{e^{-c/4}}{1-e^{-c/4}}<1$, the required condition is guaranteed.

Proof of Theorem 2.2
====================

From equations (\[final-one\]) and (\[time:acc\]) and Proposition \[first\] it follows that the probability that the algorithm accepts [*exactly at Stage $k$*]{}, given that it has not stopped before, tends (as $n\goesto \infty$) to $e^{-2^{k}c}$. We have $$\begin{aligned} \Pr[A_{n-k}\AND [\Phi \in \mbox{SAT }]] & = &\Pr[A_{n-k}\AND [\Phi \in \mbox{SAT }]\AND S_{n-k+1}]\\ & = & \Pr[A_{n-k}\AND [\Phi \in \mbox{SAT }]| S_{n-k+1}]\cdot \Pr[S_{n-k+1}]\\ & = &\Pr[A_{n-k}| S_{n-k+1}]\cdot \Pr[S_{n-k+1}]. \end{aligned}$$ Therefore $$\begin{aligned} \rho_{k}=\lim_{n\goesto \infty}\Pr[A_{n-k}|\Phi \in \mbox{SAT }] & = & \lim_{n\goesto \infty}\frac{\Pr[A_{n-k}| S_{n-k+1}]}{\Pr[\Phi \in \mbox{SAT }]}\cdot \Pr[S_{n-k+1}]\\ & = &\frac{e^{-2^{k}c}}{1-F(e^{-c})}\cdot \prod_{i=1}^{k-1} (1-e^{-2^{i}c}).
\end{aligned}$$

Proof of Theorem 2.3
====================

We will only provide an outline of the proof of Theorem \[unsat\], since its overall philosophy is quite similar to the one used to prove Theorem \[maintheorem\]. Redefine, for the purpose of this section, the index $k$ to refer to events taking place at stage $n-\lfloor \log_{2}(n) \rfloor -k$. For instance $S_{k}$ is the same as the event $Y_{n}>n-\lfloor \log_{2}(n) \rfloor -k$. Theorem \[unsat\] follows, of course, from the following claim

\[approx\] $$\label{ultima} \lim_{n\goesto \infty}\Pr[Y_{n}> \lfloor \log_{2}(n) \rfloor +k|R]-G(k,c_{n})=0.$$

To prove Lemma \[approx\] we first show, using methods similar to the ones used to prove Lemma \[fourth\], the following result

\[approx2\] $$\lim_{k\goesto -\infty}\liminf_{n\goesto \infty} \Pr[Y_{n}> \lfloor \log_{2}(n) \rfloor +k|R]=1.$$

The proof of Lemma \[approx\] proceeds now by observing that $$\begin{aligned} \Pr[(Y_{n} & > & \lfloor \log_{2} (n) \rfloor +k) \AND R]\\ & = & \Pr[S_{k}\AND R] = \Pr[S_{k-1}\AND \overline{R}_{k}\AND R]\\ & = & \Pr[\overline{R}_{k}\AND R| S_{k-1}]\Pr[S_{k-1}]\\ & = & (\Pr[\overline{R}_{k}| S_{k-1}]-o(1))\cdot (\Pr[S_{k-1}\AND R]+o(1))\\ & = & (\Pr[\overline{R}_{k}| S_{k-1}\AND E_{k}]-o(1))\cdot (\Pr[S_{k-1}\AND R]+o(1))\\ & = & \Pr[\overline{R}_{k}| S_{k-1}\AND E_{k}]\cdot \Pr[(Y_{n}> \lfloor \log_{2} (n) \rfloor +k-1) \AND R]+o(1)\\\end{aligned}$$ By Lemma \[ballsandbins\] the first term is approximately $e^{-c_{n}\cdot 2^{k}}$. Iterating downwards for a constant number of steps, up to $k_{0}\in {\bf Z}$, we infer $$\Pr[Y_{n}> \lfloor \log_{2} (n) \rfloor +k| R]= \Pr[Y_{n}> \lfloor \log_{2} (n) \rfloor +k_{0}|R ]\cdot \prod_{j=k_{0}+1}^{k}\Pr[\overline{R}_{j}|S_{j-1}\AND E_{j}]+o(1).$$ Choosing $k_{0}$ small enough so that, by Lemma \[approx2\], the first term is “close enough to 1” and the product is “close enough to $G(k,c_{n})$” proves relation \[ultima\].
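The function $F$ appearing in the limit theorems can be sanity-checked numerically. Assuming $F(x)=\prod_{i\geq 0}(1-x^{2^{i}})$ (the product consistent with the recursion $F(e^{-c})=(1-e^{-c})F(e^{-2c})$ and with the stated Taylor expansion), both properties can be verified directly:

```python
import math

# Coefficients of the truncated product prod_{2^i <= D} (1 - x^(2^i)); factors
# with 2^i > D do not affect coefficients of degree <= D.
D = 64
coef = [0] * (D + 1)
coef[0] = 1
i = 0
while 2 ** i <= D:
    step = 2 ** i
    new = coef[:]
    for d in range(step, D + 1):
        new[d] -= coef[d - step]   # multiply by (1 - x^step)
    coef = new
    i += 1

# Coefficient of x^m should be (-1)^(number of ones in binary representation of m).
taylor_ok = all(coef[m] == (-1) ** bin(m).count("1") for m in range(D + 1))

def F(x, terms=50):
    """Numerical evaluation of prod_{i>=0} (1 - x^(2^i)) for 0 <= x < 1."""
    p = 1.0
    for j in range(terms):
        p *= 1.0 - x ** (2 ** j)
    return p

c = 1.3
recursion_ok = abs(F(math.exp(-c)) - (1 - math.exp(-c)) * F(math.exp(-2 * c))) < 1e-9
print(taylor_ok, recursion_ok)  # True True
```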
Further discussions and open problems
=====================================

There are several versions of Horn satisfiability whose phase transition is worth studying. One of them is the class of [*extended Horn formulas*]{} [@extended:horn:jacm; @extended:horn:ipl], for which $\PUR$ is still a valid algorithm [@extended:horn:jacm]. On the other hand, Horn-like restrictions have been employed to design tractable restrictions of various formalisms of interest in Artificial Intelligence, for example in constraint programming, temporal reasoning, spatial reasoning, etc. In many such cases positive unit resolution has natural analogs (for instance [*arc-consistency*]{} in the case of ORD-HORN formulas in temporal reasoning [@ord-horn]), and it would be interesting to see whether the ideas in this paper can inspire similar results. Let us also remark that, as shown in [@istrate:stoc99], the average-case behavior displayed in Theorem \[sat\] is responsible for a physical property called [*critical behavior*]{}, widely studied in Statistical Mechanics and related areas (see, for instance, [@slade:critical], for the case of percolation), and similar to the one observed experimentally in [@kirkpatrick:selman:scaling] for the case of $k$-SAT. One final issue is whether one can meaningfully define and study the existence of a “physical phase transition” in HORN-SAT. The major problem is a “degeneracy” property of our random model for Horn satisfiability: one can satisfy all but the positive unit clauses of any formula by the assignment $00\ldots 0$. But under the random model employed in this paper the fraction of such clauses is $o(1)$, a property that is not shared by any of the previously studied problems, and which makes the “physical interpretation” problematic. Whether the problem becomes meaningful under a different random model remains to be seen.

Acknowledgment {#acknowledgment .unnumbered}
==============

This paper is part of the author’s Ph.D.
thesis at the University of Rochester. Preliminary conference versions have appeared as [@istrate:aim98] and [@istrate:soda99]. Support for this work has come from the NSF CAREER Award CCR-9701911 and the DARPA/NSF Grant 9725021. I thank Mitsu Ogihara for substantive conversations that led to discovering the results in this paper. I also thank Jin-Yi Cai and Nadia Creignou for insightful comments and enjoyable discussions.
--- abstract: 'The acceleration and radiative processes active in low-power radio hotspots are investigated by means of new deep near-infrared (NIR) and optical VLT observations, complemented with archival, high-sensitivity VLT, radio VLA and X-ray [*Chandra*]{} data. For the three studied radio galaxies (3C105, 3C195 and 3C227), we confirm the detection of NIR/optical counterparts of the observed radio hotspots. We resolve multiple components in 3C227 West and in 3C105 South and characterize the diffuse NIR/optical emission of the latter. We show that the linear size of this component ($\gtrsim$4 kpc) makes 3C105 South a compelling case for particle re-acceleration in the post-shock region. Modeling of the radio-to-X-ray spectral energy distribution (SED) of 3C195 South and 3C227 W1 gives clues on the origin of the detected X-ray emission. In the context of inverse Compton models, the peculiarly steep synchrotron curve of 3C195 South sets constraints on the shape of the radiating particles’ spectrum that are testable with better knowledge of the SED shape at low ($\lesssim$GHz) radio frequencies and in X-rays. The X-ray emission of 3C227 W1 can be explained with an additional synchrotron component originating in compact ($<$100 pc) regions, such as those revealed by radio observations at 22 GHz, provided that efficient particle acceleration ($\gamma\gtrsim$10$^7$) is ongoing. The emerging picture is that of systems in which different acceleration and radiative processes coexist.' author: - | G. Migliori$^{1,2}$[^1], M. Orienti$^{1}$, L. Coccato$^3$, G. Brunetti$^1$, F. D’Ammando$^{1}$, K.-H. Mack$^{1}$, M.A. Prieto$^{4}$\ $^{1}$Istituto di Radioastronomia - INAF, Via P. Gobetti 101, I-40129 Bologna, Italy\ $^{2}$Dipartimento di Fisica e Astronomia, Università di Bologna, Via Gobetti 93/2, I-40129 Bologna, Italy\ $^{3}$European Southern Observatory, Karl-Schwarzschild-Straße 2, D-85748 Garching b.
München, Germany\ $^4$Instituto de Astrofísica de Canarias, c/ Vía Láctea s/n, E-38205 La Laguna (Tenerife), Spain\ date: 'Received ; accepted ?' title: 'Particle acceleration in low-power hotspots: modelling the broad-band spectral energy distribution' --- \[firstpage\] radio continuum: galaxies - radiation mechanisms: non-thermal - acceleration of particles

Introduction
============

Hotspots are compact and bright regions typically located at the edge of the lobes of powerful radio galaxies. In the standard scenario, the hotspots mark the region where a relativistic jet impacts the surrounding medium; there, particles are accelerated by strong shocks and may radiate up to X-rays. The main mechanism at the origin of X-ray emission from hotspots is still debated. In many cases, the near-infrared (NIR) and optical fluxes rule out a single synchrotron radio-to-X-ray component [see e.g. @zhang18 for a recent compilation]. Interestingly, there seems to be a connection between the radio luminosity of the hotspots and their X-ray properties. In fact, in powerful hotspots (with 1.4 GHz luminosities $\gtrsim$10$^{25}$ W Hz$^{-1}$ sr$^{-1}$), like Cygnus A [@stawarz07], the X-ray emission is consistent with synchrotron self-Compton radiation (SSC) from relativistic electrons [e.g. @harris94; @harris00; @hardcastle04; @KS05; @werner12]. However, in low-power hotspots (with 1.4 GHz luminosities $\lesssim$10$^{25}$ W Hz$^{-1}$ sr$^{-1}$), SSC radiation would require a large departure from conditions of energy equipartition between particles and magnetic field to reproduce the observed levels of X-ray emission [@hardcastle04]. An alternative process is inverse Compton (IC) scattering off the cosmic microwave background (CMB) seed photons [@Kat03; @Tav05].
For this mechanism to be effective, the plasma in the hotspot must still be relativistic and the region must be seen under a small viewing angle, two assumptions in contrast with the constraints derived from the observed symmetrical, large-scale morphology of the radio galaxies. Moreover, one-zone synchrotron-IC models cannot account for the offsets that are often observed between the centroids of the X-ray and radio-to-optical emission [@hardcastle07; @perlman10; @mo12]. A decelerating jet with multiple, radiatively interacting emitting regions has been put forward as a viable solution by @gk04. Alternatively, the X-ray emission may be explained in terms of synchrotron radiation from a highly energetic population of particles, different from that responsible for the radio-to-optical emission [e.g. @hardcastle04; @hardcastle07; @tingay08; @mo12; @mingo17; @mo17].\ The spectral shapes of the synchrotron emission from low and high radio power hotspots are also different, with the former having higher break frequencies ($\nu_{break}$, i.e. the synchrotron frequencies of the oldest electrons still within the hotspot volume) than the latter. This can be explained if the break frequency is related to the magnetic field strength ($B$) in the hotspot volume, so that the electrons producing the optical emission survive longer in low radio power hotspots (with correspondingly lower $B$) than in high radio power hotspots. The observed dependence, $\nu_{break}\propto B^{-3}$, is in agreement with theoretical expectations based on the shock-acceleration model [@brunetti03]. On the other hand, the scenario of radiation from particles accelerated by a single strong shock at the jet termination is challenged by a number of observational issues. In several sources, we observe hotspot complexes with multiple bright features surrounded by diffuse emission [@black92; @lehay97].
In particular, the discovery of optical diffuse emission extending on kpc scales can hardly be reconciled with the short radiative lifetimes of optical-emitting particles [e.g. @aprieto97; @prieto02; @lv99; @cheung05; @mack09; @erlund10; @mo12]. This suggests that efficient and spatially distributed acceleration mechanisms could be active in the post-shock region. Recent Atacama Large Millimeter Array (ALMA) observations of the hotspot 3C445 South unveiled highly polarized regions, suggesting the presence of shocks, enshrouded by unpolarized diffuse emission, compatible with instabilities and/or projection effects in a complex shock surface [@mo17].\ In this paper we present results of a new multi-band campaign on four low-power hotspots from the sample presented in @mack09: 3C 105 South (z=0.089), 3C 227 East and West (z=0.0863) and 3C 195 South (z=0.109). The new Very Large Telescope (VLT) observations were requested with the goal of defining the hotspots’ broad-band spectral energy distribution (SED), constraining the emission mechanisms at work at high energies and searching for diffuse optical emission, as a possible signature of particle re-acceleration. Indeed, the selection criteria for low-power hotspots [radio power, redshift and declination, see @mack09] set very constraining limits for optical detection, even with the most sensitive telescopes, such as the VLT; hence the small number of sources. Nonetheless, dedicated studies of a few sources, representative of the entire population, have the potential to advance our understanding of the particle acceleration and radiative processes active in these structures.\ We analyze the new VLT observations in the NIR/optical bands and radio observations obtained with the Very Large Array (VLA). We also retrieve archival [*Chandra*]{} data in order to extend the study to X-rays.
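The lifetime tension quoted above can be made concrete with an order-of-magnitude estimate (a sketch using standard synchrotron formulae in cgs units; the 100 $\mu$G field and the R-band frequency are illustrative assumptions, not measurements from this paper):

```python
import math

# Standard synchrotron relations in cgs units: critical frequency
# nu ~ 4.2e6 * B[G] * gamma^2 Hz, cooling time t = 6*pi*m_e*c / (sigma_T * gamma * B^2) s.
M_E, C_CGS, SIGMA_T = 9.109e-28, 2.998e10, 6.652e-25

def gamma_for(nu_hz, b_gauss):
    """Lorentz factor of electrons whose synchrotron emission peaks near nu_hz."""
    return math.sqrt(nu_hz / (4.2e6 * b_gauss))

def cooling_time_s(gamma, b_gauss):
    """Synchrotron radiative lifetime in seconds."""
    return 6.0 * math.pi * M_E * C_CGS / (SIGMA_T * gamma * b_gauss ** 2)

b = 1e-4                 # assumed hotspot field: 100 microgauss (illustrative)
nu_opt = 4.3e14          # R band, ~0.66 micron
g = gamma_for(nu_opt, b)
t_cool = cooling_time_s(g, b)
travel_kpc = C_CGS * t_cool / 3.086e21   # free-streaming distance within t_cool
print(g, t_cool / 3.156e7, travel_kpc)   # gamma, lifetime in yr, distance in kpc
```

Even free-streaming at $c$, such electrons cover well under a kiloparsec before cooling, which is why kpc-scale diffuse optical emission points to distributed re-acceleration.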
Two hotspots (namely 3C105 South and 3C227 West) showed extended optical emission in earlier VLT images presented in @mack09, whereas the other two (3C195 South and 3C227 East) were only tentatively detected, and the new VLT observations also aimed at confirming their NIR-optical emission.\ This paper is organized as follows: in Section 2 we present the observations and data analysis; results are reported in Section 3; spectral modeling of the best candidates is described in Section 4; the results are discussed in Section 5 and conclusions are drawn in Section 6.\ Throughout this paper, we assume the following cosmology: $H_{0} = 71\; {\rm km/s\, Mpc^{-1}}$, $\Omega_{\rm M} = 0.27$ and $\Omega_{\rm \Lambda} = 0.73$, in a flat Universe. The spectral index, $\alpha$, is defined as $S {\rm (\nu)} \propto \nu^{- \alpha}$.

Observations
============

Optical and near infrared observations
--------------------------------------

Optical and NIR observations of the hotspots were taken with the VLT in Paranal, Chile, under the observing programs 69.B-0544, 072.B-0360 (P.I. Prieto) and 084.B-0362 (P.I. Orienti). Observations of period 69 were acquired in 2001-2003 using the Infrared Spectrometer And Array Camera (ISAAC, NIR imaging). Observations of period 72 were acquired in 2003 November-December, using the FOcal Reducer and low dispersion Spectrograph 1 (FORS1, optical imaging). Observations of period 84 were acquired between 2009 November and 2010 March, using FORS2 and ISAAC (optical and NIR imaging, respectively). The observations performed in 2009 and 2010 are presented here for the first time, whereas those taken between 2001 and 2003 were already presented in @mack09 and [@mo12] and are here re-analyzed. Table \[optical\_log\] provides further details on the observing log.\ The reduction of individual exposures was carried out using standard tasks of the FORS imaging pipeline, executed under the ESOreflex environment [@Freudling13].
Standard reduction includes bias subtraction and sky flats to correct for field illumination and pixel-to-pixel sensitivity variations. The sky background was computed in regions free from sources and interpolated over the field of view. Individual exposures were then coadded with the [iraf]{} task [imcombine]{}, using bright sources as reference for the alignment. Astrometric correction, to compensate for systematics in the telescope pointing, was performed using the Two Micron All Sky Survey (2MASS) point-like source catalogs [@Skr06]. The number of stars used for the astrometric correction ranges from a minimum of 5 (for 3C105 South in Ks/J/H) up to 40 (3C195 South in R and B), resulting in a positional uncertainty within 0.5 arcsec in all cases.\ Finally, a precise estimate of the residual sky contamination was computed on the final stacked image, in a region close to the source of interest, to compensate for large-scale variations in the field of view introduced by the instrument pipeline. The standard deviation of the counts in the sky-selected region was used to estimate the photometric error due to the sky background.\ Although observations of standard star fields were foreseen, some of our targets missed the relevant calibrations in several bands (B, R, Ks, H, Js for 3C227 West; B, R, Ks for 3C227 East; B, R, Js for 3C105 South and Ks for 3C195 South). For those fields, we calibrated our observations using known sources in the observed field, exploiting the information of the Position and Proper Motions eXtended (PPMX) catalogue [@roser08 i.e. the only catalogue that contains non-saturated sources in our fields of view]. The standard deviation of the difference between the measured and tabulated magnitudes was used to estimate the error on the photometric zeropoint; this was combined with the photometric error determined above to compute the total uncertainties of our measurements (see Table \[optical\_log\]).
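The combination of the zeropoint scatter with the sky-background error described above can be sketched as follows; quadrature summation of the two independent terms is assumed here, since the text does not state the combination rule explicitly:

```python
import math

def total_photometric_error(sigma_zeropoint, sigma_sky):
    """Total magnitude uncertainty, assuming the zeropoint scatter and
    the sky-background error are independent and added in quadrature
    (an assumption; the combination rule is not stated in the text)."""
    return math.sqrt(sigma_zeropoint**2 + sigma_sky**2)

# e.g. a 0.03 mag zeropoint scatter with a 0.04 mag sky error:
print(round(total_photometric_error(0.03, 0.04), 3))  # 0.05
```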
  --------- -------------- ----------------------- -------------------- -------------------
  Hotspot   Date           Instrument, Band        Central wavelength   Photometric error
            (YYYY-MM-DD)                           ($\mu$m)             (mag)
  3C105 S   2001-08-20     [ISAAC, Ks]{}           2.16                 0.02
            2002-09-24     [ISAAC, H]{}            1.65                 0.04
            2009-11-03     [ISAAC, Js]{}           1.24                 0.17
            2003-11-26     [FORS1, R\_BESS]{}      0.657                0.86
            2009-11-25     [FORS2, b\_HIGH]{}      0.440                0.30
  3C195 S   2009-11-03     [ISAAC, Ks]{}           2.16                 0.16
            2003-01-21     [ISAAC, H]{}            1.65                 0.04
            2003-12-18     [FORS1, R\_BESS]{}      0.657                0.01
            2003-11-30     [FORS1, B\_BESS]{}      0.429                0.02
  3C227 E   2010-01-17     [ISAAC, Ks]{}           2.16                 0.13
            2003-12-18     [FORS1, R\_BESS]{}      0.657                0.28
            2003-12-18     [FORS1, B\_BESS]{}      0.429                0.24
  3C227 W   2001-04-18     [ISAAC, Ks]{}           2.16                 0.07
            2009-12-27     [ISAAC, H]{}            1.65                 0.14
            2009-12-27     [ISAAC, Js]{}           1.24                 0.11
            2010-02-13     [FORS2, R\_SPECIAL]{}   0.655                0.08
            2010-02-15     [FORS2, b\_HIGH]{}      0.440                0.21
  --------- -------------- ----------------------- -------------------- -------------------

\[optical\_log\]

X-ray observations
------------------

X-ray observations of our targets were publicly available in NASA's high-energy archive[^2]. Given the need for high angular resolution mapping, we considered only [*Chandra*]{} pointings, which allow us to resolve X-ray structures on sub-arcsecond scales. For consistency, we re-analyzed the archival [*Chandra*]{} observations using up-to-date calibration files. A log of the X-ray observations is reported in Table \[Xobs\]. The observations were taken with the ACIS-S array in very faint (VF) mode. For 3C105 and 3C227, the southern hotspot and the western hotspot complex, respectively, were placed near the aim point, on the S3 chip. Due to the angular extension of the radio source 3C227, the eastern hotspot fell on a different chip. The observation of 3C195 was centered on the X-ray core; however, the whole radio structure ($\lesssim$130 arcsec) fits on the S3 chip.
The X-ray data analysis was performed with the [*Chandra*]{} Interactive Analysis of Observations (CIAO) 4.9 software [@Fru06] using the calibration files of CALDB version 4.7.7. We ran the `chandra_repro` reprocessing script, which performs all the standard analysis steps. We checked and filtered the data for time intervals of background flares. For imaging purposes, the two observations of 3C227 were merged together. By default, the energy-dependent subpixel event repositioning (EDSER) algorithm, which improves the ACIS image quality, was applied to all datasets. We generated smoothed images at full pixel resolution (0.492 arcsec) and rebinned to pixel sizes of 0.296 and 0.123 arcsec. The spectral analysis was performed with Sherpa [@Free01]. We employed the cstat statistic [@cash79] together with the neldermead optimization method [@neldermead65]. The spectra were not rebinned and, if not specified otherwise, the background was modeled. Uncertainties are given at the 90 per cent confidence level. When the statistics were too low, we used PIMMS to convert the 0.5–7.0 keV net count rates into 1 keV flux densities.\

  ----------- ------- -------------- ----------
  Name        ObsID   Date           Livetime
                      (YYYY-MM-DD)   (ksec)
  3C105 S     9299    2007-05-12     8.1
  3C195 S     11501   2010-01-09     19.8
  3C227 E/W   6842    2006-01-15     29.8
              7265    2006-01-11     19.9
  ----------- ------- -------------- ----------

  : Log of the [*Chandra*]{} observations of the hotspots. Columns: 1-hotspot name; 2-[*Chandra*]{} observation ID; 3-observation date; 4-time on source after filtering for flaring background.

\[Xobs\]

Radio observations
------------------

We retrieved archival VLA data for the hotspots 3C105 South, 3C227 East and West, and 3C195 South.
Observations of 3C105 South at 4.8 and 8.4 GHz, and of 3C227 West at 8.4 GHz, were performed with the array in A-configuration and were centred on the hotspot itself, while observations at 4.8 GHz of 3C195 and 3C227 were performed with the array in B-configuration and were centred on the nucleus of the radio galaxy. In all the observations the absolute flux density scale was calibrated using the primary calibrator 3C286. The phase calibrators were 0424$+$020, 0730$-$116, and 0922$+$005 for 3C105, 3C195, and 3C227, respectively. The observations were performed with the historical VLA and the bandwidth was 50 MHz per intermediate frequency (IF), with the exception of the observations of 3C105 South and 3C227 West, which had bandwidths of 25 MHz and 12.5 MHz per IF, respectively. Calibration and data reduction were carried out following the standard procedures for the VLA implemented in the National Radio Astronomy Observatory (NRAO)'s Astronomical Image Processing System (AIPS) package. Final images were produced after a few phase-only self-calibration iterations, using the uniform weighting algorithm. Primary beam correction was applied at the end of the imaging process. The rms noise level on the image plane is negligible compared to the uncertainty of the flux density due to amplitude calibration errors which, in this case, are estimated to be $\sim$3 per cent. A log of the radio observations is reported in Table 3. In addition to the archival data, we obtained Jansky VLA observations at 22 GHz of the hotspots 3C227 West and East (project code 18A-087). These observations were performed on 2018 March 5 with the array in A-configuration. Details on the observations, data calibration and imaging are discussed in @mo20.

  --------- ------- ------------------ ------- ---------- ------------ ------- ---------------------------------
  Name      Freq.   Beam               PA      rms        Date         Code    Offset from the pointing centre
            GHz     arcsec             deg     mJy/beam
  3C105 S   4.8     0.38$\times$0.32   57      0.05       19-07-2003   AM772   on source
  3C105 S   8.4     0.23$\times$0.12   38      0.05       19-07-2003   AM772   on source
  3C195 S   8.4     1.30$\times$0.80   $-$16   0.11       17-01-2004   AM772   on source
  3C227 E   4.8     1.25$\times$1.19   37      0.08       13-07-1986   AS264   110
  3C227 W   4.8     1.25$\times$1.19   37      0.08       13-07-1986   AS264   110
  3C227 W   8.4     0.38$\times$0.23   48      0.03       25-05-1990   AB534   on source
  --------- ------- ------------------ ------- ---------- ------------ ------- ---------------------------------

\[radio-data\]

Results
=======

Image registration and flux measurements
----------------------------------------

To construct the SED of the individual hotspot components, the flux density at the various wavelengths must be measured in the same region, while avoiding contamination from unrelated sources and components. Radio and optical/NIR images were aligned with respect to each other using reference sources from the 2MASS catalogue (see Sec. 2.1). Figure \[radio-optical-images\] shows the optical B-band image of each hotspot with superimposed radio contours.\ We performed the astrometric correction of the X-ray images by comparing the X-ray and radio positions of the cores, and verified the accuracy of the registration (within 0.1 arcsec) using sources with infrared counterparts in 2MASS.\ We defined a common region of integration for the radio, optical/NIR and X-ray images, following the contour of the radio emission that corresponds to 5 per cent of the radio peak flux of each hotspot component. NIR and optical images with the regions of integration and [*Chandra*]{} X-ray images with overlaid radio contours are presented in Figs. \[3c105\_figure\] to \[3c227west\_figure\]. Radio, NIR and optical fluxes for each hotspot component are reported in Table \[fluxes\]. There are discrepancies in some bands between our measurements and those reported by @mack09 and @mo12 for 3C105 South and 3C195 South.
The discrepancy boils down to a few facts: (a) in this work we have performed a local evaluation of the sky background, as the sky residuals in the background-subtracted product of the pipeline were not negligible; (b) we have adopted polygonal regions for the flux integration, following a fixed iso-contour level, which differs from what was used in the past; and (c) for the exposures without standard star observations on the same night, we used field stars for the calibration, whereas the previous studies used standard star observations acquired on different nights, with probably different atmospheric conditions.\ For 3C227 West, the X-ray counts were sufficient to obtain spectra of each of the two components of the hotspot complex. An absorbed power-law model with the column density fixed to the Galactic value [$\rm N_H=2\times10^{20}$ cm$^{-2}$, @nhref] was used to simultaneously fit the spectra of the two [*Chandra*]{} observations. The best fit values for each spectrum are reported in Table \[Xrayspec\]. The 0.5–7.0 keV net count rates of 3C105 South, 3C195 South and 3C227 East were converted into unabsorbed flux densities at 1 keV assuming an absorbed power-law model with photon index $\Gamma=$ 1.8 (in accordance with the best fit model for 3C227 West) and the column density fixed to the Galactic value (Table \[Xrayspec\]).
Taking into account differences in the selected regions, our results are in broad agreement with those reported for the targets in the literature [@hardcastle07; @massaro11; @mo12; @mingo17].\

  ------------- -------------- -------------- --------------------- ---------------------- --------------------- ------------------------ ------------------------
  Name          4.8 GHz        8.4 GHz        Ks                    H                      Js                    R                        B
                (mJy)          (mJy)          ($\mu$Jy)             ($\mu$Jy)              ($\mu$Jy)             ($\mu$Jy)                ($\mu$Jy)
  3C105 S1      26.4$\pm$0.7   18.4$\pm$0.5   2.6$\pm0.2$           2.8$\pm0.2$            1.1$^{+0.2}_{-0.2}$   0.3$^{+0.4}_{-0.2}$      $<$0.10
  3C105 S2      540$\pm$16     372$\pm$11     14.4$\pm0.4$          13.5$^{+0.7}_{-0.6}$   5.8$^{+1.0}_{-0.9}$   1.0$^{+1.2}_{-0.5}$      0.2$\pm$0.1
  3C105 S3      403$\pm$12     260$\pm$8      23.4$\pm$0.5          22.7$\pm$0.9           8.5$^{+1.5}_{-1.3}$   1.3$^{+1.6}_{-0.7}$      0.3$\pm$0.1
  3C105 S Ext   275$\pm$8      130$\pm$4      17$\pm2$              17$\pm$2               6.4$^{+4}_{-3}$       1.4$^{+5.0}_{-1.4}$      0.9$^{+0.5}_{-0.3}$
  3C195 S       -              94$\pm$3       3.3$^{+0.8}_{-0.7}$   $<$0.46                -                     0.26$^{+0.01}_{-0.02}$   0.14$^{+0.01}_{-0.01}$
  3C227 E1      102$\pm$3      -              $<$1.10               -                      -                     0.19$^{+0.07}_{-0.06}$   $<$0.14
  3C227 E2      84$\pm$3       -              $<$1.10               -                      -                     0.4$^{+0.1}_{-0.1}$      0.5$^{+0.2}_{-0.1}$
  3C227 W1      90$\pm$3       63$\pm$2       11$^{+2}_{-1}$        9.5$^{+0.6}_{-1.3}$    5.3$^{+0.7}_{-0.6}$   1.2$\pm0.1$              1.0$\pm$0.2
  3C227 W2      28.3$\pm$0.8   15.7$\pm$0.5   9.5$\pm$1.3           4.3$\pm$0.8            2.9$\pm$0.4           0.44$^{+0.07}_{-0.06}$   0.4$\pm$0.1
  ------------- -------------- -------------- --------------------- ---------------------- --------------------- ------------------------ ------------------------

\[fluxes\]

  ------------- ------------- ----------------------- ----------------------------------
  Component     $\Gamma$      N$_{H,Gal}$             F$_{1keV}$
                              cm$^{-2}$               erg cm$^{-2}$ s$^{-1}$
  3C105 S1      1.8(f)        10.4$\times$10$^{20}$   (5.2$\pm$1.3)$\times$10$^{-15}$
  3C105 S2+S3   1.8(f)        ”                       (2.1$\pm$0.8)$\times$10$^{-15}$
  3C105 S Ext   1.8(f)        ”                       $<$0.9$\times$10$^{-15}$(\*)
  3C195 S       1.8(f)        7.8$\times$10$^{20}$    (1.3$\pm$0.4)$\times$10$^{-15}$
  3C227 E       1.8(f)        2$\times$10$^{20}$      (1.0$\pm$0.2)$\times$10$^{-15}$
  3C227 W1      1.8$\pm$0.3   ”                       (4.0$\pm$0.7)$\times$10$^{-15}$
  3C227 W2      1.8$\pm$0.5   ”                       (2.1$\pm$0.7)$\times$10$^{-15}$
  ------------- ------------- ----------------------- ----------------------------------

\[Xrayspec\]

Notes on individual sources
---------------------------

### 3C105 South

The hotspot complex consists of a jet knot (S1), observed from radio to X-rays, and a double hotspot (see Fig. \[3c105\_figure\]). The detection of NIR emission from 3C105 South was first reported in @mack09. @mo12 presented a radio-to-X-ray study of the compact features of the hotspot complex, while here we focus on the diffuse component. The primary hotspot (S2) is brighter than the secondary hotspot (S3) in the radio, while it becomes the faintest in the NIR and optical (Fig. \[3c105\_figure\]), suggesting that the energy distributions of the radiating particles in the two hotspots are different. In the [*Chandra*]{} observations the jet knot S1 is clearly observed, while X-ray emission from S2 and S3 is only marginally detected (6.6$\pm$2.6 net counts in the 0.5–7 keV energy range). The three compact components of the hotspot complex are enshrouded by diffuse emission (3C105 S Ext) that could not be well characterized in @mo12. The NIR/optical diffuse emission has been obtained by subtracting the emission of the three main components from the total flux: with the new J- and B-band data, the diffuse emission is now detected in all radio-to-optical bands (see Table \[fluxes\]). The extraction regions of the NIR/optical compact emission were defined based on the radio images, with the limit being (conservatively) fixed at 5 per cent of the radio peak of each component. While this choice was dictated by the need to have common regions for the SED, it remains reasonable as long as the radio and NIR/optical peaks of the three features are spatially coincident. The extent of the diffuse emission cannot be easily determined because of its irregular spatial distribution.
To get an indicative estimate, we extracted the brightness profile of the H-band emission from a rectangular region (6$\times$0.8 arcsec) covering the S3 component and its downstream region. The profile is shown in Fig. \[3c105profile\], together with the level of the background and the profile of a point-like source in the field, rescaled to the peak of S3. Emission above the 3$\times$rms level is detected up to 2.5 arcsec ($\sim$4 kpc) from the peak. We adopted this estimate as a reference value for the projected size of the diffuse emission.\ No significant diffuse X-ray emission is observed in the hotspot complex. A 3$\sigma$ upper limit was derived from the counts in the total region, excluding the three compact components (Table \[Xrayspec\]).

### 3C195 South

A tentative detection of NIR emission from 3C195 South was reported by @mack09. The new VLT observations confirm the emission from this hotspot in the K band, while only an upper limit is obtained in the H band. In the optical window, this hotspot is clearly detected in the R and B bands and, within the extraction region, the emission is extended (Fig. \[3c195south\_figure\]).\ In X-rays, weak, compact emission is detected by [*Chandra*]{} at the $\sim$3$\sigma$ level, in agreement with the results obtained by @mingo17. Because of the faintness of the X-ray flux, we cannot be conclusive about the slight offset between the radio and X-ray centroids (Fig. \[3c195south\_figure\]). There is no evidence of diffuse X-ray emission around the compact component.

### 3C227 East

Emission from 3C227 East is clearly seen in the optical R and B bands [see Fig. \[3c227east\_figure\] and @mack09], while with the new VLT pointing we only measured an upper limit to the K-band flux. The presence of foreground stars in the hotspot complex hampers an accurate determination of the optical flux for the hotspot components.
To avoid flux contamination from these stars, we selected two extraction sub-regions (labelled E1 and E2, see Fig. \[3c227east\_figure\]). The fluxes are reported in Table \[fluxes\]. We checked that no galaxy is reported within E1 and E2 in the Sloan Digital Sky Survey [SDSS, @sdss12], which, in this field, detected galaxies with optical fluxes similar to, or fainter than, ours. We then estimated the probability that the optical emission in the two regions is due to unassociated background galaxies. Using the galaxy number–apparent magnitude relation derived from deep optical surveys [see e.g. @madau00], the expected number of galaxies within each area (approximated as a circle of $\sim$1.3 arcsec radius) at the measured R and B fluxes is $\sim$0.008, and the probability of having one galaxy within E1 or E2 is $<$1 per cent. In addition, extended NIR emission, possibly spatially overlapping with the faint X-ray flux (see below), was reported by @mack09 based on previous VLT observations in 2002, giving further support to a non-thermal origin of the NIR-optical component. The presence of foreground stars also precludes the detection of possible extended optical emission enshrouding the main hotspot components.\ [*Chandra*]{} observations clearly detect X-ray emission from the hotspot (Fig. \[3c227east\_Xray\] and Orienti et al. 2020), in agreement with previous work by @hardcastle07 and @mingo17. No significant diffuse X-ray emission is observed in the hotspot complex.

### 3C227 West

The hotspot complex of 3C227 West shows a primary eastern component (W1) and a secondary one (W2), located $\sim$10 arcsec west of W1. NIR K-band emission from both hotspots of 3C227 West was reported by @mack09. Our multi-band VLT observations detect the hotspots for the first time in the NIR H and Js bands, and confirm the previous detections in the optical R and B bands (Fig. \[3c227west\_figure\]).
Because of a foreground star in the proximity of the secondary hotspot, it was necessary to slightly modify the extraction region of the optical flux. For this reason, the measured NIR-optical flux should be considered a lower limit. In both hotspots, the NIR/optical emission is extended, with a size of about 1.5$\times$2 arcsec$^{2}$ (2.4$\times$3.2 kpc$^{2}$) for W1 and about 1$\times$2 arcsec$^{2}$ (1.6$\times$3.2 kpc$^{2}$) for W2. In particular, in the H-band image the structure of W1 appears resolved both in the SE–NW and NE–SW directions. Unlike in 3C105 South, we did not observe any diffuse NIR/optical bridge connecting the primary and secondary hotspots.\ [*Chandra*]{} observations detected significant X-ray emission from both the primary and secondary hotspots. As first reported in @hardcastle07, a displacement of 0.8$\pm$0.1 arcsec (1.3$\pm$0.2 kpc) to the North is observed between the X-ray emission and the radio-to-optical emission of the primary hotspot, with the former occurring upstream, towards the nucleus, and likely marking regions of current acceleration (Figure \[3c227west\_figure\]). Note that, as discussed by @hardcastle07, the fact that the emission of W1 is spatially resolved disfavors the possibility of a by-chance alignment of the radio and X-ray emission. In the 22 GHz image, W1 is resolved into two arc-shaped components, with the northern one apparently leaning against the edge of the X-ray emission [see Fig. \[3c227\_22GHz\_Xray\] and @mo20]. The total 22 GHz flux is 27.4$\pm$0.8 mJy.\ The association of the X-ray emission with the secondary hotspot W2 is more uncertain. The X-ray emission is co-spatial but offset ($\sim$1.4 arcsec to the south-east) with respect to the peak of the radio emission at 4.8 GHz.
The morphologies in the X-ray and 8.4 GHz maps do not match: the radio emission is sandwiched between two X-ray components, one at about 1.7 arcsec (2.7 kpc) to the East of the radio peak and the other (7 counts between 0.5–7 keV) to the North. In particular, the latter could be associated with the foreground star and therefore was not considered in the estimate of the X-ray flux of W2.\ The primary hotspot, W1, is the brightest in all energy bands: the W1 to W2 flux ratio is S$_{\it W1}$/S$_{\it W2}$ $\sim$3.5 in radio, $\sim$1.2 in NIR, $\sim$2.5 in optical, and $\gtrsim$4 in X-rays.

Broad-band spectral energy distribution
=======================================

The new VLT observations, together with the archival data, allow us to model the SEDs of our targets. The NIR-optical data are important to determine the high-energy part of the synchrotron spectrum, check for changes in the spectral slope with respect to the radio band, and constrain parameters such as the cut-off frequency (i.e. the synchrotron frequency of the electrons with the largest energy injected in the post-shock region) and the break frequency. Here, we model for the first time the diffuse emission of 3C105 South, 3C105 S Ext, whereas SED modeling of its three compact components was presented in @mo12. The SED of 3C195 South in @mack09 did not extend to the X-ray data, which we have now included in the modeling. In 3C227 West we focus on the primary hotspot 3C227 W1, as the secondary one has a less certain multi-band association and suffers from flux contamination in the NIR-optical band. A first analysis of this hotspot in radio and X-rays was discussed in @hardcastle07. @mack09 provided the first SED incorporating high angular resolution NIR/optical data. This work improves on it with further high angular resolution IR and optical VLT data.
Furthermore, we exploited the spatial and flux information obtained from the high-angular resolution and high-sensitivity JVLA observations of W1 at 22 GHz, which are presented in a companion paper [@mo20] and summarized here (Sec. 2.3). We did not model 3C227 East, as the presence of foreground stars in its hotspot complex precludes an accurate estimate of the optical flux from this region. We used a leptonic, synchrotron and IC model to reproduce the emission. For simplicity, we assumed a spherical shape for the emitting region. In case of a different morphology of the observed emission (cylinder, ellipsoid, etc.), we calculated the radius $R$ of a sphere with the same volume, uniformly filled by relativistic plasma (i.e. a filling factor of one was assumed) and magnetic field $B$. We allowed the value of $R$ to vary within the measured size of the radio emission of each component. The advance speeds measured for hotspots range between 0.01$c$ and 0.3$c$ [@PC03; @Nag06; @An12]; therefore we began by assuming a subrelativistic plasma flow [$v=$0.05$c$, see also @kap19], i.e. values of the bulk Lorentz factor $\Gamma_{bulk}$ close to 1. The electrons' energy distribution (EED) was described by a single power law or a broken power law: $$N(\gamma)= \begin{cases} k \gamma^{-p_1} & {\rm for}\ \gamma_{min}\leq \gamma<\gamma_{break} \\ k \gamma_{break}^{p_2-p_1}\, \gamma^{-p_2} & {\rm for}\ \gamma_{break}\leq \gamma<\gamma_{max}\end{cases}$$ where $\gamma_{min}$ and $\gamma_{max}$ are the minimum and maximum Lorentz factors, $\gamma_{break}$ is the Lorentz factor at the energy break (corresponding to the synchrotron break frequency), and $p_1$, $p_2$ are the EED spectral indexes below and above the break. If not specified otherwise, $\gamma_{min}$ was set to 100.
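As an illustration, the EED above can be rendered numerically; this is a minimal Python sketch of the broken power law (the normalization `k` and the example parameter values are arbitrary placeholders, not the fitted ones), together with the standard synchrotron relation $p = 2\alpha + 1$ linking the EED slope to the optically thin radio spectral index:

```python
def eed(gamma, k=1.0, p1=2.6, p2=3.6, g_min=100.0, g_break=3e3, g_max=2e6):
    """Broken power-law electron energy distribution N(gamma).

    The factor g_break**(p2 - p1) makes the two branches continuous
    at gamma = g_break. Returns 0 outside [g_min, g_max).
    Parameter values here are illustrative only.
    """
    if not (g_min <= gamma < g_max):
        return 0.0
    if gamma < g_break:
        return k * gamma ** (-p1)
    return k * g_break ** (p2 - p1) * gamma ** (-p2)

def p_from_alpha(alpha):
    # Standard relation between the EED slope p and the optically thin
    # radio spectral index alpha, for S(nu) ∝ nu^-alpha.
    return 2.0 * alpha + 1.0

# Continuity at the break (difference vanishes up to rounding):
lo = eed(3e3 * (1 - 1e-9))
hi = eed(3e3)
print(abs(lo - hi) / hi < 1e-6)  # True

print(p_from_alpha(0.75))  # 2.5, the flatter p_1 discussed in Sec. 4
```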
The limited sampling of the SED did not allow us to identify breaks or changes of slope in the synchrotron spectra of our hotspots between the radio and NIR/optical bands; hence a simple power law was typically adopted (i.e. $\gamma_{break}=\gamma_{max}$) and we discuss departures from this initial assumption. The electrons radiate via the synchrotron mechanism; the locally produced synchrotron photons and the photons of the CMB provide the seed photons for the IC mechanism. In the modeling, we began by assuming energy equipartition between the particles and the magnetic field.\ The values of the model parameters are reported in Table \[sedtab\]. Note that our goal here was to achieve a broad evaluation of the models. The values in Table \[sedtab\] should be considered indicative since, given the limited datasets, we did not perform a fit to the data.\ Only the SED of 3C105 S Ext (i.e. the diffuse emission), for which we only measured an upper limit in X-rays, could be successfully modeled assuming an equipartition magnetic field (Model 1). For this target, the X-ray upper limit allowed us to estimate the minimum $B$ (assuming no beaming effects) below which the observed radio-to-optical synchrotron emission would imply a detectable level of IC flux in X-rays: $B_{min}\gtrsim$7 $\mu$G (see Figure \[sed\_models\]). The equipartition magnetic field estimated from the integrated synchrotron spectrum, $B_{eq}=$42 $\mu$G, is compatible with this limit. For $B_{eq}=$42 $\mu$G, the detected NIR/optical diffuse emission is produced by particles with $\gamma\approx 10^5-10^6$ and radiative ages of the order of $\approx$17 kyr. Note that, since the emitting region is relatively large ($R\sim$5 kpc), the energy density of the locally produced synchrotron photons is lower than that of the CMB, hence the IC/CMB emission dominates over the SSC one.\ We explored the possibility that a spectral break is present between the radio and NIR bands.
In Figure \[sed\_models\] (upper right panel), we show that for $p_1=$2.6 and $p_2=p_1+1$, synchrotron curves with $\gamma_{break}$ smaller than $\sim 2\times 10^5$ underestimate the NIR-optical fluxes by a factor $\gtrsim$3.5. This holds true even allowing for a flatter spectrum, $p_1=$2.5 [corresponding to the radio spectral index of the full hotspot region, $\alpha=0.75$, in @mack09] and $B\sim$35 $\mu$G.\ In the other two hotspots, 3C195 S and 3C227 W1, the NIR-optical emission is resolved but compact, not diffuse as one would expect from a post-shock region. For the assumed bulk motion (0.05$c$), the IC emission predicted under the equipartition assumption (Model 1) significantly underestimates the observed X-ray fluxes. If we relax the equipartition condition, the X-rays can be ascribed to IC assuming ratios of the energy density of the particles to that of the magnetic field ($U_e/U_B$) larger than $\gtrsim10^3$. In this scenario, the dominant contribution in the X-ray band may be either IC/CMB or SSC (see Figure \[sed\_models\]), depending on the volume of the region. For example, in 3C195 S the relative weight of the two contributions is reversed going from the maximum value of $R$ allowed by the radio measurements ($R=$2.8 kpc, Model 2 in Table \[sedtab\]), with IC/CMB$>$SSC, to the assumption of a more compact emitting region, $\sim$1 kpc (Model 3 in Table \[sedtab\]). This shows that $R$ is a key parameter of the modeling. In X-rays we are limited by the resolving power of the current instruments.
However, low-frequency radio observations can now probe the plasma structure down to hundreds or even tens of parsecs for the closest targets [see @mo20 and the JVLA 22 GHz observation of 3C227 W1].\ A high IC/CMB X-ray flux can be obtained if the plasma in the hotspot is still moving relativistically, with $\Gamma_{bulk}\sim$3–4 ($\approx$0.94$c$–0.97$c$), and is seen at moderate or small inclination angles, $\theta\lesssim$20[$^\circ$]{} (Models 4 and 3 in Table \[sedtab\] for 3C195 S and 3C227 W1, respectively). Indeed, neither the scales at which deceleration takes place nor the bulk flow speeds are known in powerful jets, which could still be mildly relativistic [see e.g. @MH09] close to their termination point. However, the symmetrical large-scale radio morphology of the two radio galaxies does not support small viewing angles, unless local deviations of the plasma flow from the direction of the jet's main axis are invoked.

  ------------- ------ -------- ------------------------------------------------ ------------- ----------------- ---------- ---------------
                $R$    $B$      $\gamma_{min}$/$\gamma_{max}$/$\gamma_{break}$   $p_1$/$p_2$   $\Gamma_{bulk}$   $\theta$   (U$_B$/U$_e$)
                kpc    $\mu$G                                                                                    deg.
  3C105 S Ext
  Model 1       4.9    42       100/8e5/–                                        2.6/–         1.0               45         1.0
  Model 2       4.9    7        100/2e6/–                                        2.6/–         1.0               45         0.0013
  3C195 S
  Model 1       2.8    76       100/1.e6/–                                       3.05/–        1.0               45         1.0
  Model 2       2.8    10       100/2.0e6/–                                      3.05/–        1.0               45         2.1e-3
  Model 3       1.0    13.5     100/1.7e6/3.e3                                   2.05/3.05     1.0               45         2.3e-4
  Model 4       1.0    53       100/7e5/–                                        3.05/–        3.0               18.0       1.0
  3C227 W1
  Model 1       1.6    72       100/9e5/–                                        2.6/–         1.0               45.0       1.0
  Model 2       1.6    2.1      100/5e6/1.5e6                                    2.4/3.4       1.0               45.0       3.1e-6
  Model 3       1.6    13       100/1.5e6/2e5                                    2.4/3.4       4.0               18.0       0.15
  ------------- ------ -------- ------------------------------------------------ ------------- ----------------- ---------- ---------------

\[sedtab\]

Discussion
==========

Low-power hotspots have proved to be optimal targets for studying the synchrotron emission from the most highly energised particles accelerated in the flow: observations in the NIR/optical bands of selected samples have reached detection rates of up to 70 per cent [@prieto02; @brunetti03; @mack09].
The theoretical explanation is that relativistic particles accelerated in low-power hotspots, which likely have lower magnetic fields, have longer radiative lifetimes and a $\nu_{break}$ shifted to higher frequencies (NIR/optical bands) compared with those of high-power hotspots [@brunetti03], which are typically in the millimeter range [e.g. @meise97]. In about 80 per cent of the hotspots in @mack09 detected in the VLT observations (3C105 South, 3C195 South, 3C227 West, 3C445 North and 3C445 South), the NIR/optical synchrotron emission either displays compact components surrounded by diffuse emission or an extended structure [see also @prieto02; @mack09; @mo12].

[*3C105 S ext –*]{} The hotspot 3C105 South falls in the first category. In this source, the detection of the optical counterparts of both the primary and secondary hotspots (S2 and S3) identifies two main sites of particle acceleration, with the secondary component likely being produced by the impact of the outflow from the primary upon the cocoon wall [@mo12]. Synchrotron NIR/optical emission enshrouds both components. Such emission extends at least $\sim$4 kpc (projected size, in the H band) to the West of the secondary hotspot (S3), as also seen in the radio structure [see also Figure 1 in @mo12]. If this area coincides with the post-shock region, the detection of optical-NIR emission is somewhat surprising in the scenario of a single acceleration episode. In fact, assuming $B_{eq}$, the electrons responsible for the optical-NIR emission have $\gamma>10^5$ and estimated radiative ages of $\approx 10^3$ yr for the compact regions and $\approx 10^4$ yr for the diffuse one [see also @mack09; @mo12]. Even assuming the optimistic scenario of ballistic streaming of the electrons, for the longest cooling times the particles would cover $\sim$3 kpc, a distance that is barely consistent with the [*projected*]{} extension of the putative post-shock region.
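These order-of-magnitude estimates can be reproduced with the textbook synchrotron formulae; the sketch below assumes synchrotron-only losses, the characteristic-frequency approximation $\nu_c \approx 4.2\times10^6\,(B/{\rm G})\,\gamma^2$ Hz, and the H band as the reference frequency, so only agreement within a factor of $\sim$2 with the ages quoted above is claimed:

```python
import math

# CGS constants
M_E = 9.10938e-28      # electron mass [g]
C = 2.99792458e10      # speed of light [cm/s]
SIGMA_T = 6.65246e-25  # Thomson cross section [cm^2]
YR = 3.156e7           # seconds per year
KPC = 3.0857e21        # cm per kpc

def gamma_at(nu_hz, b_gauss):
    """Lorentz factor of electrons whose characteristic synchrotron
    frequency is nu ~ 4.2e6 * B * gamma^2 Hz (pitch angle ~ 90 deg)."""
    return math.sqrt(nu_hz / (4.2e6 * b_gauss))

def t_cool_yr(gamma, b_gauss):
    """Synchrotron cooling time t = 3 m_e c / (4 sigma_T u_B gamma),
    neglecting IC losses (subdominant here, cf. Sec. 4)."""
    u_b = b_gauss**2 / (8.0 * math.pi)  # magnetic energy density
    return 3.0 * M_E * C / (4.0 * SIGMA_T * u_b * gamma) / YR

# Diffuse region of 3C105 S: B_eq = 42 muG, H-band emission (~1.8e14 Hz)
b_ext = 42e-6
g = gamma_at(1.8e14, b_ext)     # ~1e6
age = t_cool_yr(g, b_ext)       # ~1e4 yr, as quoted for the diffuse region
reach_kpc = C * age * YR / KPC  # ballistic streaming distance, a few kpc
print(f"gamma ~ {g:.1e}, age ~ {age:.1e} yr, ballistic reach ~ {reach_kpc:.1f} kpc")
```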
Moreover, a random (tangled) $B$ field would further increase the path of the electrons in the region, thus making the tension with their radiative lifetimes even stronger. One possibility is that the electrons, accelerated at the shock front, stream along the magnetic field lines. If so, then (i) the particles are no longer accelerated after leaving the shock region and (ii) the electrons should diffuse (guided by the $B$ topology) on a time shorter than their radiative cooling time, $\tau_{diff}\lesssim \tau_{rad}$ [see e.g. Sec. 4 in @meise89]. The natural implication of (i) is that the $\gamma_{break}$ of the electrons producing the diffuse emission ($\gamma_{break,ext}$) cannot be greater than that in the compact regions S2 and S3 ($\gamma_{break,comp}$), and this translates into a constraint on the magnetic field in the diffuse emission region ($B_{ext}$): $$B_{ext}\gtrsim \nu_{break,ext}\times \frac{B_{comp}}{\nu_{break,comp}}\,\,{\rm G} \label{B1}$$ where $B_{comp}$ is the magnetic field in the compact regions in G, and $\nu_{break,ext}$ and $\nu_{break,comp}$ are the synchrotron break frequencies of the diffuse and compact regions, respectively. A second constraint is obtained from (ii), for dominant synchrotron losses (an assumption justified by the results of the SED modeling): $$B_{ext}\lesssim \left(\frac{2.1\times10^{12} c}{ L_{obs}}\right)^{\frac{2}{3}}\left(\frac{1}{\nu_{break,ext}}\right)^{\frac{1}{3}}\left(\frac{\lambda_{mfp}}{L_{obs}}\right)^{\frac{2}{3}}\,\,{\rm G} \label{B2}$$ where $\lambda_{mfp}$ is the mean free path of the electrons at $\gamma_{break,ext}$ (for example, the bending scale of $B$) and $L_{obs}$ is the linear size of the diffuse emitting region (i.e. the post-shock region), both in cm. Modeling of the SED of 3C105 S Ext sets a minimum value for $\nu_{break,ext}$ of around $\sim10^{12}$ Hz ($\gamma_{break,ext}\sim10^{5}$; see Sec. 4 and Figure \[sed\_models\], upper right panel).
The values of $B_{comp}$ and $\nu_{break,comp}$ are taken from the modeling of the primary and secondary hotspots, S2 and S3, in @mo12: $B_{comp}=270-290$ $\mu$G and $\nu_{break,comp}=(0.75-1.5)\times 10^{13}$ Hz. The size of the region $L_{obs}$ ranges from $\sim$4 to $\sim$10 kpc to account for projection effects. In Figure \[B\_lambda\], $B_{ext}$ is plotted as a function of the $\lambda_{mfp}$ to $L_{obs}$ ratio. As a third, less constraining condition, we included in the plot the lower limit on $B_{ext}$, $\gtrsim$7 $\mu$G, inferred from the non-detection in X-rays (see Sec. 4 and Figure \[sed\_models\], upper left panel). The conditions on $B_{ext}$ are fulfilled for $\lambda_{mfp}/L_{obs}\gtrsim$0.01–0.1. For the considered range of $L_{obs}$, this corresponds to $\lambda_{mfp}\gtrsim$40–100 pc. For reference, these lower limits on $\lambda_{mfp}$ would be compatible with the sizes of the ordered component of the magnetic field inferred in the hotspots of 3C227 and 3C445 from the VLA observations at 22 GHz [@mo20]. For larger required values of $\lambda_{mfp}$, instead, the scenario of particles streaming along the magnetic field becomes challenging. High-resolution measurements of the polarized radio component of 3C105 S ext could help to probe the $B$ field topology at these scales.\ Stochastic (re-)acceleration of the particles outside the main shock site is the other possibility. One can speculate that turbulence, generated via dissipation of the jet’s kinetic energy, plays a role, re-energizing particles up to a maximum $\gamma\sim 10^6$, which produce the NIR/optical emission, but not beyond (hence the non-detection in X-rays). In the other two hotspots, 3C195 S and 3C227 W1, the sharp drop of the optical fluxes in the SEDs clearly rules out a single radio-to-X-ray component, even allowing for spectral breaks. The modeling gives hints about the nature of the second radiative component that generates the X-ray emission. 
[*3C195 S –*]{} The SED of 3C195 S is remarkable because of its steep radio-to-NIR spectrum ($\alpha>$1.0). If the spectrum extends below the GHz frequencies without any change of slope, a dominant IC/CMB component at high energies constrains $\gamma_{min}$ to values $\gtrsim$50–100, in order not to exceed the observed optical flux. For the same reason, if we reduce the volume of the emitting region and the synchrotron radiative field becomes dominant over the CMB one (see Model 3 in Table \[sedtab\] and Figure \[sed\_models\]), either a $\gamma_{min}$ greater than a few thousand or a spectral break is required. If due to radiative cooling, a break at such low energies would imply an extremely old electron population in a hotspot that has switched off, thus excluding rapid jet intermittency. Alternatively, it could reflect an unusual initial EED shape inherent to the acceleration mechanism. Interestingly, in a number of hotspots [@leahy89; @lazio06; @godfrey09; @mckean16], evidence for a flattening of the radio spectrum at low frequencies, in the GHz to tens of MHz band, has been found. This could be caused either by a turn-over of the EED [e.g. @leahy89] or by the transition between acceleration processes [@stawarz07]. Observations at low (MHz) radio frequencies sampling the low-energy tail of the synchrotron spectrum [see e.g. @hardwood16; @hardwood17] can help discriminate among the different scenarios. The field of 3C195 has been observed at 150 MHz by the Giant Metrewave Radio Telescope (GMRT) as part of the TIFR GMRT Sky Survey (TGSS) project. We retrieved and inspected the 150 MHz image of our target in the TGSS Alternative Data Release [TGSS ADR[^3]; @intema17]. Unfortunately, the angular resolution of the survey (25$''\times$25$''$) is not sufficient to reliably de-blend the hotspot emission from the lobe component. 
A comparison of the X-ray photon index with the steep radio-optical spectral index is a further test of the (single-zone) IC scenario: for example, similar spectral indices in the two bands would argue in favour of the IC/CMB emission. However, deep X-ray observations are necessary to measure the photon index with a sufficient level of precision. [*3C227 W1 –*]{} A single-zone radiating model was initially applied also to the SED of 3C227 W1. As for 3C195, a large particle dominance is required if the X-ray emission is of IC origin and not relativistically boosted [$U_B/U_e\lesssim 10^{-5}$, see Table \[sedtab\] and @hardcastle07]. For this model, the radio-to-optical spectrum and the best-fit value of the X-ray spectral index suggest that the total IC emission peaks above 10 keV, at $\approx 10^{24}$ Hz (see Figure \[sed\_models\]). Therefore, we looked for observations of the source in the hard X-ray to $\gamma$-ray band. The radio galaxy 3C227 was pointed twice by the [*NuSTAR*]{} mission [the Nuclear Spectroscopic Telescope Array, @harrison13], which images the sky in the hard X-ray (3–80 keV) band. We retrieved and analyzed the public data. A point source is clearly visible in the [*NuSTAR*]{} image at the location of the AGN. No significant signal is detected at the hotspot position and, given the instrument PSF, flux contamination from the core results in a relatively shallow upper limit (6$\times$10$^{-14}$ [erg cm$^{-2}$ s$^{-1}$]{}), which hampers a meaningful test of the model. In the MeV-GeV band, the sensitivity limit[^4] of the [*Fermi*]{} Large Area Telescope [LAT @atwood2009] is $>2-3\times 10^{-13}$ [erg cm$^{-2}$ s$^{-1}$]{}. Thus, the predicted IC $\gamma$-ray flux is at the LAT detection threshold at best. In addition, the LAT PSF (0.8[$^\circ$]{} at 1 GeV) would not allow us to disentangle the hotspot $\gamma$-ray flux from the possible AGN contribution. 
For 3C227 W1, the sensitivity of the current instruments therefore does not allow a test of the IC model in the hard X-ray and $\gamma$-ray bands. However, the IC scenario also faces other observational issues. In fact, although the multi-band emission of 3C227 W1 is broadly co-spatial, both hotspots of 3C227 West show a displacement of $\sim$1.3$-$2.7 kpc between the X-ray and radio-to-optical centroids, with the former occurring upstream, towards the nucleus [see also @hardcastle07]. Similar misalignments are observed in other hotspots, like Pictor A West [@hardcastle16] and 3C445 South [@perlman10; @mo12]. As discussed in these works, such offsets can hardly be reconciled with standard SSC and unbeamed/beamed IC-CMB one-zone models [though see recent developments in simulations modeling the non-thermal emission of relativistic flows, e.g. @vaidya2018]. A displacement between the X-ray and radio emission is instead expected in the model proposed by @gk04 of a decelerating flow, in which freshly accelerated relativistic electrons in the fast upstream region of the flow upscatter to high energies the radio photons produced by radiatively cooled electrons in the slower, downstream region. However, this model involves boosting of the high-energy emission and fine-tuned jet parameters (e.g. inclination, bulk motion and location of the radiating regions), while offsets are frequently observed.\ Alternatively, the X-ray emission may be explained in terms of synchrotron emission from a second population of relativistic electrons, possibly produced in an acceleration event spatially and/or temporally separated from that responsible for the radio-to-optical emission. In the western hotspot of Pictor A, high-resolution radio imaging has unveiled structures with maximum estimated linear scales of $\sim$16 pc, which could be the sites where the X-ray emission is produced [@tingay08]. 
The discovery of X-ray flux variability on month-to-year timescales [@hardcastle16] further supports the hypothesis of the X-ray emission originating in compact (sub-parsec), possibly transient regions.\ Evidence of the presence of similar sub-regions, with linear sizes $\lesssim$100 pc, in low-luminosity hotspots, including 3C227 West, comes from the 22 GHz JVLA observations at high angular resolution and high sensitivity that we have recently acquired [see Orienti et al. 2020 and @prieto02; @mack09; @mo12 for similar results in the NIR/optical band]. Here we use the information on the physical scales of these regions to investigate the scenario of a synchrotron origin of the X-ray radiation. The observed radio-to-optical emission is accounted for by synchrotron emission from a kpc-scale region under the assumption of energy equipartition (same parameters as Model 1 in Table \[sedtab\]). As discussed in Sec. 4, the total IC flux from this component is significantly lower than the flux measured by [*Chandra*]{}. The X-ray emission results instead from the sum of the synchrotron contributions of a number of pc-scale regions. As an example, in Figure \[sed\_models\] we modeled the emission of one such compact region, assuming $R=$60 pc (i.e. within the observed radio upper limits), $B=$70 $\mu$G, $\gamma_{min}=10^3$, $\gamma_{max}=10^8$ and again $U_B\sim U_e$. To be in agreement with the JVLA observations, the 22 GHz flux density of each single 60 pc region must lie below the 3$\sigma$ noise level (18 $\mu$Jy) measured at the location of the X-ray emission. For these parameters, a cooling break is expected around $\gamma_{break}\sim 8\times 10^7$, implying that this second synchrotron component rapidly drops at energies $>$10 keV. We assumed $p_1=2.4$ (and $p_2=3.4$ above the break); however, a harder synchrotron spectrum ($p_1=2.0$) would be equally in agreement with the radio and X-ray observational constraints. 
For the assumed region’s parameters, about 20 such regions are sufficient to reproduce the observed X-ray flux (see Figure \[sed\_models\]). However, these numbers should be considered indicative and could change if we modify the parameters and assumptions (e.g. smaller $R$, departures from the energy equipartition assumption). In this scenario, electrons are efficiently accelerated in clumps. Given the fast cooling time of the high-energy particles, the X-ray emission would trace the most recent acceleration sites, while the bulk of the (slowly evolving) radio emission could result from the accumulation over time of several acceleration episodes. Summary & Conclusions ===================== In this work, we presented new NIR and optical data of low-power hotspots, investigated their structure at different wavelengths and modeled their SEDs. The main results can be summarized as follows: - we confirm the detection in the NIR/optical bands of all targets, with the exception of one uncertain association (3C227 W2 in 3C227 West), with the emission being typically resolved; - the radio and NIR/optical diffuse emission in 3C105 South [as already reported by @mo12], which surrounds the bright and compact hotspots, likely coincides with the post-shock region. The constraints on the cooling times and on the minimum mean free path ($\gtrsim$40-100 pc) of the particles producing such emission make a robust case for some kind of mechanism accelerating particles in the post-shock region, e.g. Fermi II re-acceleration as proposed in @prieto02. Radio observations probing the configuration of the magnetic field in this region could further test this scenario; - in view of its SED, 3C195 South is a good candidate to confirm/disprove the IC hypothesis for the X-ray emission. 
Our modeling showed that the NIR/optical data set precise constraints on the SSC or IC/CMB emission, which are testable with (i) low-frequency radio observations with angular resolution down to a few arcseconds or less, such as those that the Low Frequency Array [LOFAR @lofar13] is acquiring in the northern hemisphere [see the LOFAR Two Metre Sky Survey, LoTSS, @Lotts19]; and (ii) tighter constraints on the X-ray spectral slope; - we showed that synchrotron emission produced in compact regions, whose existence is confirmed by JVLA observations at 22 GHz [@mo20], is a viable explanation for the X-rays in 3C227 W1. The large Lorentz factors, 10$^7$–10$^8$, of the electrons emitting in X-rays imply that efficient particle acceleration is ongoing in the clumps. The targets of our study are representative of standard low-power hotspots. Hence, it is likely that they share the same properties and mechanisms with their class. To make progress, on one hand we need to increase the sample of sources with high-sensitivity and high-resolution radio observations, which are necessary to map the complex, small-scale structure of the plasma and magnetic field. On the other hand, for the first time, we have access to time baselines ($\sim$10-20 years) in X-rays that allow us to test the high-energy variability in the context of the discussed scenarios. At the same time, simulations connecting the macro-physical scales of relativistic jets with the micro-physics of the particle acceleration and radiative processes [see @marti19 for a review] can provide the theoretical framework to decode the physics of hotspots and jets. Acknowledgment {#acknowledgment .unnumbered} ============== We thank the anonymous referee for reading the manuscript carefully and making valuable suggestions. Based on VLT programs 72B-0360B, 70B-0713B, 267B-5721. LC is grateful to INAF-IRA for the hospitality during the course of this project. FD acknowledges financial contribution from the agreement ASI-INAF n. 
2017-14-H.0. This work was partially supported by Korea’s National Research Council of Science & Technology (NST), granted by the International joint research project (EU-16-001). The VLA is operated by the US National Radio Astronomy Observatory, which is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This work has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the JPL, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has made use of SAOImage DS9, developed by the Smithsonian Astrophysical Observatory (SAO). This research has made use of software provided by the Chandra X-ray Center (CXC) in the application packages CIAO and ChIPS. Ahn C. P., et al., 2012, ApJS, 203, 21 An T., et al., 2012, ApJS, 198, 5 Atwood, W. B., Abdo, A. A., Ackermann, M., et al. 2009, , 697, 1071 Black A. R. S., Baum S. A., Leahy J. P., Perley R. A., Riley J. M., Scheuer P. A. G., 1992, MNRAS, 256, 186 Brunetti, G., Mack, K.-H., Prieto, M.A., Varano, S. 2003, MNRAS, 345, 40 Cash, W. 1979, ApJ, 228, 939 Cheung, C.C., Wardle, J.F.C., Chen, T. 2005, ApJ, 628, 104 Erlund, M.C., Fabian, A.C., Blundell, K.M., Crawford, C.S., Hirst, P. 2010, MNRAS, 404, 629 Freeman, P., Doe, S., Siemiginowska, A. 2001, SPIE, 4477, 76 Freudling, W., Romaniello, M., Bramich, D.M., Ballester, P., Forchi, V., Garc[í]{}a-Dabl[ó]{}, C.E., Moehler, S., Neeser, M. J. 2013, A&A, 559, 96 Fruscione, A., et al. 2006, SPIE, 6270 Georganopoulos, M., Kazanas, D. 2004, ApJ, 604, L81 Godfrey L. E. H., et al., 2009, ApJ, 695, 707 Hardcastle, M.J., Harris, D.E., Worrall, D.M., Birkinshaw, M. 2004, ApJ, 612, 729 Hardcastle, M.J., Croston, J.H., Kraft, R.P. 2007, ApJ, 669, 893 Hardcastle, M.J., et al. 2016, MNRAS, 455, 3526 Harris, D.E., Carilli, C.L., Perley, R.A. 1994, Nature, 367, 713 Harris D. E., et al., 2000, ApJL, 530, L81 Harrison, F. 
A., Craig, W. W., Christensen, F. E., et al. 2013, , 770, 103 Harwood, J. J., Croston, J. H., Intema, H. T., et al. 2016, , 458, 4443 Harwood, J. J., Hardcastle, M. J., Morganti, R., et al. 2017, , 469, 639 HI4PI Collaboration, Ben Bekhti, N., Flöer, L., et al. 2016, Astronomy and Astrophysics, 594, A116 Kappes A., Perucho M., Kadler M., Burd P. R., Vega-Garc[í]{}a L., Br[ü]{}ggen M., 2019, A&A, 631, A49 Kataoka J., Edwards P., Georganopoulos M., Takahara F., Wagner S., 2003, A&A, 399, 91 Kataoka J., Stawarz [Ł]{}., 2005, ApJ, 622, 797 Kraft, R.P., Birkinshaw, M., Hardcastle, M.J., Evans, D.A., Croston, J.H., Worrall, D.M., Murray, S.S. 2007, ApJ, 659, 1008 Intema H. T., Jagannathan P., Mooley K. P., Frail D. A., 2017, A&A, 598, A78 Lähteenmäki, A., Valtaoja, E. 1999, AJ, 117, 1168 Lazio T. J. W., Cohen A. S., Kassim N. E., Perley R. A., Erickson W. C., Carilli C. L., Crane P. C., 2006, ApJL, 642, L33 Leahy J. P., Muxlow T. W. B., Stephens P. W., 1989, MNRAS, 239, 401 Leahy J. P., et al., 1997, MNRAS, 291, 20 Mack, K.-H., Prieto, M.A., Brunetti, G., Orienti, M. 2009, MNRAS, 392, 705 Madau P., Pozzetti L., 2000, MNRAS, 312, L9 Mart[í]{} J.-M., 2019, Galax, 7, 24 Massaro, F., Harris, D.E., Cheung, C.C. 2011, ApJS, 197, 24 McKean J. P., et al., 2016, MNRAS, 463, 3143 Meisenheimer, K., Röser, H.-J., Hiltner, P.R., Yates, M.G., Longair, M.S., Chini, R., Perley, R.A. 1989, A&A, 219, 63 Meisenheimer, K., Yates, M.G., Röser, H.-J. 1997, A&A, 325, 57 Mingo, B., et al. 2017, MNRAS, 470, 2762 Morganti, R., Oosterloo, T. A., Reynolds, J. E., Tadhunter, C. N., & Migenes, V. 1997, , 284, 541 Mullin L. M., Hardcastle M. J., 2009, MNRAS, 398, 1989 Nagai H., Inoue M., Asada K., Kameno S., Doi A., 2006, ApJ, 648, 148 Nelder, J.A., Mead, R., 1965, Computer Journal, 7, 308-313 Orienti, M., Prieto, M. A., Brunetti, G., Mack, K.-H., Massaro, F., Harris, D.E. 2012, MNRAS, 419, 2338 Orienti, M., Brunetti, G., Nagai, H., Paladino, R., Mack, K.-H., Prieto, M.A. 
2017, MNRAS, 469L, 123 Orienti M., Migliori G., Brunetti G., Nagai H., D’Ammando F., Mack K.-H., Prieto M. A., 2020, MNRAS.tmp, doi:10.1093/mnras/staa777 Perlman, E.S., Georganopoulos, M., May, E.M., Kazanas, D., 2010, ApJ, 708, 1 Polatidis A. G., Conway J. E., 2003, PASA, 20, 69 Prieto, M.A., Kotilainen, J.K. 1997, ApJ, 491, 77 Prieto, M.A., Brunetti, G., Mack, K.-H. 2002, Science, 298, 193 Röser, S., Schilbach, E., Schwan, H., Kharchenko, N.V., Piskunov, A.E., Scholz, R.-D. 2008, A&A, 488, 401 Shimwell T. W., et al., 2019, A&A, 622, A1 Skrutskie, M.F., et al. 2006, AJ, 131, 1163 Stawarz, Ł., Cheung, C.C., Harris, D.E., Ostrowski, M. 2007, ApJ, 662, 213 Tavecchio F., Cerutti R., Maraschi L., Sambruna R. M., Gambill J. K., Cheung C. C., Urry C. M., 2005, ApJ, 630, 721 Tingay, S.J., Lenc, E., Brunetti, G., Bondi, M. 2008, AJ, 136, 2473 Vaidya, B., Mignone, A., Bodo, G., Rossi, P., & Massaglia, S. 2018, , 865, 144 van Haarlem M. P., et al., 2013, A&A, 556, A2 Werner, M.W., Murphy, D.W., Livingston, J.H., Gorjian, V., Jones, D.L., Meier, D.L., Lawrence, C.R. 2012, ApJ, 759, 86 Zhang, J., Du, S.-s., Guo, S.-C., et al. 2018, , 858, 27 [^1]: E-mail: [email protected] [^2]: <https://heasarc.gsfc.nasa.gov/docs/archive.html.> [^3]: http://tgssadr.strw.leidenuniv.nl/doku.php [^4]: The integral sensitivity is evaluated as the minimum flux above 100 MeV to obtain the 5$\sigma$ detection in 10 years of LAT observation in survey mode, assuming a power law spectrum with index 2. See http://www.slac.stanford.edu/exp/glast/groups/canda/\ lat\_Performance.htm
--- abstract: 'We report on finding variations in the amplitudes of the two main oscillation frequencies found in the Be star Achernar over a period of 5 years. They were uncovered by analysing photometric data of the star from the SMEI instrument. The two frequencies observed, 0.775 d$^{-1}$ and 0.725 d$^{-1}$, were analysed in detail and their amplitudes were found to increase and decrease significantly over the 5-year period, with the amplitude of the 0.725 d$^{-1}$ frequency changing by up to a factor of eight. The nature of this event has yet to be properly understood, but the possibility of it being due to the effects of a stellar outburst or a stellar cycle is discussed.' author: - | K. J. F. Goss$^{1}$, C. Karoff$^{1,2}$, W. J. Chaplin$^{1}$, Y. Elsworth$^{1}$, I. R. Stevens$^{1}$\ $^{1}$ School of Physics and Astronomy, University of Birmingham, Edgbaston, Birmingham, B15 2TT\ $^{2}$ Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C, Denmark\ Email: [email protected] bibliography: - 'redo\_achernar.bib' title: Variations of the amplitudes of oscillation of the Be star Achernar --- \[firstpage\] Asteroseismology, techniques: photometric, stars: oscillations, stars: emission line, Be, stars: activity, stars: individual: Achernar Introduction ============ $\alpha$ Eridani, also known as Achernar (HD 10144), is one of the brightest stars in the Southern hemisphere. With an apparent magnitude of 0.46, it is the brightest and one of the nearest Be stars to Earth [@2007NewAR..51..706K]. Be stars are non-supergiant B-type stars that show, or have shown at one time or another, emission in the Balmer line series. The first Be star was reported in 1866 by Padre Angelo Secchi, when Balmer lines were observed in emission rather than in absorption [@2003PASP..115.1153P]. For Be stars, the rotational velocity is 70-80$\%$ of the critical limit [@2003PASP..115.1153P]. 
The rapid rotation causes two effects on the structure of the star: rotational flattening and equatorial darkening [@2007NewAR..51..706K]. Be stars have pulsation modes that are typical of $\beta$ Cephei and/or SPB stars, with frequencies roughly between 0.4 d$^{-1}$ (cycles per day) and 4 d$^{-1}$ [@2008CoAst.157...70G]. A more complete review of Be stars may be found in @2003PASP..115.1153P. In this paper we present an analysis of the temporal variation of the two main oscillation frequencies detected in Achernar. A description of the SMEI instrument used to collect the data is presented in Section 2. An overview of the data analysis procedure is given in Section 3. The results of the amplitude, frequency and phase analysis are presented in Section 4, and possible theories for the nature of the uncovered variation in oscillation amplitude are discussed in Section 5. Finally, concluding remarks are given in Section 6. SMEI ==== Launched on 2003 January 6, the Solar Mass Ejection Imager (SMEI) on board the Coriolis satellite was designed primarily to detect and forecast Coronal Mass Ejections (CMEs) from the Sun moving towards the Earth. However, because the satellite is outside the Earth’s atmosphere and has a wide angle of view, it has been able to obtain photometric lightcurves for most of the bright stars in the sky. These data have been used to study the oscillations of a number of stars, for example: Arcturus [@2007MNRAS.382L..48T], Shedir [@2009arXiv0905.4223G], Polaris [@2008MNRAS.388.1239S], $\beta$ Ursae Minoris, $\gamma$ Doradus, $\beta$ Cephei stars (Stevens et al. 2010, in prep.) and Cepheid variables [@2010vsgh.conf..207B]. SMEI consists of three cameras, each with a field of view of 60$^{\circ}$ $\times$ 3$^{\circ}$, which are sensitive over the optical waveband. The optical system is unfiltered, so the pass band is determined by the spectral response of the CCD. 
The quantum efficiency of the CCD is 45$\%$ at 700nm, falling to 10$\%$ at roughly 460nm and 990nm. The cameras are mounted such that they scan most of the sky every 101 minutes; the notional Nyquist frequency for the data is therefore 7.086 d$^{-1}$. Photometric results from Camera 1 and Camera 2 are used in the analysis of Achernar. Camera 3 is in a higher-temperature environment than the other two cameras and as a result its photometric data are highly degraded. The photometric timeseries for Achernar is shown in Figure \[whole\_timeseries\]. Note that the pronounced u-shapes in the lightcurve are due to effects from the SMEI instrumentation. Since Camera 3 is not in use, the timeseries has a duty cycle of approximately 45$\%$. This duty cycle is typical of most stars observed with SMEI, although for some stars the duty cycle can be considerably higher. Figure \[timeseries\] shows an example segment of the Achernar timeseries obtained by SMEI, where the flux has been converted into magnitudes. SMEI is capable of detecting millimagnitude brightness changes in objects brighter than 6.5 magnitudes. A detailed description of the SMEI instrument and the data analysis pipeline used can be found in (Spreckley $\&$ Stevens 2010, in prep.). ![5-year timeseries of Achernar data before a running mean was subtracted.[]{data-label="whole_timeseries"}](timeseries_orig.ps) ![30 day sample section of the Achernar timeseries obtained with SMEI, which has been converted into magnitudes.[]{data-label="timeseries"}](timeseries.ps) Data Analysis ============= A 5-year dataset for Achernar was obtained by SMEI, running from 2003 June 13 to 2008 November 26 (Figure \[whole\_timeseries\]). Long-term variations in the data were removed by subtracting a running mean with a length of 10 days. Various running-mean lengths were tried and tested. 
It was found that the choice of smoothing did not significantly affect the amplitudes or the frequencies being analysed, nor was the error on the smoothing significant enough to be included in the error analysis of the frequencies. However, the smoothing is required to reduce the noise at very low frequencies, e.g. long-term variations in the timeseries such as the pronounced u-shapes in Figure \[whole\_timeseries\], an effect caused by the SMEI instrumentation. The data were then converted into magnitudes for analysis (see Figure \[timeseries\] for an example segment). The timeseries as a whole was analysed using Period04 [@2005CoAst.146...53L]. We used Period04 to analyse frequencies in the timeseries between 0.000 d$^{-1}$ and 7.086 d$^{-1}$, over which range it uses a Discrete Fourier Transform algorithm to create an amplitude spectrum. It is clear from the amplitude spectrum of Achernar (see Figure \[fig:spec\]), and of other stars analysed using photometric data from SMEI, that there are frequencies present in the data that are due to the satellite. These frequencies occur at 1 d$^{-1}$, and multiples thereof, due to the sun-synchronous orbit of the satellite around the Earth. Any genuine signals from the star around the 1 d$^{-1}$ frequencies cannot be distinguished from those caused by the orbit of the satellite and are disregarded in the analysis. The timeseries, consisting of 1993 days in total, was then split into independent segments of 50 days. Each individual segment was analysed using Period04 for the frequency and amplitude of the two main components detected in the spectra, at F1 (0.775 d$^{-1}$) and F2 (0.725 d$^{-1}$), where the aim was to search for temporal variations of these parameters. Errors on the frequencies and amplitudes of the two main components were calculated using the Monte Carlo simulations in Period04 [@2005CoAst.146...53L]. 
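For readers without Period04, the per-segment amplitude estimate can be emulated with a least-squares fit of a fixed-frequency sinusoid to each 50-day segment. The sketch below is a hedged stand-in for the procedure described above (it is not the actual SMEI/Period04 pipeline, and the synthetic data are illustrative):

```python
import numpy as np

def sine_amplitude(t, y, freq):
    """Least-squares amplitude of a sinusoid at fixed frequency `freq`
    (cycles per day) in an irregularly sampled, gappy timeseries."""
    w = 2.0 * np.pi * freq
    design = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return np.hypot(coef[0], coef[1])

# Synthetic 50-day segment: 101-minute cadence, ~45% duty cycle,
# F1 = 0.775 cycles/day with a 16.5 mmag amplitude, as in the text.
rng = np.random.default_rng(1)
grid = np.arange(0.0, 50.0, 101.0 / 1440.0)
t = np.sort(rng.choice(grid, size=int(0.45 * grid.size), replace=False))
y = 0.0165 * np.sin(2 * np.pi * 0.775 * t) + 0.001 * rng.standard_normal(t.size)
amp = sine_amplitude(t, y, 0.775)   # recovers ~0.0165 mag
```

Repeating this fit segment by segment gives an amplitude-versus-time curve analogous to the one discussed in the Results section.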
Changes in phase were calculated using Period04, whereby the phase in each 50-day period was calculated at a fixed frequency. To maintain consistency in the analysis between the different segments, only the F1 and F2 frequencies were pre-whitened in the amplitude spectrum. This means that other significant frequencies may still have been present in the timeseries, which may have had consequences for calculations such as the SNR (signal-to-noise ratio) (see Section 4). Results ======= New Frequencies Found --------------------- ![Amplitude spectrum of Achernar, HD 10144[]{data-label="fig:spec"}](7588_spec.ps) The amplitude spectrum of the 5-year dataset of Achernar can be seen in Figure \[fig:spec\]. From this analysis, we are able to identify the frequencies shown in Table $\ref{tab:acher}$. @1987MNRAS.227..123B first published a frequency of 0.792 d$^{-1}$ from simultaneous spectroscopy and photometry. A slightly different frequency of 0.7745 d$^{-1}$ was then determined based on spectroscopic observations between 1996 and 2000, and is the more widely accepted value. This is the frequency F1 (0.775 d$^{-1}$) in Table \[tab:acher\]. Further frequencies were reported from spectroscopic observations carried out between November 1991 and October 2000: 0.49 d$^{-1}$, 0.76 d$^{-1}$, 1.27 d$^{-1}$ and 1.72 d$^{-1}$. Of these, only the 0.76 d$^{-1}$ frequency, which is likely to be the same as F1, is evident in the SMEI data. There does appear to be a group of frequencies around 1.72 d$^{-1}$ in the SMEI data (see Figure \[fig:spec\]), but these were found to be combinations of the frequencies listed in Table \[tab:acher\] and the 1 d$^{-1}$ frequency from the satellite. Frequencies F2 (0.725 d$^{-1}$) and F3 (0.680 d$^{-1}$) are frequencies for which no published results were found in the literature. It is possible that the previously observed 1.72 d$^{-1}$ frequency is actually the frequency F2 observed with SMEI, but with an additional 1-day cycle effect. 
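The suggested identification of the previously reported 1.72 d$^{-1}$ signal as F2 plus a 1-day cycle is simple arithmetic; a one-line check (a hedged illustration, not a re-analysis of the earlier data):

```python
F2 = 0.725         # cycles per day, as measured with SMEI
alias = F2 + 1.0   # add the 1 cycle/day satellite (daily) frequency
# alias = 1.725 cycles/day, consistent with the reported 1.72 per day signal
```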
Further frequencies were found in the data, but these were the result of combinations of the frequencies mentioned above. @2008CoAst.157...70G reported on the first results on the Be stars observed with COROT. They found that in one Be star non-sinusoidal signals were present after already removing approximately 50 frequencies, suggesting that the amplitudes or frequencies of the signals were changing during the observations. This is something that was observed when pre-whitening the data for Achernar in the initial amplitude spectrum. Many frequencies around the F2 frequency were removed from the timeseries, but evidence of this signal still remained, hence providing evidence for finite linewidth. This also occurred with the F1 frequency, but to a much lesser extent.

  --------------   -------------   -----------   -------
                     Frequency      Amplitude      SNR
                    (d$^{-1}$)        (mag)
  F1               0.775177(5)     0.0165(3)     27.09
  F2$^{\star}$     0.724854(6)     0.0129(3)     19.05
  F3$^{\star}$     0.68037(3)      0.0027(3)      4.11
  --------------   -------------   -----------   -------

  : Frequencies identified in Achernar, HD 10144. The starred ($^{\star}$) frequencies represent frequencies for which no published results were found in the literature. Note: these are the frequencies for the entire timeseries.[]{data-label="tab:acher"}

Amplitude variation ------------------- ![A graph to compare the amplitude change of the two frequencies F1 and F2. The blue triangles represent the F1 frequency and the red squares represent the F2 frequency. The blue dotted line shows a smooth fit through the F1 data points. The red dashed line shows a smooth fit through the F2 data points.[]{data-label="fig:acher_amp"}](amp_change.ps) ![Six amplitude spectra of Achernar at different epochs during the 5-year observation, showing frequencies between 0.5 d$^{-1}$ and 1.7 d$^{-1}$. 
Note that F2 disappears in the bottom two panels.[]{data-label="fig:timepanel"}](ext_6panel_7588_2.ps) The 5-year dataset was split into 50-day segments and the two frequencies with the largest amplitudes, F1 (0.775 d$^{-1}$) and F2 (0.725 d$^{-1}$), were analysed for changes in their frequency and/or amplitude. Figure \[fig:acher\_amp\] shows a plot of the amplitudes of these two frequencies as a function of time. The amplitudes vary, and there is a significant increase in the amplitudes of both frequencies during the same time period, roughly between October 2004 and January 2007. The F2 frequency starts with a lower amplitude than the F1 frequency, but during the period when the amplitudes increase, the amplitude of the F2 frequency rises above that of the F1 frequency. The amplitudes of both frequencies decrease around January 2007, with the F2 frequency decreasing to an undetectable level, while the F1 frequency is still present. The absence of the F2 frequency at this time is not due to a lack of points in the dataset. The F2 frequency can no longer be detected in the 50-day time segments starting at: 2006-01-18, 2007-09-10, 2007-10-30, 2008-08-25 and 2008-10-14. The change in amplitude of the two frequencies is evident in Figure \[fig:timepanel\], which shows the amplitude spectra of Achernar at six different epochs separated by the large gaps seen in Figure \[whole\_timeseries\]. Here it is obvious that the amplitudes of both frequencies increase, with the F2 frequency increasing significantly more than the F1 frequency and then decreasing to an undetectable level at the end of the observation. The noise around the frequencies increases when the amplitude increases. This can be seen when comparing the two top panels with the two middle panels in Figure 5. The fact that the noise around the frequencies increases when the amplitude increases suggests that the signals causing the frequencies may not be strictly coherent over timescales of hundreds of days. 
A non-coherent signal would also cause random phases. We therefore proceed to analyse frequency and phase variations in Section 4.3 below. In order to rule out the increase in amplitude of the two frequencies being due to effects from the SMEI instrument, we looked at variations in the oscillations and light curves of other stars for comparison. In total, nine stars observed with SMEI were analysed to look for similar changes in the amplitude of oscillation, if oscillations were observed, and in the stability of the lightcurve over the same time period. If the effect were dependent on Right Ascension and Declination, then other stars in the vicinity of Achernar would show this trend. Three stars in the vicinity of Achernar were analysed: HD 32249, HD 12311 and HD 3980, none of which showed the increase in amplitude. Another possibility is that the increase in amplitude may only be obvious in very bright stars (Achernar being the 9th brightest star in the sky). Arcturus, Vega and Capella (all stars brighter than Achernar) were analysed but no similar patterns were found. Three photometric reference stars were also analysed: HD 168151, HD 155410 and HD 136064 [@NeilTarrant:2010], and they also showed null results. Frequency and Phase variation ----------------------------- ![$\emph{Panel 1:}$ Amplitude variations of the F1 frequency. $\emph{Panel 2:}$ Amplitude variations of the F2 frequency. $\emph{Panel 3:}$ Frequency variations in the F1 frequency. $\emph{Panel 4:}$ Frequency variations in the F2 frequency. $\emph{Panel 5:}$ Phase variations of the F1 frequency. $\emph{Panel 6:}$ Phase variations of the F2 frequency. The dashed lines show a smooth fit through the data points. Note that errors on some of the panels are smaller than the symbols.[]{data-label="fig:panel_amp_freq"}](panel_amp_freq.ps) Variations in Be stars can be ascribed to either rotation or non-radial oscillations.
It is generally assumed that the oscillations will have constant frequency and phase, whereas rotationally modulated variations will have a transient nature and thus non-constant frequency and phase, i.e., they will be non-coherent. Both the frequency and phase of what are believed to be rotationally modulated variations can change due to outbursts from the central star to the surrounding disc (see for a discussion of this). On the other hand saw similar amplitude changes in relation to an outburst in what they believed were non-radial oscillations. The observations that we present here cover 5 years and thus are expected to cover many outbursts, but we do not have any information on when these outbursts took place. Also, the time scales of the amplitude changes that we report here are much longer than the expected time scale of the outbursts. We therefore cannot correlate individual outbursts with amplitude, frequency or phase changes. On the other hand, if the frequencies and phases of the identified oscillations are indeed coherent over the 5 year time-span, it would seem likely that the variability is due to oscillations. In Figure \[fig:panel\_amp\_freq\] it is seen that F1 is a coherent oscillation with constant frequency and phase over the 5 year time-span. F2 shows a decrease in its frequency during 2004 and what appears to be a random phase, i.e., it is not fully coherent. This makes it possible that F2 is due to rotational modulation, but the oscillation scenario cannot be completely ruled out. Firstly, the similarity of the amplitudes of F1 and F2 suggests a common origin. Secondly, it is not obvious that the lifetimes of the oscillations in Be stars are long compared to the 50 day segments used in this analysis. And thirdly, the change in the frequency of F2 appears at low amplitude and thus a low S/N. It is therefore not clear if the frequency change is indeed significant.
Discussion ========== An explanation of the nature of the observed amplitude variation could be a transient frequency during a stellar outburst as explained by who report on non-radially pulsating Be stars. Be stars are known for their stellar outbursts where a large transfer of mass from the star to its circumstellar disc occurs. discuss transient periods that are within 10$\%$ of the main photospheric period and which only appear during outburst events. It is possible that the change in amplitude of the frequencies is due to temporary changes in the surface of the star such as a stellar outburst. report on the analysis of the Be star HD 49330, observed with the CoRoT satellite. They find a direct correlation between amplitude variations in the pulsation modes and outburst events. The amplitudes of the main frequencies (p-mode oscillations, where gradients of pressure are the dominant restoring force) decrease before and for the duration of the outburst, only increasing after the outburst has finished. Other groups of frequencies (g-mode oscillations, where gravity is the dominant restoring force) appear just before the outburst, reach maximum amplitudes during the outburst, and then disappear once the outburst is over. However, it has not been determined whether the variations in the pulsation modes produce the outburst, or whether the outburst leads to the excitation of the pulsation modes. show that the changes in stellar oscillations from possible stellar outbursts last up to tens of days whereas the change in amplitude of the frequencies in Achernar lasts much longer, up to approximately 1000 days. Long term variations in Be stars that last from months to years have been attributed to structural change in the circumstellar disk, e.g. an outburst filling the circumstellar disk with new material [@2009CoAst.158..194N].
However, the longer duration may be an indication that the variations we observe are not linked to an outburst event, but relate more to the internal structure of the star and could be evidence for a cycle similar to the Sun’s solar cycle. In the Sun the frequencies and amplitudes of the acoustic modes show variations that follow the changing magnetic activity during the solar cycle [@1990Natur.345..322E]. For the low-degree modes, the fractional change in frequency is approximately 1.3 $\times$ 10$^{-4}$ and the fractional change in amplitude approximately 0.2. Given a cycle effect for Achernar that changes both amplitude *and* frequency and also making the crude assumption that the ratio of the fractional changes in amplitude and frequency is the same as for the Sun, we find that we do not have the precision to detect such a change in frequency. Even if the cycle were to only change the amplitude, resulting in the associated amplitude modulation mentioned in Section 4.3, the frequency change is still too small to be seen. found long term variations of the equivalent width of the H$\alpha$ line in Achernar. These variations show that Achernar was in a strong emission phase (or Be phase) around 1965, 1978 and 1994. If the oscillation amplitude changes presented here are related to a B to Be phase transition then the changes suggest that Achernar was in a Be phase around 2006. Though we are not aware of any reports of Achernar showing strong emission around 2006, such a scenario is inconsistent with the 14-15 year cyclic B to Be phase transition suggested by . found that the orbital period of the close companion of Achernar was approximately 15 years and as a result its periodicity could be the trigger of the Be episodes. Again, if correct, this would imply that Achernar would be in a Be phase around 2010 whereas the oscillation amplitude variations indicate 2006.
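The order of magnitude of this argument can be checked with a rough numerical sketch; the fractional amplitude change assumed for Achernar below is an illustrative placeholder, not a measured value:

```python
# Rough estimate of the frequency shift a solar-like activity cycle
# would imply for Achernar, compared with the frequency resolution
# of a 50-day segment (illustrative values only).

# Solar low-degree p modes over the activity cycle:
sun_dnu_over_nu = 1.3e-4   # fractional frequency change
sun_da_over_a = 0.2        # fractional amplitude change

# Hypothetical order-unity fractional amplitude change for F1 (assumption):
ach_da_over_a = 1.0

# Crude assumption from the text: same ratio of fractional changes as the Sun.
ach_dnu_over_nu = (sun_dnu_over_nu / sun_da_over_a) * ach_da_over_a

f1 = 0.775                    # frequency of F1 in cycles per day
dnu = ach_dnu_over_nu * f1    # implied frequency shift, d^-1

resolution = 1.0 / 50.0       # ~1/T frequency resolution of a 50 d segment, d^-1

print(f"implied shift {dnu:.1e} d^-1 vs segment resolution {resolution:.1e} d^-1")
# The implied shift is roughly 40 times below the segment resolution.
```

Even with an order-unity amplitude change, the implied frequency shift sits far below what 50-day segments can resolve, consistent with the statement that such a change could not be detected.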
Conclusions =========== The long duration of the SMEI photometric data has allowed us to study the variations in the pulsation modes of the Be star Achernar over a period of 5 years. Analysis of the complete 5-year dataset has uncovered three significant frequencies: F1 (0.775 d$^{-1}$), F2 (0.725 d$^{-1}$) and F3 (0.680 d$^{-1}$), of which only F1 has been published previously. F2 is believed to be transient in nature from analysis of the independent time segments, a phenomenon that the SMEI instrument has the ability to detect due to its long photometric timeseries. F3 has an SNR close to four and this frequency may be a pulsation or transient frequency. Analysis of the independent time segments showed that the amplitudes of the two main frequencies, F1 and F2, significantly increased and then decreased over the 5-year period. As discussed, this may be explained by the presence of a stellar outburst or a stellar cycle, but for the present these speculations remain inconclusive. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank Steven Spreckley for his work in developing the SMEI pipeline and also Neil Tarrant for his useful input. We also thank Coralie Neiner for fruitful discussions. K.J.F.G., W.J.C., Y.E. and I.R.S. acknowledge the support of STFC. C.K. acknowledges the support from the Danish Natural Science Research Council. \[lastpage\]
--- abstract: 'Recent studies on nanoscale FET sensors reveal the crucial importance of the low frequency noise for determining the ultimate detection limit. In this letter, the $1/f$-type noise of Si nanowire (NW) FET sensors is investigated. We demonstrate by using a dual-gated approach that the signal-to-noise ratio (SNR) can be increased by almost two orders of magnitude if the NW-FET is operated in an optimal gate voltage range. In this case, the additional noise contribution from the contact regions is minimized, and an accuracy of 0.5‰ of a pH shift in one Hz bandwidth can be reached.' author: - 'A. Tarasov' - 'W. Fu' - 'O. Knopfmacher' - 'J. Brunner' - 'M. Calame' - 'C. Sch[ö]{}nenberger' nocite: '[@*]' title: 'Signal-to-noise ratio in dual-gated Si nanowire FET sensors' --- During the past decade, there has been a growing interest in applying the concept of an ion-sensitive field effect transistor (ISFET) [@Bergveld70; @Bergveld03] to nanoscale devices. It has been shown that carbon nanotube (CNT) [@Tans98; @Kong00; @Collins00; @Krueger01], graphene [@Ang08; @Ohno09; @Cheng10], and nanowire (NW) FETs [@Duan01; @Cui01; @Stern07] are especially promising for sensing applications. Compared to conventional FETs, nanoscaled devices provide a larger surface-to-volume ratio. This results in a high sensitivity of the overall FET channel conductance to changes in the surface potential caused by the adsorption of molecules [@Elfstrom07]. In order to reach the detection limit, intense attempts have recently been made to understand the factors determining the signal-to-noise ratio (SNR) [@Heller09; @Gao10; @Cheng10; @Rajan10]. Studies on CNT FETs [@Heller09] and on NW [@Gao10] showed that the SNR increases in the subthreshold regime, which is therefore the preferred regime for high sensitivity. However, a more detailed understanding of the noise properties is needed to optimize the SNR across the full operating range of the FET.
In the present work, we measure the low-frequency $1/f$ noise of a dual-gated [@Elibol08; @Heller09; @Knopfmacher09; @Knopfmacher10] NW-FET in ambient and in a buffer solution and determine the resolution limit expressed in a noise equivalent threshold voltage shift $\delta V_{th}$, the latter being the measurement quantity in these types of sensors. We identify two regimes which differ in the relative importance of the contact and intrinsic NW resistance. The lowest value in $\delta V_{th}$ is found when the working point of the NW-FET is adjusted such that the intrinsic NW resistance dominates. In the other case, when the contact resistance dominates, the noise can be larger by almost two orders of magnitude for nominally the same overall resistance. This result shows the importance of being able to adjust the operating point properly. In the best possible case we determine a resolution limit of 0.5‰ of a pH change in one Hz bandwidth, which is comparable to a commercial pH meter (0.1% [@Microsens]), but for a much smaller active sensing area. Silicon NW-FETs were produced by UV lithography according to the top-down approach which was previously described in detail [@Knopfmacher10]. This high-yield process provides reproducible, hysteresis-free NW-FETs with the following dimensions: length$\times$width$\times$height = 10$\mu$m$\times$ 700nm$\times$80nm (Fig. \[fig1\]a). A thin Al$_2$O$_3$ layer was deposited on the device to ensure leakage-free operation in an electrolyte solution. In addition, a liquid channel was formed in a photoresist layer, reducing the total area exposed to the electrolyte. The measurement setup is schematically shown in Fig. \[fig1\]b. The NWs were operated at low source-drain DC voltages $V_{sd}=10-100$mV in the linear regime. The source-drain current $I_{sd}$ through the NW was measured by a current-voltage converter with a variable gain ($10^5-10^9$V/A).
The conductance $G$ of the NW-FET is then obtained as the ratio $G=I_{sd}/V_{sd}$ while varying both the back-gate voltage $V_{bg}$ and the liquid-gate voltage $V_{lg}$. This yields a two-dimensional (2D) conductance map, as shown in Fig. \[fig1\]c [@Knopfmacher10]. The vertical axis $V_{ref}$ is the potential of the liquid, as measured by a calomel reference electrode. The equivalent voltage noise power spectral density $S_{V}$ (see e.g. [@Hooge81]) was determined along the solid white lines through a fast Fourier transform of the time-dependent fluctuations of $I_{sd}$. The conductance map in Fig. \[fig1\]c displays two different regimes, above and below the white dashed line at about $V_{ref}=+0.4$V, which differ in their relative coupling to the two gates. This is visible in the slopes $s=\partial V_{ref}/\partial V_{bg}$ (short white lines and numbers), determined at constant $G$, that represent the ratio of the gate coupling capacitances $C_{bg}/C_{lg}$, where $C_{bg}$ denotes the back-gate and $C_{lg}$ the liquid-gate capacitance [@Knopfmacher10]. To understand the origin of the two different regimes, one has to note that the NW-FET resistance $R=1/G$ is composed of two resistances in series: the intrinsic NW resistance $R_{NW}$ and the contact resistance $R_{c}$. Due to the confinement of the liquid channel to the nanowire, $R_c$ is only weakly affected by the liquid gate (small $C_{lg}$). Hence, if $R_c$ dominates $R$, $C_{bg}/C_{lg}$ is large, which corresponds to the lower regime with $V_{ref} < 0.4$V. In contrast, if $R_c$ can be neglected, $R$ is determined by $R_{NW}$, which on its own is more strongly capacitively coupled to the liquid than to the back-gate. We refer to the two regimes as contact-dominated and NW-dominated. Figure \[fig2\] shows the frequency dependence of the noise power $S_V(f)$ of the NW for different resistance values, measured in air (a) and in a buffer solution with pH 7 (b).
The corresponding thermal background noise, recorded at zero bias, has been subtracted from the data. An example is shown by (). $S_V(f)$ has a clear $1/f$ dependence (dashed lines), and its amplitude is proportional to $V_{sd}^2$ (inset), as expected for $1/f$ noise [@Hooge81]. Such a behavior can phenomenologically be described by Hooge’s law [@Hooge69; @Hooge81] $$\label{Hooge} S_V(f)=V_{sd}^2\frac{\alpha}{Nf}.$$ The material-dependent parameter $\alpha$ accounts for scattering effects and the constant $N$ denotes the number of fluctuators in the system. In Fig. \[fig3\], the normalized noise amplitude $S_V/V_{sd}^2$ at 10Hz is depicted as a function of $R$. The noise in the system increases dramatically (indicated by dashed lines) above a certain threshold resistance value. The position of this threshold (arrows) and the steepness of the rise depend on whether the NW is gated by the liquid or the back-gate. In air ($\blacktriangledown$), where $V_{bg}$ is the only applied gate voltage, the noise starts to increase at roughly 30M$\Omega$. A similar behavior is observed in liquid in the contact-dominated regime, i.e. for $V_{ref}=-0.3$V ([$\blacksquare$]{}). In contrast, within the NW-dominated regime ($\bullet$), the noise increases more steeply, starting already at about 10M$\Omega$. For $R$ smaller than the respective thresholds, the noise level is approximately constant. The apparent superimposed structure observed in this range is wire-specific. Different NW-FETs, while confirming the general dependence, typically display a different fine structure. The thresholds of $10-30$ M$\Omega$ correspond to the transition from the linear to the subthreshold regime of the FETs. The physical signal in FET sensors is the shift $\Delta V_{th}$ of the threshold voltage $V_{th}$ caused by a chemical change on the sensing surface.
It is obtained from the measured conductance change $\Delta G$ and the transconductance $g=G'=dG(V_g)/dV_g$, characteristic of a given FET, as $\Delta V_{th}=\Delta G/g$. This equation can be used to determine the true figure of merit, which is the equivalent noise power of the threshold voltage $\delta V_{th}$ given by $$\label{deltaVth} \delta V_{th}=\frac{\sqrt{S_V(f)/V_{sd}^2}}{g/G}=\frac{\sqrt{S_V(f)/V_{sd}^2}}{(\ln G)'}.$$ Here, we have made use of the relation $\delta G/G=\sqrt{S_V}/V_{sd}$. In Fig. \[fig4\]a we show $\delta V_{th}$ when the controlling gate is $V_{bg}$ for data measured in air () together with the data acquired in buffer solution at $V_{ref}=-0.3$V ([$\blacksquare$]{}). Both curves show a very similar behavior. Since we know that the liquid data obtained at $V_{ref}=-0.3$V is contact-dominated, we conclude that the measurement in air is also contact-dominated. In Fig. \[fig4\]b we summarize $\delta V_{th}$ for measurements done in an electrolyte. To obtain $\delta V_{th}$, we consistently use $V_{lg}$ as the controlling gate for all three data sets in this figure. Interestingly, in the NW-dominated regime ($\bullet$) $\delta V_{th}$ is much smaller than in the contact-dominated regime ([$\blacksquare$]{}). The difference can amount to almost two orders of magnitude. Although the voltage noise values $S_V$ are not much different in the two regimes (Fig. \[fig3\]), the sensitivities in the true measurement quantity greatly differ. This shows that the transconductance values, and therefore the gate-coupling to the liquid, are crucial factors determining the ultimate sensitivity. We also stress that $\delta V_{th}$ can be low over an extended range of NW resistance values $R$, from $\sim 1$ to $100$M$\Omega$. This range covers the transition from the linear to the subthreshold regime.
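As a numerical illustration of Hooge’s law and the threshold-voltage noise formula above, the sketch below builds a Hooge-type noise spectrum and converts it into $\delta V_{th}$; the Hooge parameters $\alpha$, $N$ and the transconductance slope are illustrative assumptions chosen so the result lands near the best measured values, not quantities extracted from the data:

```python
import numpy as np

# Hooge's law: S_V(f) = V_sd^2 * alpha / (N * f).
def hooge_psd(f, v_sd, alpha, n_fluct):
    return v_sd**2 * alpha / (n_fluct * f)

# Threshold-voltage noise: delta V_th = (sqrt(S_V(f)) / V_sd) / (ln G)'.
def delta_vth(sv, v_sd, dlnG_dVg):
    return np.sqrt(sv) / v_sd / dlnG_dVg

v_sd = 0.09                     # 90 mV bias, as in Fig. 2
alpha, n_fluct = 2e-3, 5e3      # illustrative Hooge parameters (assumptions)

# 1/f shape and V_sd^2 scaling follow directly from Hooge's law:
f = np.logspace(0, 3, 50)
slope = np.polyfit(np.log10(f), np.log10(hooge_psd(f, v_sd, alpha, n_fluct)), 1)[0]
ratio = hooge_psd(10.0, 2 * v_sd, alpha, n_fluct) / hooge_psd(10.0, v_sd, alpha, n_fluct)
print(f"log-log slope {slope:.1f}, noise ratio on doubling V_sd: {ratio:.1f}")

# Threshold-voltage noise at a working point with an assumed
# liquid-gate transconductance slope (ln G)' = 8 per volt:
sv_10Hz = hooge_psd(10.0, v_sd, alpha, n_fluct)
dvth = delta_vth(sv_10Hz, v_sd, 8.0)   # V / sqrt(Hz)
nernst = 59.5e-3                       # V per pH unit (Nernst limit at 300 K)
print(f"dVth = {dvth:.1e} V/sqrt(Hz), i.e. {dvth/nernst*1e3:.2f} per mille of a pH unit")
```

With these placeholder numbers, $\delta V_{th}$ comes out at a few times $10^{-5}$ V$/\sqrt{\textnormal{Hz}}$, i.e. a small fraction of a per mille of the Nernstian 59.5 mV/pH, in the same range as the best values quoted in the text.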
The lowest value of $2-3\cdot 10^{-5}$V$/\sqrt{\textnormal{Hz}}$ corresponds to an accuracy of 0.5‰ of a typical Nernstian pH shift in one Hz bandwidth (right axis) throughout the full resistance range ([$\bullet$]{}). The data set obtained at a fixed $V_{bg}$ and varying $V_{ref}$ () demonstrates the cross-over between the two different regimes. In this case a very pronounced transition from a regime with low sensitivity (low $R$) to a regime with high sensitivity (larger $R$) is apparent. For this case, it has recently been pointed out [@Gao10] that the signal-to-noise ratio (SNR) (corresponding to $1/\delta V_{th}$) increases with resistance $R$ and is the highest in the subthreshold regime. We confirm this as well, but we emphasize that the dual-gate approach used here provides a more general and detailed insight. For $V_{ref}=-0.3$V, the contact leads also contribute to the total noise and strongly decrease the SNR. In contrast, for $V_{ref}=+0.5$V, the resistance of the NW-FET is not contact-dominated. In that case the SNR is consistently large over the whole resistance range. As a last step, we estimate the minimum detectable number of charges $Q_0$ at the NW sensing surface. To do so, we first relate the charge noise $\delta Q_0$ to the respective quantity $\delta Q$ for charges in the NW and then to the conductance noise $\delta G=G\sqrt{S_V}/V_{sd}$ using the relations $\delta Q/\delta Q_0 = C_{NW}/C_{DL}$ and $\delta G = \delta Q \mu/L^2$, where $C_{NW}$, $C_{DL}$, $\mu$, and $L$ denote the density-of-state capacitance (quantum capacitance) of the NW, the double-layer capacitance, the mobility and the length of the NW, respectively. In the subthreshold regime we have in addition $C_{NW}=Q/(k_BT/e)$, where $T$ is the absolute temperature and $k_B$ Boltzmann’s constant [@Gao10].
This altogether yields $$\delta Q_0= \frac{k_B T}{e}C_{dl}\sqrt{\frac{S_V}{V_{sd}^2}}.$$ Using measured values for $V_{ref}=+0.5$V at 10Hz and $C_{dl}\approx 7\cdot 10^{-12}$F, we obtain $\delta Q_0/e \approx 10^1\dots 10^2\ \sqrt{\textnormal{Hz}}^{-1}$. In conclusion, we have studied the low-frequency noise in a dual-gated SiNW-FET sensor and the signal-to-noise ratio over a large resistance range. The deduced threshold voltage noise $\delta V_{th}$ is an important quantity in a FET sensor and strongly depends on the working point. We stress that $\delta V_{th}$ can be low over an extended range from the linear to the subthreshold regime, even though the voltage noise $S_V$ grows non-linearly with resistance and is the highest in the subthreshold range. We also confirmed recent studies that found the SNR increasing with resistance in a certain case. The authors acknowledge the LMN at the PSI Villigen for the oxidation of the SOI wafers. We are grateful for the support provided by nano-tera.ch, Sensirion AG, and the Swiss Nanoscience Institute (SNI). [37]{} P. Bergveld, Development of an ion-sensitive solid-state device for neurophysiological measurements. *IEEE Trans. Biomed. Eng.* **17**, 70 (1970). P. Bergveld, Thirty years of ISFETOLOGY - What happened in the past 30 years and what may happen in the next thirty years. *Sens. Actuators B* **88**, 1 (2003). S. Tans, A. Verschueren, C. Dekker, Room-temperature transistor based on a single carbon nanotube. *Nature* **393**, 49 (1998). J. Kong, N. Franklin, C. Zhou, M. Chapline, S. Peng, K. Cho, H. Dai, Nanotube molecular wires as chemical sensors. *Science* **287**, 622 (2000). P. Collins, K. Bradley, M. Ishigami, A. Zettl, Extreme oxygen sensitivity of electronic properties of carbon nanotubes. *Science* **287**, 1801 (2000). M. Kr[ü]{}ger, M. R. Buitelaar, T. Nussbaumer, C. Sch[ö]{}nenberger, Electrochemical Carbon Nanotube Field-Effect Transistor. *Appl. Phys. Lett.* **78**, 1291 (2001). P. Ang, W.
Chen, A. Wee, and K. Ping, Solution-Gated Epitaxial Graphene as pH Sensor. *J. Am. Chem. Soc.* **130**, 14392 (2008). Y. Ohno, K. Maehashi, Y. Yamashiro, and K. Matsumoto, Electrolyte-Gated Graphene Field-Effect Transistors for Detecting pH and Protein Adsorption. *Nano Lett.* **9**, 3318 (2009). Z. Cheng, Q. Li, Z. Li, Q. Zhou, and Y. Fang, Suspended Graphene Sensors with Improved Signal and Reduced Noise. *Nano Lett.* **10**, 1864 (2010). X. Duan, Y. Huang, Y. Cui, J. Wang, C. Lieber, Indium phosphide nanowires as building blocks for nanoscale electronic and optoelectronic devices. *Nature* **409**, 66 (2001). Y. Cui, C. Lieber, Functional nanoscale electronic devices assembled using silicon nanowire building blocks. *Science* **291**, 851 (2001). E. Stern, J. F. Klemic, D. A. Routenberg, P. N. Wyrembak, D. B. Turner-Evans, A. D. Hamilton, D. A. LaVan, T. M. Fahmy, and M. A. Reed, Label-free immunodetection with CMOS-compatible semiconductor nanowires. *Nature* **445**, 519 (2007). N. Elfström, R. Juhasz, I. Sychugov, T. Engveldt, A. Eriksson, J. Linnros, Surface Charge Sensitivity of Silicon Nanowires: Size Dependence. *Nano Lett.* **7**, 2608 (2007). N. K. Rajan, D. A. Routenberg, J. Chen, and M. Reed, 1/f Noise of Silicon Nanowire BioFETs. *IEEE Electr. Device L.* **31**, 615 (2010). I. Heller, J. Männik, S. G. Lemay, and C. Dekker, Optimizing the Signal-to-Noise Ratio for Biosensing with Carbon Nanotube Transistors. *Nano Lett.* **9**, 2268 (2009). X. A. Gao, G. Zheng, C. Lieber, Subthreshold Regime has the Optimal Sensitivity for Nanowire FET Biosensors. *Nano Lett.* **10**, 547 (2010). O. Elibol, B. Reddy Jr., R. Bashir, Nanoscale thickness double-gated field effect silicon sensors for sensitive pH detection in fluid. *Appl. Phys. Lett.* **92**, 193904 (2008). O. Knopfmacher, D. Keller, M. Calame, C. Schönenberger, Dual Gated Silicon Nanowire Field Effect Transistors. *Procedia Chem.* **1**, 678 (2009). O. Knopfmacher, A. Tarasov, W. Fu, M. Wipf, B. 
Niesen, M. Calame, C. Schönenberger, Nernst Limit in Dual-Gated Si-Nanowire FET Sensors. *Nano Lett.* **10**, 2268 (2010). Microsens SA, http://www.microsens.ch/products/pdf/MSFET\_datasheet%20.pdf (retrieved 17 September 2010). F. N. Hooge, 1/f noise is no surface effect. *Phys. Lett. A* **29**, 139 (1969). F. N. Hooge, T. G. M. Kleinpenning, L. K. J. Vandamme, Experimental studies on 1/f noise. *Rep. Prog. Phys.* **44**, 479 (1981). ![\[fig1\](a) Optical image of a sample with four nanowires (NWs) (horizontal) and an enlarged SEM image of one of the NWs. The (vertical) liquid channel is the only part of the sample which is not covered with photoresist. (b) A schematic representation of the setup used for the measurements in liquid. There are two gates, a back-gate and a liquid-gate with applied gate voltages $V_{bg}$ and $V_{lg}$. The liquid potential is measured by a calomel reference electrode and denoted as $V_{ref}$. The NW-FETs are characterized by their small-signal conductance map $G(V_{bg},V_{ref})$, shown in (c), and by the noise obtained from the temporal dependence of the source-drain current $I_{sd}$ using fast Fourier transform. In (c) the horizontal dashed line marks the border between two regimes, the NW (upper) and contact (lower) dominated regime. Noise measurements were conducted along the three solid white lines. Short solid lines and numbers represent the slopes of the equiconductance lines.](fig1.pdf){width="85mm"} ![\[fig2\] The noise power spectral density of the voltage fluctuations $S_V$ obtained for a source-drain bias voltage of $V_{sd}=90$mV for different resistances of a NW measured (a) in air and (b) in a buffer solution (Titrisol pH 7, Merck). The dashed lines indicate a $1/f$ slope. The open symbols in (b) represent the thermal noise of the NW at $V_{sd}=0$V (). The calculated thermal noise of a 6.5M$\Omega$ resistor at 300K is shown for comparison (horizontal line).
Inset: $S_V$ as a function of $V_{sd}$ at $7$Hz, measured in air (logarithmic scale). The solid red line indicates a power law with exponent two.](fig2.pdf){width="85mm"} ![\[fig3\] $S_V$ divided by the squared source-drain voltage $V_{sd}^2$ as a function of $R$ at 10Hz, measured in air and in buffer solution (logarithmic scale). Dashed lines are guides to the eye. Arrows indicate the transition between different regimes.](fig3.pdf){width="85mm"} ![\[fig4\] Threshold voltage fluctuations $\delta V_{th}=\sqrt{S_V}/V_{sd}/(\ln G(V_{bg}))'$ (a) and $\delta V_{th}=\sqrt{S_V}/V_{sd}/(\ln G(V_{ref}))'$ (b), calculated from the data in Fig. \[fig3\] as a function of $R$. Dashed lines are guides to the eye. The right axis shows $\delta V_{th}$ relative to the Nernst limit of the pH sensitivity (59.5mV/pH at 300K).](fig4.pdf){width="85mm"}
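As a cross-check of the charge-noise estimate $\delta Q_0$ quoted in the main text, the final formula can be evaluated with the quoted double-layer capacitance; the normalized noise level used below is an assumed representative value, not a number taken from the figures:

```python
# Minimum detectable surface charge:
#   delta Q_0 = (k_B * T / e) * C_dl * sqrt(S_V / V_sd^2)
k_B = 1.380649e-23      # J/K, Boltzmann's constant
T = 300.0               # K
e = 1.602176634e-19     # C, elementary charge
C_dl = 7e-12            # F, double-layer capacitance quoted in the text

sqrt_sv_over_vsd = 2e-5                           # 1/sqrt(Hz), assumed noise level
dQ0 = (k_B * T / e) * C_dl * sqrt_sv_over_vsd     # C / sqrt(Hz)

print(f"dQ0 = {dQ0/e:.0f} elementary charges per sqrt(Hz)")
# Falls within the 10^1..10^2 range quoted in the text.
```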
An alternative Lagrangian definition of an integrable defect is provided and analyzed. The new approach is sufficiently broad to allow a description of defects within the Tzitzéica model, which was not possible in previous approaches, and may be generalizable. New, two-parameter, sine-Gordon defects are also described, which have characteristics resembling a pair of ‘fused’ defects of a previously considered type. The relationship between these defects and Bäcklund transformations is described and a Hamiltonian description of integrable defects is proposed. Introduction ============ It was noticed some years ago [@bczlandau; @bcztoda] that an integrable field theory in two-dimensional space-time can accommodate discontinuities yet remain integrable. The fields on either side of a discontinuity are related to each other by a set of ‘defect’ conditions, including the influence of a ‘defect’ potential whose form is required by integrability. The defect conditions themselves are interesting since they are related, at least in the examples investigated so far, to Bäcklund transformations frozen at the location of the defect. It has been found, possibly owing ultimately to the latter observation, that defects can be supported within the $a_n^{(1)}$ series of affine Toda models [@mikhailov79; @mikhailov80], of which the sine-Gordon model is the first member. Intriguingly, and despite translation invariance being explicitly broken by the prescribed location, defect conditions compatible with integrability are determined simply by demanding that the defect itself be able to contribute consistently to ensure the whole system supports a conserved energy and momentum. The defect may be located anywhere (or even move at a constant speed [@bczsg05]), but the defect conditions apparently compensate for the evident lack of translation invariance.
One might regard the defect as a state within the model whose presence is indicated by a set of defect conditions described by an additional term in the Lagrangian description rather than being a field excitation or smooth field configuration. Typically, an integrable defect will be purely transmitting and its effect does not depend upon its location, meaning it is essentially ‘topological’. At a classical level this is exemplified by the passage of a sine-Gordon soliton through a defect where the soliton will be delayed (or advanced), but might alternatively, according to circumstances, be absorbed by the defect or flipped to an anti-soliton [@bczsg05]. Similar types of behaviour are observed for the complex solitons of the $a_n^{(1)}$ models [@cz07]. At a quantum level, defects also appear to play a role though again they are purely transmitting and described by a transmission matrix that is compatible with the bulk scattering matrix. The purely transmitting aspect of the setup was to be expected from observations by Delfino, Mussardo and Simonetti [@Delf94], but it is still of interest to see exactly how this transpires in detail. In the sine-Gordon case, the transmission matrix was anticipated by Konik and LeClair [@Konik97] but rederived and its properties explored in detail in [@bczsg05]; for other members of the $a_n^{(1)}$ series, the transmission matrices have been provided more recently [@cz09]. There are a number of related ideas and calculations, including perturbative checks of transmission factors for breathers, and an analysis of the interesting relationship between integrable boundary conditions and defects; some of these are explored in the article by Bajnok and Simon [@Bajnok]. The sine-Gordon Bäcklund transformation was generalised to $a_n^{(1)}$ affine Toda models by Fordy and Gibbons [@Fordy80] and it seems surprising there appear to be no similarly explicit Bäcklund transformations for the other series of Toda models. 
However, that fact is at least consistent with the apparent absence of defects in most of these models, at least of the kind previously considered [@cz09]. On the other hand, there are several types of Bäcklund transformation available in the literature for the Tzitzéica model [@Tzitzeica; @TzitzeicaB1; @TzitzeicaB2; @TzitzeicaB3][^1] and, therefore, one might suppose there should be a generalisation of the defect, at least for this model, and possibly for others. The purpose of this article is to propose a generalisation by allowing a defect to have its own degree of freedom in a certain well-defined manner, which is just general enough to encompass the Tzitzéica model. This is reminiscent of an idea of Baseilhac and Delius concerning dynamical boundaries [@Baseilhac] though it turns out to be rather different in practice. Applying the same idea to massive free fields and to the sine-Gordon model leads to new types of defect even there, encouraging the possibility of finding a more general framework that might be able to accommodate defects in all Toda models. It is interesting also to note that in the sine-Gordon model the new defects belong to a two-parameter family, which in a certain sense might be regarded as ‘bound states’ of the defects introduced in [@bczlandau]. As mentioned above, the requirement of overall energy-momentum conservation is surprisingly powerful and will be the main technique employed, although, clearly, further checks are needed to verify integrability. On the other hand, previous experience strongly suggests the conditions following from momentum conservation in the presence of a defect are more or less equivalent to the restrictions imposed by integrability: for example, even if the bulk models on either side of the defect are not specified in advance, they will be severely restricted by insisting on momentum conservation once the defect is taken into account. 
So far, unlike the cases within the older framework, where the integrability is underpinned by a generalised Lax pair [@bczlandau; @bcztoda], no suitable Lax pair description of the new framework yet exists, and it is necessary to provide alternative arguments. A small step in this direction is provided in Appendix A where it is demonstrated that the new defect conditions for the sinh/sine-Gordon equation are enough to ensure the existence of a conserved spin three charge. Other, indirect, evidence is provided in section 5 where the relationships between defects of different types and Bäcklund transformations are elaborated. Finally, a sketch of a Hamiltonian approach is given in section 6 within which defect conditions are regarded as constraints imposed at the location of the defect on the fields to either side of it. Generalising the framework ========================== Consider a defect located at the origin $x=0$ and let $u$ and $v$ be the fields on either side of it in the regions $x<0$ and $x>0$, respectively. Typically, a defect defined by Bäcklund conditions will have a discontinuity, in the sense that while the conditions sewing the two fields at the origin constrain their derivatives, the fields themselves are not prescribed. In other words, the values of the fields approaching $x=0$ from their respective domains need not match, and generically $u(0,t)-v(0,t)\ne 0.$ The basic idea to be explored here introduces a new variable $\lambda(t)$ associated with the defect itself. 
The simplest setup one might envisage does not directly associate dynamics to $\lambda$ but is linear in $\lambda_t$, having a Lagrangian description of the form: $$\label{lambdalagrangian} {\cal L}=\theta(-x){\cal L}_u +\theta(x){\cal L}_v+\delta(x) \left(\frac{uv_t-vu_t}{2}+\lambda(u-v)_t-\lambda_t (u-v) -{\cal D}(u,v,\lambda)\right),$$ where the Heaviside step function $\theta(x)$ and the Dirac delta function have been inserted to ensure the fields $u$, $v$ are restricted to their respective domains, with the defect located at $x=0$. In a sense, $\lambda(t)$ plays the role of a Lagrange multiplier: if the potential were absent, integrating over $\lambda$ would require the discontinuity to be time-independent. However, because the potential also depends on $\lambda$ it has a more interesting effect. As we shall see, this is the case even if the potential is quadratic and the defect links two free massive fields. For the purposes of distinguishing the cases with and without the extra degree of freedom, defects of the original type ($\lambda\equiv 0$) will be called type I and those where $\lambda$ plays a role will be called type II. The defect conditions at $x=0$ implied by the Lagrangian are: $$\begin{aligned} \label{defectcondition} u_x&=&v_t-2\lambda_t-\frac{\partial{\cal D}}{\partial u} \\ v_x&=&u_t-2\lambda_t+\frac{\partial{\cal D}}{\partial v} \\ \label{defectcondition3} u_t&=&v_t+\frac{1}{2}\frac{\partial{\cal D}}{\partial \lambda}.\end{aligned}$$ Then, it is not difficult to show directly that ${\cal E+D}$ is conserved, where ${\cal E}$ is the combined bulk contributions to the total energy from the fields $u$ and $v$. This was to be expected since time translation invariance has not been violated. On the other hand, as usual the contribution from the fields $u$ and $v$ to the total momentum is not conserved and the requirement of being able to construct a compensating contribution from the defect is highly constraining. 
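The conservation of ${\cal E+D}$ can be confirmed by a short symbolic computation. The sketch below (not part of the original argument) treats field values and their derivatives at $x=0$ as independent symbols and uses $\dot{\cal E}=u_tu_x-v_tv_x$ at the defect, which follows from the bulk equations of motion:

```python
import sympy as sp

u, v, lam, ut, vt, lt = sp.symbols('u v lam u_t v_t lam_t')
D = sp.Function('D')(u, v, lam)
Du, Dv, Dlam = sp.diff(D, u), sp.diff(D, v), sp.diff(D, lam)

# Defect conditions at x = 0 for u_x and v_x
ux = vt - 2*lt - Du
vx = ut - 2*lt + Dv

# Rate of change of the bulk energy at the defect: dE/dt = u_t u_x - v_t v_x
Edot = ut*ux - vt*vx
# Rate of change of the defect energy D(u(0,t), v(0,t), lam(t))
Ddot = ut*Du + vt*Dv + lt*Dlam

# Impose the third defect condition, u_t = v_t + D_lam / 2
total = sp.expand((Edot + Ddot).subs(ut, vt + Dlam/2))
print(total)  # 0: E + D is conserved
```

Note that the $\lambda_t$-dependent pieces cancel only because of the third defect condition, which is the role it plays here.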
Defining $${\cal P}=\int_{-\infty}^0 dx\, u_x u_t + \int_0^{\infty} dx\, v_x v_t,$$ differentiating with respect to time, and using the bulk equations of motion, gives $$\dot {\cal P}=\frac{1}{2}\left(u_t^2+u_x^2-2U(u)\right)_{x=0}- \frac{1}{2}\left(v_t^2+v_x^2-2V(v)\right)_{x=0}.$$ Using the defect conditions (and simplifying the notation on the understanding that all field quantities are evaluated at $x=0$), the latter can be rewritten as $$\label{basicrelation} -v_t\frac{\partial{\cal D}}{\partial u}-u_t\frac{\partial{\cal D}}{\partial v}+2\lambda_t \left(\frac{\partial{\cal D}}{\partial u}+\frac{\partial{\cal D}}{\partial v} +\frac{1}{2}\frac{\partial{\cal D}}{\partial \lambda}\right) +\frac{1}{2}\left(\left( \frac{\partial{\cal D}}{\partial u}\right)^2-\left( \frac{\partial{\cal D}}{\partial v}\right)^2\right)-U+V.$$ For type I defects it would be natural to require the last piece (without any time-derivatives) to vanish and the first two pieces to be a total time derivative, leading to equations for the potential ${\cal D}$: $$\label{typeIconditions} \frac{\partial^2{\cal D}}{\partial u^2}=\frac{\partial^2{\cal D}}{\partial v^2},\quad \frac{1}{2}\left(\left( \frac{\partial{\cal D}}{\partial u}\right)^2-\left( \frac{\partial{\cal D}}{\partial v}\right)^2\right)=U-V.$$ This was the setup originally considered in [@bczlandau]. In fact, as was recalled in the introduction, these conditions are highly constraining, effectively limiting $U,V$ (and ${{\cal D}}$) to the set of sine/sinh-Gordon, Liouville, massive or massless, free fields. In particular, the Tzitzéica equation is explicitly excluded. It is also worth recalling the well-known fact that the same selection of fields follows from insisting on the conservation of a spin three charge in the bulk (and that a careful analysis of the energy-like spin three charge is enough to provide the full set of integrable boundary conditions for the sine/sinh-Gordon model [@ghoshal]). 
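As an illustration of the type I conditions, a separated ansatz ${\cal D}=a(p)+b(q)$ in terms of $p=(u+v)/2$, $q=(u-v)/2$ makes the first condition automatic, and the second fixes $a'b'$. The sketch below checks a sinh-Gordon defect potential of this kind; the specific normalisation is chosen here to match $U=e^u+e^{-u}$ and need not coincide with the conventions of [@bczlandau]:

```python
import sympy as sp

u, v = sp.symbols('u v')
s = sp.symbols('sigma', positive=True)
p, q = (u + v)/2, (u - v)/2

# Separated ansatz D = a(p) + b(q); normalisation chosen to match U = e^u + e^{-u}
D = 4*s*sp.cosh(p) + (2/s)*sp.cosh(q)
U = sp.exp(u) + sp.exp(-u)
V = sp.exp(v) + sp.exp(-v)

# First type I condition: D_uu = D_vv (automatic for the separated ansatz)
c1 = sp.simplify(sp.diff(D, u, 2) - sp.diff(D, v, 2))
# Second type I condition: (D_u^2 - D_v^2)/2 = U - V
c2 = sp.Rational(1, 2)*(sp.diff(D, u)**2 - sp.diff(D, v)**2) - (U - V)
c2 = sp.expand(sp.powsimp(sp.expand(c2.rewrite(sp.exp))))
print(c1, c2)
```

Both residuals vanish identically, for any $\sigma$, reflecting the one-parameter family of type I sinh-Gordon defects.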
The Tzitzéica equation does not allow the conservation of a spin three charge but is the one additional possibility that arises if one instead examines a bulk conserved charge of spin five. However, for type II defects, where $\lambda\ne0$, the condition on the part of the momentum-conservation relation containing no explicit time derivatives is weaker because it need not be zero, as was assumed above. Rather, it should be equated with $$\frac{1}{2}F(u,v,\lambda)\frac{\partial{\cal D}}{\partial \lambda}\equiv (u-v)_t\, F(u,v,\lambda),$$ for some function $F$ depending on $u,v$ and $\lambda$, but not their derivatives. In turn, this observation modifies the impact of the other terms. Taking it into account and assuming the result is a total time derivative of $-\Omega$, designed to be a functional of $u(0,t), v(0,t)$ and $\lambda(t)$, requires: $$\begin{aligned} {\nonumber}\frac{\partial\Omega}{\partial u}&=&\frac{\partial{\cal D}}{\partial v}-F \\ {\nonumber}\frac{\partial\Omega}{\partial v}&=&\frac{\partial{\cal D}}{\partial u}+F \\ \frac{\partial\Omega}{\partial \lambda}&=& -2\left( \frac{\partial{\cal D}}{\partial u}+\frac{\partial{\cal D}}{\partial v}+\frac{1}{2} \frac{\partial{\cal D}}{\partial \lambda}\right),{\nonumber}\\ \left(\frac{\partial {{\cal D}}}{\partial u}\right)^2-\left(\frac{\partial {{\cal D}}}{\partial v}\right)^2&=&2(U-V)+F\,{{\cal D}}_\lambda\, .\end{aligned}$$ This set of equations entails a number of compatibility relations and, to examine these, it is convenient to use new field coordinates defined at the defect location: $$p=\frac{u(0,t)+v(0,t)}{2},\quad q=\frac{u(0,t)-v(0,t)}{2}.$$ Then, after a few manipulations the conditions become (and hereafter subscripts will be used to denote partial derivatives): $$\begin{aligned} {\nonumber}\Omega_p&=&{{\cal D}}_p\\ {\nonumber}\Omega_q&=&-{{\cal D}}_q-2F\\ \Omega_\lambda&=&-{{\cal D}}_\lambda - 2{{\cal D}}_p.\end{aligned}$$ Eliminating $\Omega$ leads to $$\begin{aligned} {\nonumber}{{\cal D}}_{pq}&=&-F_p \\ {\nonumber}{{\cal D}}_{\lambda p} &=& -{{\cal D}}_{pp} \\ F_\lambda &=& -F_p,\end{aligned}$$ and from these it follows that: $${{\cal D}}=f+g, \quad F=-f_q,\quad \Omega=f-g,$$ where $g$ depends only on $\lambda$ and $q$, and $f$ depends on $q$ and $p-\lambda$. Under these circumstances, the last, nonlinear, relation becomes $${{\cal D}}_p{{\cal D}}_q=2(U-V)+(f_\lambda +g_\lambda)\, F,$$ and this may also be rearranged and rewritten in terms of derivatives of $f$ and $g$: $$\label{fgrelation}\frac{1}{2}(f_q g_\lambda - f_\lambda g_q)=U-V.$$ Interestingly, the left hand side of this relation is equal to the Poisson bracket of $f$ and $g$ regarded as functions of $\lambda$ and its conjugate momentum $\pi_\lambda =-(u-v)=-2q$. In terms of the defect energy and momentum, $\cal{D}$ and ${\Omega}$, the relationship is $$\{ {\cal{D}},\Omega\}=-2(U-V),$$ an intriguing equation that relates the Poisson bracket of the energy and momentum contributed by the defect, which is non-zero because of the lack of translation invariance, to the potential difference across the defect. Finally, it is worth noting that this equation is powerful because all the dependence on $\lambda$ contained in its left hand side must cancel out; this significantly constrains not only $f$ and $g$ but also the potentials $U(u)$ and $V(v)$. As will be seen below, the list of possibilities will now include the Tzitzéica model that had been excluded previously. Examples ======== In this section, using natural ansätze, a number of possible solutions to the constraint above are given. Besides the Tzitzéica equation these solutions provide generalisations of already known integrable defects. However, it is not clear that the examples given exhaust all its possible solutions. 
The sinh/sine-Gordon model -------------------------- For the sine-Gordon model, given the form of the potentials $$U(u) =e^{p+q}+e^{-p-q}\equiv e^u+e^{-u},\quad V(v)=e^{p-q}+e^{-p+q}\equiv e^v+e^{-v},$$ and bearing in mind the form of the constraint just derived, the most general ansatz for $f$ and $g$ is $$f=Ae^{p-\lambda} +B e^{-p+\lambda}, \quad g=Ce^{-\lambda} +D e^{\lambda},$$ where the coefficients $A,B,C,D$ are functions only of $q$. In detail, the constraint requires $$(AD)_q=2(e^q-e^{-q}),\quad (BC)_q=2(e^q-e^{-q}), \quad A_q C=AC_q,\quad B_qD=BD_q,$$ and hence $$C=\alpha A, \quad D=\alpha B, \quad \alpha AB=2(e^q+e^{-q})+2\gamma,$$ where $\alpha$ and $\gamma$ are constants. Since $\lambda$ can be shifted by a function of $q$ without causing an essential change, there is a family of equivalent solutions to these constraints and it is a matter of convenience which choice is most suitable. For future purposes, it also turns out to be useful to define $$\gamma = (e^{2\tau}+e^{-2\tau}).$$ A representative choice for $f$ and $g$ that will be used below is $$\begin{aligned} \label{sgfandg} {\nonumber}{\nonumber}f &=& \frac{1}{\sigma}\left(2e^{p-\lambda} + e^{-p+\lambda}\left( e^q+e^{-q}+\gamma\right)\right),\\ g &=& \sigma\left( e^{\lambda}\left(e^q+e^{-q}+\gamma\right)+2e^{-\lambda}\right).\end{aligned}$$ Using these, the defect conditions can be rewritten in terms of $p, q$ and $\lambda$ as follows: $$\begin{aligned} \label{sgpqconditions} p_x-p_t+2\lambda_t&=&-\frac{\sigma}{2}\, e^{\lambda}(e^{q}-e^{-q})-\frac{1}{2\sigma}e^{-p+\lambda} (e^{q}-e^{-q}),{\nonumber}\\ q_x-q_t&=&-\frac{\sigma}{2}\, \left(e^{\lambda} (e^{q}+e^{-q}+\gamma)-2e^{-\lambda}\right),{\nonumber}\\ q_x+q_t&=&\frac{1}{2\sigma}\,\left(e^{-p+\lambda} (e^{q}+e^{-q}+\gamma)-2e^{p-\lambda}\right).\end{aligned}$$ For the sinh-Gordon model, the static solution in the bulk is $u=v=0$ and this satisfies the defect conditions provided $${\nonumber}e^{2\lambda}=\frac{1}{2\cosh^2\tau}.$$ On the other hand, purely imaginary solutions to the sinh-Gordon model are solutions to the sine-Gordon model; for the latter, the least energy static solutions in the bulk correspond to $u=2\pi i a$ and $v=2\pi i b$, where $a$ and $b$ are integers, and the defect conditions permit $a\ne b$ provided $\lambda$ is chosen suitably. In fact, the conditions imply: $$\label{staticlambda} e^{2\lambda}=\left\{\begin{array}{cc} 1/2\cosh^2\tau & \hbox{if}\ a-b\ \hbox{is\ even} \\ 1/2\sinh^2\tau & \hbox{if}\ a-b\ \hbox{is\ odd}. \end{array}\right.$$ The Liouville equation ---------------------- The Liouville field theory fits into the same scheme by truncating the choices for $f$ and $g$ found above for the sinh/sine-Gordon model. Thus, for example, $$\begin{aligned} \label{Lpotential} {\nonumber}{\nonumber}f &=& 2e^{p-\lambda}\\ g &=& e^{\lambda}\left(e^q+e^{-q}+\gamma\right),\end{aligned}$$ is an adequate choice since $$\frac{1}{2}(f_q g_\lambda - f_\lambda g_q)=e^{p+q}-e^{p-q}.$$ In this case, there is no place for an arbitrary parameter to correspond to $\sigma$ since any such could be removed by a translation of $\lambda$. On the other hand, the parameter $\gamma$ can be chosen freely. Further, dropping one or other of the exponential pieces $e^q$ (or $e^{-q}$) in $g$ leads to a defect that couples the Liouville model for $u$ (or $v$) to free massless field theory for $v$ (or $u$). The Tzitzéica equation ---------------------- For the Tzitzéica model the bulk potentials are $$U=e^{2p+2q}+2e^{-p-q}=e^{2u}+2e^{-u}, \quad V=e^{2p-2q}+2e^{-p+q}=e^{2v}+2e^{-v},$$ and the most general ansatz is $$f=A e^{2p-2\lambda}+B e^{-p+\lambda}, \quad g=C e^{2\lambda}+ D e^{-\lambda},$$ with the coefficients $A,B,C,D$ being functions only of $q$. 
The constraints following from the general relation are $$A_qD=2AD_q,\quad 2B_qC=BC_q ,\quad (AC)_q=(e^{2q}-e^{-2q}),\quad (BD)_q=4(e^q-e^{-q}),$$ for which the general solution is $$BD=4(e^q+e^{-q}),\quad A=\alpha D^2, \quad C=\frac{B^2}{32\alpha}.$$ It is always possible to shift $\lambda$ by a function of $q$ and, for example, $A$ (and therefore $D$) can be chosen to be constants. Using a further shift, one of these constants may be removed, and a convenient expression for the most general solution up to these translations of $\lambda$ is: $$\begin{aligned} \label{Tpotential} {\nonumber}{\nonumber}f &=& \frac{1}{\sigma}\left(e^{2p-2\lambda} + e^{-p+\lambda}\left( e^q+e^{-q}\right)\right),\\ g &=& \frac{\sigma}{2}\left(8e^{-\lambda} + e^{2\lambda}\left(e^q+e^{-q}\right)^2\right).\end{aligned}$$ This contains one free parameter $\sigma$. Massive free fields ------------------- It is also instructive to consider the case where the fields to either side of the defect are free (and massive with mass parameter $m$). In this situation, similar considerations lead to $$\begin{aligned} \label{KGpotential} {\nonumber}{\nonumber}f &=& m\left(\frac{(p-\lambda)^2}{\beta} +\alpha q^2\right),\\ g &=&m\left(\frac{\lambda^2}{\alpha} +\beta q^2\right) ,\end{aligned}$$ where $\alpha$ and $\beta$ are undetermined parameters. One question is whether both of these parameters are effective after $\lambda$ is eliminated (or, equivalently, integrated out in a functional integral). After some algebra, the result for the defect part of the Lagrangian (after removing a total time derivative) is the following: $$\label{integratedpotential} {\cal L}_D=\delta(x)\left[\frac{4\alpha\beta}{m(\alpha+\beta)}\, q_t^2-{\frac{1}{2}}\left(\frac{\alpha- \beta}{\alpha+\beta}\right)(uv_t-vu_t)-m\left( \frac{p^2}{\alpha+\beta} +(\alpha+\beta) q^2\right)\right].$$ This still depends upon two parameters, yet in an interesting manner. 
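The constraint $\frac{1}{2}(f_qg_\lambda-f_\lambda g_q)=U-V$ can be verified symbolically for the choices above. The sketch below checks the sine-Gordon, Tzitzéica, and free massive field cases (for the last, the bulk potentials are taken to be $U=m^2u^2/2$, $V=m^2v^2/2$, an assumption consistent with the quoted $f$ and $g$):

```python
import sympy as sp

p, q, lam = sp.symbols('p q lam', real=True)
s, gam, m, al, be = sp.symbols('sigma gamma m alpha beta', positive=True)
E = sp.exp

def residual(f, g, U, V):
    # (f_q g_lam - f_lam g_q)/2 - (U - V): should vanish identically
    r = (sp.diff(f, q)*sp.diff(g, lam) - sp.diff(f, lam)*sp.diff(g, q))/2 - (U - V)
    return sp.expand(sp.powsimp(sp.expand(r)))

# sine-Gordon choice of f and g (any gamma works)
r_sg = residual((2*E(p - lam) + E(-p + lam)*(E(q) + E(-q) + gam))/s,
                s*(E(lam)*(E(q) + E(-q) + gam) + 2*E(-lam)),
                E(p + q) + E(-p - q), E(p - q) + E(-p + q))

# Tzitzeica choice of f and g
r_tz = residual((E(2*p - 2*lam) + E(-p + lam)*(E(q) + E(-q)))/s,
                (s/2)*(8*E(-lam) + E(2*lam)*(E(q) + E(-q))**2),
                E(2*p + 2*q) + 2*E(-p - q), E(2*p - 2*q) + 2*E(-p + q))

# free massive fields, assuming bulk potentials U = m^2 u^2 / 2
r_kg = residual(m*((p - lam)**2/be + al*q**2), m*(lam**2/al + be*q**2),
                m**2*(p + q)**2/2, m**2*(p - q)**2/2)

print(r_sg, r_tz, r_kg)
```

All three residuals vanish identically; in the sine-Gordon case this happens for any $\sigma$ and $\gamma$, in line with the two-parameter family noted in the text.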
For example, the limit $\alpha\rightarrow 0$ gives the free field type I defect considered in an earlier article [@bczlandau], as does the limit $\beta\rightarrow 0$, apart from an inessential sign change in the term linear in time derivatives. From this observation it is clear that the new framework does indeed engender an alternative type of defect to those considered previously. However, it is not straightforward to eliminate $\lambda$ in the other, nonlinear, examples. The expressions for $f$ and $g$ in the sinh/sine-Gordon model given earlier also contain two free parameters and it is to be expected that these survive in the quadratic limit, regarded as an expansion about a classical constant configuration. One way to facilitate the limit is to put $\sigma=e^{\eta}$, and note an alternative but quite symmetrical expression for ${{\cal D}}$: $${{\cal D}}=4\sqrt{2}\left(e^{-\lambda+p/2}\cosh\frac{p-2\eta}{2}\cosh\frac{q+2\tau}{2} +e^{\lambda-p/2}\cosh\frac{p+2\eta}{2}\cosh\frac{q-2\tau}{2}\right),$$ which may be expanded about the point $p=q=\lambda=0$. After shifting $$\lambda\rightarrow \lambda+\frac{q\tanh\tau}{2},$$ the quadratic form is diagonal and resembles the free-field expression; putting $m=\sqrt{2}$, $\alpha$ and $\beta$ are given by $$\alpha=\frac{\sigma}{2\cosh\tau} ,\quad \beta=\frac{1}{2\sigma\cosh\tau}.$$ These parameters lie on the set of curves $$\alpha\beta=\frac{1}{4\cosh^2\tau}.$$ On the other hand, the quadratic limit of the expression giving the functions $f$ and $g$ for the Tzitzéica equation is a particular one parameter set within the general two parameter family. Thus, for the Tzitzéica equation ($m=\sqrt{6}$) one finds: $$\alpha=\frac{1}{\sqrt{6}\sigma},\quad \beta = \frac{2\sigma}{\sqrt{6}},$$ corresponding to points on the curve $ \alpha\beta = \frac{1}{3}$. 
If a plane travelling wave, $$u=e^{-i\omega t}(e^{ikx}+Re^{-ikx}),\quad v= e^{-i\omega t} \, Te^{ikx},\quad \omega =m\cosh\theta,\ k=m\sinh\theta,$$ encounters a defect with the free-field defect potential, then there is no reflection ($R=0$), and the transmission factor $T$ is given by: $$\label{freetransmission} T=\frac{i\left(\alpha e^\theta - \beta e^{-\theta}\right)+1}{i\left(\alpha e^\theta - \beta e^{-\theta}\right)-1}\, .$$ One difference from the previously considered cases (with $\alpha=0$ or $\beta=0$) is the possibility of a ‘bound state’ when $\alpha=\beta$, for example of the form $$u=u_0\cos\omega t\, e^{m\zeta x},\ x<0;\quad v=0,\ x>0,\ \zeta=-1/2\alpha,$$ with the constraint $\alpha<-1/2$. The contributions to the energy of this solution from the bulk and defect exactly cancel, though both are time-dependent, leading to a zero energy excitation degenerate with the constant ‘vacuum’ (in which all fields are zero everywhere). Since the present scheme can accommodate all the known single field integrable Toda systems, one might be optimistic that a generalisation of the scheme will encompass all Toda models, conformal or affine, irrespective of the choice of root data. At this time, however, this generalisation, if it exists, is not known. A single soliton passing a defect ================================= So far, nothing has been said about integrability. Nevertheless, this new class of defect is thought to be integrable on the basis of some indirect evidence. For example, if this is the case, at the very least single solitons for both the sine-Gordon model and the complex Tzitzéica model are expected to pass safely through a defect, suffering at most a delay. In this section, the behaviour of single soliton solutions for these two models will be explored. In addition, in appendix \[appendixA\] an energy-like spin 3 charge for the sine-Gordon model is calculated and found to be conserved on using the defect conditions. 
Ideally, a Lax pair formulation is needed to generalise the ideas presented in [@bczlandau]. The sine-Gordon soliton ----------------------- In the previous section the sinh/sine-Gordon models were considered together, but solitons are real solutions of the sine-Gordon equation or purely imaginary solutions of the sinh-Gordon equation. For ease of notation, and compatibility with earlier sections, the fields $u$ and $v$ will be pure imaginary. Then the defect conditions will determine how a soliton scatters off the defect. The defect parameters will be taken to be real. In a situation where the initial defect has either no discontinuity, or a discontinuity proportional to $4\pi$, a single soliton solution can be written as follows: $$e^{u/2}=\frac{1+ E}{1- E},\quad E=e^{ ax+bt+c}, \quad a=\sqrt{2}\cosh\theta,\quad b=-\sqrt{2}\sinh\theta, \quad e^{v/2}=\frac{1+z E}{1-z E},$$ where $z$ represents the delay, the rapidity $\theta>0$ indicates a soliton travelling from left to right along the $x$-axis, and $e^c$ is purely imaginary. Replacing $E\rightarrow -E$ (or equivalently shifting $c\rightarrow c+i\pi$) provides an expression for an anti-soliton. The final pair of defect conditions do not involve $\lambda_t$ and can be used to obtain two expressions for $\lambda$, $$\begin{aligned} e^{\lambda}&=&-\frac{2}{\sigma}\,\,\frac{\sigma^2(q_x+q_t)+e^p(q_x-q_t)}{(e^p-e^{-p}) (e^{q}+e^{-q}+\gamma)} \label{exp1}\\ {\nonumber}&&\\ e^{-\lambda}&=&-\frac{1}{\sigma}\,\,\frac{\sigma^2(q_x+q_t)+e^{-p}(q_x-q_t)}{(e^p-e^{-p}) }\label{exp2}.\end{aligned}$$ These two expressions must be consistent and will determine both $z$ and $\lambda$. In fact, there will be two possible choices for $z$, corresponding to the $i\pi$ ambiguity in the possible static solutions for $\lambda$ given earlier. 
Explicitly, the two possibilities for the delay are given by $z=z_1$ or $z=z_2$, where $$\label{sgdelay} z_1=\tanh\left(\frac{\theta-\eta+\tau}{2}\right)\,\tanh\left(\frac{\theta-\eta-\tau}{2}\right), \quad z_2=1/z_1, \quad \sigma = e^\eta.$$ For $z=z_1$, the companion expression for $\lambda$ is given by $$\label{sglambda} e^{\lambda_1}=\frac{1}{\sqrt{2}\cosh\tau}\,\, \frac{(1+E_0)(1+zE_0)}{(1+\rho E_0)(1+\tilde\rho E_0)},\quad \rho=\tanh\left(\frac{\theta-\eta+\tau}{2}\right), \ \tilde\rho=\tanh\left(\frac{\theta-\eta-\tau}{2}\right),$$ where $E_0 =E(0,t)$, and there is a similar expression for $\lambda_2$. Interestingly, this expression indicates that the delay is identical to the delay that would be experienced by a soliton passing through two defects of type I (see for example [@bczsg05]) with parameters $\eta\pm\tau$. Because $E_0$ is purely imaginary, the expression for $\lambda_1$ indicates that $\lambda_1$ is complex and nowhere singular as a function of real $t$. In order to decide which of the two solutions should be chosen, the starting value of $\lambda$ (that is, the value $\lambda$ has when the soliton is far away but approaching the defect) needs to be specified: effectively, the defect has two states associated with it even when the static field configurations to either side of it are $u=0$ and $v=0$. The modulus of $e^{\lambda_1}$, with $0<|z_1|\le 1$, grows to a maximum at $E_0^4=z_1^{-2}$ and then falls back to its initial value. 
On the other hand, the phase of $e^{\lambda_1}$ is more interesting since it is the product of four terms, each having a single soliton (or anti-soliton) form (though as a function of time only): $$e^{2i{\rm Im}\lambda_1}=\left(\frac{1+E_0}{1-E_0}\right)\left(\frac{1+z_1 E_0}{1-z_1 E_0}\right) \left(\frac{1-\rho E_0}{1+\rho E_0}\right)\left(\frac{1-\tilde\rho E_0}{1+\tilde\rho E_0}\right).$$ The first factor (provided $E_0$, which is pure imaginary, has a positive imaginary part) has a phase whose angle monotonically decreases by $\pi$ as $t$ runs over its range $(-\infty,\infty)$. On the other hand, if the imaginary part of $E_0$ had been negative, the phase angle would have increased by $\pi$. So, the total effect of the four terms will be either zero (if not more than one of $\rho$ or $\tilde\rho$ is negative), or $-4\pi$ (if both $\rho$ and $\tilde\rho$ are negative). Thus the phase angle of $e^{\lambda_1}$ either shifts by $0$ or $-2\pi$. The case where the imaginary part of $\lambda_1$ shifts by $-2\pi$ is quite interesting. There, the ingoing soliton emerges as a soliton but only after flipping to an anti-soliton and back again, in a virtual sense, since that is what would have happened had the soliton passed two separated defects with the chosen parameters. In other words, keeping track of $\lambda$ distinguishes the two possible cases ($z_1>0$) where a soliton emerges as a soliton. In the other two cases (one of $\rho$ or $\tilde\rho$ is negative), the soliton emerges as an anti-soliton. As was the case with type I defects, and as indicated above, the delay $z$ can indicate a change in the character of the soliton as it passes (if $\eta - \tau<\theta<\eta+\tau$, then $z_1<0$, and an approaching soliton will emerge as an anti-soliton), or the soliton may be absorbed (if $\theta =\eta-\tau$ or $\theta=\eta +\tau$, meaning $z_1=0$). 
In the latter case, the expression for $\lambda$ interpolates between the ‘even’ and ‘odd’ static solutions given earlier, as it should, since the defect stores the topological charge (and the energy-momentum and other charges) transported by the soliton. The limit $\tau\rightarrow 0$ is interesting because in that limit the defect (at least as far as the scattering property is concerned) is behaving like another soliton of rapidity $\eta$. This lends a little more credibility to the idea (already mentioned in [@bczsg05]) that a pair of defects with the same parameter behaves like a soliton. These results are very suggestive of the idea that, at least for the sine-Gordon model, the type II defects are ‘squeezed’, or ‘fused’, pairs of type I defects. Finally, it is not difficult to check directly that the first of the three defect conditions is satisfied by the soliton solution without any further constraints on $\lambda$ or $z$. A question that will not be addressed here is how the type II defect should be described by a transmission matrix in the quantum sine-Gordon field theory. Presumably, a generalisation of the Konik-LeClair transmission matrix (see [@bczsg05]) will need to be found; this is postponed for a future investigation. The Tzitzéica equation ---------------------- The solitons for the Tzitzéica equation can be analysed similarly, although in this case the soliton is complex (though its energy and momentum are real). 
Using the same conventions as before, with the potential associated with the choice of $f$ and $g$ made above, the defect conditions are: $$\begin{aligned} \label{Tpqconditions} p_x-p_t+2\lambda_t&=&-\frac{\sigma}{2}\, e^{2\lambda}(e^{2q}-e^{-2q})-\frac{1}{2\sigma}\, e^{-p+\lambda}(e^{q}-e^{-q}),{\nonumber}\\ q_x-q_t&=&-\frac{\sigma}{2}\left(e^{2\lambda}(e^{q}+e^{-q})^2-4e^{-\lambda}\right),{\nonumber}\\ q_x+q_t&=&\frac{1}{2\sigma}\left(e^{-p+\lambda}(e^{q}+e^{-q})-2e^{2p-2\lambda}\right).\end{aligned}$$ Single soliton solutions in the bulk are given by the expressions [@Mikhailov81; @Cherdantzev90; @Mackay93] $$e^{u}=\frac{(1+ E)^2}{(1-4 E+E^2)},\quad e^{v}=\frac{(1+z E)^2}{(1-4 z E+z^2 E^2)},$$ with $$E=e^{ ax+bt+c}, \quad a=\sqrt{6}\cosh\theta,\quad b=-\sqrt{6}\sinh\theta,$$ where $z$ represents the delay of the outgoing soliton. The constant $e^c$ is chosen so that the expressions for $u$ and $v$ are nonsingular for all real choices of $t$ and $x$. The last two of the defect conditions can be regarded as a pair of cubic equations for $\Lambda\equiv e^\lambda$ of the form $$\alpha_1\Lambda^3 +\beta_1\Lambda^2+\gamma_1=0,\quad \alpha_2\Lambda^3+\beta_2\Lambda+\gamma_2=0,$$ where the coefficients depend upon $p,\ \sigma$, $q$ and the derivatives of $q$. 
Together, these may be solved to give $$\Lambda=\frac{\alpha_1\beta_2^2\gamma_1+\beta_1\gamma_2(\alpha_1\gamma_2-\alpha_2\gamma_1)}{\alpha_2\beta_1\beta_2\gamma_1-(\alpha_1\gamma_2-\alpha_2\gamma_1)^2},\quad \frac{1}{\Lambda}=\frac{\alpha_2\beta_1^2\gamma_2+\alpha_1\beta_2(\alpha_1\gamma_2-\alpha_2\gamma_1)}{\alpha_2\beta_1\beta_2\gamma_1-(\alpha_1\gamma_2-\alpha_2\gamma_1)^2}.$$ Demanding that these two expressions be compatible and inserting the soliton solutions reveals, after some algebra, three possibilities for $z$: $$\begin{aligned} z_1&=& \frac{(e^{-\theta+\eta}+e^{i\pi/6})(e^{-\theta+\eta}+e^{-i\pi/6})} {(e^{-\theta+\eta}-e^{i\pi/6})(e^{-\theta+\eta}-e^{-i\pi/6})},\quad e^{\eta}=\sqrt{2}\sigma{\nonumber}\\ z_2=\bar z_3&=& \frac{(e^{-\theta+\eta}-e^{i\pi/6})(e^{-\theta+\eta}+e^{-i\pi/2})} {(e^{-\theta+\eta}+e^{i\pi/6})(e^{-\theta+\eta}-e^{-i\pi/2})}.\end{aligned}$$ These may also be rewritten more suggestively: $$\begin{aligned} z_1&=&\coth\left(\frac{\theta-\eta}{2}-\frac{i\pi}{12}\right) \coth\left(\frac{\theta-\eta}{2}+\frac{i\pi}{12}\right),\\ z_2=\bar z_3&=&\coth\left(\frac{\theta-\eta}{2}+\frac{i\pi}{4}\right) \tanh\left(\frac{\theta-\eta}{2}-\frac{i\pi}{12}\right),\end{aligned}$$ and $$z_1z_2z_3=1.$$ Finally, as examples, for the two cases $z=z_1$ or $z=z_2$ expressions for the field $\lambda$ are $$\begin{aligned} e^{\lambda_1}&=&\frac{(1+E_0)(1+z_1\, E_0)}{(1+2\,\rho_1\,E_0+z_1 \,E_0^2)},\phantom{\,e^{-2i\pi/3}} \quad \rho_1=\frac{(e^{-\theta+\eta}-\sqrt{2})(e^{-\theta+\eta}+\sqrt{2})} {(e^{-\theta+\eta}-e^{i\pi/6})(e^{-\theta+\eta}-e^{-i\pi/6})},\label{lambdaT1}\\ e^{\lambda_2}&=&\frac{(1+E_0)(1+z_2\, E_0)}{(1+2\,\rho_2\, E_0+z_2 \,E_0^2)}\,e^{-2i\pi/3}, \quad \rho_2=\frac{(e^{-\theta+\eta}-\sqrt{2}e^{i\pi/3})(e^{-\theta+\eta}+\sqrt{2}e^{i\pi/3})} {(e^{-\theta+\eta}+e^{i\pi/6})(e^{-\theta+\eta}-e^{-i\pi/2})}\label{lambdaT2}.\end{aligned}$$ For $z=z_3$, the corresponding formulae are the complex conjugates of the expressions above. 
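The relation $z_1z_2z_3=1$ is easily checked numerically. The following sketch evaluates the three delays in the coth/tanh form quoted above (with $z_3$ the complex conjugate of $z_2$ for real $\theta$ and $\eta$), at arbitrarily chosen parameter values:

```python
import cmath

coth = lambda w: cmath.cosh(w)/cmath.sinh(w)

def tzitzeica_delays(theta, eta):
    # Delays in the coth/tanh form; z3 is the conjugate of z2 for real arguments
    x = (theta - eta)/2
    z1 = coth(x - 1j*cmath.pi/12)*coth(x + 1j*cmath.pi/12)
    z2 = coth(x + 1j*cmath.pi/4)*cmath.tanh(x - 1j*cmath.pi/12)
    return z1, z2, z2.conjugate()

z1, z2, z3 = tzitzeica_delays(0.83, 0.21)
print(abs(z1*z2*z3 - 1))  # numerically zero
```

Note also that $z_1=\coth(x-i\pi/12)\coth(x+i\pi/12)=|\coth(x+i\pi/12)|^2$ is real and non-negative, consistent with it being the delay of a soliton emerging as a soliton.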
The possible asymptotic values of $u$ and $v$ for soliton solutions are $u=2\pi ia$, $v=2\pi ib$, and the corresponding asymptotic values of $\lambda$ required by the defect conditions are $\lambda=2\pi ic$ or $\lambda=\pm 2\pi i/3+2\pi ic$ with $a,b,c$ integers. The formulae for $\lambda$ given above provide examples of this. Once again, as was found to be the case for the sine-Gordon model, part of the specification of the defect must be the initial choice of $\lambda$ (essentially, for the soliton, one of three). Defects and Bäcklund transformations ==================================== In [@bczlandau] it was pointed out using several examples that integrable defect conditions for type I defects coincide with Bäcklund transformations ‘frozen’ at the defect location. This impression was strongly reinforced by subsequent analysis of the $a_n^{(1)}$ affine Toda models [@bcztoda; @cz07]. However, it was also found that while the $a_2^{(2)}$ Toda model has Bäcklund transformations, these cannot be used directly to construct integrable defects within the type I scheme. At first sight this seemed puzzling, and the purpose of this section is to show how a ‘folding’ procedure [@mikhailov79; @OT83fold] may be used to obtain a Bäcklund transformation for the Tzitzéica model, making use of two similar, yet different, sets of defect conditions obtained in [@bcztoda] for the $a_2^{(1)}$ Toda model. First, a little background is necessary. The equation of motion for an $a_2^{(1)}$ Toda field $u$ is $$\label{a21em} \partial^2 u=-2\sum_{j=0}^2\,\alpha_j\,e^{\alpha_j\cdot u},$$ where, with respect to a basis of orthonormal vectors $\{e_0, e_1, e_2\}$ in a three dimensional Euclidean space, the $a_2^{(1)}$ roots are: $$\label{newnotation} \alpha_1=e_1-e_2,\quad \alpha_2=e_2-e_0, \quad \alpha_0=e_0-e_1.$$ The projections of the field $u$ onto the orthonormal basis are $u_0, u_1, u_2$ satisfying the constraint $u_0+u_1+u_2=0$. 
From the equation of motion it follows that the corresponding equations for the projections read $$\label{a21emp} \partial^2u_j=-2(e^{u_j-u_{j+1}}-e^{- u_j+u_{j-1}}),\quad j=0,1,2,$$ where the subscripts on the right hand side are to be understood modulo 3. Then, the folding procedure consists of setting one of the fields to zero, for instance $u_2=0$ (i.e. $u_1=-u_0$), to obtain the Tzitzéica equation of motion with the same normalisations as had been assumed when writing down the Tzitzéica potential in section 4.2. Note, the alternative choices $u_0=0$ or $u_1=0$ would lead to the same conclusion. The defect conditions that must hold at the defect ($x=x_0$) when sewing together two $a_2^{(1)}$ Toda fields $u$ and $\lambda$ are $$\label{a21dc} \partial_{x}u-A {\partial_{t}}u-B {\partial_{t}}\lambda+ {\cal D}_u=0, \quad \partial_{x}\lambda-B^{T}{\partial_{t}}u +A{\partial_{t}}\lambda-{\cal D}_\lambda=0, \quad B=(1-A),$$ with $$\label{a21dp} {\cal D}=\sqrt{2}\,\sum_{j=0}^2\left(\sigma\, e^{\alpha_j\cdot(B^T u+B \lambda)/2}+\frac{1}{\sigma}\, e^{\alpha_j \cdot B(u-\lambda)/2}\right), \quad B=2\sum_{a=0}^2 w_a\left( w_a-w_{a+1}\right)^T,$$ where $w_1, w_2$ ($w_3\equiv w_0=0$) are the fundamental highest weights of the Lie algebra $a_2^{(1)}$. 
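The folding step itself can be checked directly: imposing $u_2=0$ and $u_1=-u_0$ in the projected equations of motion reproduces the Tzitzéica equation for $u_0$, while the remaining equations are automatically consistent. A small symbolic sketch:

```python
import sympy as sp

w = sp.symbols('u0')              # surviving projection after folding
u = {0: w, 1: -w, 2: 0}           # u2 = 0 forces u1 = -u0

def rhs(j):
    # right-hand side of d^2 u_j = -2 (e^{u_j - u_{j+1}} - e^{-u_j + u_{j-1}}), mod 3
    return -2*(sp.exp(u[j] - u[(j + 1) % 3]) - sp.exp(-u[j] + u[(j - 1) % 3]))

# j = 0 reduces to the Tzitzeica equation: -U'(u) with U = e^{2u} + 2 e^{-u}
tz = -sp.diff(sp.exp(2*w) + 2*sp.exp(-w), w)
print(rhs(0) - tz, rhs(0) + rhs(1), rhs(2))  # 0 0 0
```

The $j=1$ equation is minus the $j=0$ equation, matching $u_1=-u_0$, and the $j=2$ equation reduces to $0=0$, so the truncation is consistent.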
By using similar notation for the two fields $u$ and $\lambda$ and light-cone coordinates $x_{\pm}=(t\pm x)/2$, the full set of defect conditions reads $$\begin{aligned} \label{dctypeI} \partial_+(u_1-u_2)-\partial_+(\lambda_1-\lambda_2)&=&\sqrt{2}\,\sigma(e^{u_0-\lambda_1}-2e^{u_1-\lambda_2} +e^{u_2-\lambda_0}),{\nonumber}\\ \partial_+(u_2-u_0)-\partial_+(\lambda_2-\lambda_0)&=&\sqrt{2}\,\sigma(e^{u_1-\lambda_2}-2e^{u_2-\lambda_0}+e^{u_0-\lambda_1}),{\nonumber}\\ \partial_+(u_0-u_1)-\partial_+(\lambda_0-\lambda_1)&=&\sqrt{2}\,\sigma(e^{u_2-\lambda_0}-2e^{u_0-\lambda_1}+e^{u_1-\lambda_2}),{\nonumber}\\ &&\nonumber\\ \partial_-(u_1-u_2)-\partial_-(\lambda_2-\lambda_0)&=&\sqrt{2}\,\sigma^{-1}(2e^{-u_2+\lambda_2}-e^{-u_1+\lambda_1}-e^{-u_0+\lambda_0}),{\nonumber}\\ \partial_-(u_2-u_0)-\partial_-(\lambda_0-\lambda_1)&=&\sqrt{2}\,\sigma^{-1}(2e^{-u_0+\lambda_0}-e^{-u_2+\lambda_2}-e^{-u_1+\lambda_1}),{\nonumber}\\ \partial_-(u_0-u_1)-\partial_-(\lambda_1-\lambda_2)&=&\sqrt{2}\,\sigma^{-1}(2e^{-u_1+\lambda_1}-e^{-u_0+\lambda_0}-e^{-u_2+\lambda_2}).\end{aligned}$$ In the bulk, these expressions would constitute the Bäcklund transformation discovered by Fordy and Gibbons [@Fordy80]. Unfortunately, the folding procedure cannot be applied directly to the defect conditions because they are simply incompatible with folding. This fact can be expressed heuristically by noting that the defect conditions do not treat solitons and anti-solitons identically (a feature already pointed out in [@bcztoda; @cz07] and expected, since solitons and anti-solitons are associated with different representations of the $a_2$ algebra), because each type of soliton experiences a different delay on transmission through the defect. The soliton solution of the Tzitzéica model can be thought of as a particular soliton-antisoliton solution of the $a_2^{(1)}$ affine Toda model, and, since the components of a multi-soliton are delayed independently by the defect, its components will be treated differently by the defect conditions. 
Therefore, the Tzitzéica soliton cannot survive intact. A remedy is provided by observing that an alternative defect setting is available if the matrix $B$ in (\[a21dp\]) is replaced by its transpose. The resulting set of defect conditions describes a system, which is still integrable yet interchanges the delays experienced by a soliton or antisoliton when compared with the previous case. This suggests that two different types of defect, one built using $B$ (at $x=x_0$) and the other with $B^T$ (at $x=x_1$), then ‘squeezed’ together ($x_1\rightarrow x_0$), might allow the folding procedure to be applied successfully. The second set of defect conditions matches two $a_2^{(1)}$ fields $\lambda$ and $v$ and would be written in a similar manner to (\[a21dc\]) but using $B^T$ instead of $B$. Since the incoming ($u$) and outgoing ($v$) solitons are required to satisfy the Tzitzéica equation of motion, the projections $u_2, v_2$ can be set equal to zero. Consequently, the field $\lambda$ is forced to satisfy the following constraint (at $x=x_0$): $$\label{constraint} 2\,e^{-\lambda_0}=e^{-p_0+\lambda_2}\,(e^{q_0}+e^{-q_0}), \quad p_0=\frac{u_0+v_0}{2}, \quad q_0=\frac{u_0-v_0}{2}.$$ Setting $u_0\equiv u,\ v_0\equiv v,\ \lambda_2\equiv -\lambda$ and sending $\sigma\rightarrow1/(\sqrt{2}\,\sigma)$ the two sets of defect conditions lead to $$\begin{aligned} \label{BTTT} \partial_-(p-\lambda)&=&\frac{\sigma}{2}\,e^{2\lambda}(e^{2q}-e^{- 2q})\label{BTT1}\\ \partial_+\lambda&=&-\frac{1}{2\sigma} e^{-p+\lambda}(e^{ q}-e^{- q}),\label{BTT2}\\ \partial_-q&=&\frac{\sigma}{2}\,(e^{2\lambda}(e^{ q}+e^{- q})^2-4 e^{-\lambda}),\label{BTT3}\\ \partial_+q&=& \frac{1}{2\sigma}(e^{-p+\lambda}(e^{ q}+e^{- q})-2 e^{2p-2\lambda}).\label{BTT4}\end{aligned}$$ If, instead of being ‘frozen’ at $x=x_0$, equations (\[BTT1\]-\[BTT4\]) were required to hold in the bulk, they would, in fact, represent a Bäcklund transformation for the Tzitzéica equation. 
This can be seen by cross-differentiating the expressions to eliminate $\lambda$, finding that if the field $u$ satisfies the Tzitzéica equation then the field $v$ also satisfies it. Also, by cross-differentiating the remaining expressions, an equation of motion satisfied by the field $\lambda$ emerges: $$\label{a22em} \partial^2\lambda=-(e^{q}+e^{-q})\,e^{\lambda-p}(e^{2\lambda}-e^{-\lambda}).$$ Inevitably, this depends on the fields $u$ and $v$. The Bäcklund transformation (\[BTT1\]-\[BTT4\]) seems not to have been reported elsewhere in the literature [@TzitzeicaB1; @TzitzeicaB2; @TzitzeicaB3]. On the other hand, since equations (\[BTTT\]-\[BTT4\]) are supposed to hold only at $x=x_0$, and because the quantity $\lambda$ is confined at $x=x_0$ and depends only on $t$, the sum of one pair of these equations, together with the remaining two, are precisely the three defect conditions. Hence, for the type II defect, the number of defect conditions following from the Lagrangian is one less than the number of equations specifying the Bäcklund transformation described above. This is quite different from the previous situation where the Lagrangian description of a type I defect led directly to the frozen Bäcklund transformation (and hence to the Bäcklund transformation itself). 
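The cross-differentiation can be carried out symbolically. The sketch below (sympy assumed, taking $\partial^2=\partial_+\partial_-$ with the folded normalisation $\partial^2 u=-2(e^{2u}-e^{-u})$) imposes (\[BTT1\]-\[BTT4\]) as chiral derivatives; the undetermined derivative $\partial_-\lambda$ cancels out of every check:

```python
import sympy as sp

p, q, lam, sig, dml, dpp = sp.symbols('p q lambda sigma dml dpp')

# the four frozen relations, read off as chiral derivatives
A = sig*sp.exp(2*lam)*(sp.exp(2*q) - sp.exp(-2*q))/2                      # d_-(p - lambda)
B = -sp.exp(-p + lam)*(sp.exp(q) - sp.exp(-q))/(2*sig)                    # d_+ lambda
C = sig*(sp.exp(2*lam)*(sp.exp(q) + sp.exp(-q))**2 - 4*sp.exp(-lam))/2   # d_- q
D = (sp.exp(-p + lam)*(sp.exp(q) + sp.exp(-q)) - 2*sp.exp(2*p - 2*lam))/(2*sig)  # d_+ q

dminus = {p: dml + A, q: C, lam: dml}   # d_- lambda = dml is left undetermined
dplus = {p: dpp, q: D, lam: B}          # d_+ p (dpp) never actually contributes

def chiral(expr, table):
    # chain rule for a chirality derivative acting on a function of p, q, lambda
    return sum(sp.diff(expr, s)*table[s] for s in (p, q, lam))

# d_+ d_- u with u = p + q: d_- u = dml + A + C, and d_+ dml = d_-(B)
ddu = chiral(A + C, dplus) + chiral(B, dminus)
u, v = p + q, p - q
assert sp.simplify(ddu + 2*(sp.exp(2*u) - sp.exp(-u))) == 0   # u solves Tzitzéica

# the same for v = p - q, with d_- v = dml + A - C
ddv = chiral(A - C, dplus) + chiral(B, dminus)
assert sp.simplify(ddv + 2*(sp.exp(2*v) - sp.exp(-v))) == 0   # so does v

# and d_+ d_- lambda = d_-(B) reproduces the quoted equation of motion for lambda
ddlam = chiral(B, dminus)
target = -(sp.exp(q) + sp.exp(-q))*sp.exp(lam - p)*(sp.exp(2*lam) - sp.exp(-lam))
assert sp.simplify(ddlam - target) == 0
```

Note that the Tzitzéica equations for $u$ and $v$, and the $\lambda$ equation (\[a22em\]), all emerge without ever needing $\partial_+p$.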
Clearly, using the same idea, the defect conditions can be augmented to obtain an alternate Bäcklund transformation for the sine-Gordon model that depends on two parameters: $$\begin{aligned} \partial_-(p-\lambda)&=&\frac{\sigma}{2}\,e^{\lambda}(e^{q}-e^{- q}){\nonumber}\\ \partial_+\lambda&=&-\frac{1}{2\sigma} e^{-p+\lambda}(e^{ q}-e^{- q}),{\nonumber}\\ \partial_-q&=&\frac{\sigma}{2}\,(e^{\lambda}(e^{ q}+e^{- q}+\gamma)-2 e^{-\lambda}),{\nonumber}\\ \partial_+q&=& \frac{1}{2\sigma}(e^{-p+\lambda}(e^{ q}+e^{- q}+\gamma)-2 e^{(p-\lambda)}).\end{aligned}$$ From these relations, in a similar manner as previously, the equations of motion for the sine-Gordon fields $u$ and $v$ are recovered and the field $\lambda$ satisfies, $$\partial^2\lambda=-\frac{1}{4}\,e^{-p}\,(4\,e^{2\lambda}- (e^{q}+e^{-q})(2-\gamma\,e^{2\lambda})).$$ The fact that there appear to be generalisations of the defect conditions, which are only indirectly related to Bäcklund transformations, and yet likely to be integrable, generates a sense of optimism that the framework will generalise to encompass all affine Toda models. Defects as Hamiltonian constraints ================================== So far, properties of defects, and the relationship of the defect conditions to the conservation of a suitably defined momentum, have been derived from first principles from a Lagrangian starting point. It is interesting to ask if the framework can be formulated within a Hamiltonian setting. In this section this will be attempted, at least at a formal level, by explaining the main ideas, albeit sketchily. The setup demonstrates explicitly that the presence of a defect reduces the independent degrees of freedom of the system in the sense of providing defect conditions that can be regarded as a set of constraints on the fields $u$ and $v$ (for type I defects), or $u$, $v$ and $\lambda$ (for type II defects). 
This fact is highlighted by the emergence of second class constraints in the Hamiltonian (for a detailed description of these, see for example [@HenneauxConstraints]). The discussion can begin by considering a system with a type I defect. In this case, the starting point is the following Lagrangian density $$\mathcal{L}=\theta(-x)\mathcal{L}_u+\theta(x)\mathcal{L}_v+\delta(x)\left(\frac{u\, v_t-v\, u_t}{2}-\mathcal{D}(u, v)\right),$$ with $$\mathcal{L}_u=\frac{1}{2}\partial_\mu u\,\partial^\mu u-U(u),\quad \mathcal{L}_v=\frac{1}{2}\partial_\mu v\,\partial^\mu v-V(v).$$ According to the usual definitions, and treating the theta and delta functions formally, the canonical momenta conjugate to the fields $u$ and $v$ are, $$\begin{aligned} \label{momenta} \pi_u&=&\frac{\partial \mathcal{L}}{\partial u_t}=\theta(-x)\,u_t-\delta(x)\,\frac{v}{2},{\nonumber}\\ \pi_v&=&\frac{\partial \mathcal{L}}{\partial v_t}=\theta(x)\,v_t+\delta(x)\,\frac{u}{2}.\end{aligned}$$ By comparison with what happens within each half line, $x<0$ or $x>0$, the canonical momenta are not well-defined at the defect location. In other words, at $x=0$ it is not possible to write the time derivatives of the fields (Lagrangian variables) in terms of the canonical momenta (Hamiltonian variables). At $x=0$ the canonical momenta are not independent, and the definitions provide constraints amongst the canonical variables. These are $$\label{Hconstraints} \chi_1=\pi_u+\frac{v}{2}=0,\quad \chi_2=\pi_v-\frac{u}{2}=0;$$ these are primary constraints. The Hamiltonian is given by $$\label{H} H=\int_{-\infty}^\infty dx\,\mathcal{H}$$ with $$\label{Hd} \mathcal{H}=\theta(-x)\left(\frac{\pi_u^2+u_x^2}{2}+U\right)+ \theta(x)\left(\frac{\pi_v^2+v_x^2}{2}+V\right)+\delta(x)\left(\mathcal{D}+\mu_1\chi_1+\mu_2\chi_2\right),$$ where $\mu_1$ and $\mu_2$ are functions of the fields $u,v$ together with their momenta. They can be determined by using the fact that the constraints $\chi_1$ and $\chi_2$ must be preserved in time. 
In other words, the relations $$\label{constraintsintime} {\chi_1}\,_t=\{\chi_1,H\}=0,\quad {\chi_2}\,_t=\{\chi_2,H\}=0,$$ must hold. The Poisson bracket of two functionals $F=\int_{-\infty}^{\infty}dx\, \mathcal{F}$ and $G=\int_{-\infty}^{\infty}dx\, \mathcal{G}$ is defined formally as follows $$\label{PB} \{F,G\}=\int_{-\infty}^{\infty}\,dx\left(\frac{\delta F}{\delta u}\frac{\delta G}{\delta \pi_u}-\frac{\delta F}{\delta \pi_u}\frac{\delta G}{\delta u}\right)+\int_{-\infty}^\infty \,dx\left(\frac{\delta F}{\delta v}\frac{\delta G}{\delta \pi_v}-\frac{\delta F}{\delta \pi_v}\frac{\delta G}{\delta v}\right).$$ Using this definition and the Hamiltonian , for which, $$\begin{aligned} \label{HCanEqu} \frac{\delta H}{\delta \pi_u}&\equiv& u_t=\frac{\partial\mathcal{ H}}{\partial \pi_u}=\theta(-x)\pi_u+\delta(x)\mu_1,\ \ \frac{\delta H}{\delta \pi_v}\equiv v_t=\frac{\partial\mathcal{ H}}{\partial \pi_v}=\theta(x)\pi_v+\delta(x)\mu_2,{\nonumber}\\ \frac{\delta H}{\delta u}&\equiv& -\pi_u\,_t=\frac{\partial\mathcal{ H}}{\partial u}-\frac{\partial}{\partial x}\frac{\partial \mathcal{H}}{\partial u_x}= \theta(-x)(-u_{xx}+U^{'})+\delta(x)\left(\mathcal{D}_u-\frac{\mu_2}{2}+u_x\right),{\nonumber}\\ \frac{\delta H}{\delta v}&\equiv& -\pi_v\,_t=\frac{\partial\mathcal{ H}}{\partial v}-\frac{\partial}{\partial x}\frac{\partial \mathcal{H}}{\partial v_x}= \theta(x)(-v_{xx}+V^{'})+\delta(x)\left(\mathcal{D}_v+\frac{\mu_1}{2}-v_x\right),\end{aligned}$$ where $U^{'}=U_u$ and $V^{'}=V_v$, the Poisson brackets can be calculated. 
In consequence, (\[constraintsintime\]) leads to explicit expressions for the functions $\mu_1$ and $\mu_2$, which are $$\mu_1=-\mathcal{D}_v+v_x,\quad \mu_2=\mathcal{D}_u+u_x.$$ Assembling all these ingredients, the Hamiltonian density becomes $$\begin{aligned} \label{Hdensity} \mathcal{H}&=&\theta(-x)\left(\frac{\pi_u^2+u_x^2}{2}+U\right)+ \theta(x)\left(\frac{\pi_v^2+v_x^2}{2}+V\right){\nonumber}\\ &&\ \ +\,\delta(x)\left[\left(\pi_u+\frac{v}{2}\right)(v_x-\mathcal{D}_v)+\left(\pi_v- \frac{u}{2}\right)(u_x+\mathcal{D}_u) + \mathcal{D}\right].\end{aligned}$$ Expressions (\[HCanEqu\]) are the canonical Hamilton equations, and using the definitions of the canonical momenta they coincide with the defect conditions and equations of motion of the type I defect problem (the latter by performing a differentiation with respect to time). In principle, the conservation of any charge can be verified by calculating its Poisson bracket with the Hamiltonian. For example, consider the total momentum of the system, which is defined by $$\label{consm} P=\int_{-\infty}^{\infty} dx\, \mathcal{P}\quad \mbox{with}\quad \mathcal{P}=\theta(-x)\pi_u \,u_x+\theta(x)\pi_v\, v_x+\delta(x)\Omega(u,v).$$ It is straightforward to calculate the time derivative of $P$ using its Poisson bracket with the Hamiltonian to obtain, $$\begin{aligned} \dot{P}&=&\delta(x)\left[\frac{1}{2}\left({{\cal D}}_u^2-{{\cal D}}_v^2\right)-U+V+u_t(\Omega_u-{{\cal D}}_v)+v_t(\Omega_v-{{\cal D}}_u)\right]=0.\end{aligned}$$ The final step follows from the facts that $({{\cal D}}_u^2-{{\cal D}}_v^2)/2=(U-V)$, ${{\cal D}}=(f+g)$ and $\Omega=(f-g)$, with $f=f(p)$ and $g=g(q)$, as was described previously in [@bczlandau]. It should be noticed that the constraints are second class. Hence, as mentioned at the beginning of this section, they indicate that not all degrees of freedom are independent. 
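The vanishing of $\dot{P}$ rests on the identities $\Omega_u={\cal D}_v$, $\Omega_v={\cal D}_u$ and $({\cal D}_u^2-{\cal D}_v^2)/2=U-V$. These can be checked symbolically for the sine-Gordon case; the explicit forms of $f$, $g$ and the bulk potentials below are a standard choice of normalisation and are assumptions here, not taken from the text:

```python
import sympy as sp

u, v, s = sp.symbols('u v sigma')
pp, qq = (u + v)/2, (u - v)/2

# assumed sine-Gordon defect data: f = f(p), g = g(q), bulk potentials up to constants
f = -2*s*sp.cos(pp)
g = -2*sp.cos(qq)/s
Dd, Om = f + g, f - g            # D = f + g, Omega = f - g
U, V = -sp.cos(u), -sp.cos(v)

assert sp.simplify(sp.diff(Om, u) - sp.diff(Dd, v)) == 0   # Omega_u = D_v
assert sp.simplify(sp.diff(Om, v) - sp.diff(Dd, u)) == 0   # Omega_v = D_u
assert sp.simplify((sp.diff(Dd, u)**2 - sp.diff(Dd, v)**2)/2 - (U - V)) == 0
```

With any other split ${\cal D}=f(p)+g(q)$, the first two identities hold automatically; only the last one actually constrains $f$ and $g$.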
By definition, a constraint is first class if its Poisson brackets with all other constraints vanish (after imposing the constraints themselves, if needed); otherwise, it is second class. In the present case, it is straightforward to check that the Poisson brackets of the constraints are constant. In fact, $$C_{ij}\equiv\{\chi_i,\chi_j\},\quad C=\left( \begin{array}{cc} \phantom{-}0 & \phantom{-}1 \\ -1 & \phantom{-}0 \\ \end{array} \right).{\nonumber}$$ The matrix $C$ can be used to construct the Dirac brackets, the standard tool for dealing with second class constraints. Next, consider the type II defect and suppose the Lagrangian density is the one given earlier. Then, there are three fields $u$, $v$ and $\lambda$, whose canonical momenta are $$\pi_u=\frac{\partial \mathcal{L}}{\partial u_t}=\theta(-x)\,u_t-\delta(x)\,\left(\frac{v}{2}-\lambda\right),\ \pi_v=\frac{\partial \mathcal{L}}{\partial v_t}=\theta(x)\,v_t+\delta(x)\,\left(\frac{u}{2}-\lambda\right),{\nonumber}$$ $$\pi_\lambda=\frac{\partial \mathcal{L}}{\partial \lambda_t}=-\delta(x)(u-v).{\nonumber}$$ Consequently, the primary constraints are $$\chi_1=\pi_u+\frac{v}{2}-\lambda=0,\quad \chi_2=\pi_v-\frac{u}{2}+\lambda=0,\quad \chi_3=\pi_\lambda+(u-v)=0,$$ and the Hamiltonian density reads $$\label{Hdlambda} \mathcal{H}=\theta(-x)\left(\frac{\pi_u^2+u_x^2}{2}+U\right)+ \theta(x)\left(\frac{\pi_v^2+v_x^2}{2}+V\right)+\delta(x)(\mathcal{D}+\mu_1\chi_1+\mu_2\chi_2+\mu_3\chi_3).$$ Since these constraints must be consistent with the evolution equations, their time derivative must vanish. 
By using the following Poisson bracket $$\begin{aligned} \label{PBlambda} \{F,G\}&=&\int_{-\infty}^{\infty}\,dx\left(\frac{\delta F}{\delta u}\frac{\delta G}{\delta \pi_u}-\frac{\delta F}{\delta \pi_u}\frac{\delta G}{\delta u}\right)+\int_{-\infty}^\infty \,dx\left(\frac{\delta F}{\delta v}\frac{\delta G}{\delta \pi_v}-\frac{\delta F}{\delta \pi_v}\frac{\delta G}{\delta v}\right){\nonumber}\\ &&\hskip 2.5cm +\left(\frac{\delta F}{\delta \lambda}\frac{\delta G}{\delta \pi_\lambda}-\frac{\delta F}{\delta \pi_\lambda}\frac{\delta G}{\delta \lambda}\right)_{x=0},\end{aligned}$$ it is possible to verify that $$\chi_1\, _t=-\mathcal{D}_u-u_x+\mu_2-2\mu_3=0,\ \chi_2\, _t=-\mathcal{D}_v+v_x-\mu_1+2\mu_3=0,\ \chi_3\, _t=-\mathcal{D}_\lambda+2(\mu_1-\mu_2)=0.{\nonumber}$$ Unlike the previous case, this system of equations does not completely determine the functions $\mu_j$. In fact, requiring the constraints to be preserved in time forces $$\begin{aligned} \mu_1&=&-\mathcal{D}_v+ v_x+2\mu_3,\quad \mu_2=\mathcal{D}_u+ u_x+2\mu_3\label{priC}\\ &&(u-v)_x+\mathcal{D}_u+\mathcal{D}_v+\frac{1}{2}\mathcal{D}_\lambda=0\label{secC}.\end{aligned}$$ Expression (\[secC\]) is a secondary constraint. However, it is not genuinely new since it coincides with an algebraic sum of some of the canonical Hamiltonian equations, as can be verified by using the following Hamiltonian density $$\begin{aligned} \label{Hdensitylambda} \mathcal{H}&=&\theta(-x)\left(\frac{\pi_u^2+u_x^2}{2}+U\right)+ \theta(x)\left(\frac{\pi_v^2+v_x^2}{2}+V\right)+\delta(x)\mathcal{D}{\nonumber}\\ &+&\delta(x)\left[\left(\pi_u+\frac{v}{2}-\lambda\right)(v_x-\mathcal{D}_v)+ \left(\pi_v-\frac{u}{2}+\lambda\right)(u_x+\mathcal{D}_u)+\mu_3(2\pi_u+2\pi_v+\pi_\lambda)\right].{\nonumber}\\\end{aligned}$$ In fact, $$0=(\pi_\lambda+2\pi_u+2\pi_v)\, _t\equiv -(u-v)_x-\mathcal{D}_u-\mathcal{D}_v-\frac{1}{2}\mathcal{D}_\lambda,$$ which coincides with (\[secC\]). 
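Both steps can be reproduced with a short computer-algebra sketch (sympy assumed; the field gradients and derivatives of $\mathcal{D}$ are treated as independent symbols, the defect degrees of freedom as a finite-dimensional mechanical system, and the formal $\delta(0)$ factors are suppressed). The first part reproduces the elimination of $\mu_1,\mu_2$ and the appearance of (\[secC\]); the second verifies the constant bracket matrix $C$ quoted above, and that $\gamma_1=2\chi_1+2\chi_2+\chi_3$ has vanishing brackets with the other constraints:

```python
import sympy as sp

# --- elimination of the mu_j from the consistency conditions ---
Du, Dv, Dl, ux, vx, m1, m2, m3 = sp.symbols('D_u D_v D_lambda u_x v_x mu_1 mu_2 mu_3')
eqs = [-Du - ux + m2 - 2*m3, -Dv + vx - m1 + 2*m3, -Dl + 2*(m1 - m2)]
sol = sp.solve(eqs[:2], [m1, m2])   # mu_1, mu_2 in terms of mu_3
sec = eqs[2].subs(sol)              # mu_3 drops out of the third condition ...
assert sp.simplify(sec + 2*((ux - vx) + Du + Dv + Dl/2)) == 0  # ... leaving the secondary constraint

# --- constraint algebra in the finite-dimensional analogue ---
u, v, lam, pu, pv, pl = sp.symbols('u v lambda pi_u pi_v pi_lambda')

def pb(F, G):
    # canonical Poisson bracket over the pairs (u, pi_u), (v, pi_v), (lambda, pi_lambda)
    return sum(sp.diff(F, x)*sp.diff(G, px) - sp.diff(F, px)*sp.diff(G, x)
               for x, px in zip((u, v, lam), (pu, pv, pl)))

# type I: the constant bracket matrix C is nonzero, so both constraints are second class
c1, c2 = pu + v/2, pv - u/2
assert sp.Matrix(2, 2, lambda i, j: pb([c1, c2][i], [c1, c2][j])) == sp.Matrix([[0, 1], [-1, 0]])

# type II: chi_1, chi_2 remain second class; gamma_1 = 2chi_1 + 2chi_2 + chi_3 is first class
chi1, chi2, chi3 = pu + v/2 - lam, pv - u/2 + lam, pl + (u - v)
gam1 = 2*pu + 2*pv + pl
assert sp.expand(2*chi1 + 2*chi2 + chi3 - gam1) == 0
assert pb(chi1, chi2) != 0
assert pb(chi1, gam1) == 0 and pb(chi2, gam1) == 0 and pb(gam1, gam1) == 0
```

In the field theory proper, each bracket carries a delta function at the defect location; the finite-dimensional analogue only checks the algebraic coefficients.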
As was shown in the previous case, all Hamilton equations can be obtained and they lead to the equations of motion and defect conditions (note that $\lambda_t\equiv\mu_3$). Finally, as mentioned before, the Poisson brackets may be used to verify the conservation of charges. For example, given the total momentum (\[consm\]), it can be checked that $$\dot{P}=\delta(x)\left[\frac{1}{2}\left({{\cal D}}_u^2-{{\cal D}}_v^2\right)-U+V+\lambda_t(\mathcal{D}_\lambda+2\mathcal{D}_u+2\mathcal{D}_v+\Omega_\lambda) +u_t(\Omega_u-\mathcal{D}_v)+v_t(\Omega_v-\mathcal{D}_u)\right].{\nonumber}$$ Since $\mathcal{D}=(f+g)$ and $\Omega=(f-g)$, with $f=f(p-\lambda,q)$ and $g=g(\lambda,q)$, the above expression becomes $$\dot{P}=\delta(x)\left[\frac{1}{2}\left({{\cal D}}_u^2-{{\cal D}}_v^2\right)-U+V+\frac{1}{2}f_q\mathcal{D}_\lambda\right]\equiv 0.$$ In summary, from the Hamiltonian density (\[Hdensitylambda\]), it is possible to read off the final constraints, which are $$\chi_1,\quad \chi_2,\quad \gamma_1=2\pi_u+2\pi_v+\pi_\lambda,$$ where $\chi_1$, $\chi_2$ are second class, while $\gamma_1$ is first class. In fact, it can be checked that $\{\chi_1, \gamma_1\}=\{\chi_2, \gamma_1\}=\{\gamma_1, \gamma_1\}=0$. The first class constraints are usually related to the presence of a gauge freedom. In the type II defect framework, the existence of a first class constraint indicates the freedom to translate the field $\lambda$ by any function of $q$, as was pointed out in section 3. Comments and conclusions ======================== The main result of this paper has been to extend the framework within which an integrable defect may be described. The previous framework (referred to as type I in this article) seemed fairly natural yet even for a single scalar field was unable to accommodate all possible relativistic integrable models because the Tzitzéica, or $a_2^{(2)}$ affine Toda, model was conspicuously absent. For multiple scalar fields the possible type I defects are restricted to the $a_n^{(1)}$ series of affine Toda models. 
In all cases, the type I defects are intimately related to Bäcklund transformations, in the sense that the conditions relating the fields on either side of an integrable defect take the form of a Bäcklund transformation frozen at the location of the defect. At first sight, this relationship seemed attractive since it provided a use for Bäcklund transformations that had not been noticed before. On the other hand, the Tzitzéica equation has several Bäcklund transformations associated with it and none of them emerged naturally from within the type I framework. Moreover, the integrability of the type I defects is intimately related to momentum conservation, in the sense that insisting there should be a total momentum including a contribution from the defect itself leads to restrictions that would be associated normally with the requirements of having higher spin conserved quantities. It is a curious situation: certain integrable systems (those with type I defects) can violate translational invariance yet preserve momentum. The question is: can this phenomenon be extended to other integrable systems by changing the framework? It appears the answer is yes, and one particular different framework (referred to as type II) is described in this paper. In fact, only a slight change appears to be necessary: the Tzitzéica model is incorporated, and the relationship with frozen Bäcklund transformations is modified. The trick is to introduce a new degree of freedom located on the defect and couple it in a minimal manner to the discontinuity across the defect. In the absence of a generalised Lax pair for the type II system, momentum conservation becomes a tool for identifying the possibilities, backed up by other less direct evidence. Turning the argument around and starting from the defect conditions allows an apparently new Bäcklund transformation to be established for the Tzitzéica equation. 
The type II framework certainly contains all single-field integrable systems of Toda type (or free fields), but it is not yet demonstrated that these are the only possibilities. The latter appears reasonable, since the construction is highly constraining, but a complete proof of integrability needs to be found in order to be sure. It is already known that the $a_n^{(1)}$ affine Toda models can support type I defects of several kinds and that defects are able to relate different $a_n$ conformal Toda models to each other (thereby generalising the relationship between the Liouville model and free fields [@cz09]). However, other affine Toda models based on the root data of the $b,c,d,e,f,g$ series of Lie algebras do not appear to fit into the type I framework. This is surprising: in most respects, the affine Toda field theories, at least in the bulk, have similar features, though it does appear from the literature that the $a_n^{(1)}$ series is special in having a Bäcklund transformation of a simple type. It remains to be seen if the type II framework can be adapted to all Toda models. The folding process cannot explain the apparent difficulties with the $d,e$ series. However, once these are understood the folding process might be an essential part of the story for the remaining cases. For that reason it would be natural to examine the $d,e$ series next. At this stage it is worth outlining a possible direction for a generalisation containing multi-component fields. 
Using the same notation as previously and taking as a starting point the defect contribution $$\label{multicomponentdefectlagrangian} {\cal L}_D=\delta(x)\left(q\cdot A q_t + 2 \lambda\cdot q_t -{\cal D}(\lambda,q,p)\right),$$ where $A$ is an antisymmetric matrix, insisting on overall momentum conservation leads to the following constraints on ${\cal D}$ and $\Omega$: $$\label{DandOmega} {\cal D}=f(p-\lambda,q) + g(p+\lambda,q),\quad \Omega = f(p-\lambda,q) - g(p+\lambda,q).$$ Further, the two functions $f$ and $g$ are constrained by a generalisation of the Poisson bracket relation that reads, $$\label{generalfgrelation} \nabla_q f\cdot\nabla_\lambda g - \nabla_q g\cdot\nabla_\lambda f +\nabla_\lambda f\cdot A\, \nabla_\lambda g = U(u) - V(v).$$ Here $A$ is the antisymmetric matrix occurring in (\[multicomponentdefectlagrangian\]) and $U,\ V$ are the bulk potentials for the fields to either side of the defect. The left hand side of (\[generalfgrelation\]) is a bona fide Poisson bracket since it is antisymmetric and satisfies the Jacobi relation, yet, as before, all dependence on $\lambda$ must cancel out. This provides severe constraints on $U$ and $V$, which will be explored elsewhere. At the quantum level, it was demonstrated in [@bczsg05; @cz07] that type I defects within the $a_n^{(1)}$ series are described by infinite-dimensional transmission matrices, which are determined up to a single parameter by a set of ‘triangle relations’ ensuring their compatibility with the bulk S-matrix. Moreover, arguments have been provided to demonstrate that the free parameter is essentially the same, though possibly renormalised, as the free parameter in the type I Lagrangian. Clearly, the next question concerns the transmission matrix in the context of type II defects. For the sine-Gordon model, the transmission matrix in this framework should depend on two independent parameters and there should be some evidence or influence of the confined field $\lambda$, at least recognising the $i\pi$ ambiguity mentioned in section 4.1. 
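Returning to the bracket relation (\[generalfgrelation\]): that its left hand side is antisymmetric and satisfies the Jacobi identity can be illustrated in a two-component example. The bracket corresponds to a constant Poisson structure on the variables $(q,\lambda)$, for which Jacobi holds automatically; the polynomial test functions below are hypothetical, chosen only for the check (sympy assumed):

```python
import sympy as sp

q1, q2, l1, l2, a = sp.symbols('q1 q2 lambda1 lambda2 a')
A = sp.Matrix([[0, a], [-a, 0]])   # a generic 2x2 antisymmetric matrix

def pb(F, G):
    # grad_q F . grad_l G - grad_q G . grad_l F + grad_l F . A grad_l G
    Fq = sp.Matrix([sp.diff(F, q1), sp.diff(F, q2)])
    Fl = sp.Matrix([sp.diff(F, l1), sp.diff(F, l2)])
    Gq = sp.Matrix([sp.diff(G, q1), sp.diff(G, q2)])
    Gl = sp.Matrix([sp.diff(G, l1), sp.diff(G, l2)])
    return Fq.dot(Gl) - Gq.dot(Fl) + Fl.dot(A*Gl)

# hypothetical polynomial test functions
F, G, H = q1*l2 + q2**2, l1**2 + q2*l2, q1*q2*l1

assert sp.simplify(pb(F, G) + pb(G, F)) == 0                                  # antisymmetry
assert sp.simplify(pb(F, pb(G, H)) + pb(G, pb(H, F)) + pb(H, pb(F, G))) == 0  # Jacobi
```

Antisymmetry of the third term relies precisely on $A^T=-A$; with a symmetric $A$ the first assertion would fail.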
At a quantum level, the Tzitzéica model contains a triplet of equal mass states, reflecting its origin in $a_2^{(1)}$ affine Toda field theory under the folding process, only two of which correspond to classical solitons, and its S-matrix is known [@smirnov]. It is to be hoped there will be a transmission matrix based on an ansatz that takes into account the mysterious role of $\lambda$ (this time the ambiguity is threefold - see section 4.2). [**Acknowledgements**]{} We are grateful for conversations with colleagues in Durham, especially Peter Bowcock. In particular, we wish to thank him for discussions on the content of section 5, much of which he developed independently. We also wish to express our gratitude to the UK Engineering and Physical Sciences Research Council for its support under grant reference EP/F026498/1. Energy-like spin three charge for the sine-Gordon model {#appendixA} ======================================================= In this appendix it is shown that an energy-like spin three charge for the sine-Gordon model with a defect of type II is conserved. The bulk charge, which is not expected to be conserved in the presence of a defect, conveniently normalised, reads $$\begin{aligned} {\cal E}_3&=&\int^{0}_{-\infty}dx \,\left(\frac{u_t^4+u_x^4}{4}+\frac{3}{2}u_x^2 u_t^2+4 u^2_{tx}+(u_{tt}+u_{xx})^2 +(u_t^2+u_x^2)U^{''}\right) {\nonumber}\\ &&\ \ +\int^{\infty}_{0}dx \,\left(\frac{v_t^4+v_x^4}{4}+\frac{3}{2} v_x^2 v_t^2+4 v^2_{tx}+(v_{tt}+v_{xx})^2 +(v_t^2+v_x^2)V^{''}\right),{\nonumber}\end{aligned}$$ and its time derivative is $$\begin{aligned} \label{tdspin3d} \dot{\cal E}_3 &=&\left[(u_tu_x^3+u_t^3u_x)-(v_tv_x^3+v_t^3v_x) +4(2u_{tt}+U^{'})u_{tx}\right.{\nonumber}\\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.-4(2v_{tt}+V^{'})v_{tx}-2(u_tu_xU^{''}-v_tv_xV^{''})\right]_{x=0},\end{aligned}$$ where $U^{'}=U_u$ and $V^{'}=V_v$. 
This is not expected to be zero but the right hand side may turn out to be the total time derivative of a functional $-{{\cal D}}_3$ that depends only on the defect variables $p,\ q$ and $\lambda$. In that case, ${\cal E}_3+{{\cal D}}_3$ will be conserved. Since expression (\[tdspin3d\]) is calculated at $x=0$, it is convenient to rewrite it by using the variables $p$ and $q$. Then, using the defect conditions, with the functions $f$ and $g$ given previously, the expression becomes a total time derivative $$\begin{aligned} \label{spin3charge} \dot{\cal E}_3 &=&4\frac{d}{dt}\left(2(p_t-\lambda_t)q_tf_{\lambda q}-(p_t-\lambda_t)^2f-q_t{^2}(f+g)_{qq}-\lambda_t{^2}g -2q_t\lambda_tg_{\lambda q}\right){\nonumber}\\ &&+4\frac{d}{dt}\left((p_t-\lambda_t)(U^{'}-V^{'})-\lambda_t(U^{'}-V^{'})-q_t(U^{'}+V^{'})\right)-\frac{d}{dt}\Omega_3(p,q,\lambda),\end{aligned}$$ (where again on the right hand side all field quantities are evaluated at $x=0$), with $$\begin{aligned} \frac{\partial\Omega_3}{\partial q}&=&3f_\lambda(U-V)-\frac{3}{4}f_q(f+g)_\lambda{^2} +\frac{1}{4}(f+g)_q\left(3f_\lambda{^2}+(f+g)_q{^2}-12(U+V)\right),{\nonumber}\\ \frac{\partial\Omega_3}{\partial p}&=&-3(f+g)_q(U-V)+\frac{3}{4}f_{qq}(f+g)_\lambda{^2}-\frac{1}{4}f_\lambda \left(f_\lambda{^2}+3(f+g)_q{^2}-12(U+V)\right),{\nonumber}\\ \frac{\partial\Omega_3}{\partial \lambda}&=&\frac{1}{4}(f+g)_\lambda\left(f_\lambda{^2}+g_\lambda{^2}-f_\lambda g_\lambda- 3(f+g)_\lambda(f+g)_{qq}+3(f+g)_q{^2}-12(U+V)\right).{\nonumber}\end{aligned}$$ The formula has been obtained by making use of the following properties of the defect potential for the sine-Gordon model $$\label{helpformulae} f_p=-f_\lambda,\quad f_{\lambda\lambda}=f, \quad g_{\lambda\lambda}=g, \quad f_{qqq}=f_q, \quad g_{qqq}=g_q,\quad f_{\lambda q}=f_q, \quad g_{\lambda q}=g_q.$$ Finally, it has been verified that the cross derivatives of the function $\Omega_3$ are consistent, that is $$\frac{\partial^2\Omega_3}{\partial q\partial p}=\frac{\partial^2\Omega_3}{\partial 
p\partial q},\quad \frac{\partial^2\Omega_3}{\partial q\partial \lambda}=\frac{\partial^2\Omega_3}{\partial \lambda\partial q},\quad \frac{\partial^2\Omega_3}{\partial p\partial \lambda}=\frac{\partial^2\Omega_3}{\partial \lambda\partial p}.$$ For this task, in addition to (\[helpformulae\]), the following relations have been used $$(U\pm V)_p=(U\mp V)_q, \quad f_{qq}(f+g)_q=f_q(f+g)_{qq},\quad f_q(g+g_\lambda)=g_q(f+f_\lambda)$$ where $$(U-V)=\frac{1}{2}(f_q g_\lambda-f_\lambda g_q)=(U+V)_{pq}.$$ [10]{} P. Bowcock, E. Corrigan and C. Zambon, *Classically integrable field theories with defects*, Int. J. Mod. Phys. **A19** (Supplement) (2004) 82; hep-th/0305022. P. Bowcock, E. Corrigan and C. Zambon, *Affine Toda field theories with defects*, JHEP **01** (2004) 056; hep-th/0401020. A. V. Mikhailov, *Integrability of the two-dimensional generalization of Toda chain*, JETP Letters [**30**]{} (1979) 414. A. V. Mikhailov, M. A. Olshanetsky and A. M. Perelomov, *Two-Dimensional Generalized Toda Lattice*, Commun. Math. Phys. [**79**]{} (1981) 473. P. Bowcock, E. Corrigan and C. Zambon, *Some aspects of jump-defects in the quantum sine-Gordon model*, JHEP **08** (2005) 023; hep-th/0506169. E. Corrigan and C. Zambon, *On purely transmitting defects in affine Toda field theories*, JHEP **07** (2007) 001; arXiv:0705.1066 \[hep-th\]. G. Delfino, G. Mussardo and P. Simonetti, *Statistical models with a line of defect*, Phys. Lett. **B328** (1994) 123; hep-th/9403049. G. Delfino, G. Mussardo and P. Simonetti, *Scattering theory and correlation functions in statistical models with a line of defect*, Nucl. Phys. **B432** (1994) 518; hep-th/9409076. R. Konik and A. LeClair, *Purely transmitting defect field theories*, Nucl. Phys. **B538** (1999) 587; hep-th/9703085. E. Corrigan and C. Zambon, *Comments on defects in the $a_r$ Toda field theories*, J. Phys. [**A42**]{} (2009) 304008; arXiv:0902.1307 \[hep-th\]. Z. Bajnok and Z. Simon, *Solving topological defects via fusion*, Nucl. Phys.  
B [**802**]{} (2008) 307; arXiv:0712.4292 \[hep-th\]. A. P. Fordy and J. Gibbons, *Integrable nonlinear Klein-Gordon equations and Toda lattices*, Commun. Math. Phys. **77** (1980) 21. M. G. Tzitzéica, *Sur une nouvelle classe de surfaces*, Rendiconti del Circolo Matematico di Palermo [**25**]{} (1908) 180. A. Yu. Boldin, S. S. Safin, and R. A. Sharipov, *On an old article of Tzitzéica and the inverse scattering method*, J. Math. Phys. **34** (1993) 5801. H-X. Yang and Y-Q. Li, *Prolongation approach to Bäcklund transformation of Zhiber-Mikhailov-Shabat equation*, J. Math. Phys. **37** (1996) 3491; arXiv:hep-th/9607014. R. Conte, M. Musette and A. M. Grundland, *Bäcklund transformation of partial differential equations from the Painlevé-Gambier classification II. Tzitzéica equation*, J. Math. Phys. **40** (1999) 2092. P. Baseilhac and G. W. Delius, *Coupling integrable field theories to mechanical systems at the boundary*, J. Phys. **A34** (2001) 8259; hep-th/0106275. P. Baseilhac and S. Belliard, *Generalized q-Onsager algebras and boundary affine Toda field theories*; arXiv:0906.1215 \[math-ph\]. S. Ghoshal and A. B. Zamolodchikov, *Boundary S matrix and boundary state in two-dimensional integrable quantum field theory*, Int. J. Mod. Phys. [**A9**]{} (1994) 3841 \[Erratum-ibid.  [**A9**]{} (1994) 4353\] \[arXiv:hep-th/9306002\]. A. V. Mikhailov, *The reduction problem and the inverse scattering method*, Physica [**D3**]{}, (1981) 73. I. Yu. Cherdantzev and R. A. Sharipov, *Solitons on a finite-gap background in Bullough-Dodd-Jiber-Shabat model*, Int. Journ. Mod. Phys. [**A5**]{} (1990) 3021; math-ph/0112045. N. J. MacKay and W. A. McGhee, *Affine Toda solitons and automorphisms of Dynkin diagrams*, Int. Journ. Mod. Phys. [**A8**]{} (1993) 2791; erratum-ibid [**A8**]{} 3830; hep-th/9208057. D. Olive and N. Turok, *The symmetries of Dynkin diagrams and the reduction of Toda field equations*, Nucl. Phys. **B215** (1983) 470. M. Henneaux and C. 
Teitelboim, *Quantization of Gauge Systems*, Princeton University Press (1992). F. A. Smirnov, *Exact S matrices for $\phi_{1,2}$-perturbated minimal models of conformal field theory*, Int. J. Mod. Phys.  [**A6**]{} (1991) 1407. [^1]:  Note: the model introduced by Tzitzéica is the $a_2^{(2)}$ member of the affine Toda collection of field theories and is also known as the Bullough-Dodd or Zhiber-Mikhailov-Shabat equation.
--- abstract: 'Extended numerical simulations of threshold models have been performed on a human brain network with $N=836733$ connected nodes available from the Open Connectome project. While in the case of simple threshold models a sharp discontinuous phase transition without any critical dynamics arises, variable threshold models exhibit extended power-law scaling regions. This is attributed to the fact that Griffiths effects, stemming from the topological/interaction heterogeneity of the network, can become relevant if the input sensitivity of nodes is equalized. I have studied the effects of link directness, as well as the consequence of inhibitory connections. Non-universal power-law avalanche size and time distributions have been found with exponents agreeing with the values obtained in electrode experiments of the human brain. The dynamical critical region occurs in an extended control parameter space without the assumption of self-organized criticality.' address: 'P. O. Box 49, H-1525 Budapest, Hungary' author: - Géza Ódor title: Critical dynamics on a large human Open Connectome network --- Introduction ============ Theoretical and experimental research provides many indications that the brain operates in a critical state between sustained activity and an inactive phase [@BP03; @T10; @H10; @R10; @Hai]. Critical systems exhibit optimal computational properties, suggesting why the nervous system would benefit from such a mode [@LM07]. For criticality, certain control parameters need to be tuned, leading to the obvious question why and how this is achieved. This question is well known in statistical physics; the theory of self-organized criticality (SOC) of homogeneous systems has a long history since the pioneering work of [@Bak]. In the case of competing fast and slow processes, SOC systems can self-tune themselves in the neighborhood of a phase transition point [@pruessner]. 
Many simple homogeneous models have been suggested to describe power-laws (PL) and various other critical phenomena, very often without identifying the SOC processes responsible. Alternatively, it has recently been proposed that living systems might also self-tune to criticality as the consequence of evolution and adaptation [@adap]. Real systems, however, are highly inhomogeneous, and one must consider whether the heterogeneity is weak enough for homogeneous models to describe them. Heterogeneity is also called disorder in statistical physics and can lead to rare-region (RR) effects that smear the phase transitions [@Vojta]. RR-s can have various effects depending on their relevancy. They can change a discontinuous transition to a continuous one [@round; @round2], or can generate so-called Griffiths Phases (GP) [@Griffiths], or can completely smear a singular phase transition. In the case of GP-s, critical-like power-law dynamics appears over an extended region around the critical point, causing slowly decaying auto-correlations and burstiness [@burstcikk]. This behavior was proposed to be the reason for the working memory in the brain [@Johnson]. Furthermore, in a GP the susceptibility is infinite for an entire range of control parameters near the critical point, providing a high sensitivity to stimuli, beneficial for information processing. Therefore, studying the effects of heterogeneity is a very important issue in models of real systems, in particular in neuroscience. It has been conjectured that network heterogeneity can cause GP-s if the topological (graph) dimension $D$, defined by $N_r \sim r^D$, where $N_r$ is the number of ($j$) nodes within topological distance $r=d(i,j)$ from an arbitrary origin ($i$), is finite [@Ma2010]. This hypothesis was put forward for the Contact Process (CP) [@harris74], but subsequent studies found numerical evidence for its validity in case of more general spreading models [@BAGPcikk; @wbacikk; @basiscikk]. 
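For a concrete graph, the topological dimension $D$ can be estimated exactly as in the definition: count the nodes $N_r$ within BFS distance $r$ of a source and fit the slope of $\log N_r$ against $\log r$. A minimal sketch (plain Python; in practice one would average over many randomly chosen sources, which is omitted here):

```python
from collections import deque
import math

def ball_sizes(neighbors, source, r_max):
    """N_r = number of nodes within graph distance r of `source` (plain BFS)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        x = queue.popleft()
        if dist[x] == r_max:
            continue
        for y in neighbors(x):
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return [sum(1 for d in dist.values() if d <= r) for r in range(1, r_max + 1)]

def dimension_estimate(N_r):
    """Least-squares slope of log N_r versus log r, i.e. D in N_r ~ r^D."""
    xs = [math.log(r) for r in range(1, len(N_r) + 1)]
    ys = [math.log(n) for n in N_r]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# sanity check on an (effectively infinite) cubic lattice, where D -> 3 as r grows
def cubic(v):
    x, y, z = v
    return [(x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
            (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)]
```

On the cubic lattice the fitted slope for small $r$ lies somewhat below 3 because of lattice corrections, which illustrates why large graphs and large $r$ are needed for a reliable estimate.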
Recently, GP-s have been reported in synthetic brain networks [@MM; @Frus; @HMNcikk] with finite $D$. At first sight this seems to exclude relevant disorder effects in the so-called small-world network models. However, in finite systems PLs are observable in finite time windows for large random sample averages [@Ferr1cikk]. Very recently we studied the topological behavior of large human connectome networks and found that, contrary to their small-world network coefficients, they exhibit a topological dimension slightly above $D=3$ [@CCcikk]. This suggests weak long-range connections in addition to the $D=3$ dimensional embedding ones, and warrants looking for heterogeneity effects in dynamical models defined on these networks. These graphs contain link weight data, so one can study the combined effect of topological and interaction disorder, assuming a quasi-static network. This work provides a numerical analysis based on huge data sets of the Open Connectome project (OCP) [@OCP], obtained by Diffusion Tensor Imaging (DTI) [@DTI] to describe [*structural brain connectivity.*]{} Earlier studies of structural networks were much smaller in size; for example, the one obtained by Sporns and collaborators using diffusion imaging techniques [@29; @30] consists of a highly coarse-grained mapping of anatomical connections in the human brain, comprising $N = 998$ brain areas and the fiber tract densities between them. The graph used here comprises $N=848848$ nodes, allowing one to run extensive dynamical simulations on present-day CPU/GPU clusters, which can provide strong evidence for possible scaling behavior. It is essential to consider large connectomes, based on real experimental data, even if they are coarse grained and suffer from systematic errors and artifacts, because synthetic networks always rely on some subjective assumptions about the topology.
Smaller systems near a singular point of a phase transition, where the correlations may diverge, suffer from finite-size corrections that can hide criticality or rare-region effects. Models and methods ================== Currently, connectomes can be estimated in humans at 1 $mm^3$ scale, using a combination of diffusion weighted magnetic resonance imaging, functional magnetic resonance imaging and structural magnetic resonance imaging scans. The large graph “KKI-18” used here was generated by the MIGRAINE method as described in [@MIG]. Note that OCP graphs are symmetric, weighted networks, obtained by image processing techniques, where the weights measure the number of fiber tracts between nodes. The investigated graph exhibits a single giant component of size $N = 836733$ nodes (out of $N = 848848$) and several small sub-components, which are ignored here to avoid an unrealistic brain network scenario. This graph has $8304786$ undirected edges, but to make it more realistic I first studied a diluted version, in which $20\%$ of the edges were made directed by a random connection removal process. This directionality value lies between the $5/126$ of [@unidir1] and the $33\%$ reported in [@unidir2]. I shall discuss the relevance of this edge asymmetry assumption. Weights between nodes $i$ and $j$ of this graph vary between $1$ and $854$ and the probability density function is shown in Fig. \[pAw\]. Following a sharp drop one can observe a PL region for $20 < w_{ij} < 200$ with a cutoff at large weights. The average weight of the links is $\simeq 5$. Note that the average degree of this graph is $\langle k \rangle=156$ [@CCcikk], while the average of the sum of the incoming weights of nodes is $\langle W_i\rangle = 1 / N \sum_i\sum_j w_{ij} = 448$. ![\[pAw\] Link weight PDF of the KKI-18 OCP graph. Dashed line: a PL fit for intermediate $w_{ij}$-s.
Inset: Survival probability in the $k=6$ threshold model near the transition point for $\lambda=0.003$, $\nu=0.3$,$0.4$,$0.45$,$0.5$,$0.55$,$0.6$,$0.7$ (top to bottom curves).](fig1.eps){height="5.5cm"} A two-state ($x_i = 0 \ {\rm or} \ 1$) dynamical spreading model was used to describe the propagation, branching and annihilation of activity on the network. This threshold model is similar to those of Refs. [@KH; @Hai]. The dynamical process is started by activating a randomly selected node. At each network update every node ($i$) is visited and tested whether the sum of incoming weights ($w_{i,j}$) of its active neighbors exceeds a given threshold value $$\sum_{j} x_j w_{i,j} > K \ .$$ If this condition is satisfied, activation of the node is attempted with probability $\lambda$. Alternatively, an active node is deactivated with probability $\nu$. New states of the nodes are overwritten only after a full network update, i.e. a synchronous update is performed at discrete time steps. The updating process continues as long as there are active sites or up to a maximum time limit $t = 10^5$ Monte Carlo sweeps (MCs). In case the system has fallen into inactivity, the actual time step is stored in order to calculate the survival probability $P(t)$ of the runs. The average activity $\rho(t) = 1/N \sum_{i=1}^N x_i$ and the number of activated nodes during the avalanche, $s = \sum_{i=1}^N \sum_{t=1}^T x_i$, of duration $T$ are calculated at the end of the simulations. This stochastic cellular automaton type of updating is not expected to affect the dynamical scaling behavior [@HMNcikk] and provides a possibility for network-wise parallel algorithms. Measurements on $10^6$ to $10^7$ independent runs, started from randomly selected active initial sites, were averaged at each control parameter value. By varying the control parameters $K$, $\lambda$ and $\nu$, I attempted to find a critical point between an active and an absorbing steady state.
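The update rule above can be sketched in a few lines. This is an illustrative toy implementation, not the simulation code of the study; in particular, the precise ordering of the activation and deactivation branches is an interpretation of the rule:

```python
import random

def sweep(active, w_in, K, lam, nu, rng):
    """One synchronous network update. Interpretation (an assumption, since the
    paper does not fully fix the branch order): a node whose summed active
    incoming weight exceeds K is (re)activated with probability lam; otherwise,
    if it is active, it is deactivated with probability nu. New states
    overwrite the old ones only after the full sweep."""
    nxt = dict(active)
    for i, neigh in w_in.items():
        drive = sum(w for j, w in neigh.items() if active[j])
        if drive > K:
            if rng.random() < lam:
                nxt[i] = 1
        elif active[i] and rng.random() < nu:
            nxt[i] = 0
    return nxt

def avalanche(w_in, K, lam, nu, seed_node, t_max, rng):
    """Run from a single active seed; return (duration, avalanche size s)."""
    active = {i: 0 for i in w_in}
    active[seed_node] = 1
    size, t = 1, 0
    while any(active.values()) and t < t_max:
        active = sweep(active, w_in, K, lam, nu, rng)
        size += sum(active.values())
        t += 1
    return t, size

# Toy graph: a 5-node path with unit weights. With lam=1, nu=0 the run is
# deterministic: activity spreads ballistically and never dies.
path = {i: {j: 1.0 for j in (i - 1, i + 1) if 0 <= j < 5} for i in range(5)}
t, s = avalanche(path, K=0.5, lam=1.0, nu=0.0, seed_node=0, t_max=10,
                 rng=random.Random(0))
```

Collecting $(t, s)$ over many seeded runs yields the survival probability $P(t)$ and the avalanche size distribution $p(s)$ analyzed below.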
At a critical transition point the survival probability is expected to scale asymptotically as $$\label{Pscal} P(t) \propto t^{-\delta} \ ,$$ where $\delta$ is the survival probability exponent [@GrasTor]. This can be related to the avalanche duration scaling: $p(t) \propto t^{-\tau_t}$, via the relation $\tau_t=1+\delta$ [@MAval]. In seed experiments the number of active sites initially grows as $$\label{Nscal} N(t) \propto t^{\eta} \ ,$$ with the exponent $\eta$, related to the avalanche size (total number of active sites during the spreading experiment) distribution $p(s) \propto s^{-\tau}$, via the scaling law $$\label{tau-del} \tau=(1+\eta+2\delta)/(1+\eta+\delta)$$ [@MAval]. To see corrections to scaling I also determined the local slopes of the dynamical exponents $\delta$ and $\eta$ as the discretized, logarithmic derivative of (\[Pscal\]) and (\[Nscal\]). The effective exponent of $\delta$ is measured as $$\label{deff} \delta_\mathrm{eff}(t) = -\frac {\ln P(t) - \ln P(t') } {\ln(t) - \ln(t')} \ ,$$ using $t - t'=8$, and $\eta_\mathrm{eff}(t)$ is defined similarly. This difference selection has been found to be optimal with respect to the trade-off between noise reduction and effective exponent range [@rmp]. As the OCP graph is very inhomogeneous, it appears that for a given set of control parameters only the hub nodes can be activated, and the weakly coupled ones do not play any role. This is rather unrealistic and goes against the local sustained activity requirement for the brain [@KH]. Indeed, there is some evidence that neurons exhibit a certain adaptation to their input excitation levels [@neuroadap], which can be modeled by variable thresholds [@thres]. This adaptation, leading to some homeostasis, can be assumed to be valid on the coarse-grained node level too. To model nodes with variable thresholds of equalized sensitivity, I modified the incoming weights by normalizing them as $w'_{i,j} = w_{i,j}/\sum_{j \in {\rm neighb. of} \ i} w_{i,j}$.
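This normalization is a one-line transformation of the incoming weight lists; a minimal sketch:

```python
def normalize_incoming(w_in):
    """Variable threshold model: rescale each node's incoming weights so that
    they sum to 1, equalizing the input sensitivity of hubs and weak nodes."""
    out = {}
    for i, neigh in w_in.items():
        tot = sum(neigh.values())
        out[i] = {j: w / tot for j, w in neigh.items()}
    return out

# Toy example: node 0 has incoming weights 2 and 6, which become 0.25 and 0.75.
w = {0: {1: 2.0, 2: 6.0}, 1: {0: 2.0}, 2: {0: 6.0}}
wn = normalize_incoming(w)
```

After this rescaling the summed incoming drive of a fully active neighborhood is $1$ for every node, so thresholds $K < 1$ become meaningful for all nodes, not only for hubs.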
Although the update rules are unchanged, I shall call such simulations “variable threshold model” studies. Dynamical simulation results ============================ First I summarize results for the related homogeneous systems. It is well known that in branching and annihilating models with multi-particle ($A$) reactions $m A \to (m+k) A$, $nA \to (n-l)A$ for $m>n$, the phase transition in the high dimensional, mean-field limit is of first order [@tripcikk]. Considering the sum of occupied neighbors as the incoming activation potential, there is a sudden change in the balance of activation/deactivation possibilities as we approach the absorbing phase, since annihilation can occur unconditionally. Therefore, it can be expected that for threshold models with $K > 1$, near and above the upper critical dimension, which is expected to be $d_c\le 4$, we observe discontinuous transitions. First I ran the threshold model on an unweighted $3$-dimensional lattice with $N=10^6$ nodes and periodic boundary conditions. I tested the low threshold cases $K=2,3,6$, the most plausible candidates for the occurrence of PL dynamics. However, for high branching probability ($\lambda=1$), where an efficient neural network should work, an exponentially fast evolution to the inactive state occurred for any $\nu > 0.001$. On the other hand, for $\nu\to 0$ the survival probability remains finite, but the transition is very sharp, in agreement with the results of [@tripcikk], and we cannot find PL dynamics. A similar model with quenched heterogeneity, the CP, has recently been studied on 3-dimensional lattices with dilution disorder in [@3dirft]. Extensive computer simulations gave numerical evidence for nonuniversal power laws typical of a Griffiths phase, as well as activated scaling at criticality.
The disorder generated by the addition of long-range connections has also been found to be relevant in the case of CP and threshold models in one [@Ma2010] and two dimensions [@HMNcikk], provided the probability of connections decays quickly enough, i.e. $p \propto r^{-s}$ with $s \ge 2D$. This means that in the case of $D=3$ systems the graph dimension remains finite and a GP may be expected if $s \ge 6$. However, in a real connectome we cannot investigate these heterogeneities separately. Threshold model --------------- Next, I performed simulations of the threshold model on the KKI-18 graph with $K = 1, 2, 6$, since for larger $K$-s we do not expect criticality. Again, the temporal functions did not show PL behavior. Instead, one can observe an exponentially fast drop of $P(t)$ to zero or to some finite value, depending on the control parameters. A discontinuous transition occurs at very low $\lambda$-s, as shown in the inset of Fig. \[pAw\]. Therefore, the heterogeneity of the OCP graph is not strong enough to round the discontinuous transition observed on the homogeneous lattice, unlike in the case of the models in [@round; @round2]. It appears that hubs, with large $W_i=\sum_j w_{ij}$, determine the behavior of the whole system, while other nodes do not play a role. These hubs keep the system active or inactive, ruling out the occurrence of local RR effects, as in the case of infinite-dimensional networks [@Ferr1cikk]. Variable Threshold Model ------------------------ To test this, I turned to simulations of variable threshold models. The control parameter space was restricted by fixing $\lambda\simeq 1$, which mimics an efficient brain model. Transitions could be found for $K < 0.5$; for higher thresholds the models evolve to the inactive phase for any $\nu$. For the time being I set $K=0.25$. Fig. \[s2wlsW\] suggests a phase transition at $\nu = 0.95$ and $\lambda = 0.88(2)$, above which the $P(t)$ curves evolve to finite constant values.
It is very hard to locate the transition precisely, since the evolution slows down and log-periodic oscillations also appear (see inset of Fig. \[s2wlsW\]). The straight lines on the logarithmic plot of $\delta_\mathrm{eff}$ at $\lambda \simeq 1$ suggest ultra-slow dynamics, as in the case of a strong disorder fixed point [@Vojta]. Indeed, a logarithmic fit at $\lambda = 0.88$ results in $P(t) \simeq \ln(t)^{-3.5(3)}$, which is rather close to the $3$-dimensional strong disorder universal behavior [@3dsdrg; @3dirft]. Simulations started from fully active sites show analogous decay curves for the density of active sites $\rho(t)$, expressing a rapidity reversal symmetry, characteristic of the Directed Percolation (DP) universality class [@rmp], which governs the critical behavior of such models [@Dickmar]. However, for the graph dimension $D\simeq 3.2$ of this network one should see $\delta > 0.73$ in case of DP universality [@rmp], which can be excluded by the present simulations. Below the transition point, for fixed $\nu=0.95$, we can find $P(t)$ decay curves with PL tails, characterized by exponents $0< \delta < 0.5$, as we vary $\lambda$ between $0.845$ and $0.88$. ![\[s2wlsW\] Avalanche survival distribution of the relative threshold model with $K=0.25$, for $\nu=0.95$ and $\lambda=0.8$,$0.81$,$0.82$,$0.83$,$0.835$,$0.84$,$0.845$,$0.85$, $0.86$,$0.87$,$0.9$,$0.95$,$1$ (bottom to top curves). Inset: Local slopes of the same from $\lambda=0.835$ to $\lambda=1$ (top to bottom curves). The Griffiths effect manifests itself by slopes reaching a constant value as $1/t\to 0$.](fig2.eps){height="5.5cm"} In this region the avalanche size distributions also show PL decay (Fig. \[elo-t2wlsw\]), modulated by some oscillations due to the modular network structure, but the exponent of the curves is around $\tau=1.26(2)$, a smaller value than obtained in the brain experiments: $\tau\simeq 1.5$ [@BP03].
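The local-slope analysis shown in the insets follows Eq. (\[deff\]); a minimal sketch, which recovers the decay exponent exactly for a pure power law:

```python
from math import log

def delta_eff(P, t, dt=8):
    """Discretized logarithmic derivative of the survival probability,
    delta_eff(t) = -(ln P(t) - ln P(t')) / (ln t - ln t') with t - t' = dt."""
    tp = t - dt
    return -(log(P(t)) - log(P(tp))) / (log(t) - log(tp))

# For a pure power law P(t) = t^(-0.5) the local slope equals 0.5 at any t.
d = delta_eff(lambda t: t ** -0.5, t=1000)
```

For Griffiths effects one looks for $\delta_\mathrm{eff}(t)$ curves that level off to different constants as $1/t \to 0$ while the control parameter varies.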
Avalanche average shape ----------------------- I have also tested the collapse of averaged avalanche distributions $\Pi(t)$ of fixed temporal sizes $T$, as in [@Fried]. The inset of Fig. \[elo-t2wlsw\] shows a good collapse, obtained for avalanches of temporal sizes $T=25,63,218,404$ and using a vertical scaling $\Pi(t)/T^{0.34}$, which is close to the experimental findings reported in [@Fried]. Note the asymmetric shapes, which are also in agreement with the experiments and could not be reproduced by the model of Ref. [@Fried]. ![\[elo-t2wlsw\] Avalanche size distribution of the relative threshold model with $K=0.25$, for $\nu=1$ and $\lambda=1,0.9,0.8$. Dashed line: PL fit to the $\lambda=0.8$ case. Inset: Avalanche shape collapse for $T=25,63,218,404$ at $\lambda=0.86$ and $\nu = 0.95$](fig3.eps){height="5.5cm"} Undirected links ---------------- To test the robustness of the results, simulations were also run on the fully undirected KKI-18 network at $K=0.25$. Similar PL tails were obtained as before, but for the same control parameters the slopes of the $\ln P(t)$ vs. $\ln t$ curves were larger, meaning that in the symmetric network $\tau_t=1.4 - 1.7$ (see Fig. \[swlsW\]), and the avalanche size distributions also decayed faster, characterized by $1.5 < \tau < 2$. ![\[swlsW\] The same as Fig. \[s2wlsW\] in case of the undirected graph. Inset: Local slopes of the curves.](fig4.eps){height="5.5cm"} Inhibitory connections ---------------------- In real brain networks inhibitory connections also occur. To model this, I changed the sign of a certain portion of the weights randomly at the beginning of the simulations, i.e. $w'_{i,j} \to -w'_{i,j}$. This produces further heterogeneity, and thus stronger RR effects. Figure \[s2wlsWi3\] shows the survival probabilities when $30\%$ of the links are turned inhibitory, for $K=0.1$ and $\lambda=0.95$.
The critical point, above which $P(t)$ signals persistent activity, is around $\nu=0.57$; it is very hard to locate precisely, since the evolution slows down and exhibits strong (oscillating) corrections. Below the transition point the survival exponent changes continuously in the range $0 < \delta < 0.5$ in response to the variation of $\nu$ between $0.5$ and $0.57$ (inset of Fig. \[s2wlsWi3\]). ![\[s2wlsWi3\] Avalanche survival distribution of the relative threshold model with $30\%$ inhibitory links at $K=0.1$, for $\lambda=0.95$ and $\nu=0.4$,$0.45$,$0.49$,$0.5$,$0.51$,$0.52$,$0.55$,$0.57$,$0.7$ (bottom to top curves). Inset: Local slopes of the same curves in opposite order.](fig5.eps){height="5.5cm"} The corresponding avalanche size distributions (Fig. \[elo-t2wlsWi3\]) exhibit PL tails with the exponent $\tau\simeq 1.5$, close to the experimental value for the brain [@BP03]. A slight change of $\tau$ can also be observed by varying the control parameter below the critical point. This variation can be seen even better in the exponent $\eta$, related to $\tau$ via Eq. (\[tau-del\]) (inset of Fig. \[elo-t2wlsWi3\]), suggesting Griffiths effects. ![\[elo-t2wlsWi3\] Avalanche size distribution of the relative threshold model with $30\%$ inhibitory links at $K=0.1$, $\nu=0.95$ and $\lambda=0.49,0.5,0.55$. Dashed lines: PL fits. Inset: Effective $\eta$ exponent for $\nu=0.95$ and $\lambda=0.49,0.5,0.51,0.51$,$0.55$ (bottom to top curves).](fig6.eps){height="5.5cm"} For $20\%$ inhibitory links the same $\tau$-s were obtained, while $10\%$ inhibition resulted in $\tau\simeq 1.3$ near the critical point. For higher threshold values ($K=0.2,0.25$) the critical point shifts to smaller $\nu$ parameters, but Griffiths effects are still visible. However, the avalanche size distributions exhibit faster decay, characterized by $\tau\simeq 1.7 - 2$.
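Turning a fraction of the links inhibitory amounts to flipping the sign of randomly chosen weights. A sketch follows; the exact selection procedure used in the study is not specified, so flipping exactly a fixed number of links with a seeded generator is an assumption made here:

```python
import random

def add_inhibition(edges, fraction, rng):
    """Flip the sign of a randomly chosen `fraction` of the link weights,
    turning those links inhibitory (w -> -w)."""
    k = round(fraction * len(edges))
    flip = set(rng.sample(range(len(edges)), k))
    return [(i, j, -w if idx in flip else w)
            for idx, (i, j, w) in enumerate(edges)]

# Toy edge list: a 100-link chain with unit weights, 30% made inhibitory.
rng = random.Random(1)
edges = [(i, i + 1, 1.0) for i in range(100)]
inhib = add_inhibition(edges, 0.30, rng)
```

The flipped weights then enter the threshold condition with a negative sign, so active inhibitory neighbors reduce the summed incoming drive of a node.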
Discussion and Conclusions ========================== Neural variability makes the brain more efficient [@Orb16]; therefore, one must consider its effect in modelling. To study this, large-scale dynamical simulations have been performed on a human connectome model. The heterogeneities of an OCP graph are too weak to change the dynamical behavior of the threshold model of a homogeneous 3D lattice. This seems to be true for other spreading models, like the Contact Process [@Funp]. In relative threshold models, defined by normalizing the incoming weights, Griffiths effects have been found in extended control parameter spaces. The inclusion of $20\%$ edge directedness does not affect the results qualitatively, reflecting insensitivity to some potential artifacts of DTI, like polarity detection. Random removal of connections emulates the effect of (unknown) noise in the data generation, and since the majority of edges are short, this procedure results in a relative enhancement of long connections, which are known to be underestimated by DTI [@Tractrev]. Scaling exponents on undirected OCPs vary in the ranges $1.4 < \tau_t < 1.7$ and $1.5 < \tau < 2$, close to neural experimental values [@BP03]. The effects of other, less relevant systematic errors and artifacts have not been investigated here. Radial accuracy affects, for example, the end points of the tracts, and thus influences the hierarchical levels of cortical organization [@Hilg2000]. The present OCP exhibits hierarchical levels by construction from the Desikan regions, with (at least) two quite different scales. Preliminary studies [@Funp] suggest that RR effects are enhanced by modularity, but not much by the hierarchy. Transverse accuracy determines which cortical area is connected to which other. However, achieving fine-grained transverse accuracy is difficult for DTI, not only because of limited spatial resolution, but also because present measures are noisy and indirect.
We may expect that the lack of precise connection pathways is not relevant for Griffiths effects as long as, for example, it does not affect the graph dimension. The introduction of $20-30\%$ of inhibitory links, selected randomly, results in Griffiths effects with avalanche size and time exponents that scatter around the experimental figures. The exponents depend slightly on the control parameters as a consequence of RR effects. Strong and oscillating corrections to scaling are also observable as a result of the modular structure of the connectome. As an earlier study [@CCcikk] showed a certain level of universality in the topological features (degree distributions, graph dimensions, clustering and small-world coefficients) of the OCP graphs, one can expect the same dynamical behavior and Griffiths effects of these models on OCP graphs in general. This expectation is further supported by the robustness of the results under random changes of the network details: inhibitory links, directedness, or loss of connections up to $20\%$. Therefore, one can safely take it for granted that the investigated connectome describes well similar ones currently available, and can be considered a useful prototype for numerical analysis. It is important to note that while some rough tuning of the control parameters might be necessary to get closer to the critical point, one can see dynamical criticality even [*below a phase transition point*]{} without external activation, which is a safe expectation for brain systems [@Pris]. Recent experiments suggest slightly sub-critical brain states in vivo, devoid of dangerous over-activity linked to epilepsy. One may debate the assumption of relative thresholds, which was found to be necessary to see slow dynamics. This introduces disassortativity, enhancing RR effects [@wbacikk], besides the modularity [@HMNcikk].
However, inhibitory links increase the heterogeneity so drastically that a full equalization of the internal sensitivity may not be an obligatory condition for finding Griffiths effects. This will be the target of further studies. Probably the most important result of this study is that negative weights enable local sustained activity and promote strong RR effects without network fragmentation. Thus connectomes with high graph dimensions can be subject to RR effects and can show measurable Griffiths effects. Another important observation is that PL-s may occur in a single network, without sample averaging, due to the modular topological structure. The codes and the graph used here are available on request from the author. Acknowledgments {#acknowledgments .unnumbered} =============== I thank C.C. Hilgetag and R. Juhász for useful discussions, and M. A. Muñoz and M. T. Gastner for comments. Support from the Hungarian research fund OTKA (K109577) is acknowledged. [50]{} J. Beggs and D. Plenz, [*Neuronal avalanches in neocortical circuits*]{}, J. Neurosci., [**23**]{}, (2003) 11167. C. Tetzlaff et al., [*Self-Organized Criticality in Developing Neuronal Networks*]{}, PLoS Comput. Biol. [**6**]{}, (2010) e1001013. G. Hahn et al., [*Neuronal avalanches in spontaneous activity in vivo*]{}, J. Neurophysiol. [**104**]{}, (2010) 3312. T. L. Ribeiro et al., [*Spike Avalanches Exhibit Universal Dynamics across the Sleep-Wake Cycle*]{}, PLoS ONE [**5**]{}, (2010), e14129 A. Haimovici, E. Tagliazucchi, P. Balenzuela, and D. R. Chialvo, Phys. Rev. Lett. [**110**]{}, 178101 (2013). for a review see R. Legenstein and W. Maass, [*New Directions in Statistical Signal Processing: From Systems to Brain*]{}, eds S. Haykin, J. C. Principe, T. Sejnowski, J. McWhirter (Cambridge, MIT Press), 127–154. P. Bak, C. Tang and K. Wiesenfeld, Phys. Rev. A [**38**]{}, (1988) 364. G. Pruessner, [*Self Organized Criticality*]{}, Cambridge University Press, Cambridge 2012. J.
Hidalgo et al., PNAS [**111**]{}, 10095-10100 (2014). T. Vojta, [*Rare region effects at classical, quantum and nonequilibrium phase transitions*]{}, J. Physics A: Math. and Gen. [**39**]{}, R143 (2006). P. M. Villa Martin, J. A. Bonachela and M.A. Muñoz, [*Quenched disorder forbids discontinuous transitions in nonequilibrium low-dimensional systems*]{}, Phys. Rev. E [**89**]{}, (2014) 012145. P. M. Villa Martin, M. Moretti and M.A. Muñoz, [*Rounding of abrupt phase transitions in brain networks*]{}, J. Stat. Mech. (2015) P01003. R. B. Griffiths, [*Nonanalytic Behavior Above the Critical Point in a Random Ising Ferromagnet*]{}, Phys. Rev. Lett. [**23**]{}, 17 (1969). G. Ódor, [*Slow, bursty dynamics as a consequence of quenched network topologies*]{}, Phys. Rev. E [**89**]{}, 042102 (2014) S. Johnson, J. J. Torres, and J. Marro, [*Robust Short-Term Memory without Synaptic Learning*]{}, PLoS ONE [**8(1)**]{}: e50276 (2013) M. A. Muñoz, R. Juhász, C. Castellano and G. Ódor, [*Griffiths phases on complex networks*]{}, Phys. Rev. Lett. [**105**]{} (2010) 128701. Harris T. E., Contact Interactions on a Lattice, [*Ann. Prob.* ]{} [**2,**]{} 969-988 (1974). G. Ódor and R. Pastor-Satorras, [*Slow dynamics and rare-region effects in the contact process on weighted tree networks*]{}, Phys. Rev. E [**86**]{}, (2012) 026117. G. Ódor, [*Rare regions of the susceptible-infected-susceptible model on Barabási-Albert networks*]{}, Phys. Rev. E [**87**]{}, (2013) 042132. G. Ódor, [*Spectral analysis and slow spreading dynamics on complex networks*]{}, Phys. Rev. E [**88**]{}, (2013) 032109. P. Moretti, M. A. Muñoz, [*Griffiths phases and the stretching of criticality in brain networks*]{}, Nature Communications [**4**]{}, (2013) 2521. P. Villegas, P. Moretti and Miguel A. Muñoz, Scientific Reports [**4**]{}, 5990 (2014) G. Ódor, R. Dickman and G. Ódor, [*Griffiths phases and localization in hierarchical modular networks*]{}, Sci. Rep. [**5**]{}, (2015) 14451. W. Cota, S. C.
Ferreira, G. Ódor, [*Griffiths effects of the susceptible-infected-susceptible epidemic model on random power-law networks*]{}, Phys. Rev. E [**93**]{}, 032322 (2016). M. T. Gastner and G. Ódor, [*The topology of large Open Connectome networks for the human brain*]{}, Sci. Rep. [**6**]{}, (2016) 27249. See: http://www.openconnectomeproject.org Bennett A. Landman et al, NeuroImage [**54**]{} (2011) 2854–2866. P. Hagmann, et al. [*Mapping the Structural Core of Human Cerebral Cortex.*]{} PLoS Biol. [**6**]{}, e159 (2008). C. J. Honey, et al. [*Predicting human resting-state functional connectivity from structural connectivity*]{}, Proc. Natl. Acad. Sci. [**106**]{}, 2035–2040 (2009). W. G. Roncal et al., [*MIGRAINE: MRI Graph Reliability Analysis and Inference for Connectomics*]{}, preprint: arXiv:1312.4875. D. J. Felleman and D. C. Van Essen, [*Distributed hierarchical processing in the primate cerebral cortex*]{}, Cerebr. Cortex, [**1**]{}, (1991), 1–47. N. T. Markov et al., Cerebr. Cortex, [**24**]{}, (2012), 17-36. M. Kaiser and C. C. Hilgetag, [*Optimal hierarchical modular topologies for producing limited sustained activation of neural networks*]{}, Front. in Neuroinf., [**4**]{} (2010) 8. R. Azouz and C. M. Gray, PNAS [**97**]{}, 8110 (2000). M.-T. Hütt, M. K. Jain, C. C. Hilgetag, A. Lesne, Chaos, Solitons & Fractals, [**45**]{}, 611 (2012). P. Grassberger and A. de la Torre, Ann. Phys. [**122**]{}, 373 (1979). M. A. Muñoz, R. Dickman, A. Vespignani and S. Zapperi, Phys. Rev. E [**59**]{}, 6175 (1999). G. Ódor, Phys. Rev. E [**67**]{}, 056114 (2003). I. A. Kovács, F. Iglói, Phys. Rev. B [**83**]{}, 174207 (2011). T. Vojta, Phys. Rev. E [**86**]{}, 051137 (2012). G. Ódor, Rev. Mod. Phys. [**76**]{}, 663 (2004). J. Marro and R. Dickman, Cambridge University Press, Cambridge, 1999. N. Friedman, et al, Phys. Rev. Lett. [**108**]{}, 208102 (2012). G. Orbán, P. Berkes, J. Fiser and M.
Lengyel, [*Neural Variability and Sampling-Based Probabilistic Representations in the Visual Cortex*]{}, Neuron [**92**]{} (2016) 530–543. G. Ódor, to be published. Saad Jbabdi and Heidi Johansen-Berg, Brain Connectivity [**1**]{}, 169 (2011). C.C. Hilgetag, M.A. O’Neill and M.P. Young, [*Hierarchical organization of macaque and cat cortical sensory systems explored with a novel network processor*]{}, Philos Trans R Soc Lond B Biol Sci [**355**]{}, (2000) 71–89. V. Priesemann et al, Front. in Sys. NeuroSci. [**8**]{} (2014) 108.
--- abstract: 'Given an infinitesimal perturbation of a discrete-time finite Markov chain, we seek the states that are stable despite the perturbation, *i.e.* the states whose weights in the stationary distributions can be bounded away from $0$ as the noise fades away. Chemists, economists, and computer scientists have been studying irreducible perturbations built with exponential maps. Under these assumptions, Young proved the existence of and computed the stable states in cubic time. We fully drop these assumptions, generalize Young’s technique, and show that stability is decidable as long as $f\in O(g)$ is. Furthermore, if the perturbation maps (and their multiplications) satisfy $f\in O(g)$ or $g\in O(f)$, we prove the existence of and compute the stable states and the metastable dynamics at all time scales where some states vanish. Conversely, if the big-$O$ assumption does not hold, we build a perturbation with these maps and no stable state. Our algorithm also runs in cubic time despite the general assumptions and the additional work. Proving the correctness of the algorithm relies on new or rephrased results in Markov chain theory, and on algebraic abstractions thereof.' bibliography: - 'article.bib' title: Stable states of Perturbed Markov Chains --- evolution, learning, metastability, tropical algebra, shortest path, SCC, cubic time algorithm Introduction {#sect:intro} ============ Motivated by the dynamics of chemical reactions, Eyring [@Eyring35] and Kramers [@Kramers40] studied how infinitesimal perturbations of a Markov chain affect its stationary distributions. This topic has been further investigated by several academic communities, including probability theorists, economists, and computer scientists. In several fields of application, such as learning and game theory, it is sometimes unnecessary to describe the exact values of the limit stationary distributions: it suffices to know whether these values are zero or not.
Thus, the *stochastically stable states* ([@FY90], [@KMR93], [@Young93]) were defined in different contexts as the states that have positive probability in the limit. We rephrase a definition below. \[defn:mcp-ss\] Let $I$ be a subset of positive real numbers with $0$ as a limit point for the usual topology[^1]. A perturbation is a family $((X_n^{({\epsilon})})_{n\in{\mathbb{N}}})_{{\epsilon}\in I}$ of discrete-time Markov chains sharing the same finite state space. If the chain $(X_n^{({\epsilon})})_{n\in{\mathbb{N}}}$ is irreducible for all ${\epsilon}\in I$, then $((X_n^{({\epsilon})})_{n\in{\mathbb{N}}})_{{\epsilon}\in I}$ is said to be an irreducible perturbation. A state $x$ of $((X_n^{({\epsilon})})_{n\in{\mathbb{N}}})_{{\epsilon}\in I}$ is stochastically stable if there exists a family of corresponding stationary distributions $(\mu_{\epsilon})_{{\epsilon}\in I}$ such that $\liminf_{{\epsilon}\to 0} \mu_{{\epsilon}}(x) > 0$. It is stochastically fully vanishing if $\limsup_{{\epsilon}\to 0} \mu_{{\epsilon}}(x) = 0$ for all $(\mu_{\epsilon})_{{\epsilon}\in I}$. Non-stable states are called vanishing. Definition \[defn:mcp-ss\] may be motivated in at least two ways. First, a dynamical system (*e.g.* modeled by a Markov chain) has been perturbed from the outside, and the laws governing the system (*e.g.* the transition probability matrix) have been changed. As time elapses (*i.e.* as ${\epsilon}$ approaches zero), the laws slowly go back to normal. What are the almost sure states of the system after infinite time? Second, a very complex Markov chain is the sum of a simple chain and a complex perturbation matrix that is described *via* a small, fixed ${\epsilon}_0$. The stationary distributions of the complex chain are hard to compute, but which states have significantly positive probability after infinite time? Our main result below answers these questions.
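As a toy illustration of the definition (not an example from the paper), consider a two-state perturbation with $p_{12}({\epsilon}) = {\epsilon}$ and $p_{21}({\epsilon}) = {\epsilon}^2$. The stationary distribution of an irreducible two-state chain is $\mu = (p_{21}, p_{12})/(p_{12}+p_{21})$, which here tends to $(0,1)$ as ${\epsilon}\to 0$: state $2$ is stochastically stable and state $1$ vanishes.

```python
def stationary_2state(p12, p21):
    """Unique stationary distribution of an irreducible 2-state chain with
    off-diagonal transition probabilities p12 and p21:
    mu = (p21, p12) / (p12 + p21)."""
    z = p12 + p21
    return (p21 / z, p12 / z)

# Perturbation p12(eps) = eps, p21(eps) = eps^2: mu_eps -> (0, 1) as eps -> 0.
mus = [stationary_2state(eps, eps ** 2) for eps in (1e-1, 1e-3, 1e-6)]
```

Note that both transition probabilities vanish in the limit, yet the ratio $p_{21}/p_{12} = {\epsilon}$ still singles out the stable state; this is the kind of asymptotic comparison the big-$O$ assumptions of the paper formalize.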
\[thm:teaser\] Consider a perturbation such that $f \in O(g)$ or $g \in O(f)$ for all $f$ and $g$ in the multiplicative closure of the transition probability functions ${\epsilon}\mapsto p_{\epsilon}(x,y)$ with $x \neq y$. Then the perturbation has stable states, and stability can be decided in $O(n^3)$, where $n$ is the number of states. Note that by finiteness of the state space it is easy to prove that every perturbation has a state that is not fully vanishing. Related works and comparisons ----------------------------- In 1990 Foster and Young [@FY90] defined the stochastically stable states of a general (continuous) evolutionary process, as an alternative to the evolutionary stable strategies [@MP73]. Stochastically stable states were soon adapted by Kandori, Mailath, and Rob [@KMR93] for evolutionary game theory with $2\times 2$ games. Then Young [@Young93 Theorem 4] proved “a finite version of results obtained by Freidlin and Wentzel” in [@FW98]. Namely, he characterized the stochastically stable states if the perturbation satisfies the following assumptions: 1) the perturbed matrices $P^{{\epsilon}}$ are aperiodic and irreducible; 2) the $P^{\epsilon}$ converge to the unperturbed matrix $P^0$ when ${\epsilon}$ approaches zero; 3) every transition probability is a function of ${\epsilon}$ that is equivalent to $c \cdot {\epsilon}^{\alpha}$ for some non-negative real numbers $c$ and $\alpha$. The main tool in Young’s proof was proved by Kohler and Vollmerhaus [@KV80] and is the special case for irreducible chains of the Markov chain tree theorem (see [@LR83] or [@FW98]). Young’s characterization involves minimum directed spanning trees, which can be computed in $O(n^2)$ [@GGST86] for graphs with $n$ vertices. Since there are at most $n$ roots for directed spanning trees in a graph with $n$ vertices, Young can compute the stable states in $O(n^3)$. 
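Young's main tool, the Markov chain tree theorem for irreducible chains, states that $\mu(r)$ is proportional to the total weight of directed spanning trees rooted at $r$, where the weight of a tree is the product of its transition probabilities. A brute-force sketch on a $3$-state chain, checked against power iteration (illustrative only; this enumeration is exponential in $n$, unlike the efficient arborescence algorithms cited above):

```python
from itertools import product

def tree_stationary(P):
    """Markov chain tree theorem: pi(r) is proportional to the total weight of
    directed spanning trees rooted at r, i.e. maps f giving every non-root node
    one out-edge such that following f always reaches r."""
    n = len(P)
    score = [0.0] * n
    for r in range(n):
        others = [v for v in range(n) if v != r]
        for choice in product(range(n), repeat=len(others)):
            f = dict(zip(others, choice))
            if any(v == f[v] for v in others):
                continue  # no self-loops in a spanning tree
            ok = True
            for v in others:           # every non-root node must reach r
                seen, u = set(), v
                while u != r:
                    if u in seen:      # cycle not containing r: not a tree
                        ok = False
                        break
                    seen.add(u)
                    u = f[u]
                if not ok:
                    break
            if ok:
                w = 1.0
                for v in others:
                    w *= P[v][f[v]]
                score[r] += w
    z = sum(score)
    return [s / z for s in score]

def power_stationary(P, steps=2000):
    """Reference stationary distribution via power iteration."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return pi

P = [[0.5, 0.3, 0.2], [0.1, 0.6, 0.3], [0.4, 0.4, 0.2]]
pi_tree = tree_stationary(P)
pi_pow = power_stationary(P)
```

The two computations agree; in the perturbed setting one applies the tree formula to the ${\epsilon}$-dependent probabilities, which is why minimum spanning trees govern which states survive the ${\epsilon}\to 0$ limit.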
In 2000, Ellison [@Ellison00] characterized the stable states *via* the alternative notion of the radius of a basin of attraction. The major drawback of his characterization compared to Young’s is that it is “not universally applicable” [@Ellison00]; the advantages are that it provides “a bound on the convergence rate as well as a long-run limit” and “intuition for why the long-run stochastically stable set of a model is what it is”. In 2005, Wicks and Greenwald [@WG05] designed an algorithm to express the exact values of the limit stationary distribution of a perturbation, which, as a byproduct, also computes the set of the stable states. Like [@Young93] they consider perturbations that are related to the functions ${\epsilon}\mapsto {\epsilon}^{\alpha}$, but they only require that the functions converge exponentially fast. Also, instead of requiring that the $P^{\epsilon}$ be irreducible for ${\epsilon}> 0$, they only require that they have exactly one essential class. They do not analyze the complexity of their algorithm, but it may well be polynomial time. We improve upon [@Young93], [@Ellison00], and [@WG05] in several ways. 1. \[improve1\] The perturbation maps in the literature relate to the maps ${\epsilon}\mapsto {\epsilon}^{\alpha}$. Their specific form and their continuity, especially at $0$, are used in the existing proofs. Theorem \[thm:teaser\] dramatically relaxes this assumption. Continuity, even at $0$, is irrelevant, which allows for aggressive, *i.e.*, non-continuous perturbations. We show that our assumption is (almost) unavoidable. 2. The perturbations in the literature are irreducible (but [@WG05] slightly weakened this assumption). This irreducibility assumption is general enough for perturbations relating to the maps ${\epsilon}\mapsto {\epsilon}^\alpha$, since it suffices to process each sink (aka bottom) irreducible component independently and to gather the results. 
Although this trick does not work for general perturbation maps, Theorem \[thm:teaser\] manages not to assume irreducibility. 3. The perturbation is abstracted into a weighted graph and shrunk by combining recursively a shortest-path algorithm (w.r.t. some tropical-like semiring) and a strongly-connected-component algorithm. Using tropical-like algebra to abstract over Markov chains has already been done before, but not to solve the stable state problem. ([@GKMS15] did it to prove an algebraic version of the Markov chain tree theorem.) 4. Our algorithm computes the stable states in $O(n^3)$, as in [@Young93], which is the best known complexity. In addition, the computation itself is a summary of the asymptotic behavior of the perturbation: it says at which time scales the vanishing states vanish, and the intermediate graph obtained at each recursive stage of the algorithm accounts for the metastable dynamics of the perturbation at this vanishing time scale. Section \[sect:nota\] sets some notations; Section \[sect:tga\] analyses which assumptions are relevant for the existence of stable states; Section \[sect:pp-ess\] proves the existential part of Theorem \[thm:teaser\], *i.e.* it develops the probabilistic machinery to prove the existence of stable states; hinging on this, Section \[sect:aqa\] proves the algorithmic part of Theorem \[thm:teaser\], *i.e.* it abstracts the relevant objects using a new algebraic structure, presents the algorithm, and proves its correctness and complexity; Section \[sect:disc\] discusses two important special cases and an induction proof principle related to the termination of our algorithm. Notations {#sect:nota} --------- - The set $\mathbb{N}$ of the natural numbers contains $0$. For a set $S$ and $n\in\mathbb{N}$, let $S^n$ be the words $\gamma$ over $S$ of length $|\gamma| = n$. Let $S^* := \cup_{n\in\mathbb{N}}S^n$ be the finite words over $S$. The set-theoretical notation $\cup E := \cup_{x\in E}x$ is used in some occasions. 
- Let $(X_n)_{n\in{\mathbb{N}}}$ be a Markov chain with state space $S$. For all $A\subseteq S$ let $\tau_A := \inf \{n \geq 0: X_n \in A\}$ ($\tau^+_A := \inf \{n > 0: X_n \in A\}$) be the first time (first positive time) that the chain hits a state inside $A$. Usually $\tau_{\{x\}}$ and $\tau^+_{\{x\}}$ are written $\tau_x$ and $\tau^+_x$, respectively. - Given a Markov chain $(X_n)_{n\in{\mathbb{N}}}$, the corresponding matrix representation, law of the chain when started at state $x$, expectation when started at state $x$, and possible stationary distributions are respectively denoted $p$, ${\mathbb{P}}^x$, $\mathbb{E}^x$, and $\mu$. When considering other Markov chains $(\tilde{X}_n)_{n\in{\mathbb{N}}}$ or $(\widehat{X}_n)_{n\in{\mathbb{N}}}$, the derived notions are denoted with tilde or circumflex, as in $\tilde{p}$ or $\widehat{\mu}$. - A perturbation $((X_n^{({\epsilon})})_{n\in{\mathbb{N}}})_{{\epsilon}\in I}$ will often be denoted $X$ for short, and when it is clear from the context that we refer to a perturbation, $p$ will denote the function $({\epsilon},x,y)\mapsto p_{\epsilon}(x,y)$ (instead of $(x,y)\mapsto p(x,y)$), and $p(x,y)$ will denote ${\epsilon}\mapsto p_{\epsilon}(x,y)$ (instead of a mere real number). The other derived notions are treated likewise. - The probability of a path is defined inductively by $p(xy) := p(x,y)$ and $p(xy\gamma) := p(x,y)p(y\gamma)$ for all $x,y\in S$ and $\gamma\in S\times S^*$. - Given $x$, $y$, and a set $A$, a simple $A$-path from $x$ to $y$ is a repetition-free (unless $x = y$) word $\gamma$ starting with $x$ and ending with $y$, and using beside $x$ and $y$ only elements in $A$. 
Formally, $\Gamma_A(x,y) := \{\gamma\in \{x\}\times A^* \times\{y\} \,|\, (1 \leq i < j \leq |\gamma| \wedge \gamma_i = \gamma_j) \Rightarrow (i = 1 \wedge j = |\gamma|)\}.$ Towards general assumptions {#sect:tga} --------------------------- A state $x$ of a perturbation is stable if there exists a related family $(\mu_{\epsilon})_{{\epsilon}\in I}$ of stationary distributions such that $1 \in O(\mu(x))$, *i.e.*, such that the weights $\mu_{\epsilon}(x)$ are bounded away from $0$ near $0$; but even continuous perturbations that converge when ${\epsilon}$ approaches $0$ may fail to have stable states. For instance let $S := \{x,y\}$ and for all ${\epsilon}\in]0,1]$ let $p_{\epsilon}(x,y) := {\epsilon}^2$ and $p_{\epsilon}(y,x) := {\epsilon}^{2+\cos({\epsilon}^{-1})}$ as in Figure \[fig:no-stable1\], where the self-loops are omitted. In the unique stationary distribution $x$ has the weight $\mu_{\epsilon}(x) = (1+{\epsilon}^{-\cos({\epsilon}^{-1})})^{-1}$. Since $\mu_{((2n+1)\pi)^{-1}}(x) = \frac{(2n+1)\pi}{1 + (2n+1)\pi} \to_{n \to \infty} 1$ and $\mu_{(2n\pi)^{-1}}(x) = \frac{1}{1 + 2n\pi} \to_{n \to \infty} 0$, neither $x$ nor $y$ is stable. As mentioned above, the perturbations in the literature are related to the functions ${\epsilon}\mapsto {\epsilon}^\alpha$ with $\alpha \geq 0$, which rules out the example from Figure \[fig:no-stable1\] and implies the existence of a stable state [@Young93]. Here, however, we want to assume as little as possible about the perturbations, while still guaranteeing the existence of stable states. Towards this, let us first rephrase the big $O$ notation as a binary relation. It is well-known that big $O$ enjoys various algebraic properties. The ones we need are mentioned in the appendix. 
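This oscillation can be checked numerically; a quick sketch of the two-state example above in plain Python, evaluating $\mu_{\epsilon}(x)$ along the subsequences where $\cos({\epsilon}^{-1}) = 1$ and $\cos({\epsilon}^{-1}) = -1$:

```python
import math

def mu_x(eps):
    # stationary weight of x for p(x,y) = eps^2, p(y,x) = eps^(2+cos(1/eps))
    return 1.0 / (1.0 + eps ** (-math.cos(1.0 / eps)))

# mu_x along two subsequences of epsilons converging to 0
lo = [mu_x(1.0 / (2 * n * math.pi)) for n in range(1, 200)]        # cos = 1
hi = [mu_x(1.0 / ((2 * n + 1) * math.pi)) for n in range(1, 200)]  # cos = -1
print(max(lo), min(hi))  # the first stays small, the second stays close to 1
```

So $\liminf_{{\epsilon}\to 0} \mu_{\epsilon}(x) = 0$ while $\limsup_{{\epsilon}\to 0} \mu_{\epsilon}(x) = 1$, and symmetrically for $y$.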
*(Figure \[fig:no-stable1\]: states $x,y$ with $p(x,y) = {\epsilon}^2$ and $p(y,x) = {\epsilon}^{2+\cos({\epsilon}^{-1})}$; self-loops omitted.)*

*(Figure \[fig:no-stable2\]: states $x,y,z$ with $p(x,y) = {\epsilon}^6$, $p(y,z) = {\epsilon}^{2+\cos({\epsilon}^{-1})}$, $p(z,x) = {\epsilon}^4$, and $p(z,y) = 1 - {\epsilon}^4$.)*

*(Figure \[fig:no-stable3\]: states $x,y,z$ with $p(x,y) = \frac{1+\cos ({\epsilon}^{-1})}{2}$, $p(y,x) = {\epsilon}$, and $p(z,y) = \frac{1}{2}$.)*

*(Figure \[fig:no-stable4\]: a cycle $x_1,\dots,x_n,y_1,\dots,y_m$ where each $x_i$ advances with probability $f_i$ ($x_n$ into $y_1$) and falls back to $x_1$ with probability $1-f_i$, and symmetrically each $y_j$ advances with probability $g_j$ ($y_m$ into $x_1$) and falls back to $y_1$ with probability $1-g_j$.)*

*(Figure \[fig:stable1\]: states $x,y,z$ with $p(x,y) = {\epsilon}$, $p(y,x) = {\epsilon}^2$, and $p(z,y) = \frac{1-{\epsilon}}{3}$.)*

*(Figure \[fig:stable2\]: states $x,y$ with $p(x,y) = {\epsilon}(2-\cos({\epsilon}^{-1}))$ and $p(y,x) = {\epsilon}(2+\cos({\epsilon}^{-1}))$.)*

*(Figure \[fig:stable3\]: states $x,y,z$ with $p(x,y) = p(y,x) = \frac{1+\cos ({\epsilon}^{-1})}{2}$ and $p(z,y) = 1$.)*

\[def:cong\] For $f,g: I\to[0,1]$, let us write $f \precsim g$ if there exist positive $b$ and ${\epsilon}$ such that $f({\epsilon}') \leq b \cdot g({\epsilon}')$ for all ${\epsilon}' < {\epsilon}$; let $f \cong g$ stand for $f \precsim g \,\wedge\, g \precsim f$. Requiring that every two transition probability maps $f$ and $g$ occurring in the perturbation satisfy $f \precsim g$ or $g \precsim f$ rules out the example from Figure \[fig:no-stable1\], but not the one from Figure \[fig:no-stable2\]. There $\mu_{\epsilon}(z) \leq \mu_{\epsilon}(x) = \frac{{\epsilon}^{\cos({\epsilon}^{-1})}}{1 + {\epsilon}^{\cos({\epsilon}^{-1})} (1 + {\epsilon}^2)}$ and $\mu_{\epsilon}(y) = \frac{1}{1 + {\epsilon}^{\cos({\epsilon}^{-1})} (1 + {\epsilon}^2)}$. Since $\mu_{\epsilon}(z) \to_{{\epsilon}\to 0}0$, $\mu_{(2n\pi)^{-1}}(x) \to_{n\to \infty}0$, and $\mu_{((2n+1)\pi)^{-1}}(y) \to_{n\to \infty}0$, no state is stable. Informally, $z$ is not stable because it gives everything away but receives at most ${\epsilon}$; neither $x$ nor $y$ is stable since their interaction resembles Figure \[fig:no-stable1\] due to ${\epsilon}^6$ and ${\epsilon}^4 \cdot {\epsilon}^{2+\cos({\epsilon}^{-1})}$. This remark is generalized in Observation \[obs:unstable-constr\] below. \[obs:unstable-constr\] For $1 \leq i \leq n$ and $1 \leq j \leq m$ let $f_i,g_j : I \to [0,1]$ be such that $\prod_i f_i$ and $\prod_j g_j$ are not $\precsim$-comparable. 
Then there exists a perturbation without stable states that is built only with the $f_1,\dots,f_n,g_1,\dots,g_m$ and the $1-f_1,\dots,1-f_n,1-g_1,\dots,1-g_m$. See Figure \[fig:no-stable4\]. Observation \[obs:unstable-constr\] motivates the following “unavoidable” assumption. \[Assum1\] The multiplicative closure of the maps ${\epsilon}\mapsto p_{\epsilon}(x,y)$ with $x \neq y$ is totally preordered by $\precsim$. For example, the classical maps ${\epsilon}\mapsto c\cdot {\epsilon}^{\alpha}$ with $c > 0$ and $\alpha\in\mathbb{R}$ constitute a multiplicative group totally preordered by $\precsim$. One reason why we can afford such a weak Assumption \[Assum1\] is that we are not interested in the exact weights of some putative limit stationary distribution, but only whether the weights are bounded away from zero. Let us show the significance of Assumption \[Assum1\], which is satisfied by the perturbations in Figure \[fig:stable\] and \[fig:trans-del5\]: Young’s result shows that $y$ is the unique stable state of the perturbation in Figure \[fig:stable1\], but it cannot say anything about Figures \[fig:stable2\], \[fig:stable3\], and \[fig:trans-del5\]: Figure \[fig:stable2\] is not regular, *i.e.*, $\frac{2+\cos ({\epsilon}^{-1})}{2-\cos ({\epsilon}^{-1})}$ does not converge, and neither do the weights $\mu_{\epsilon}(x)$ and $\mu_{\epsilon}(y)$, but it is possible to show that both limits inferior are $1/4$ nonetheless, so both $x$ and $y$ are stable; the transition probabilities in Figure \[fig:stable3\] do not converge, and $\frac{1 + \cos({\epsilon}^{-1})}{2}$ and $1-\frac{1 + \cos({\epsilon}^{-1})}{2}$ are not even comparable, but it is easy to see that $\mu_{\epsilon}(x)=\mu_{\epsilon}(y) =\frac{1}{2}$; and in Figure \[fig:trans-del5\] $x$ is the only stable state since its weight oscillates between $\frac{1}{2}$ and $1$. Note that Assumption \[Assum1\] rules out the perturbations in Figure \[fig:no-stable\], which have no stable state. 
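For the classical monomial maps $c\cdot{\epsilon}^{\alpha}$ (a special case; Assumption \[Assum1\] allows much more), both $\precsim$ and the multiplicative closure are one-liners, the comparison depending only on the exponents. A minimal sketch (the class `Mono` is ours, not from the paper):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mono:
    """The map eps -> c * eps^alpha with c > 0, viewed near 0."""
    c: float
    alpha: float

    def __mul__(self, other):
        # products of monomials stay monomials: the closure is handled
        return Mono(self.c * other.c, self.alpha + other.alpha)

def precsim(f: Mono, g: Mono) -> bool:
    """f <~ g iff f(eps) <= b * g(eps) near 0, i.e. iff alpha_f >= alpha_g."""
    return f.alpha >= g.alpha

def cong(f: Mono, g: Mono) -> bool:
    return precsim(f, g) and precsim(g, f)
```

*E.g.*, `precsim(Mono(1, 3), Mono(1, 1))` holds (${\epsilon}^3 \precsim {\epsilon}$) while the converse fails, and `cong(Mono(2, 2) * Mono(1, 1), Mono(7, 3))` holds since constants are irrelevant up to $\cong$.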
Existence of stable states {#sect:pp-ess} ========================== This section presents three transformations that simplify perturbations while retaining the relevant information about the stable states. Two of them are defined *via* the dynamics of the original perturbation. The relevance of these two transformations relies on the close relation between the stationary distributions and the dynamics of Markov chains. Lemma \[lem:hsr\] below pinpoints this relation. \[lem:hsr\] A distribution $\mu$ of a finite Markov chain is stationary iff its support involves only essential states and for all states $x$ and $y$ we have $\mu(x){\mathbb{P}}^x(\tau^+_y < \tau^+_x) = \mu(y){\mathbb{P}}^y(\tau^+_x < \tau^+_y)$. Lemma \[lem:hsr\] can already help us find the stable states of small examples such as in Figures \[fig:no-stable\] and \[fig:stable\]. In Figure \[fig:no-stable1\] it says that $\mu_{\epsilon}(x) {\epsilon}^2 = \mu_{\epsilon}(y) {\epsilon}^{2 + \cos({\epsilon}^{-1})}$ so we find $\liminf \mu_{\epsilon}(x) = \liminf \mu_{\epsilon}(y) = 0$ without calculating the stationary distributions. In Figure \[fig:stable2\] it says that $\mu_{\epsilon}(x)(2 - \cos({\epsilon}^{-1})) = \mu_{\epsilon}(y)(2 + \cos({\epsilon}^{-1}))$, so $\mu_{\epsilon}(x) \leq 3 \mu_{\epsilon}(y)$ and $\frac{1}{4} \leq \mu_{\epsilon}(y)$, and likewise for $x$. Lemma \[lem:state-del\] below shows further connections between the stationary distributions and the dynamics of Markov chains. Its proof involves Lemma \[lem:hsr\], and its irreducible case is used in Section \[sect:td\]. \[lem:state-del\] Let $p$ be a Markov chain with state space $S$, and let $\tilde{p}$ be defined over $\tilde{S} \subseteq S$ by $\tilde{p}(x,y) := {\mathbb{P}}^x(X_{\tau^+_{\tilde{S}}} = y)$. 1. \[lem:state-del1\] Then ${\mathbb{P}}^x(\tau_y < \tau^+_x) = \tilde{{\mathbb{P}}}^x(\tau_y < \tau^+_x)$ for all $x,y \in \tilde{S}$. 2. 
\[lem:state-del2\] Let $\mu$ ($\tilde{\mu}$) be a stationary distribution for $p$ ($\tilde{p}$). If all states in $\tilde{S}$ are essential, there exists a stationary distribution $\tilde{\mu}$ for $\tilde{p}$ ($\mu$ for $p$) such that $\mu(x) = \tilde{\mu}(x) \cdot \sum_{y\in \tilde{S}}\mu(y)$ for all $x\in \tilde{S}$. The dynamics, *i.e.*, terms like ${\mathbb{P}}^x(\tau^+_y < \tau^+_x)$ or ${\mathbb{P}}^x(X_{\tau^+_{\tilde{S}}} = y)$, are usually hard to compute, and so will be the two transformations that are defined *via* the dynamics, but Lemma \[lem:congp-congmu\] below shows that approximating them is safe as far as the stable states are concerned. \[lem:congp-congmu\] Let $p$ and $\tilde{p}$ be perturbations with the same state space, such that $x\neq y \Rightarrow p (x,y)\cong \tilde{p}(x,y)$. For all stationary distribution maps $\mu$ for $p$, there exists $\tilde{\mu}$ for $\tilde{p}$ such that $\mu \cong \tilde{\mu}$. *E.g.*, both coefficients in Figure \[fig:stable2\] (\[fig:out-scale2\]) can safely be replaced with ${\epsilon}$ ($1$), and Figure \[fig:ess-coll2\] can be replaced with Figure \[fig:ess-coll3\]. Lemma \[lem:congp-congmu\] will dramatically simplify the computation of the stable states. Essential graph {#sect:eg} --------------- The *essential graph* of a perturbation captures the non-infinitesimal flow between different states at the normal time scale. It is a very coarse description of the perturbation. \[defn:essential-class\] Given a perturbation with state space $S$, the essential graph is a binary relation over $S$ and possesses the arc $(x,y)$ if $x \neq y$ and $p(z,t) \precsim p(x,y)$ for all $z,t\in S$. The essential classes are the sink (aka bottom) strongly connected components of the graph. The other SCCs are the transient classes. A state in an essential class is essential, the others are transient. The essential classes will be named $E_1,\dots, E_k$. 
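Under the simplifying assumption that every transition map is a monomial $c\cdot{\epsilon}^{\alpha}$ represented by its exponent $\alpha$ (so that an arc is in the essential graph iff its exponent is $0$), the classes of Definition \[defn:essential-class\] reduce to a strongly-connected-component computation. A Python sketch (names are ours):

```python
def essential_arcs(exp):
    """Arcs (x, y), x != y, whose map is ~ a positive constant (exponent 0)."""
    return [a for a, e in exp.items() if e == 0]

def sccs(nodes, arcs):
    """Strongly connected components, Kosaraju-style; recursion is fine here."""
    adj = {v: [] for v in nodes}
    radj = {v: [] for v in nodes}
    for x, y in arcs:
        adj[x].append(y)
        radj[y].append(x)
    order, seen = [], set()
    def dfs(v, g, out):
        seen.add(v)
        for w in g[v]:
            if w not in seen:
                dfs(w, g, out)
        out.append(v)
    for v in nodes:
        if v not in seen:
            dfs(v, adj, order)
    seen = set()
    comps = []
    for v in reversed(order):
        if v not in seen:
            c = []
            dfs(v, radj, c)
            comps.append(frozenset(c))
    return comps

def classes(nodes, exp):
    """Split the SCCs of the essential graph into essential (sink) classes
    and transient classes."""
    arcs = essential_arcs(exp)
    comps = sccs(nodes, arcs)
    where = {v: c for c in comps for v in c}
    ess = [c for c in comps if all(where[y] == c for x, y in arcs if x in c)]
    return ess, [c for c in comps if c not in ess]
```

*E.g.*, with exponents $p(x,y)\cong{\epsilon}$, $p(y,x)\cong{\epsilon}^2$, $p(z,y)\cong 1$, the essential classes are $\{x\}$ and $\{y\}$ and the transient class is $\{z\}$.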
Observation \[obs:inf-bound\] below implies that the essential graph is made of the arcs $(x,y)$ such that $x \neq y$ and $p(x,y) \cong 1$, as expected. \[obs:inf-bound\] Let $p$ be a perturbation. There exist positive $c$ and ${\epsilon}_0$ such that for all ${\epsilon}< {\epsilon}_0$, for all simple paths $\gamma$ in the essential graph, $c < p_{\epsilon}(\gamma)$. For example, the perturbations (with $I = ]0,1]$) that are described in Figures \[fig:no-stable2\], \[fig:no-stable3\], \[fig:stable1\], and \[fig:stable3\] all have Figure \[fig:ess-graph1\] as essential graph, and $\{x\}$ and $\{y\}$ as essential classes. Figure \[fig:ess-graph2\] (\[fig:ess-graph3\]) is the essential graph of Figure \[fig:ess-coll1\] (\[fig:trans-del1\]), and $\{x,y\}$ and $\{t\}$ are its essential classes. Note that the essential states of a perturbation and the essential states of a Markov chain are two distinct (yet related) concepts: *e.g.*, all states from Figure \[fig:ess-coll1\] are essential for the Markov chain for all ${\epsilon}\in ]0,1]$.

*(Figure \[fig:ess-graph1\]: states $x,y,z$ with the single arc $(z,y)$.)*

*(Figure \[fig:ess-graph2\]: states $x,y,z,t$ with arcs between $x$ and $y$ in both directions and the arc $(z,y)$.)*

*(Figure \[fig:ess-graph3\]: states $x,y,z$ and three nameless states $x',y',z'$ with the arcs $(x',x)$, $(y',y)$, $(z',z)$, $(z',y')$, and $(x',y')$.)*

The essential graph alone cannot tell which states are stable: *e.g.*, swapping ${\epsilon}$ and ${\epsilon}^2$ in Figure \[fig:stable1\] yields the same essential graph but Lemma \[lem:hsr\] shows that the only stable state is then $x$ instead of $y$. 
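Computations with Lemma \[lem:hsr\] like the ones above are easy to sanity-check numerically at a fixed ${\epsilon}$: compute $\mu$ by power iteration and the escape probabilities ${\mathbb{P}}^x(\tau^+_y < \tau^+_x)$ by first-step analysis. A plain-Python sketch on a hypothetical irreducible $3$-state chain:

```python
def stationary(P, iters=5000):
    """Power iteration for the stationary distribution of a stochastic matrix."""
    n = len(P)
    mu = [1.0 / n] * n
    for _ in range(iters):
        mu = [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]
    return mu

def escape(P, x, y, iters=5000):
    """P^x(tau^+_y < tau^+_x): hit y strictly before returning to x."""
    n = len(P)
    h = [0.0] * n  # h[z] = P^z(hit y before x), with x and y absorbing
    for _ in range(iters):
        h = [1.0 if z == y else 0.0 if z == x else
             sum(P[z][w] * h[w] for w in range(n)) for z in range(n)]
    return sum(P[x][w] * h[w] for w in range(n))  # one step from x, then h

# a hypothetical irreducible 3-state chain
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]
mu = stationary(P)
print(abs(mu[0] * escape(P, 0, 1) - mu[1] * escape(P, 1, 0)))
# prints a tiny number: the two sides of the balance equation agree
```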
The graph allows us to make the following case disjunction nonetheless, along which we will either conclude that all states are stable, or perform one of the transformations from the next subsections. 1. Either the graph is empty (*i.e.* totally disconnected) and the perturbation is zero, or 2. it is empty and the perturbation is non-zero, or 3. it is non-empty and has a non-singleton essential class, or 4. it is non-empty and has only singleton essential classes. Observation \[obs:inf-bound\] motivates the following convenient assumption. \[Assum2\] There exists $c > 0$ such that $p(\gamma) > c$ for every simple path $\gamma$ in the essential graph. The two assumptions above do not have the same status: Assumption \[Assum1\] is a key condition that will appear explicitly in our final result, whereas Assumption \[Assum2\] is just made without loss of generality, *i.e.*, up to focusing on a smaller neighborhood of $0$ inside $I$. Lemma \[lem:ess-weight\] shows the usefulness of Assumption \[Assum2\]. Its proof uses Lemma \[lem:hsr\], and it is used later to strengthen Lemma \[lem:state-del\].\[lem:state-del2\] into $\mu \cong \tilde{\mu}$. \[lem:ess-weight\] Let a perturbation $p$ with state space $S$ and transient states $T$ satisfy Assumption \[Assum2\]. Then $\frac{c}{c + |S|} \leq \sum_{x\in S\setminus T}\mu(x)$. Essential collapse {#sect:ec} ------------------ The essential collapse, defined below, amounts to merging one essential class of a perturbation into one meta-state and letting this state represent faithfully the whole class in terms of dynamics between the whole class and each of the outside states. \[defn:ec\] Let $p$ be a perturbation on state space $S$. Let $x$ be a state in an essential class $E$, and let $\tilde{S} := (S\setminus E) \cup \{\cup E\}$. The essential collapse $\kappa(p,x): I\times\tilde{S}\times\tilde{S}\to[0,1]$ of $p$ around $x$ is defined below. 
$$\begin{aligned} \kappa(p,x)(\cup E,\cup E) & := {\mathbb{P}}^x(X_{\tau^+_{S\setminus E \cup \{x\}}} = x)\\ \kappa(p,x)(\cup E,y) & := {\mathbb{P}}^x(X_{\tau^+_{S\setminus E \cup \{x\}}} = y) & \mbox{\quad for all } y\in S\setminus E\\ \kappa(p,x)(y,\cup E) & := \sum_{z\in E}p(y,z) & \mbox{\quad for all } y\in S\setminus E\\ \kappa(p,x)(y,z) & := p(y,z) & \mbox{\quad for all } y,z\in S\setminus E\end{aligned}$$ \[obs:ess-coll\] $\kappa(p,x)$ is again a perturbation, $\kappa$ preserves irreducibility, and if $\{x\}$ is an essential class, $\kappa(p,x) = p$.

*(Figure \[fig:ess-coll1\]: states $x,y,z,t$ with $p(x,y) = p(y,x) = \frac{1}{2}$, $p(x,z) = \frac{{\epsilon}^5}{4}$, $p(x,t) = \frac{{\epsilon}^3}{4}$, $p(y,z) = \frac{{\epsilon}}{2}$, $p(z,y) = \frac{1}{2}$, and $p(t,y) = {\epsilon}^7$.)*

*(Figure \[fig:ess-coll2\]: states $x \cup y$, $z$, $t$ with $p(z,x \cup y) = \frac{2+{\epsilon}^5}{4}$, $p(x \cup y,z) = \frac{{\epsilon}}{2(1+{\epsilon})} + \frac{{\epsilon}^5}{4}$, $p(x \cup y,t) = \frac{{\epsilon}^3}{4}$, and $p(t,x \cup y) = {\epsilon}^7$.)*

*(Figure \[fig:ess-coll3\]: states $x \cup y$, $z$, $t$ with $p(z,x \cup y) = \frac{1}{2}$, $p(x \cup y,z) = \frac{{\epsilon}}{2}$, $p(x \cup y,t) = \frac{{\epsilon}^3}{4}$, and $p(t,x \cup y) = {\epsilon}^7$.)*

For example, collapsing around $x$ or $y$ in Figure \[fig:stable2\] has no effect. The perturbation in Figure \[fig:ess-coll1\] has two essential classes, *i.e.*, its essential graph has two sink SCCs, namely $\{x,y\}$ and $\{t\}$. 
Figure \[fig:ess-coll2\] displays its essential collapse around $x$. It was calculated by noticing that ${\mathbb{P}}^x(X_{\tau^+_{\{x,z,t\}}} = t) = \frac{{\epsilon}^3}{4}$, and ${\mathbb{P}}^x(X_{\tau^+_{\{x,z,t\}}} = x) = \frac{1}{2} - \frac{{\epsilon}^3}{4} - \frac{{\epsilon}^5}{4}+ \frac{1}{2} \cdot {\mathbb{P}}^y(X_{\tau^+_{\{x,z,t\}}} = x)$, and ${\mathbb{P}}^y(X_{\tau^+_{\{x,z,t\}}} = x) = \frac{1}{2} + \frac{1-{\epsilon}}{2} \cdot {\mathbb{P}}^y(X_{\tau^+_{\{x,z,t\}}} = x)$. Proposition \[prop:essential-trans\] will show that it suffices to compute the stable states of Figure \[fig:ess-coll2\] to compute those of Figure \[fig:ess-coll1\], and by Lemma \[lem:congp-congmu\] it suffices to compute those of the simpler Figure \[fig:ess-coll3\]. However, computing the exact values ${\mathbb{P}}^x(X_{\tau^+_{S\setminus E \cup \{x\}}} = y)$ can be difficult even on simple examples like the one above. Fortunately, Lemma \[lem:p-cong-max\] shows that they are $\cong$-equivalent to maxima that are easy to compute. *E.g.*, using Lemma \[lem:p-cong-max\] to approximate the essential collapse of Figure \[fig:ess-coll1\] around $x$ yields Figure \[fig:ess-coll3\], but without having to compute the intermediate Figure \[fig:ess-coll2\]. \[lem:p-cong-max\] Let a perturbation $p$ with state space $S$ satisfy Assumption \[Assum1\], and let $\tilde{p}$ be the essential collapse $\kappa(p,x)$ of $p$ around $x$ in some essential class $E$. For all $y\in S\setminus E$, we have $\tilde{p}(\cup E,y) \cong \max_{z\in E} p(z,y)$ and $\tilde{p}(y,\cup E) \cong \max_{z\in E} p(y,z)$. Note that by Lemma \[lem:p-cong-max\], only the essential class is relevant during the essential collapse up to $\cong$; the exact state is irrelevant. Lemma \[lem:p-cong-max\] is also a tool that is used to prove, *e.g.*, Proposition \[prop:essential-graph\] below, which shows that the essential graph may contain useful information about the stable states. 
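In the exponent representation of monomial maps (lower exponent = $\precsim$-larger map; a simplifying assumption for the sketch), the maxima of Lemma \[lem:p-cong-max\] become minima over exponents, so the collapse, up to $\cong$, is a single scan of the boundary arcs of $E$:

```python
def collapse(exp, E):
    """Essential collapse of class E into one meta-state, up to ~=.
    exp[(x, y)] = exponent of the monomial map p(x, y), x != y."""
    e = frozenset(E)
    new = {}
    def put(key, a):
        new[key] = min(a, new.get(key, a))
    for (x, y), a in exp.items():
        if x in e and y in e:
            continue          # internal arc: absorbed by the class
        elif x in e:
            put((e, y), a)    # ~ max over z in E of p(z, y)
        elif y in e:
            put((x, e), a)    # ~ max over z in E of p(x, z)
        else:
            put((x, y), a)
    return new
```

*E.g.*, on the exponents of Figure \[fig:ess-coll1\] (states $x,y,z,t$ numbered $0$–$3$), collapsing $E = \{x,y\}$ directly yields the exponents of Figure \[fig:ess-coll3\], without computing the intermediate exact values.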
\[prop:essential-graph\] Let a perturbation $p$ with state space $S$ satisfy Assumption \[Assum1\], let $\mu$ be a corresponding stationary distribution map. 1. \[prop:essential-graph1\] If $y$ is a transient state, $\liminf_{{\epsilon}\to 0} \mu_{{\epsilon}}(y) = 0$. 2. \[prop:essential-graph2\] If two states $x$ and $y$ belong to the same essential or transient class, $\mu(x) \cong \mu(y)$. Proposition \[prop:essential-graph\].\[prop:essential-graph1\] says that the transient states are vanishing, *e.g.* the nameless states in Figure \[fig:ess-graph3\]. Proposition \[prop:essential-graph\].\[prop:essential-graph2\] says that two states in the same class are either both stable or both vanishing, *e.g.* $\{x\}$ and $\{y\}$ in Figure \[fig:ess-graph2\]. The usefulness of the essential collapse comes from its preserving and reflecting stability, as stated in Proposition \[prop:essential-trans\]. Its proof invokes Lemma \[lem:preserve-stable\] below, which shows that the essential collapse preserves the dynamics up to $\cong$, and Lemma \[lem:hsr\], which relates the dynamics and the stationary distributions. \[lem:preserve-stable\] Given a perturbation $p$ with state space $S$, let $\tilde{p}$ be the essential collapse of $p$ around $x$ in some essential class $E$, and let $\tilde{x} := \cup E$. The following holds for all $y\in S\setminus E$. $${\mathbb{P}}^y(\tau_x < \tau_y) \cong \tilde{{\mathbb{P}}}^y(\tau_{\tilde{x}} < \tau_y) \quad \wedge \quad {\mathbb{P}}^x(\tau_y < \tau_x) \cong \tilde{{\mathbb{P}}}^{\tilde{x}}(\tau_y < \tau_{\tilde{x}})$$ \[prop:essential-trans\] Let a perturbation $p$ with state space $S$ satisfy Assumption \[Assum1\], and let $x$ be in an essential class $E$. 1. \[prop:essential-trans3\] Let $\tilde{p}$ be the chain after the essential collapse of $p$ around $x$. Let $\mu$ ($\tilde{\mu}$) be a stationary distribution map of $p$ ($\tilde{p}$). 
There exists a stationary distribution map $\tilde{\mu}$ for $\tilde{p}$ ($\mu$ for $p$) such that $\tilde{\mu}(\cup E) \cong \mu(x)$ and $\tilde{\mu}(y) \cong \mu(y)$ for all $y\in S\setminus E$. 2. \[prop:essential-trans4\] A state $y\in S$ is stable for $p$ iff either $y\in E$ and $\cup E$ is stable for $\kappa(p,x)$, or $y\notin E$ and $y$ is stable for $\kappa(p,x)$. By definition, collapsing an essential class preserves the structure of the perturbation outside of the class, so Proposition \[prop:essential-trans\] implies that the essential collapse commutes up to $\cong$. In particular, the order in which the essential collapses are performed is irrelevant as far as the stable states are concerned. Transient deletion {#sect:td} ------------------ If all the essential classes of a perturbation are singletons, Observation \[obs:ess-coll\] says that the essential collapse is useless. If in addition the essential graph has arcs, there are transient states, and Definition \[defn:td\] below deletes them to shrink the perturbation further. \[defn:td\] Let a perturbation $p$ with state space $S$, transient states $T$, and singleton essential classes, satisfy Assumption \[Assum1\]. The function $\delta(p)$ over $S\setminus T$ is derived from $p$ by transient deletion: for all distinct $x,y\in S\setminus T$ let $$\delta(p)(x,y) := {\mathbb{P}}^x(X_{\tau^+_{S\setminus T}} = y)$$ \[obs:trans-del\] $\delta(p)$ is again a perturbation, $\delta$ preserves irreducibility, and if all states are essential, $\delta(p) = p$. For example, in Figure \[fig:stable1\] the essential classes are $\{x\}$ and $\{y\}$, $z$ is transient, and the transient deletion yields Figure \[fig:trans-del4\]. Also, in Figure \[fig:trans-del1\], the essential classes are $\{x\}$, $\{y\}$, and $\{z\}$, the transient states are nameless, and the transient deletion yields Figure \[fig:trans-del2\]. 
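At a fixed ${\epsilon}$, Definition \[defn:td\] can be computed by first-step analysis: make the states of $S\setminus T$ absorbing for the transient part and propagate the absorption probabilities. A numerical Python sketch (the matrix below is hypothetical):

```python
def transient_deletion(P, T, iters=1000):
    """delta(p)(x, y) at a fixed eps: the probability that, started at x,
    the chain sits at y when it first returns to the non-transient states.
    P is a row-stochastic matrix, T the set of transient states."""
    n = len(P)
    keep = [v for v in range(n) if v not in T]
    # h[z][y]: from transient z, probability of being absorbed at y
    h = {z: {y: 0.0 for y in keep} for z in T}
    for _ in range(iters):
        h = {z: {y: P[z][y] + sum(P[z][w] * h[w][y] for w in T)
                 for y in keep} for z in T}
    return {(x, y): P[x][y] + sum(P[x][w] * h[w][y] for w in T)
            for x in keep for y in keep if x != y}

# hypothetical chain: states 0 and 1 essential, state 2 transient
P = [[0.5, 0.2, 0.3],
     [0.1, 0.9, 0.0],
     [0.4, 0.6, 0.0]]
d = transient_deletion(P, {2})
print(d)  # the new arc 0 -> 1 combines the direct arc and the detour via 2
```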
*(Figure \[fig:trans-del1\]: states $x,y,z$ and three nameless states $x',y',z'$ with $p(x',x) = p(y',y) = p(z',z) = p(z',y') = p(x',y') = \frac{1}{2}$, $p(x,y) = {\epsilon}^2$, $p(z,x) = {\epsilon}$, $p(x,x') = {\epsilon}$, $p(z,z') = {\epsilon}^4$, and $p(y,z') = {\epsilon}^2$.)*

*(Figure \[fig:trans-del2\]: states $x,y,z$ with $p(x,y) = {\epsilon}^2 + \frac{{\epsilon}}{3}$, $p(z,x) = {\epsilon}+ \frac{{\epsilon}^4}{6}$, $p(y,x) = \frac{{\epsilon}^2}{6}$, $p(z,y) = \frac{{\epsilon}^4}{3}$, and $p(y,z) = \frac{{\epsilon}^2}{2}$.)*

*(Figure \[fig:trans-del3\]: states $x,y,z$ with $p(x,y) = \max({\epsilon}^2,\frac{{\epsilon}}{4})$, $p(z,x) = {\epsilon}$, $p(y,x) = \frac{{\epsilon}^2}{8}$, $p(z,y) = \frac{{\epsilon}^4}{4}$, and $p(y,z) = \frac{{\epsilon}^2}{2}$.)*

*(Figure \[fig:trans-del4\]: states $x,y$ with $p(x,y) = {\epsilon}$ and $p(y,x) = {\epsilon}^2$.)*

*(Figure \[fig:trans-del5\]: states $x,y$ with $p(x,y) = (2^{\epsilon}-1)\frac{1+\cos({\epsilon}^{-1})}{2}$ and $p(y,x) = 2^{\epsilon}-1$.)*

The transient deletion is useful thanks to Proposition \[prop:trans-del\] below, whose proof relies 
on Lemmas \[lem:state-del\].\[lem:state-del2\] and \[lem:ess-weight\]. \[prop:trans-del\] If a perturbation $p$ satisfies Assumption \[Assum1\] and has singleton essential classes, $p$ and $\delta(p)$ have the same stable states. Like the essential collapse, the transient deletion is defined *via* the dynamics and is hard to compute. Like Lemma \[lem:p-cong-max\] did for the essential collapse, Lemma \[lem:singleton-essential\] approximates the transient deletion by an expression that is easy to compute. \[lem:singleton-essential\] If a perturbation $p$ with state space $S$ and transient states $T$ satisfies Assumption \[Assum1\] and has singleton essential classes, $${\mathbb{P}}^x(X_{\tau^+_{S\setminus T}} = y) \cong \max\{p(\gamma) : \gamma\in \Gamma_T(x,y)\} \mbox{\quad for all } x,y\in S\setminus T.$$ *E.g.*, Figure \[fig:trans-del1\] yields Figure \[fig:trans-del3\] without computing Figure \[fig:trans-del2\]. Note that $\max({\epsilon}^2,\frac{{\epsilon}}{4})$ in Figure \[fig:trans-del3\] may be simplified into ${\epsilon}$ by Lemma \[lem:congp-congmu\]. Outgoing scaling and existence of stable states {#sect:os} ----------------------------------------------- If the essential graph has no arc, the essential collapse and the transient deletion are useless for computing the stable states. This section shows how to transform a non-zero perturbation with empty (*i.e.* totally disconnected) essential graph into a perturbation with the same stable states but a non-empty essential graph, so that collapse or deletion may be applied. Roughly speaking, this is done by speeding up time until a first non-infinitesimal flow is observable between different states, *i.e.* until the new essential graph has arcs. To this end, the *ordered division* is defined in Definition \[defn:div-fct\]. It allows us to divide a function by a function with zeros by returning a default value in the zero case. 
It is named ordered because we will “divide” $f$ by $g$ only if $f \precsim g$, so that only $0$ may be “divided” by $0$. Then Observation \[obs:prec-div\] further justifies the terminology. \[defn:div-fct\] For $f,g: I \to [0,1]$ and $n > 1$ let us define $(f \div_n g) : I \to [0,1]$ by $(f\div_n g)(x) := \frac{f(x)}{g(x)}$ if $0 < g(x)$ and otherwise $(f\div_n g)(x) := \frac{1}{n}$. \[obs:prec-div\] $(f \div_n g)\cdot g = f$ for all $n$ and $f,g: I \to [0,1]$ such that $f \precsim g$. \[defn:os\] Let a perturbation $p$ with state space $S$ satisfy Assumption \[Assum1\], let $m := |S|\cdot \max\{p(z,t)\,\mid\, z,t\in S \wedge z \neq t\}$, and let us define the following. - $\sigma(p)(x,y) := p(x,y) \div_{|S|} m$ for all $x \neq y$ - $\sigma(p)(x,x) := (p(x,x) + m - 1)\div_{|S|} m$. For example, Figure \[fig:stable2\] satisfies Assumption \[Assum1\] and its essential graph is empty, *i.e.* totally disconnected. Applying outgoing scaling to it yields Figure \[fig:out-scale2\], which satisfies Assumption \[Assum1\] and whose essential graph has two arcs. Note that collapsing around $x$ or $y$ in Figure \[fig:stable2\] has no effect, but in Figure \[fig:out-scale2\] it yields a one-state perturbation. Also, Figure \[fig:out-scale1\] does not satisfy Assumption \[Assum1\] and its essential graph is empty. Applying outgoing scaling to it yields Figure \[fig:out-scale3\], which does not satisfy Assumption \[Assum1\] and whose essential graph has one arc. Applying it again to Figure \[fig:out-scale3\] would only divide the non-self-loop coefficients by $3$. More generally, Proposition \[prop:ogs\] below states how well the outgoing scaling behaves. 
[0.45]{} \(x) [$x$]{}; (y) \[right of = x\] [$y$]{}; (z) \[right of = y\] [$z$]{}; \(x) edge \[bend left\] node \[above\] [${\epsilon}^2 \cdot \frac{1+\cos ({\epsilon}^{-1})}{2}$]{} (y) (y) edge \[bend left\] node \[below\] [${\epsilon}^3 \cdot \frac{1+\cos ({\epsilon}^{-1})}{4}$]{} (x) edge \[bend left\] node \[above\] [${\epsilon}^4 \cdot \frac{(1+\cos ({\epsilon}^{-1}))^2}{4}$]{} (z) (z) edge \[bend left\]node \[below\] [${\epsilon}^4 \cdot \frac{1+\cos ({\epsilon}^{-1})}{2}$]{} (y); [0.2]{} \(x) [$x$]{}; (y) \[right of = x\] [$y$]{}; \(x) edge \[bend right\] node \[below\] [$\frac{2-\cos({\epsilon}^{-1})}{4 + 2|\cos({\epsilon}^{-1}|}$]{} (y) (y) edge \[bend right\] node \[above\] [$\frac{2+\cos({\epsilon}^{-1})}{4 + 2|\cos({\epsilon}^{-1}|}$]{} (x) ; [0.2]{} \(x) [$x$]{}; (y) \[right of = x\] [$y$]{}; (z) \[right of = y\] [$z$]{}; \(x) edge \[bend left\] node \[above\] [$\frac{1}{3}$]{} (y) (y) edge \[bend left\] node \[below\] [$\frac{{\epsilon}}{6}$]{} (x) edge \[bend left\] node \[above\] [${\epsilon}^2 \cdot \frac{1+\cos ({\epsilon}^{-1})}{6}$]{} (z) (z) edge\[bend left\] node \[below\] [$\frac{{\epsilon}^2}{3}$]{} (y); \[prop:ogs\] 1. \[prop:ogs1\] If a perturbation $p$ satisfies Assumption \[Assum1\], so does $\sigma(p)$, and the essential graph of $\sigma(p)$ is non-empty . 2. \[prop:ogs2\] A state is stable for $p$ iff it is stable for $\sigma(p)$. The outgoing scaling divides the weights of the proper arcs by $m$, as if time were sped up by $m^{-1}$. The self-loops thus lose their meaning, but Proposition \[prop:ogs\] proves it harmless. Note that the self-loops are also ignored in Assumption \[Assum1\], Lemma \[lem:congp-congmu\], and Definition \[defn:essential-class\]. Let us now describe a recursive procedure computing the stable states: if the perturbation is zero, all its states are stable; else, if the essential graph is empty, apply the outgoing scaling; else, apply one essential collapse or the transient deletion. 
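In the special case where every non-zero weight is a monomial ${\epsilon}^{\alpha}$, the outgoing scaling is easy to implement up to $\cong$: encoding ${\epsilon}^{\alpha}$ by its exponent $\alpha$ (and the zero map by `None`), dividing by the maximum weight amounts to subtracting the smallest exponent, the factor $|S|$ and the self-loops being irrelevant up to $\cong$. A minimal Python sketch of this simplified setting (an illustration, not the formal Definition \[defn:os\]):

```python
def outgoing_scaling(P):
    """P maps pairs (x, y) with x != y to the exponent alpha of a
    weight eps^alpha, or to None for the zero map."""
    finite = [a for a in P.values() if a is not None]
    if not finite:            # zero perturbation: nothing to scale
        return dict(P)
    m = min(finite)           # smallest exponent = largest weight
    return {arc: (None if a is None else a - m) for arc, a in P.items()}

# Figure fig:stable2: p(x, y) = eps, p(y, x) = eps^2, empty essential graph
Q = outgoing_scaling({('x', 'y'): 1, ('y', 'x'): 2})
assert Q == {('x', 'y'): 0, ('y', 'x'): 1}   # x -> y becomes essential
```

As in Figures \[fig:stable2\] and \[fig:out-scale2\], the scaled graph has a non-empty essential graph, so collapse or deletion can then proceed.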
This procedure is correct by Propositions \[prop:ogs\].\[prop:ogs2\], \[prop:essential-trans\].\[prop:essential-trans4\], and \[prop:trans-del\], hence Theorem \[thm:stable-states\] below, which is the existential part of Theorem \[thm:teaser\]. \[thm:stable-states\] Let $p$ be a perturbation such that $f \precsim g$ or $g \precsim f$ for all $f$ and $g$ in the multiplicative closure of the $p(x,y)$ with $x \neq y$. Then $p$ has stable states. Abstract and quick algorithm {#sect:aqa} ============================ The procedure described before Theorem \[thm:stable-states\] computes the stable states, but a very rough analysis of its algorithmic complexity shows that it runs in $O(n^7)$, where $n$ is the number of states. (A better analysis might find $O(n^5)$.) This poor complexity stems from the difficulty of analyzing the procedure precisely and from redundant operations performed by the transformations, especially the successive essential collapses. Instead, we will perform the successive collapses followed by one transient deletion as a single transformation. Alternately applying the outgoing scaling and this new transformation, both up to $\cong$, is the basis of our algorithm. Section \[sect:att\] abstracts the relevant notions up to $\cong$ and gives useful algebraic properties that they satisfy. Based on these abstractions, Section \[sect:algo\] presents the algorithm (computing the stable states and more), its correctness, and its complexity in $O(n^3)$. Abstractions {#sect:att} ------------ Ensuring that the essential collapse and the transient deletion can be safely performed up to $\cong$ is a straightforward sanity check, by Lemma \[lem:congp-congmu\]. However, the proof for the outgoing scaling involves a new algebraic structure to accommodate the ordered division, and handling the combination of the successive collapses and one deletion requires particular attention.
It would have been cumbersome to define this combination directly *via* the dynamics in Section \[sect:pp-ess\], and more difficult to prove its correctness *via* probabilistic techniques, hence the usefulness of the rather atomic collapse and deletion. Our first step below is to consider functions up to $\cong$. \[defn:cq\] For $f: I \to [0,1]$ let ${[f]}$ be its $\cong$ equivalence class; for a matrix $A = (a_{ij})_{1\leq i,j\leq n}$ with elements in $ I \to [0,1]$, let ${[A]}$ be the matrix where ${[A]}_{ij} :={[a_{ij}]}$ for all $1\leq i,j\leq n$. For a set $F$ of functions from $I$ to $[0,1]$, let ${[F]}$ be the quotient set $F/\cong$. Finally, it is possible to lift over ${[F]}$ both $\cdot$ to ${[\cdot]}$ and $\precsim$ to ${[\precsim]}$. \[obs:ccc-loc\] For $(G,\cdot)$ a semigroup totally preordered by $\precsim$, 1. \[obs:ccc-loc1\] ${[\precsim]}$ orders ${[G]}$ linearly, so $\max_{{[\precsim]}}$ is well-defined. 2. \[obs:ccc-loc2\] $({[G]}\cup\{{[{\epsilon}\mapsto 0]}\},{[{\epsilon}\mapsto 0]},{[{\epsilon}\mapsto 1]},\max_{{[\precsim]}},{[\cdot]})$ is a commutative semiring. (See, *e.g.*, [@GKMS15] for the related definitions.) The good behavior of $\cdot$ and $\precsim$ up to $\cong$ is expressed above within an existing algebraic framework, but for $\div_n$ we introduce a new algebraic structure below. \[defn:div-semiring\] An ordered-division semiring is a tuple $(F,0,1,\cdot, \leq, \div)$ such that $(F,\leq)$ is a linear order with maximum $1$, and $(F,0,1,\max_{\leq},\cdot)$ is a commutative semiring, and for all $f \leq g$ we have $f \div g$ is in $F$ and $(f \div g) \cdot g = f$. \[obs:div-semiring\] Let $(F,0,1,\cdot, \leq, \div)$ be an ordered-division semiring. Then $0 = \min_{\leq} F$ and $f \div 1 = f$ for all $f$. Lemma \[lem:odsm-f\] below shows that the functions $I\to [0,1]$ up to $\cong$ form an ordered-division semiring. \[lem:odsm-f\] 1. Let $n > 1$ and $f,f',g,g': I \to [0,1]$ be such that $f \cong f' \precsim g \cong g'$.
Then ${[f \div_2 g]} = {[f' \div_{n} g']}$, which we then write ${[f]} {[\div]} {[g]}$. 2. For all sets $G$ of functions from $I$ to $[0,1]$ closed under multiplication, the tuple $({[G\cup\{{\epsilon}\mapsto 0\}]}, {[{\epsilon}\mapsto 0]},{[{\epsilon}\mapsto 1]},{[\cdot]}, {[\div]}, {[\precsim]})$ is an ordered-division semiring. For example, the set containing ${[{\epsilon}\mapsto 0]}$ and all the ${[{\epsilon}\mapsto {\epsilon}^{\alpha}]}$ for non-negative $\alpha$ is an ordered-division semiring, where ${[{\epsilon}\mapsto {\epsilon}^{\alpha}]}{[\cdot]}{[{\epsilon}\mapsto {\epsilon}^{\beta}]} = {[{\epsilon}\mapsto {\epsilon}^{\alpha+\beta}]}$ and ${[{\epsilon}\mapsto {\epsilon}^{\alpha}]} {[\precsim]}{[{\epsilon}\mapsto {\epsilon}^{\beta}]}$ iff $\beta \leq \alpha$, and ${[{\epsilon}\mapsto 0]} {[\precsim]}{[{\epsilon}\mapsto {\epsilon}^{\alpha}]}$, and ${[{\epsilon}\mapsto {\epsilon}^{\alpha}]}{[\div]}{[{\epsilon}\mapsto {\epsilon}^{\beta}]} = {[{\epsilon}\mapsto {\epsilon}^{\alpha-\beta}]}$ for $\beta \leq \alpha$. To handle $\sigma$, $\kappa$, and $\delta$ up to $\cong$ we define below transformations of weighted graphs with weights in an ordered-division semiring. \[defn:ao\] Let $P: S \times S \to F$, where $(F,0,1,\cdot, \leq, \div)$ is an ordered-division semiring. 1. \[defn:ao1\] Let $\{(z,t)\in S^2\,|\, P(z,t) = 1 \wedge z \neq t\}$ be the essential graph of $P$, and let its sink SCCs $E_1,\dots,E_k$ be its essential classes. 2. Outgoing scaling: for $x \neq y$ let ${[\sigma]}(P)(x,y) := P(x,y) \div M$, where $M := \max_{\leq}\{P(z,t) : (z,t)\in S\times S \wedge z\neq t\}$, and ${[\sigma]}(P)(x,x) := 1$. 3.
Essential collapse: let ${[\kappa]}(P,E_i)$ be the matrix with state space $\{\cup E_i\}\cup (S\setminus E_i)$ such that for all $x,y\in S\setminus E_i$ we set ${[\kappa]}(P, E_i)(x,y) := P(x,y)$ and ${[\kappa]}(P, E_i)(\cup E_i,y) := \max_{\leq}\{P(x_i,y) : x_i\in E_i\}$ and ${[\kappa]}(P, E_i)(x,\cup E_i) := \max_{\leq}\{P(x,x_i) : x_i\in E_i\}$ and ${[\kappa]}(P, E_i)(\cup E_i,\cup E_i) := 1$. 4. Shrinking: let ${[\chi]}(P)$ be the matrix with state space $\{\cup E_1,\dots,\cup E_k\}$ such that for all $i,j$ ${[\chi]}(P)(\cup E_i,\cup E_j) := \max_{\leq}\{P(\gamma) : \gamma\in \Gamma_T(E_i,E_j)\}$. In Definition \[defn:ao\], the weights $P(x,x)$ occur only in the definitions of the self-loops of the transformed graphs, whence Observation \[obs:ao-refl-irrel\] below. \[obs:ao-refl-irrel\] Let $(F,0,1,\cdot, \leq, \div)$ be an ordered-division semiring, let $P,P': S \times S \to F$ be such that $P(x,y) = P'(x,y)$ for all $x \neq y$. Then $P$ and $P'$ have the same essential graph and classes $E_1,\dots,E_k$; ${[\sigma]}(P)(x,y) = {[\sigma]}(P')(x,y)$ for all $x \neq y$; and for all $l$ and $i \neq j$ we have ${[\chi]}(P)(\cup E_i,\cup E_j) = {[\chi]}(P')(\cup E_i,\cup E_j)$ and ${[\kappa]}(P,E_l)(\cup E_i,\cup E_j) = {[\kappa]}(P',E_l)(\cup E_i,\cup E_j)$. Lemma \[lem:pop\] below shows that the transformations from Definition \[defn:ao\] are faithful abstractions of $\sigma$, $\kappa$, and $\delta$. Some proofs come with examples, which also highlight the benefits of abstraction. \[lem:pop\] Let a perturbation $p$ with state space $S$ satisfy Assumption \[Assum1\], let $E_1,\dots, E_k$ be its essential classes, and for all $i$ let $x_i \in E_i$. 1. \[lem:pop0\] $p$ and ${[p]}$ have the same essential graph. 2. \[lem:pop1\] ${[\sigma]}({[p]})(x,y) = {[\sigma(p)]}(x,y)$ for all $x \neq y$. 3. \[lem:pop2\] ${[\chi]}({[p]})(\{x\},\{y\}) = {[\delta(p)]}(x,y)$ whenever $\delta(p)$ is well-defined. 4. \[lem:pop3\] ${[\kappa]}({[p]},E_i) = {[\kappa(p,x_i)]}$. 5.
\[lem:pop4\] ${[\chi]}({[p]}) = {[\chi]}\circ{[\kappa]}({[p]},E_1)$. 6. \[lem:pop5\] ${[\delta\circ\kappa(\dots\kappa(\kappa(\sigma(p),x_1),x_2)\dots,x_{k})]}(\cup E_i,\cup E_j) = {[\chi]}\circ{[\sigma]}({[p]})(\cup E_i,\cup E_j)$ for all $i \neq j$. By the algorithm underlying Theorem \[thm:stable-states\] and Lemma \[lem:pop\].\[lem:pop5\] we are now able to state the following. \[prop:abs-algo\] Let a perturbation $p$ satisfy Assumption \[Assum1\]. There exists $n\in\mathbb{N}$ such that $({[\chi]}\circ{[\sigma]})^n({[p]})(x,y) = 0$ for all $x \neq y$ in its state space. Furthermore, the states of $({[\chi]}\circ{[\sigma]})^n({[p]})$ correspond to the stable states of $p$. The algorithm {#sect:algo} ------------- Algorithm \[algo:br\] mainly consists in recursively applying the function ${[\chi]}\circ{[\sigma]}$ occurring in Proposition \[prop:abs-algo\] until an empty (*i.e.* totally disconnected) graph is produced. It does not explicitly refer to perturbations since this notion was abstracted on purpose. Instead, the algorithm manipulates digraphs with arcs labeled in an ordered-division semiring, in which inequality, multiplication and ordered division are implicitly assumed to be computable. One call to the recursive function corresponds to ${[\chi]}\circ{[\sigma]}$, *i.e.* Lines \[line:max-label\] and \[line:norm-label\] correspond to ${[\sigma]}$, and Lines \[line:max-label-edges\] till \[line:global-max\] correspond to ${[\chi]}$. Before calling the recursive function, Lines \[line:type-vertex\] and \[line:type-edge\] produce an isomorphic copy of the input, which is easier to handle when taking unions and keeping track of the remaining vertices. Note that Line \[line:norm-label\] does not update the $P(x,x)$. Indeed, this would be useless, since in Definition \[defn:ao\] the self-loops of the original graph occur only in the definition of the self-loops of the transformed graphs, and since self-loops are irrelevant by Observation \[obs:ao-refl-irrel\].
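Concretely, the iteration of ${[\chi]}\circ{[\sigma]}$ can be sketched in Python. This is a simplified illustration, not the paper's Algorithm \[algo:br\]: it assumes monomial weights ${\epsilon}^{\alpha}$ encoded by their exponents $\alpha$ (absent pairs standing for the zero map), so that $\max_{\leq}$ becomes $\min$ on exponents and ${[\cdot]}$ becomes $+$; it computes SCCs by naive reachability instead of Tarjan's algorithm and makes no attempt at the $O(n^3)$ bound:

```python
def reach(adj, s):
    # vertices reachable from s (including s) by depth-first search
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def hub(states, P):
    """Stable states of the abstracted perturbation P, where P maps
    ordered pairs (x, y), x != y, to the exponent alpha of eps^alpha."""
    states = [frozenset([s]) for s in states]
    P = {(frozenset([x]), frozenset([y])): a for (x, y), a in P.items()}
    while P:
        # [sigma]: divide by the maximal weight, i.e. subtract the
        # smallest exponent; at least one arc becomes essential
        m = min(P.values())
        P = {e: a - m for e, a in P.items()}
        # essential graph: arcs of weight 1, i.e. of exponent 0
        ess = {x: [y for y in states if y != x and P.get((x, y)) == 0]
               for x in states}
        reach_of = {x: reach(ess, x) for x in states}
        # essential classes: sink SCCs (everything reachable reaches back)
        classes = []
        for x in states:
            if all(x in reach_of[y] for y in reach_of[x]):
                if not any(x in C for C in classes):
                    classes.append(list(reach_of[x]))
        T = [x for x in states if not any(x in C for C in classes)]
        # [chi]: max-weight (= min total exponent) paths whose
        # intermediate vertices are transient (Floyd-Warshall over T)
        d = {(x, y): P.get((x, y)) for x in states for y in states if x != y}
        for t in T:
            for x in states:
                for y in states:
                    if x == y or x == t or y == t:
                        continue
                    if d[(x, t)] is not None and d[(t, y)] is not None:
                        alt = d[(x, t)] + d[(t, y)]
                        if d[(x, y)] is None or alt < d[(x, y)]:
                            d[(x, y)] = alt
        new_states = [frozenset().union(*C) for C in classes]
        newP = {}
        for C, nC in zip(classes, new_states):
            for D, nD in zip(classes, new_states):
                if nC == nD:
                    continue
                w = [d[(x, y)] for x in C for y in D if d[(x, y)] is not None]
                if w:
                    newP[(nC, nD)] = min(w)
        states, P = new_states, newP
    return set().union(*states)

# Figure fig:disc-pert (exponents 3, 2, 9, 6): the stable states are x and z
assert hub(['x', 'y', 'z', 't'],
           {('x', 'y'): 3, ('y', 'x'): 2,
            ('z', 't'): 9, ('t', 'z'): 6}) == {'x', 'z'}
```

On a zero perturbation the loop is skipped and all states are returned, matching the base case of the recursion behind Theorem \[thm:stable-states\].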
Line \[line:max-label-edges\] computes the essential graph, up to self-loops, and Line \[line:Tarjan-SSCC\] computes the essential classes by a modified version of Tarjan’s algorithm, as detailed in Algorithm \[algo:MT\]. The computation of ${[\chi]}(P)(\cup E_i,\cup E_j) := \max_{\leq}\{P(\gamma) : \gamma\in \Gamma_T(E_i,E_j)\}$ is performed in two stages: the first stage at Line \[line:partial-max\] considers only paths of length one; the second stage at Line \[line:global-max\] considers only paths of length greater than one, and therefore having their second vertex in $T$. This case disjunction reduces the size of the graph on which the shortest-path algorithm from Line \[line:Dijkstra\] is run, and thus reduces the complexity from $O(n^4)$ to $O(n^3)$, as will be detailed in Proposition \[prop:complexity\]. Note that the shortest-path algorithm is called with laws $\cdot$ and $\max$ instead of $+$ and $\min$. Moreover, since our weights are at most $1$ we can use [@LGJLMPS57] or [@Dijkstra59] (which assume non-negative weights) to implement Line \[line:Dijkstra\]. Proposition \[prop:complexity\] below shows that our algorithm is fast. \[prop:complexity\] The algorithm terminates within $O(n^3)$ computation steps, where $n$ is the number of vertices of the input graph. By Propositions \[prop:abs-algo\] and \[prop:complexity\] we now state our main algorithmic result, which is the algorithmic part of Theorem \[thm:teaser\]. \[thm:vanish-time\] Let a perturbation $p$ satisfy Assumption \[Assum1\]. A state is stochastically stable iff it belongs to $\KwHub(S,{[p]})$. Provided that inequality, multiplication, and ordered division between equivalence classes of perturbation maps can be computed in constant time, stability can be decided in $O(n^3)$, where $n$ is the number of states. One of the achievements of our algorithm is that it processes all weighted digraphs (*i.e.* abstractions of perturbations) uniformly.
In particular, neither irreducibility nor any kind of connectedness is required. For example, in Figure \[fig:disc-pert\], the four-state perturbation is the disjoint union of two smaller perturbations. As expected, the stable states of the union are the union of the stable states, *i.e.* $\{x,z\}$, but whereas the outgoing scaling applied to the bottom of Figure \[fig:disc-pert2\] (the perturbation restricted to $\{z,t\}$) would yield the bottom of Figure \[fig:disc-pert5\] directly by division by ${[{\epsilon}^6]}$, two rounds of outgoing scaling lead to this stage when processing the four-state perturbation. [0.15]{} \(x) [$x$]{}; (y) \[right of = x\] [$y$]{}; (z) \[below of = x\] [$z$]{}; (t) \[right of = z\] [$t$]{}; \(x) edge \[bend right\] node [${\epsilon}^3$]{} (y) (y) edge \[bend right\] node \[above\][${\epsilon}^2$]{} (x) (z) edge \[bend right\] node [${\epsilon}^9$]{} (t) (t) edge \[bend right\] node \[above\][${\epsilon}^6$]{} (z); [0.15]{} \(x) [$x$]{}; (y) \[right of = x\] [$y$]{}; (z) \[below of = x\] [$z$]{}; (t) \[right of = z\] [$t$]{}; \(x) edge \[bend right\] node [${[{\epsilon}^3]}$]{} (y) (y) edge \[bend right\] node \[above\][${[{\epsilon}^2]}$]{} (x) (z) edge \[bend right\] node [${[{\epsilon}^9]}$]{} (t) (t) edge \[bend right\] node \[above\][${[{\epsilon}^6]}$]{} (z); [0.1]{} \(x) [$x$]{}; (y) \[right of = x\] [$y$]{}; (z) \[below of = x\] [$z$]{}; (t) \[right of = z\] [$t$]{}; \(x) edge \[bend right\] node [${[{\epsilon}]}$]{} (y) (y) edge \[bend right\] node \[above\][${[1]}$]{} (x) (z) edge \[bend right\] node [${[{\epsilon}^7]}$]{} (t) (t) edge \[bend right\] node \[above\][${[{\epsilon}^4]}$]{} (z); [0.15]{} \(x) [$x$]{}; (z) \[below of = x\] [$z$]{}; (t) \[right of = z\] [$t$]{}; \(z) edge \[bend right\] node [${[{\epsilon}^7]}$]{} (t) (t) edge \[bend right\] node \[above\][${[{\epsilon}^4]}$]{} (z); [0.15]{} \(x) [$x$]{}; (z) \[below of = x\] [$z$]{}; (t) \[right of = z\] [$t$]{}; \(z) edge \[bend right\] node [${[{\epsilon}^3]}$]{}
(t) (t) edge \[bend right\] node \[above\][${[1]}$]{} (z); [0.1]{} \(x) [$x$]{}; (z) \[below of = x\] [$z$]{}; Discussion {#sect:disc} ========== This section studies two special cases of our setting: first, how assumptions that are stronger than Assumption \[Assum1\] make not only some proofs easier but also one result stronger; second, how far Young’s technique can be generalized. Then we notice that the termination of our algorithm defines an induction proof principle, which is used to show that the algorithm computes a well-known object when fed a strongly connected graph. Finally, we discuss how to give the so-far-informal notion of time scale a formal flavor. Stronger assumption ------------------- Let us consider Assumption \[Assum3\], which is a stronger version of Assumption \[Assum1\]. Assumption \[Assum3\] yields Proposition \[prop:transient-vanish\], which is a stronger version of Proposition \[prop:essential-graph\].\[prop:essential-graph1\]. (The proofs are similar but the new one is simpler.) \[Assum3\] If $x\neq y$ and $p(x,y)$ is non-zero, it is positive; and $f \cong g$ or $f\in o(g)$ or $g\in o(f)$ for all $f$ and $g$ in the multiplicative closure of the ${\epsilon}\mapsto p_{\epsilon}(x,y)$ with $x \neq y$. \[prop:transient-vanish\] Let a perturbation $p$ with state space $S$ satisfy Assumption \[Assum3\], and let $\mu$ be a stationary distribution map for $p$. If $y$ is a transient state, $\lim_{{\epsilon}\to 0} \mu_{{\epsilon}}(y) = 0$. Under Assumption \[Assum1\] some states may be neither stable nor fully vanishing: $y$ in Figure \[fig:trans-del5\] and $x$ in Figure \[fig:no-stable1\] where the bottom ${\epsilon}^2$ is replaced with ${\epsilon}$. Assumption \[Assum3\] rules out such cases, as stated below. \[cor:strong-assum-stable\] If a perturbation $p$ satisfies Assumption \[Assum3\], every state is either stable or fully vanishing.
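Proposition \[prop:transient-vanish\] can be illustrated numerically. The Python sketch below builds a hypothetical three-state perturbation (ours, not from the text) whose limit chain pushes $x$ to $y$ to $z$, so that $x$ and $y$ are transient, and computes its stationary distribution exactly *via* the Markov chain tree theorem, the tool behind Young's technique discussed in the next subsection; the mass of the transient states is of order ${\epsilon}^2$ and vanishes with ${\epsilon}$:

```python
from itertools import product

def stationary_by_trees(S, P):
    """Stationary distribution of an irreducible chain via the Markov
    chain tree theorem: mu(root) is proportional to the total weight
    of the spanning trees directed towards root (brute force)."""
    weight = {x: 0.0 for x in S}
    for root in S:
        others = [s for s in S if s != root]
        # a parent map on S \ {root} encodes a candidate directed tree
        for parents in product(S, repeat=len(others)):
            par = dict(zip(others, parents))
            if any(v == par[v] for v in others):
                continue  # self-loop: not a tree
            ok = True
            for v in others:  # every vertex must reach the root
                seen, u = set(), v
                while u != root and u not in seen:
                    seen.add(u)
                    u = par[u]
                if u != root:
                    ok = False
                    break
            if ok:
                w = 1.0
                for v in others:
                    w *= P[v][par[v]]
                weight[root] += w
    total = sum(weight.values())
    return {x: weight[x] / total for x in S}

def chain(eps):
    # hypothetical perturbation: in the limit, x -> y -> z and z stays put,
    # so x and y are transient and z is the unique stable state
    return {'x': {'x': 0.0,      'y': 1 - eps, 'z': eps},
            'y': {'x': eps,      'y': 0.0,     'z': 1 - eps},
            'z': {'x': eps**2,   'y': 0.0,     'z': 1 - eps**2}}

mu = stationary_by_trees(['x', 'y', 'z'], chain(1e-3))
assert mu['z'] > 0.99  # the transient states x and y carry vanishing mass
```

Decreasing ${\epsilon}$ further drives $\mu_{\epsilon}(x)$ and $\mu_{\epsilon}(y)$ towards $0$, as Proposition \[prop:transient-vanish\] predicts.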
Generalization of Young’s technique ----------------------------------- Our approach to proving the existence of, and computing, the stable states of a perturbation differs from Young’s approach [@Young93], which uses a finite version of the Markov chain tree theorem. In this section we investigate how far Young’s technique can be generalized. This will suggest that we were right to change approaches, but it will also yield a decidability result in Proposition \[prop:stable-dec\]. Lemma \[lem:gen-Young\] below is a generalization of [@Young93 Lemma 1]. Both proofs use the Markov chain tree theorem, but they are significantly different nonetheless. Let $p$ be a perturbation with state space $S$. As in [@Young93] or [@GKMS15], for all $x\in S$ let $\mathcal{T}_x$ be the set of spanning trees of the complete digraph on $S$ (with arc set $S \times S$) that are directed towards $x$. For all $x\in S$ let $\beta^x_{\epsilon}:= \max_{T\in \mathcal{T}_x} \prod_{(z,t)\in T}p_{\epsilon}(z,t)$. \[lem:gen-Young\] A state $x$ of an irreducible perturbation with state space $S$ is stable iff $\beta^{y} \precsim \beta^{x}$ for all $y\in S$. Assumption \[Assum1\] and Lemma \[lem:gen-Young\] together yield Observation \[obs:gen-pos-ss\], a generalization of existing results about the existence of stable states, such as [@Young93 Theorem 4]. The underlying algorithm runs in time $O(n^3)$ where $n$ is the number of states, just like Young’s. \[obs:gen-pos-ss\] Let a perturbation $p$ on state space $S$ satisfy Assumption \[Assum1\]. If for all $x \neq y$ the map $p(x,y)$ is either identically zero or strictly positive, $p$ has stable states. The stable states of a perturbation are computable even without the positivity assumption from Observation \[obs:gen-pos-ss\], but their existence is no longer guaranteed by the same proof. In this way, Observation \[obs:gen-ss\] is like the existential part of Theorem \[thm:teaser\], but with worse complexity.
\[obs:gen-ss\] Let $F$ be a set of perturbation maps of type $I\to [0,1]$ for some $I$. Let us assume that $F$ is closed under multiplication by elements in $F$ and by characteristic functions of decidable subsets of $I$, that $\precsim$ is decidable on $F\times F$, and that the supports of the functions in $F$ are uniformly decidable. If $f\precsim g$ or $g\precsim f$ for all $f,g\in F$, stability is decidable in $O(n^5)$ for the perturbations $p$ such that $x \neq y \Rightarrow p(x,y) \in F$. The assumption $f\precsim g$ or $g\precsim f$ for all $f,g\in F$ from Observation \[obs:gen-ss\] corresponds to Assumption \[Assum1\]. Proposition \[prop:stable-dec\] below drops it while preserving decidability of stability, but at the cost of an exponential blow-up because the supports of the maps are no longer ordered by inclusion. \[prop:stable-dec\] Let $F$ be a set of perturbation maps of type $I\to [0,1]$ for some $I$. Let us assume that $F$ is closed under multiplication by elements in $F$ and by characteristic functions of decidable subsets of $I$, that $\precsim$ is decidable on $F\times F$, and that the supports of the functions in $F$ are uniformly decidable. Then stability is decidable for the perturbations $p$ such that $x \neq y \Rightarrow p(x,y) \in F$. What does Algorithm \[algo:br\] compute? ---------------------------------------- Applying the outgoing scaling, the essential collapse, and the transient deletion sequentially terminates, so it amounts to an *induction proof principle* for finite graphs with arcs labeled in an ordered-division semiring. Observation \[obs:span-tree\] is proved following this principle. It can also be proved by a very indirect argument using Lemma \[lem:gen-Young\] and Theorem \[thm:vanish-time\], but the proof using induction is simple and from scratch.
\[obs:span-tree\] Let $(F,0,1,\cdot, \leq,\div)$ be an ordered-division semiring, and let $P:S\times S \to F$ correspond to a strongly connected digraph, where an arc is absent iff its weight is $0$. Then $\KwHub(S,P)$ returns the roots of the maximum directed spanning trees. Note that finding the roots from Observation \[obs:span-tree\] is also doable in $O(n^3)$ by computing the maximum spanning trees rooted at each vertex, using [@GGST86], which relies on the notion of *heap*, whereas $\KwHub$ uses a less advanced algorithm. Observation \[obs:span-tree\] may be extended to non-strongly-connected digraphs by considering the sink SCCs independently, but it is not obvious how to generalize the notion of maximum spanning tree into one that is meaningful for non-strongly-connected graphs. Nevertheless, the vertices returned by $\KwHub(S,P)$ are the ones in $S$ that attract the most flow/traffic according to $P$, hence the name $\KwHub$. One last algorithmic remark: from the proof of Proposition \[prop:complexity\] we see that Tarjan’s algorithm is overkill for achieving a complexity of $O(n^3)$. Indeed, combining several basic shortest-path algorithms would have done the trick, but using Tarjan’s algorithm should make the computation of $\KwHub$ faster by a constant factor. Vanishing time scales --------------------- Under Assumption \[Assum1\], computing $\KwHub$ and considering the intermediate weighted graphs shows the order in which the states are found to be vanishing. Under the stronger Assumption \[Assum3\], a notion of *vanishing time scale* may be defined, with the flavor of non-standard analysis [@Robinson74]. Let $(\mathcal{T},\cdot)$ be a group of functions $I \to ]0,+\infty[$ such that $f \cong g$ or $f\in o(g)$ or $g\in o(f)$ for all $f$ and $g$ in $\mathcal{T}$. The elements of ${[\mathcal{T}]}$ are called the time scales.
Let a perturbation $p$ on state space $S$ satisfy Assumption \[Assum3\] and let $x \in S$ be deleted at the $d$-th recursive call of $\KwHub(S,{[p]})$. Let $M_1,\dots,M_d$ be the maxima (*i.e.* $M$) from Line \[line:max-label\] in Algorithm \[algo:br\] at the 1st,...,$d$-th recursive calls, respectively. We say that $x$ vanishes at time scale $\prod_{1 \leq i \leq d} M_i^{-1}$. Figure \[fig:vanish-ts\] suggests that a similar account of vanishing time scales, even just a qualitative one, would be much more difficult to obtain by invoking the Markov chain tree theorem as in [@Young93]. The only stable state is $t$; the state $z$ vanishes at time scale ${[{\epsilon}]}^{-2}$; and $x$ and $y$ vanish at the same time scale ${[1]}$ although the maximum spanning trees rooted at $x$ and $y$ have different weights: ${\epsilon}^4$ and ${\epsilon}^3$, respectively. \(z) [$z$]{}; (x) \[right of = z\] [$x$]{}; (y) \[right of = x\] [$y$]{}; (t) \[right of = y\] [$t$]{}; \(z) edge \[loop left\] node [$1-{\epsilon}$]{} () edge \[bend left\]node [${\epsilon}$]{} (x) (x) edge \[bend left\]node [$1-{\epsilon}$]{} (z) edge \[bend left\] node [${\epsilon}$]{} (y) (y) edge \[bend left\] node [$1-{\epsilon}^2$]{} (t) edge \[bend left\] node [${\epsilon}^2$]{} (x) (t) edge \[loop right\] node [$1-{\epsilon}$]{} () edge \[bend left\] node [${\epsilon}$]{} (y); We thank Ocan Sankur for useful discussions. Tarjan Modified =============== The function *TarjanSinkSCC* is written in Algorithm \[algo:MT\]. It consists of Tarjan’s algorithm [@Tarjan72; @wiki:TarjanSCC], which normally returns all the SCCs of a directed graph, plus a few newly added lines (as mentioned in comments) so that it returns the sink SCCs only. It is not difficult to see that the newly added lines do not change the complexity of the algorithm, which is $O(|S|+|A|)$ where $|S|$ and $|A|$ are the numbers of vertices and arcs in the graph, respectively. The new lines only deal with the new boolean values $v.sink$.
These lines are designed so that when popping an SCC with root $v$ from the stack, the value $v.sink$ is true iff the SCC is a sink, hence the test at Line \[line:actual-sink\]. All the $v.sink$ are initialized with $true$ at Line \[line:apriori-sink\], and $v.sink$ is set to false on two occasions: at Line \[line:above-new-sink\] before a sink SCC with root $v$ is output; and at Line \[line:above-old-SCC\] when one successor $w$ of $v$ has already been popped from the stack (since $w.index$ is defined), which means that there is one SCC below that of $v$. These values are then propagated upwards at Line \[line:no-sink-up\]. The conjunction reflects the fact that a vertex is not in a sink SCC iff one of its successors in the same SCC is not. Proofs and Lemmas ================= Lemma \[lem:cong\] below relates to Definition \[def:cong\]. \[lem:cong\] 1. \[lem:cong1\] $\precsim$ is a preorder and $\cong$ an equivalence relation. 2. \[lem:cong2\] For all $f,g: I\to ]0,1]$, we have $f \precsim g$ iff $\frac{1}{g} \precsim \frac{1}{f}$, so $f \cong g$ iff $\frac{1}{f} \cong \frac{1}{g}$. 3. \[lem:cong3\] $f \precsim g$ and $f' \precsim g'$ implies $f+f' \precsim g+g'$ and $f\cdot f' \precsim g\cdot g'$. 4. \[lem:cong4\] $f \cong g$ and $f' \cong g'$ and $f \precsim f'$ implies $g \precsim g'$. 5. \[lem:cong5\] $f+f' \cong \max(f,f') := x \mapsto \max(f(x),f'(x))$. 6. \[lem:cong6\] $f \precsim f'$ implies $\max(f,f') \cong f'$. 7. \[lem:cong7\] $f\mid_J \precsim g\mid_J$ and $f\mid_{I\setminus J} \precsim g\mid_{I\setminus J}$ implies $f \precsim g$. 8. \[lem:cong8\] Let $0$ be a limit point of both $J \subseteq I$ and $I \setminus J$. A state $x$ is stable (fully vanishing) for a perturbation $p$ iff it is stable (fully vanishing) for both $p\mid_J$ and $p\mid_{I\setminus J}$.
Let $S := \{x_1,\dots,x_n, y_1,\dots,y_m\}$, for all $i < n$ let $p(x_i,x_{i+1}) := f_i$, let $p(x_n,y_1) := f_n$, for all $i$ let $p(x_i,x_1) := 1 - f_i$, for all $j < m$ let $p(y_j,y_{j+1}) := g_j$, let $p(y_m,x_1) := g_m$, for all $j$ let $p(y_j,y_1) := 1 - g_j$. It is easy to check that $\beta^{x_i} = \prod_j g_j \cdot \prod_{1 \leq k < i} f_k \cdot \prod_{i \leq k \leq n} \max(f_k,1-f_k) \cong \prod_j g_j \cdot \prod_{1 \leq k < i} f_k $ for all $i$. Let us first assume that the functions in $F$ are positive. So, by Lemma \[lem:gen-Young\], if one $x_i$ is stable, so is $x_1$. Likewise for the $y_j$. But $\beta^{x_1}$ and $\beta^{y_1}$ are not comparable by assumption, so by Lemma \[lem:gen-Young\] there is no stable state. Let us now prove the claim in the general case, which holds if $n = m = 1$ for the same reason as in Figure \[fig:no-stable1\], so let us assume that $n + m > 2$. W.l.o.g. let us also assume that any two products of $n'$ and $m'$ functions from $F$ with $n' + m' < n + m$ are comparable. So the functions in $F$ are pairwise comparable; (up to intersection of $I$ with a neighborhood of $0$) their supports constitute a linearly ordered set for the inclusion; and $\prod_i f_i \precsim \prod_{1 < j} g_j$ since $g_1 \leq 1$ and $\neg( \prod_j g_j \precsim \prod_{i} f_i)$. Up to renaming let us assume that $g_1$ has the smallest support $J$. Up to restriction of $I$ to the support of $\prod_i f_i$, which does not change stability nor non-comparability of $\prod_i f_i$ and $\prod_j g_j$ by Lemmas \[lem:cong\].\[lem:cong8\] and \[lem:cong\].\[lem:cong7\], let us assume that the $f_i$ are positive, and so is $g_j$ for $j > 1$ since $\prod_i f_i \precsim \prod_{1 < j} g_j$. If $g_1$ is also positive, we are back to the special case above, so let us assume that it is not.
On the one hand, restricting $p$ to $I \setminus J$ shows that only $y_1$ might be stable, by Lemma \[lem:cong\].\[lem:cong8\]; on the other hand $\neg( \prod_j g_j\mid_J \precsim \prod_{i} f_i \mid_J)$, so let us make a case disjunction: if $\prod_i f_i\mid_J \precsim \prod_{j} g_j \mid_J$ then $\beta^{y_1} \in o(\beta^{x_1})$, so $y_1$ cannot be stable by Lemma \[lem:gen-Young\]; if they are not comparable, the special case above says that $p\mid_J$ has no stable state, so neither has $p$ by Lemma \[lem:cong\].\[lem:cong8\]. Let us assume that $\mu$ is stationary; it is well-known that its support then involves only essential states. To prove that the equation holds let us make a case disjunction: if $x$ and $y$ are transient states, $\mu(x) = \mu(y) = 0$; if $x$ is transient and $y$ is essential, $\mu(x) = 0$ and ${\mathbb{P}}^y(\tau^+_x < \tau^+_y) = 0$; if $x$ and $y$ belong to distinct essential classes, ${\mathbb{P}}^x(\tau^+_y < \tau^+_x) = {\mathbb{P}}^y(\tau^+_x < \tau^+_y) = 0$; if $x$ and $y$ belong to the same essential class $E$, let $E_1,\dots,E_k$ be the essential classes. Then for all $i \in\{1,\dots,k\}$ let $\mu_{E_i}$ be the extension to $S$ (by the zero-function outside of $E_i$) of the unique stationary distribution of $p\mid_{E_i \times E_i}$. So $\frac{\mu(x)}{\mu(y)} = \frac{\mu_E(x)}{\mu_E(y)}$ by [@BL14 Proposition 2.1] and since $\mu$ is a convex combination of the $\mu_{E_1},\dots,\mu_{E_k}$. Conversely, let us assume that the support of $\mu$ involves only essential states and that the equation holds. Let $E'_1,\dots,E'_{k'}$ be the essential classes with positive $\mu$-measure. Let $i \leq k'$, and for all $x\in E'_i$ let $\mu_{i}(x) := \frac{\mu\mid_{E'_i}(x)}{\sum_{y\in E'_i}\mu(y)}$ define a distribution for $p\mid_{E'_i\times E'_i}$. Since $\mu_i$ also satisfies the equation and $p\mid_{E'_i\times E'_i}$ is irreducible, $\mu_i$ is the unique stationary distribution for $p\mid_{E'_i\times E'_i}$.
Since $\mu$ is a convex combination of the $\mu_{1},\dots,\mu_{k'}$, it is stationary for $p$. 1. Let $\sigma_N := \sup\{n\in{\mathbb{N}}: |\{ k \leq n: X_k\in \tilde{S}\}| = N\}$ be the last time at which $(X_n)_{n\in{\mathbb{N}}}$ has made exactly $N$ visits to $\tilde{S}$. Clearly $\sigma_N\stackrel{N\to\infty}{\longrightarrow} \infty$, so ${\mathbb{P}}^x(\tau^+_y < \tau^+_x) = \lim_{N\to\infty}{\mathbb{P}}^x(\tau^+_y < \min(\tau^+_x,\sigma_N))$. On the other hand, ${\mathbb{P}}^x(\tau^+_y < \min(\tau^+_x,\sigma_N))$ $$\begin{aligned} & = {\mathbb{P}}^x(X_{\tau^+_{S\setminus T}} = y)\\ & + \sum_{z\in S\setminus (T\cup\{x,y\})} {\mathbb{P}}^x(X_{\tau^+_{S\setminus T}} = z){\mathbb{P}}^z(\tau^+_y < \min(\tau^+_x,\sigma_{N-1}))\\ & = \tilde{p}(x,y) + \sum_{z\in S\setminus (T\cup\{x,y\})} \tilde{p}(x,z){\mathbb{P}}^z(\tau^+_y < \min(\tau^+_x,\sigma_{N-1}))\\ &\mbox{thus by iteration we obtain}\\ & = \sum_{k=1}^{N-1}\sum_{z_1,\dots,z_k\in S\setminus(T\cup\{x,y\})} \tilde{p}(x,z_1)\tilde{p}(z_1\dots z_k)\tilde{p}(z_k,y)\\ & =\tilde{{\mathbb{P}}}^x(\tau^+_y < \min(\tau^+_x,\sigma_N)) \stackrel{N\to\infty}{\longrightarrow}\tilde{{\mathbb{P}}}^x(\tau^+_y < \tau^+_x)\end{aligned}$$ Thus ${\mathbb{P}}^x(\tau^+_y < \tau^+_x) = \tilde{{\mathbb{P}}}^x(\tau^+_y < \tau^+_x)$. 2. Let us first assume that $p$ is irreducible, and so is $\tilde{p}$. Let $\mu$ and $\tilde{\mu}$ be their respective unique, positive stationary distributions. By Lemma \[lem:hsr\] and Lemma \[lem:state-del\].\[lem:state-del1\] we find $$\label{eq:ratio-mu-p-1} \frac{\mu(y)}{\mu(x)} = \frac{{\mathbb{P}}^x(\tau^+_y < \tau^+_x)}{{\mathbb{P}}^y(\tau^+_x < \tau^+_y)} = \frac{\tilde{{\mathbb{P}}}^{x}(\tau^+_y < \tau^+_{x})}{\tilde{{\mathbb{P}}}^y(\tau^+_{x} < \tau^+_y)} = \frac{\tilde{\mu}(y)}{\tilde{\mu}(x)}$$ Summing this equation over $y\in \tilde{S}$ proves the irreducible case.
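The watched chain $\tilde p$ of this lemma has a closed form: deleting the states in $T := S\setminus\tilde S$ gives, in block notation, $\tilde p = A + B(I-C)^{-1}D$ with $A = p\mid_{\tilde S\times\tilde S}$ and $C = p\mid_{T\times T}$. The sketch below (an arbitrary illustrative kernel, not from the text) checks the consequence just proved: the stationary distribution of $\tilde p$ is the renormalized restriction of $\mu$ to $\tilde S$.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible stochastic matrix."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def censor(P, keep):
    """Kernel of the chain watched on `keep` only:
    ptilde(x, y) = P^x(first return to `keep` lands in y)."""
    keep = list(keep)
    gone = [z for z in range(P.shape[0]) if z not in keep]
    A = P[np.ix_(keep, keep)]
    B = P[np.ix_(keep, gone)]
    C = P[np.ix_(gone, gone)]
    D = P[np.ix_(gone, keep)]
    return A + B @ np.linalg.solve(np.eye(len(gone)) - C, D)

# Arbitrary irreducible kernel on four states; delete state 3.
P = np.array([[0.2, 0.3, 0.3, 0.2],
              [0.25, 0.25, 0.25, 0.25],
              [0.1, 0.2, 0.3, 0.4],
              [0.3, 0.3, 0.2, 0.2]])
keep = [0, 1, 2]
Pt = censor(P, keep)
mu, mut = stationary(P), stationary(Pt)
```

The two assertions of interest are that `Pt` is stochastic (the chain returns to `keep` almost surely) and that `mut` is proportional to `mu` restricted to `keep`.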
To prove the general claim, let $E_1,\dots,E_{k}$ be the essential classes of $p$, so the essential classes of $\tilde{p}$ are the non-empty sets among $E_1 \cap\tilde{S},\dots, E_k \cap\tilde{S}$. For $i \leq k$ let $\mu_i$ ($\tilde{\mu}_i$) be the extension to $S$ ($\tilde{S}$) of the unique stationary distribution of the irreducible $p\mid_{E_i\times E_i}$ ($\tilde{p}\mid_{E_i\cap \tilde{S}\times E_i\cap\tilde{S}}$). Let $\mu$ be a stationary distribution for $p$; it is well-known that $\mu$ is then a convex combination $\sum_{1\leq i \leq k}\alpha_i\mu_i$, and it is straightforward to check that the convex combination $\tilde{\mu} := \sum_{1\leq i \leq k}\frac{\sum_{y\in \tilde{S} \cap E_i}\mu(y)}{\sum_{y\in \tilde{S}}\mu(y)}\cdot \tilde{\mu}_i$ witnesses the claim. Conversely, let $\tilde{\mu}$ be a stationary distribution of $\tilde{p}$, so it is a convex combination $\sum_{1\leq i \leq k}\beta_i\tilde{\mu}_i$, and it is straightforward to check that the convex combination $\mu := \sum_{1 \leq i\leq k}\frac{L_i}{\sum_{j}L_j}\mu_i$ witnesses the claim, where $L_i := \beta_i \cdot \prod_{j\neq i} \frac{\mu_j(x_j)}{\tilde{\mu}_j(x_j)}$ for any $x_j\in \tilde{S}\cap E_j$. Let us first prove the claim for irreducible perturbations, starting with the following simpler case: let $x\in S$ be such that for all $y$ and all $z\neq x$ we have $p(x,z) \cong \tilde{p}(x,z)$ and $p(z,y) = \tilde{p}(z,y)$; so ${\mathbb{P}}^{z}(\tau_y < \tau^{+}_x) = \tilde{{\mathbb{P}}}^{z}(\tau_y < \tau^{+}_x)$ for all $z \neq x$ since the paths leading from $z$ to $y$ without hitting $x$ do not involve any step from $x$ to another state.
So $$\begin{aligned} {\mathbb{P}}^{x}(\tau^{+}_y < \tau^{+}_x) &= \sum_{z\in S\setminus\{x\}}p(x,z){\mathbb{P}}^{z}(\tau_y < \tau^{+}_x) \\ & \cong \sum_{z\in S\setminus\{x\}}\tilde{p}(x,z)\tilde{{\mathbb{P}}}^{z}(\tau_y < \tau^{+}_x) = \tilde{{\mathbb{P}}}^{x}(\tau^{+}_y < \tau^{+}_x) \end{aligned}$$ So by Lemmas \[lem:hsr\], \[lem:cong\].\[lem:cong2\], and \[lem:cong\].\[lem:cong3\], and since the unique stationary distributions $\mu$ and $\tilde{\mu}$ are positive, $$\frac{\mu(x)}{\mu(y)} = \frac{{\mathbb{P}}^{y}(\tau^{+}_x < \tau^{+}_y)}{{\mathbb{P}}^{x}(\tau^{+}_y < \tau^{+}_x)} \cong \frac{\tilde{{\mathbb{P}}}^{y}(\tau^{+}_x < \tau^{+}_y)}{\tilde{{\mathbb{P}}}^{x}(\tau^{+}_y < \tau^{+}_x)} = \frac{\tilde{\mu}(x)}{\tilde{\mu}(y)}\mbox{\quad for all }y\in S;$$ invoking the equation above twice shows that $$\frac{\mu(z)}{\mu(y)} = \frac{\mu(z)}{\mu(x)} \cdot \frac{\mu(x)}{\mu(y)} \cong \frac{\tilde{\mu}(z)}{\tilde{\mu}(x)} \cdot \frac{\tilde{\mu}(x)}{\tilde{\mu}(y)} = \frac{\tilde{\mu}(z)}{\tilde{\mu}(y)}\mbox{\quad for all }z,y\in S;$$ and summing this second equation over $z\in S$ yields $\frac{1}{\mu(y)} \cong \frac{1}{\tilde{\mu}(y)}$, *i.e.*, $\mu(y) \cong \tilde{\mu}(y)$ for all $y\in S$. The irreducible case is then proved by induction on $n := |\{x\in S : \exists y\in S,x\neq y \wedge p(x,y)\neq \tilde{p}(x,y)\}|$, which trivially holds for $n = 0$. For $0 < n$ let distinct $x,y\in S$ be such that $p(x,y)\neq \tilde{p}(x,y)$, let $\hat{p}(z,y) := p(z,y)$ for all $(z,y)\in (S\setminus\{x\}) \times S$, and let $\hat{p}(x,y) := \tilde{p}(x,y)$ for all $y\in S$. By the simpler case $\mu \cong \hat{\mu}$; by induction hypothesis $\hat{\mu} \cong \tilde{\mu}$; so $\mu \cong \tilde{\mu}$ by transitivity of $\cong$. Let us now prove the general claim by induction on the number of the non-zero transition maps of $p$ that have zeros. Base case: all the non-zero maps are positive. Let $E'_1,\dots,E'_{k'}$ be the sink SCCs of the graph on $S$ with arc $(x,y)$ if $p(x,y)$ is non-zero.
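Sink SCCs of this digraph (and hence, for a fixed ${\epsilon}$, the essential classes and the transient states) can be computed with a standard two-pass Kosaraju-style search. The sketch below is a generic implementation offered as an illustration, not code from the text; `adj` maps each state to its successors in the digraph of non-zero transitions.

```python
def sccs(adj):
    """Strongly connected components of a digraph given as dict of successor sets."""
    order, seen = [], set()
    for s in adj:                      # first pass: record finish order
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(adj[s]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(adj[w])))
                    break
            else:
                order.append(v)
                stack.pop()
    radj = {v: set() for v in adj}     # reversed digraph
    for v in adj:
        for w in adj[v]:
            radj[w].add(v)
    comps, seen = [], set()
    for s in reversed(order):          # second pass on the reversed digraph
        if s in seen:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            comp.add(v)
            for w in radj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def essential_classes(adj):
    """Sink SCCs: components with no arc leaving them."""
    return [c for c in sccs(adj) if all(w in c for v in c for w in adj[v])]

# Example digraph: 0 <-> 1, 1 -> 2, 2 -> 2, 3 -> 2.
adj = {0: {1}, 1: {0, 2}, 2: {2}, 3: {2}}
ess = essential_classes(adj)
transient = set(adj) - set().union(*ess)
```

Here $\{2\}$ is the unique essential class and $0,1,3$ are transient.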
For $i \leq k'$ let $\mu_i$ ($\tilde{\mu}_i$) be the unique stationary distribution map of the irreducible $p\mid_{E'_i\times E'_i}$ ($\tilde{p}\mid_{E'_i\times E'_i}$). Since $p(x,y)\mid_{E'_i\times E'_i} \cong \tilde{p}(x,y)\mid_{E'_i\times E'_i}$ for all $x \neq y$, the irreducible case implies $\mu_i \cong \tilde{\mu}_i$. Clearly $\mu$ is a convex combination of the $\mu_i$, and the convex combination $\tilde{\mu}$ of the $\tilde{\mu}_i$ with the same coefficients is a stationary distribution map for $\tilde{p}$, and $\mu \cong \tilde{\mu}$ by Lemma \[lem:cong\].\[lem:cong3\]. Inductive case. Let $p(z,t)$ be a non-zero function with support $J \subsetneq I$. Up to focusing we may assume that $0$ is a limit point of both $J$ and $I\setminus J$. By induction hypothesis on $p\mid_{J} \cong \tilde{p}\mid_{J}$ and $p\mid_{I\setminus J} \cong \tilde{p}\mid_{I\setminus J}$ we obtain two distribution maps $\tilde{\mu}_J$ and $\tilde{\mu}_{I\setminus J}$ that can be combined to witness the claim. Let $p(x,y)$ be in the essential graph. By the definition of $\precsim$ and finiteness of the state space, let positive $b_{xy}$ and ${\epsilon}_{xy}$ be such that $p_{\epsilon}(x,z) \leq b_{xy} \cdot p_{\epsilon}(x,y)$ for all ${\epsilon}< {\epsilon}_{xy}$ and $z\in S$. So for all ${\epsilon}< {\epsilon}_{xy}$ we have $1 = \sum_{z\in S}p_{\epsilon}(x,z) \leq |S| \cdot b_{xy} \cdot p_{\epsilon}(x,y)$. Now let $b$ (${\epsilon}_0$) be the maximum (minimum) of the $b_{xy}$ (${\epsilon}_{xy}$) for $(x,y)$ in the essential graph. Thus $0 < (b \cdot |S|)^{-|S|} \leq p_{\epsilon}(\gamma)$ for all ${\epsilon}< {\epsilon}_0$. For all $y\in T$ let $x_y$ be an essential state reachable from $y$ in the essential graph. So $c\cdot \mu(y) \leq \mu(x_y)$ for all $y\in T$ by Lemma \[lem:hsr\]. Therefore $1 \leq c^{-1} \sum_{y\in T}\mu(x_y) + \sum_{x\in S\setminus T}\mu(x)$, and the claim is proved by further approximation. Lemma \[lem:et\] below is a technical tool proved by a standard argument in Markov chain theory.
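For a fixed ${\epsilon}$, geometric tail bounds of the kind stated in Lemma \[lem:et\] below can be checked exactly by matrix powers: restricting the kernel to $E\setminus\{x\}$ gives ${\mathbb{P}}^x(\tau^* > n)$ in closed form. The chain and the constant $c$ in this sketch are illustrative assumptions (here $E = S$, so $\tau^*$ is the first return time to $x$); it is not code from the text.

```python
import numpy as np

# Arbitrary irreducible kernel; single essential class E = S, with x = 0.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
size = P.shape[0]
Q = P[1:, 1:]        # transitions inside E \ {x}
start = P[0, 1:]     # first step leaves x and lands in E \ {x}

def tail(n):
    """P^x(tau* > n): the chain avoids x during steps 1..n."""
    v = start.copy()
    for _ in range(n - 1):
        v = v @ Q
    return v.sum()

# Every state reaches x in one step with probability > 0.05,
# so c = 0.05 is a valid constant for this particular chain.
c = 0.05
```

The bound $(1-c)^{\lfloor n/|S|\rfloor}$ then dominates `tail(n)` for every $n$, as the lemma asserts.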
\[lem:et\] Let a perturbation with state space $S$ satisfy Assumption \[Assum2\], and let $x$ be in some essential class $E$. Then for all $n\in\mathbb{N}$ $${\mathbb{P}}^x(\tau^+_{(S\setminus E)\cup\{x\}} > n) \leq (1-c)^{\lfloor\frac{n}{|S|}\rfloor}$$ Let $\tau^* := \tau^+_{(S\setminus E)\cup\{x\}}$. For every $y\in E$ let $\gamma_y\in\Gamma_E(y,x)$ be in the essential graph. $c < p(\gamma_y)$ by Assumption \[Assum2\], so $\max_{y\in E} {\mathbb{P}}_{{\epsilon}}^y (\tau^* > |S|) \leq 1 - c < 1$, so for all $k\in\mathbb{N}$ $$\begin{aligned} {\mathbb{P}}^x (\tau^* > k|S|) & \leq \left(\max_{y\in E} {\mathbb{P}}^y (\tau^* > |S|)\right)^k \\ & \leq (1-c)^k \textrm{ by the strong Markov property, so}\\ {\mathbb{P}}^x (\tau^* > n) &\leq {\mathbb{P}}^x (\tau^* > |S|\cdot\lfloor\frac{n}{|S|}\rfloor) \leq (1-c)^{\lfloor\frac{n}{|S|}\rfloor} \textrm{ for all }n\in\mathbb{N}.\end{aligned}$$ Up to focusing let $p$ satisfy Assumption \[Assum2\]. The second statement boils down to Lemma \[lem:cong\].\[lem:cong3\]. For the first statement, for every $z\in E$ let $\gamma_z\in \Gamma_E(x,z)$ be a path in the essential graph, so $c < p(\gamma_z)$ by Assumption \[Assum2\]. Let $\tau^* := \tau^+_{(S\setminus E)\cup\{x\}}$, so $$\begin{aligned} c \cdot p(z,y) & \leq c \cdot {\mathbb{P}}^z(X_{\tau^*} = y) \leq p(\gamma_z) \cdot {\mathbb{P}}^z(X_{\tau^*} = y) \\ & \leq {\mathbb{P}}^x(X_{\tau^*} = y) = \tilde{p}(\cup E,y) \end{aligned}$$ which implies $c \cdot \max_{z\in E}p(z,y) \leq \tilde{p}(\cup E,y)$, *i.e.*, half of the statement $\tilde{p}(\cup E,y) \cong \max_{z\in E} p(z,y)$. Now, for every positive natural $N$ let ${\bf A}_N := \{x\} \times (E\setminus\{x\})^{N-1} \times \{y\}$.
$$\begin{aligned} {\mathbb{P}}^x(X_{\tau^*} = y,\tau^* = N) & = \sum_{\gamma\in {\bf A}_N} p(\gamma)\\ & \leq \max_{z\in E}p(z,y)\sum_{\gamma t\in {\bf A}_N} p(\gamma)\\ & = \max_{z\in E}p(z,y) {\mathbb{P}}^x(\tau^* \geq N-1)\end{aligned}$$ Let $q := (1-c)^{1/|S|} < 1$, so ${\mathbb{P}}^x(\tau^* \geq N) \leq (1-c)^{-2}\cdot q^N$ by Lemma \[lem:et\], and $$\begin{aligned} {\mathbb{P}}^x(X_{\tau^*} = y) & = \sum_{N=1}^{\infty}{\mathbb{P}}^x(X_{\tau^*} = y,\tau^* = N)\\ & \leq \max_{z\in E}p(z,y)\sum_{N=0}^{\infty}(1-c)^{-2}\cdot q^N \\ & = (1-c)^{-2}(1-q)^{-1}\cdot \max_{z\in E}p(z,y)\end{aligned}$$ Up to focusing let $p$ satisfy Assumption \[Assum2\]. 1. Let $y$ be in the set $T$ of transient states, so there exists $x\notin T$ and $\gamma\in \Gamma_T(y,x)$ such that $\gamma$ is also in the essential graph. By Assumption \[Assum2\] this implies $$0 < \liminf_{{\epsilon}\to 0}p_{\epsilon}(\gamma) \leq \liminf_{{\epsilon}\to 0}{\mathbb{P}}_{{\epsilon}}^y(\tau^+_x < \tau^+_y).$$ On the other hand, let $E$ be the essential class of $x$. Lemma \[lem:p-cong-max\] implies that $$\begin{aligned} {\mathbb{P}}^x(\tau^+_y < \tau^+_x) & \leq 1- {\mathbb{P}}^x(X_{\tau^+_{(S\setminus E)\cup\{x\}}} = x) \\ &= \sum_{z\notin E}{\mathbb{P}}^x(X_{\tau^+_{(S\setminus E)\cup\{x\}}} = z) \cong \sum_{z\notin E}\max_{t\in E}p(t,z)\end{aligned}$$ Since $E$ is an essential class, for all $t\in E$ and $z\notin E$ the function $p(t,z)$ is not $\precsim$-maximal. So $\liminf_{{\epsilon}\to 0} p_{\epsilon}(t,z) = 0$ by definition of the essential graph, and $\liminf_{{\epsilon}\to 0} \sum_{z\notin E}\max_{t\in E}p_{\epsilon}(t,z) = 0$ by Assumption \[Assum1\], Lemma \[lem:cong\].\[lem:cong6\], and finiteness. By combining this with the two inequalities above one obtains $$\liminf_{{\epsilon}\to 0} \frac{{\mathbb{P}}_{{\epsilon}}^x(\tau^+_y < \tau^+_x)}{{\mathbb{P}}_{{\epsilon}}^y(\tau^+_x < \tau^+_y)} = 0$$ and noting the following by Lemma \[lem:hsr\] allows us to conclude.
$$\mu_{{\epsilon}}(y) \leq \frac{\mu_{{\epsilon}}(y)}{\mu_{{\epsilon}}(x)} = \frac{{\mathbb{P}}_{{\epsilon}}^x(\tau^+_y < \tau^+_x)}{{\mathbb{P}}_{{\epsilon}}^y(\tau^+_x < \tau^+_y)}$$ 2. By Assumption \[Assum2\] (twice) and Lemma \[lem:hsr\]. Let us first prove the first conjunct. On the one hand ${\mathbb{P}}^y(\tau_x < \tau_y) \leq {\mathbb{P}}^y(\tau_E < \tau_y) = \tilde{{\mathbb{P}}}^y(\tau_{\tilde{x}} < \tau_y)$, and on the other hand ${\mathbb{P}}^y(\tau_x < \tau_y) \geq {\mathbb{P}}^y(\tau_E < \tau_y)\cdot \min_{z\in E}{\mathbb{P}}^z(\tau_x < \tau_y)$ by the strong Markov property. Since $\lim_{{\epsilon}\to 0}{\mathbb{P}}^z(\tau_x < \tau_y) = 1$ for all $z$ in the essential class $E$, $\tilde{{\mathbb{P}}}^y(\tau_{\tilde{x}} < \tau_y)$ and ${\mathbb{P}}^y(\tau_x < \tau_y)$ are even asymptotically equivalent. Let us now prove the second conjunct. Let $\tau^*_{E,0} := 0$ and $\tau^*_{E,n+1} := \inf \{t > \tau^*_{E,n}\, \mid\, X_t\in E\,\wedge\,\exists s\in ]\tau^*_{E,n},t[,\,X_s\notin E\}$ for all $n\in \mathbb{N}$. Informally, when starting in $E$, $\tau^*_{E,n}$ is the first time that the chain is back in $E$ after $n$ excursions outside of $E$. $$\begin{aligned} \tilde{{\mathbb{P}}}^{\cup E}(\tau_y < \tau_{\cup E}) & = {\mathbb{P}}^x(\tau_y < \tau_x, \tau_y <\tau^*_{E,1}) \\ & \textrm{ by definition of the essential collapse, and } \tau^*_{E,1}\\ & \leq {\mathbb{P}}^x(\tau_y < \tau_x) \textrm{, showing half of the second conjunct.} \end{aligned}$$ For the other half, let $X$ satisfy Assumption \[Assum2\] up to focusing.
For all $z\in E \setminus \{x\}$ there is a simple path in the essential graph from $x$ to $z$, so $c < {\mathbb{P}}^x(\tau_z < \tau_{(S\setminus E) \cup \{x\}})$, and by the strong Markov property $$\begin{aligned} {\mathbb{P}}^x(\tau_y < \tau_x,\tau_y < \tau^*_{E,1}) & \geq {\mathbb{P}}^x(\tau_z < \tau_{(S\setminus E) \cup \{x\}})\cdot {\mathbb{P}}^z(\tau_y < \tau_x,\tau_y < \tau^*_{E,1}) \nonumber\\ & \geq c \cdot {\mathbb{P}}^z(\tau_y < \tau_x,\tau_y < \tau^*_{E,1}) \label{ineq1}\end{aligned}$$ For all $z\in E\setminus \{x\}$ there is also a simple path in the essential graph from $z$ to $x$, so ${\mathbb{P}}^z(\tau_{S\setminus E} < \tau_x) \leq 1 - c$. Also note that ${\mathbb{P}}^z(\tau^*_{E,1} < \tau_x) \leq {\mathbb{P}}^z(\tau_{S\setminus E} < \tau_x)$, and let us show by induction on $n$ that ${\mathbb{P}}^x(\tau^*_{E,n} < \tau_x) \leq (1-c)^n$, which holds for $n =0$. $$\begin{aligned} {\mathbb{P}}^x(\tau^*_{E,n+1} < \tau_x) &\leq {\mathbb{P}}^x(\tau^*_{E,n} < \tau_x) \cdot \max_{z\in E} {\mathbb{P}}^z(\tau^*_{E,1} < \tau_x) \\ & \textrm{ by the strong Markov property.} \nonumber\\ & \leq {\mathbb{P}}^x(\tau^*_{E,n} < \tau_x) \cdot (1-c) \textrm{ by the remark above.}\nonumber\\ & \leq (1-c)^{n+1} \textrm{ by induction hypothesis.} \label{ineq2}\end{aligned}$$ Let us now conclude about the second half of the second conjunct.
$$\begin{aligned} {\mathbb{P}}^x(\tau_y < \tau_x) & = \sum_{n=0}^\infty {\mathbb{P}}^x(\tau_y < \tau_x, \tau^*_{E,n} < \tau_y < \tau^*_{E,n+1})\\ & \textrm{ by a case disjunction.}\\ & \leq \sum_{n=0}^\infty {\mathbb{P}}^x(\tau_y < \tau_x, \tau^*_{E,n} < \tau_x, \tau_y < \tau^*_{E,n+1})\\ & \textrm{ since the new conditions are weaker.}\\ & \leq \sum_{n=0}^\infty {\mathbb{P}}^x(\tau^*_{E,n} < \tau_x) \max_{z\in E}{\mathbb{P}}^z(\tau_y < \tau_x, \tau_y < \tau^*_{E,1})\\ & \textrm{ by the strong Markov property.}\\ & \leq c^{-1}{\mathbb{P}}^x(\tau_y < \tau_x, \tau_y < \tau^*_{E,1}) \cdot \sum_{n=0}^\infty (1-c)^n\\ & \textrm{ by inequalities~\ref{ineq1} and \ref{ineq2}.}\\ & \leq c^{-2} \tilde{{\mathbb{P}}}^{\cup E}(\tau_y < \tau_{\cup E})\end{aligned}$$ Up to focusing let $p$ satisfy Assumption \[Assum2\]. 1. Let us first assume that $p$ is irreducible, and so is $\tilde{p}$ by Observation \[obs:ess-coll\]. So both $p$ and $\tilde{p}$ have unique, positive stationary distribution maps. Let us prove that $\tilde{\mu}(\tilde{x}) \cong \mu(x)$, where $\tilde{x} := \cup E$, and that $\tilde{\mu}(y) \cong \mu(y)$ for all $y\in S\setminus E$. For $y\in S\setminus E$, Lemma \[lem:hsr\] and Lemma \[lem:preserve-stable\] imply the following. $$\label{eq:ratio-mu-p-2} \frac{\mu(y)}{\mu(x)} = \frac{{\mathbb{P}}^x(\tau^+_y < \tau^+_x)}{{\mathbb{P}}^y(\tau^+_x < \tau^+_y)} \cong \frac{\tilde{{\mathbb{P}}}^{\tilde{x}}(\tau^+_y < \tau^+_{\tilde{x}})}{\tilde{{\mathbb{P}}}^y(\tau^+_{\tilde{x}} < \tau^+_y)} = \frac{\tilde{\mu}(y)}{\tilde{\mu}(\tilde{x})}$$ Summing the above equation over $y\in S\setminus E$ and adding $\frac{\mu(x)}{\mu(x)} = \frac{\tilde{\mu}(\tilde{x})}{\tilde{\mu}(\tilde{x})}$ yields $(1-\bar{\mu})\mu(x)^{-1} \cong \tilde{\mu}(\tilde{x})^{-1}$, where $\bar{\mu} := \sum_{z\in E\setminus\{x\}}\mu(z)$.
So by Definition \[def:cong\], let $a$ and $b$ be positive real numbers such that $a\cdot \tilde{\mu}(\tilde{x})^{-1} \leq (1-\bar{\mu})\mu(x)^{-1} \leq b\cdot \tilde{\mu}(\tilde{x})^{-1}$ on a neighborhood of $0$, which yields $a\cdot \mu(x) \leq \tilde{\mu}(\tilde{x}) \leq b\cdot \mu(x) + \bar{\mu}$. Since $\bar{\mu} \leq b'\cdot \mu(x)$ for some $b'$ by Proposition \[prop:essential-graph\].\[prop:essential-graph2\], $\tilde{\mu}(\tilde{x}) \cong \mu(x)$. Now by Lemmas \[lem:cong\].\[lem:cong2\] and \[lem:cong\].\[lem:cong3\], let us replace $\mu(x)$ by $\tilde{\mu}(\tilde{x})$ in Equation \[eq:ratio-mu-p-2\], which shows the claim for irreducible perturbations. Let us now prove the general claim by induction on the number of the non-zero transition maps of $p$ that have zeros. Base case: all the non-zero maps are positive. Let $E'_1,\dots,E'_{k'}$ be the sink SCCs of the graph on $S$ with arc $(x,y)$ if $p(x,y)$ is non-zero. The essential graph is included in this digraph, and the essential class $E$ is included in $E'_j$ for some $j\in\{1,\dots,k'\}$. Let $\tilde{E}'_j := \{E\} \cup (E'_j\setminus E)$ and for all $i \neq j$ let $\tilde{E}'_i := E'_i$. For all $i\in\{1,\dots,k'\}$ let $\mu_{E'_i}$ ($\tilde{\mu}_{\tilde{E}'_i}$) be the extension to $S$ ($\{E\}\cup (S\setminus E)$), by the zero-function outside of $E'_i$ ($\tilde{E}'_i$), of the unique stationary distribution of $p\mid_{E'_i \times E'_i}$ ($\tilde{p}\mid_{\tilde{E}'_i \times \tilde{E}'_i}$), and by the irreducible case above let $\tilde{\mu}_{\tilde{E}'_j}$ ($\mu_{E'_j}$) be the corresponding unique distribution of $\tilde{p}\mid_{\tilde{E}'_j\times\tilde{E}'_j}$ after ($p\mid_{E'_j\times E'_j}$ before) the essential collapse.
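The convex-combination structure of stationary distributions for reducible chains, used repeatedly in these proofs, is easy to check numerically. In the sketch below (an arbitrary illustrative chain, not taken from the text) the essential classes are $\{0,1\}$ and $\{2\}$, state $3$ is transient, and any convex combination of the zero-extended per-class stationary distributions is stationary.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible stochastic matrix."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Reducible chain: essential classes {0, 1} and {2}; state 3 is transient.
P = np.array([[0.6, 0.4, 0.0, 0.0],
              [0.7, 0.3, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.2, 0.2, 0.5, 0.1]])

# Per-class stationary distributions, extended to S by the zero-function.
mu1 = np.zeros(4); mu1[:2] = stationary(P[:2, :2])
mu2 = np.zeros(4); mu2[2] = 1.0

# Any convex combination of mu1 and mu2 is stationary for P.
alpha = 0.3
mu = alpha * mu1 + (1 - alpha) * mu2
```

Varying `alpha` over $[0,1]$ sweeps out the whole simplex of stationary distributions of this chain.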
Since $\mu$ ($\tilde{\mu}$) is a convex combination $\sum_{1\leq i \leq k'}\alpha_i\mu_{E'_i}$ ($\sum_{1\leq i \leq k'}\alpha_i\tilde{\mu}_{\tilde{E}'_i}$), it is easy to check that $\tilde{\mu} := \sum_{1\leq i \leq k'}\alpha_i\tilde{\mu}_{\tilde{E}'_i}$ ($\mu := \sum_{1\leq i \leq k'}\alpha_i\mu_{E'_i}$) is a witness for the base case. Inductive case. Let $p(z,t)$ be a non-zero function with support $J \subsetneq I$. Up to focusing we may assume that $0$ is a limit point of both $J$ and $I\setminus J$. By induction hypothesis on $p\mid_{J} \cong \tilde{p}\mid_{J}$ and $p\mid_{I\setminus J} \cong \tilde{p}\mid_{I\setminus J}$ we obtain two distribution maps $\tilde{\mu}_J$ and $\tilde{\mu}_{I\setminus J}$ ($\mu_J$ and $\mu_{I\setminus J}$) that can be combined to witness the claim. 2. By Propositions \[prop:essential-graph\].\[prop:essential-graph1\] and \[prop:essential-trans\].\[prop:essential-trans3\] in both cases, and also by Proposition \[prop:essential-graph\].\[prop:essential-graph2\] if $y\in E$. Up to focusing let $p$ satisfy Assumption \[Assum2\]. By induction on the number of the non-zero transition maps of $p$ that have zeros. Base case: all the non-zero maps are positive. Let $E'_1,\dots,E'_{k'}$ be the sink SCCs of the graph on $S$ with arc $(x,y)$ if $p(x,y)$ is non-zero. Note that this digraph includes the essential graph, and that $\delta(p\mid_{E'_i \times E'_i}) = \delta(p)\mid_{(E'_i\cap (S\setminus T)) \times (E'_i\cap (S\setminus T))}$. Moreover, by Lemma \[lem:state-del\].\[lem:state-del2\] the stable states of each $p\mid_{E'_i \times E'_i}$ are also stable for $\delta(p\mid_{E'_i \times E'_i})$, and the converse holds by Lemmas \[lem:state-del\].\[lem:state-del2\] and \[lem:ess-weight\]. Therefore a state is stable for $p$ iff it is stable for some $p\mid_{E'_i \times E'_i}$ iff it is stable for some $\delta(p)\mid_{(E'_i\cap (S\setminus T)) \times (E'_i\cap (S\setminus T))}$ iff it is stable for $\delta(p)$. Inductive case.
Let $p(z,t)$ be a non-zero function with support $J \subsetneq I$. Up to focusing we may assume that $0$ is a limit point of both $J$ and $I\setminus J$. By induction hypothesis $p\mid_{J}$ and $\delta(p)\mid_{J}$ have the same stable states, and likewise for $p\mid_{I\setminus J}$ and $\delta(p)\mid_{I\setminus J}$, which shows the claim. Lemma \[lem:escape-decomp\] below is a generalization to the reducible case of Proposition 2.8 from [@BL14]. \[lem:escape-decomp\] Let $A$ be a finite subset of the state space $S$ of a Markov chain. Then for all $x\in A$ and $y\in S\setminus A$ $$\begin{array}{l} {\mathbb{P}}^x(X_{\tau_{S\setminus A}} = y) =\\ \qquad \sum_{\gamma\in \Gamma_A(x,y), p(\gamma) > 0} \prod_{i=1}^{|\gamma| -1} \frac{p(\gamma_i,\gamma_{i+1})}{1-{\mathbb{P}}^{\gamma_i}(X_{\tau^+_{(S\setminus A)\cup \{\gamma_1,\dots,\gamma_i\}}} = \gamma_i)} \end{array}$$ We proceed by induction on $|A|$. The claim trivially holds for $|A| = 0$; so now let $x \in A$. The strong Markov property gives $$\begin{aligned} {\mathbb{P}}^x(X_{\tau_{S \setminus A}} = y) &= {\mathbb{P}}^x(X_{\tau^+_{(S\setminus A)\cup\{x\}}} = y) \\ & + {\mathbb{P}}^x(X_{\tau^+_{(S\setminus A)\cup\{x\}}} = x) \cdot {\mathbb{P}}^x(X_{\tau_{S\setminus A}} = y)\end{aligned}$$ If $p(\gamma) = 0$ for all $\gamma\in \Gamma_A(x,y)$, the claim boils down to $0 = 0$, so let us assume that there exists $\gamma\in\Gamma_A(x,y)$ with $p(\gamma) > 0$, so ${\mathbb{P}}^x(X_{\tau^+_{(S\setminus A)\cup\{x\}}} = x) < 1$, and the above equation may be rearranged into $${\mathbb{P}}^x(X_{\tau_{S\setminus A}} = y) = \frac{{\mathbb{P}}^x(X_{\tau^+_{(S\setminus A)\cup\{x\}}} = y)}{1-{\mathbb{P}}^x(X_{\tau^+_{(S\setminus A)\cup\{x\}}} = x)}$$ where the numerator may be decomposed as\ $p(x,y) + \sum_{z\in A\setminus\{x\}}p(x,z) {\mathbb{P}}^z(X_{\tau_{(S\setminus A)\cup\{x\}}} = y)$.
By the induction hypothesis for the set $A \setminus \{x\}$ let us rewrite ${\mathbb{P}}^z(X_{\tau_{(S\setminus A) \cup\{x\}}} = y)$ for all $z \in A\setminus\{x\}$, and obtain the equation below that may be re-indexed to yield the claim. $$\begin{aligned} {\mathbb{P}}^x(X_{\tau_{S\setminus A}} = y) &= \frac{p(x,y)} {1-{\mathbb{P}}^x(X_{\tau^+_{(S\setminus A)\cup\{x\}}} = x)} + \sum_{z \in A \setminus \{ x \}, p(x,z) > 0}\\ & \sum_{ \gamma\in \Gamma_{A\setminus\{x\}}(z,y), p(\gamma) > 0} \frac{p(x,z)\cdot \Pi} {1-{\mathbb{P}}^x(X_{\tau^+_{(S\setminus A)\cup\{x\}}} = x)} \\ & \textrm{where } \Pi := \prod_{i=1}^{|\gamma|-1} \frac{p(\gamma_i,\gamma_{i+1})}{1-{\mathbb{P}}^{\gamma_i}(X_{\tau^+_{(S\setminus A)\cup \{x,\gamma_1,\dots,\gamma_i\}}} = \gamma_i)}\end{aligned}$$ Up to focusing let $p$ satisfy Assumption \[Assum2\]. Let $\tau^* := \tau^+_{S\setminus T}$, and consider $${\mathbb{P}}^x(X_{\tau^*} = y) = p(x,y) + \sum_{z\in T}p(x,z){\mathbb{P}}^z(X_{\tau^*} = y).$$ For all $z\in T$ there are $z'\in S\setminus T$ and $\gamma_z\in\Gamma_T(z,z')$ in the essential graph, so for all $K \subseteq S$ we have ${\mathbb{P}}^z(X_{\tau^+_{(S\setminus T)\cup K}} = z) \leq {\mathbb{P}}^z(X_{\tau^+_{(S\setminus T)\cup\{z\}}} = z) \leq 1 - p(\gamma_z) < 1-c$. So by Lemma \[lem:escape-decomp\] $${\mathbb{P}}^z(X_{\tau^*} = y) \leq \sum_{\gamma\in \Gamma_T(z,y)}p(\gamma)\cdot c^{-|\gamma|} \leq c^{-|T|}\sum_{\gamma\in \Gamma_T(z,y)}p(\gamma).$$ Since ${\mathbb{P}}^z(X_{\tau^*} = y) \geq \sum_{\gamma\in \Gamma_T(z,y)}p(\gamma)$, we get ${\mathbb{P}}^z(X_{\tau^*} = y) \cong \sum_{\gamma\in \Gamma_T(z,y)}p(\gamma)$, and by Lemma \[lem:cong\].\[lem:cong5\] we can replace the sum with the maximum. Let ${\epsilon}\in I$. If $g({\epsilon}) \neq 0$ then $((f \div_n g)\cdot g) ({\epsilon}) = \frac{f({\epsilon})}{g({\epsilon})}\cdot g({\epsilon})= f({\epsilon})$; if $g({\epsilon}) = 0$ then $f({\epsilon}) = 0 = (f \div_n g)({\epsilon})\cdot g({\epsilon})$.
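Lemma \[lem:escape-decomp\] can be tested numerically: the sketch below (kernel chosen arbitrarily for illustration; not code from the text) enumerates the simple paths through $A$, computes each denominator $1-{\mathbb{P}}^{\gamma_i}(X_{\tau^+_{(S\setminus A)\cup\{\gamma_1,\dots,\gamma_i\}}} = \gamma_i)$ by a linear solve, and compares the resulting sum with the exit distribution $(I-P_{AA})^{-1}P_{A,S\setminus A}$.

```python
import numpy as np

def exit_distribution(P, A, x, y):
    """P^x(X_{tau_{S\\A}} = y), computed as ((I - P_AA)^{-1} P_{A,out})[x, y]."""
    A = list(A)
    out = [z for z in range(P.shape[0]) if z not in A]
    N = np.linalg.solve(np.eye(len(A)) - P[np.ix_(A, A)], P[np.ix_(A, out)])
    return N[A.index(x), out.index(y)]

def path_decomposition(P, A, x, y):
    """Right-hand side of the lemma: a sum over simple paths through A,
    each step divided by the corresponding no-return factor."""
    A = list(A)

    def return_prob(prefix):
        # P^v(first positive hitting of (S\A) u prefix happens at v),
        # where v is the last state of prefix.
        v = prefix[-1]
        free = [z for z in A if z not in prefix]
        r = P[v, v]
        if free:
            g = np.linalg.solve(np.eye(len(free)) - P[np.ix_(free, free)],
                                P[free, v])
            r += P[v, free] @ g
        return r

    total = 0.0

    def dfs(path, weight):
        nonlocal total
        v = path[-1]
        denom = 1.0 - return_prob(path)
        if P[v, y] > 0:
            total += weight * P[v, y] / denom
        for z in A:
            if z not in path and P[v, z] > 0:
                dfs(path + [z], weight * P[v, z] / denom)

    dfs([x], 1.0)
    return total

# Arbitrary illustrative kernel on S = {0,...,4}; take A = {0, 1, 2}.
P = np.array([[0.1, 0.2, 0.2, 0.3, 0.2],
              [0.2, 0.1, 0.3, 0.2, 0.2],
              [0.3, 0.2, 0.1, 0.2, 0.2],
              [0.2, 0.2, 0.2, 0.2, 0.2],
              [0.2, 0.2, 0.2, 0.2, 0.2]])
A = [0, 1, 2]
```

Both computations agree for each exit state, and the exit probabilities sum to one since the chain leaves $A$ almost surely.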
Let $m$ be as in Definition \[defn:os\] and let $J \subseteq I$ be its support. 1. $\sigma(p)(x,y)\mid_{I\setminus J} = |S|^{-1}$ for all $x,y\in S$, so $\sigma(p)_{\epsilon}$ is stochastic for all ${\epsilon}\in I\setminus J$. Let ${\epsilon}\in J$. On the one hand $\sum_{y\in S}\sigma(p)_{\epsilon}(x,y) = \frac{p_{\epsilon}(x,x) + m - 1}{m} + \sum_{y\in S\setminus\{x\}} \frac{p_{\epsilon}(x,y)}{m} = 1$, on the other hand $\sum_{y\in S\setminus\{x\}}\sigma(p)_{\epsilon}(x,y) = \sum_{y\in S\setminus\{x\}} \frac{p_{\epsilon}(x,y)}{|S|\cdot \max\{p_{\epsilon}(t,z)\,\mid\,z,t\in S \wedge z \neq t\}} \leq \frac{(|S|-1)}{|S|} < 1$, so $\sigma(p)_{\epsilon}$ is stochastic. If $0$ is not a limit point of $J$, it is clear that $\sigma(p)(x,y) \cong 1$ for all $x,y\in S$, and $\sigma(p)$ satisfies Assumption \[Assum1\]. If $0$ is a limit point of $J$, then $p(x,y)\mid_J \precsim p(x',y')\mid_J$ implies $\sigma(p)(x,y)\mid_J \precsim \sigma(p)(x',y')\mid_J$ (since $m$ is non-zero on $J$), so $\sigma(p)$ satisfies Assumption \[Assum1\] by Lemma \[lem:cong\].\[lem:cong7\] (since $p\mid_J$ and then $\sigma(p)\mid_{J}$ do). 2. Let us first prove the claim if $J = I$. The equation below shows that $\mu \cdot p = \mu$ iff $\mu\cdot \sigma(p) = \mu$, for all distribution maps $\mu$ on $S = \{x_1,\dots,x_n\}$, so $p$ and $\sigma(p)$ have the same stable states. $$\begin{aligned} (\mu\cdot \sigma(p))_j & = \frac{p(x_j,x_j) + m - 1}{m}\cdot \mu_j + \sum_{i\neq j} \frac{p(x_i,x_j) \mu_i}{m} \\ &= \frac{(m-1)\mu_j + \sum_{i}p(x_i,x_j) \mu_i}{m}\end{aligned}$$ Let us now prove the claim if $J \subsetneq I$. The case where $0$ is not a limit point of $I\setminus J$ amounts, up to focusing, to the case $J =I$, so let us assume that $0$ is a limit point of $I\setminus J$. For all ${\epsilon}\in I\setminus J$ we have $p_{\epsilon}(x,y) = 0$ for all states $x \neq y$, so all distributions are stationary for $p_{\epsilon}$. 
Moreover, the uniform distribution is stationary for $\sigma(p)_{\epsilon}$, so all states are stable for $p\mid_{I\setminus J}$ and $\sigma(p)\mid_{I\setminus J}$. Therefore, if $0$ is not a limit point of $J$, we are done, and if it is, we are also done by Lemma \[lem:cong\].\[lem:cong8\]. 3. Let distinct $x,y\in S$ be such that $p(z,t) \precsim p(x,y)$ for all distinct $z,t\in S$. So $p(x,y) \cong \max\{p(z,t)\,\mid\, z,t\in S \wedge z \neq t\}$ by Lemma \[lem:cong\].\[lem:cong6\], so $\sigma(p)(x,y) \cong p(x,y) \div_{|S|} (|S| \cdot p(x,y)) = \frac{1}{|S|} \cong 1$, so $(x,y)$ is in the essential graph of $\sigma(p)$. \[lem:3t\] Let a perturbation $p$ with state space $S$ satisfy Assumption \[Assum1\], let $E_1,\dots, E_k$ be its essential classes, and for all $i$ let $x_i \in E_i$. The state $x$ is stable for $p$ iff $x$ belongs to some $E_i$ such that $\cup E_i$ is stable for $\delta\circ\kappa(\dots\kappa(\kappa(\sigma(p),x_1),x_2)\dots,x_k)$. By applying Lemma \[lem:3t\] recursively. If $p$ is the identity matrix then all states are stable. Otherwise the essential graph of $\sigma(p)$ is non-empty, so either one essential class is not a singleton, or one state is transient. If there is a non-singleton essential class, the corresponding essential collapse decreases the number of states; if one state is transient, the transient deletion decreases the number of states. Since these transformations do not increase the number of states, $\delta\circ\kappa(\dots \kappa(\kappa(\sigma(p),x_1),x_2)\dots,x_k)$ has fewer states than $p$, so the recursion terminates on an identity perturbation, whose non-empty state space corresponds to the stable states of $p$. If $f \precsim 0$ then $f = (f \div 0)\cdot 0 = 0$. Also, $(f \div 1) = (f \div 1) \cdot 1 = f$. 1. By Lemma \[lem:cong\].\[lem:cong7\] with $J$ the support of $g$ and $I\setminus J$, then by Lemmas \[lem:cong\].\[lem:cong2\] and \[lem:cong\].\[lem:cong3\]. 2.
By Observations \[obs:ccc-loc\].\[obs:ccc-loc2\] and \[obs:prec-div\], and since ${[{\epsilon}\mapsto 1]}$ is the ${[\precsim]}$-maximum of ${[G\cup\{{\epsilon}\mapsto 0\}]}$. <!-- --> 1. Clear by comparing Definitions \[defn:essential-class\] and \[defn:ao\].\[defn:ao1\]. 2. Let ${[p]}(z,t) = \max\{{[p]}(z',t') : (z',t')\in S\times S \wedge z'\neq t'\}$. If $x \neq y$ then $$\begin{aligned} {[\sigma]}({[p]})(x,y) & = {[p]}(x,y) {[\div]} {[p]}(z,t) \textrm{ by definition of }{[\sigma]},\\ & = {[p(x,y)]} {[\div]} {[p(z,t)]} \textrm{ by definition of }{[p]},\\ & = {[p(x,y) \div_{|S|} p(z,t)]} \textrm{ by Lemma~\ref{lem:odsm-f}},\\ & = {[p(x,y) \div_{|S|} \max\{p(z,t)\,\mid\,z,t\in S\wedge z \neq t\}]}\\ &\textrm{ by Lemma~\ref{lem:odsm-f} and Lemma~\ref{lem:cong}.\ref{lem:cong6}},\\ & = {[\sigma(p)]}(x,y) \textrm{ by definition of }\sigma.\end{aligned}$$ *\[Figure: a commuting square on a two-state example with $p(x,y)=\frac{{\epsilon}^4}{9}$ and $p(y,x)=\frac{{\epsilon}^7}{3}$: abstracting with ${[\,\,]}$ and then applying ${[\sigma]}$ yields the same result as applying $\sigma$ and then abstracting.\]* 3. The essential classes of $p$ are singletons since $\delta(p)$ is well-defined. Let $\{x\}$ and $\{y\}$ be distinct essential classes of $p$, and of ${[p]}$ by Lemma \[lem:pop\].\[lem:pop0\]. Let $M := \max_{{[\precsim]}}\{{[p]}(\gamma) : \gamma\in \Gamma_T(x,y)\}$, so $M = {[\max\{p(\gamma) : \gamma\in \Gamma_T(x,y)\}]}$ by Lemma \[lem:cong\].\[lem:cong6\]. Note that $p(x,x) \cong 1$ since $\{x\}$ is an essential class of $p$, so $\sum_{z\in S\setminus T}\max\{p(\gamma) : \gamma\in \Gamma_T(x,z)\} \cong 1$ too. So $$\begin{aligned} &{[\chi]}({[p]})(\{x\},\{y\}) = M \textrm{ by definition of }{[\chi]},\\ & = M {[\div]} {[1]} \textrm{ by Observation~\ref{obs:div-semiring}},\\ & = M {[\div]} {[\sum_{z\in S\setminus T}\max\{p(\gamma) : \gamma\in \Gamma_T(x,z)\}]}\textrm{ by a remark above},\\ & = {[\max\{p(\gamma) : \gamma\in \Gamma_T(x,y)\}]} {[\div]} {[\sum_{z\in S\setminus T}\max\{p(\gamma) : \gamma\in \Gamma_T(x,z)\}]}\textrm{ by a remark above},\\ & = {[\max\{p(\gamma) : \gamma\in \Gamma_T(x,y)\} \div_{|S|} \sum_{z\in S\setminus T}\max\{p(\gamma) : \gamma\in \Gamma_T(x,z)\}]} \textrm{ by Lemma~\ref{lem:odsm-f}}, \\ & = {[\delta(p)]}(x,y) \textrm{ by definition of }\delta.\end{aligned}$$ *\[Figure: a commuting square on a three-state example with $p(x,z)=\frac{{\epsilon}^2}{4}$, $p(x,y)=\frac{{\epsilon}^3}{3}$, and $p(y,z)={\epsilon}$: abstracting with ${[\,\,]}$ and then applying ${[\chi]}$ yields the same result as applying $\delta$ and then abstracting.\]* 4. Let $x,y\in S\setminus E_i$. First, ${[\kappa]}({[p]},E_i)(x,y) = {[p]}(x,y) = {[p(x,y)]} = {[\kappa(p,x_i)(x,y)]}$ by definitions of ${[\kappa]}$, ${[p]}$, and $\kappa$. Also ${[\kappa]}({[p]},E_i)(\cup E_i,y) = \max_{{[\precsim]}}\{{[p]}(x,y)\,|\,x\in E_i\} = {[\max\{p(x,y)\,|\,x\in E_i\}]} = {[\kappa(p,x_i)(\cup E_i,y)]}$ by definition, Lemma \[lem:cong\].\[lem:cong6\], and Lemma \[lem:p-cong-max\]. Likewise ${[\kappa]}({[p]},E_i)(y,\cup E_i) = \max_{{[\precsim]}}\{{[p]}(y,x)\,|\,x\in E_i\} = {[\max\{p(y,x)\,|\,x\in E_i\}]} = {[\kappa(p,x_i)(y,\cup E_i)]}$.
[Figure: three-state example illustrating that the essential collapse commutes with ${[\,\cdot\,]}$, i.e. ${[\kappa(p,x)]} = {[\kappa]}({[p]},x\cup y)$.]

5. Let $P := {[p]}$ and $\leq := {[\precsim]}$, and let us prove the claim abstractly. First note that ${[\chi]}\circ{[\kappa]}(P,E_1)$ and ${[\chi]}(P)$ have the same state space $\{\cup E_1,\dots,\cup E_k\}$.
For $i,j\neq 1$ the definition of ${[\chi]}$ gives ${[\chi]}\circ{[\kappa]}(P,E_1)(\cup E_i,\cup E_j) = \max_{\leq}\{{[\kappa]}(P,E_1)(\gamma) : \gamma\in \Gamma_T(E_i,E_j)\}$, where $T := S\setminus \cup_iE_i$. It is equal to ${[\chi]}(P)(\cup E_i,\cup E_j)$ since ${[\kappa]}(P,E_1)(x,y) = P(x,y)$ for all $x,y\in S\setminus E_1$, which then also holds for paths $\gamma\in\Gamma_T(E_i,E_j)$. Let us now show that ${[\chi]}(P)(\cup E_1,\cup E_j) = {[\chi]}\circ{[\kappa]}(P,E_1)(\cup E_1,\cup E_j)$. On the one hand, for all paths $x\gamma\in \Gamma_T(E_1,E_j)$ we have $P(x\gamma) \leq {[\kappa]}(P,E_1)((\cup E_1)\gamma)$ since ${[\kappa]}(P,E_1)(\cup E_1,y) := \max_{\leq}\{P(x,y) : x\in E_1\}$, and on the other hand for every $\gamma\in T^*\times E_j$ there exists $x\in E_1$ such that ${[\kappa]}(P,E_1)((\cup E_1)\gamma) = P(x\gamma)$. So $$\begin{aligned} {[\chi]}(P)(\cup E_1,\cup E_j) & = \max_{\leq}\{P(\gamma) : \gamma\in\Gamma_T(E_1,E_j)\}\textrm{ by definition},\\ & = \max_{\leq}\{{[\kappa]}(P,E_1)(\gamma) : \gamma\in \Gamma_T(\cup E_1,E_j)\}\\ &\textrm{ by the remark above},\\ & = {[\chi]}\circ{[\kappa]}(P,E_1)(\cup E_1,\cup E_j) \textrm{ by definition}.\end{aligned}$$ The equality ${[\chi]}(P)(\cup E_i,\cup E_1) = {[\chi]}\circ{[\kappa]}(P,E_1)(\cup E_i,\cup E_1)$ can be proved likewise. 6. Let us first prove ${[\delta\circ\kappa(\dots\kappa(\kappa(p,x_1),x_2)\dots,x_k)]} (\cup E_i,\cup E_j) = {[\chi]}({[p]})(\cup E_i,\cup E_j)$ for all $i \neq j$ by induction on the number $k'$ of non-singleton essential classes. Since collapsing a singleton class has no effect, the claim holds for $k' = 0$ by Lemma \[lem:pop\].\[lem:pop2\], so let us assume that it holds for some arbitrary $k'$ and that $p$ has $k'+1$ non-singleton essential classes. One may assume up to commuting and renaming that $E_1$ is not a singleton.
Since $\kappa(\kappa(p,x_1),\cup E_1) = \kappa(p,x_1)$, also $\delta\circ\kappa(\dots\kappa(\kappa(p,x_1),x_2)\dots,x_{k}) = \delta\circ\kappa(\dots\kappa(\kappa(\kappa(p,x_1),\cup E_1),x_2)\dots,x_{k})$. So ${[\delta\circ\kappa(\dots\kappa(\kappa(p,x_1),x_2)\dots,x_{k})]} (\cup E_i,\cup E_j) =$\ $ {[\chi]}({[\kappa(p,x_1)]})(\cup E_i,\cup E_j)$ for all $i \neq j$ by induction hypothesis. Moreover, ${[\chi]}({[\kappa(p,x_1)]}) = {[\chi]}({[\kappa]}({[p]},E_1)) = {[\chi]}({[p]})$ by Lemmas \[lem:pop\].\[lem:pop3\] and \[lem:pop\].\[lem:pop4\]. Therefore ${[\chi]}({[\sigma(p)]}) (\cup E_i,\cup E_j)= {[\delta\circ\kappa(\dots\kappa(\kappa(\sigma(p),x_1),x_2)\dots,x_k)]}(\cup E_i,\cup E_j)$ for all $i \neq j$ by Lemma \[lem:pop\].\[lem:pop1\] and Observation \[obs:ao-refl-irrel\]. Line \[line:type-vertex\] from Algorithm \[algo:br\] is performed once and takes $n$ steps; Line \[line:type-edge\] takes one step and is performed $n^2$ times. Let us now focus on the recursive function . If all the arcs of the input are labelled with $0$, the algorithm terminates; if not, $p(s,t) = 1$ at least for some distinct $s,t\in S$ after the outgoing scaling at Line \[line:norm-label\], so either the strongly connected component of $s$ is not a sink, or $s$ is in the same strongly connected component as $t$, which implies in both cases that there are fewer $\cup S_i$ than vertices in $S$, and subsequently that is recursively called at most $n$ times for an input with $n$ vertices. Lines \[line:max-label\], \[line:norm-label\], \[line:max-label-edges\], \[line:partial-max\], \[line:transient\], \[line:reduce-transient\], and \[line:remove\] take at most $O(n^2)$ steps at each call, thus contributing $O(n^3)$ altogether. Tarjan’s algorithm and its modification both run in $O(|A|+|S|)$ which is bounded by $O(n^2)$, and moreover the arcs from different recursive steps are also different, so the overall contribution of Line \[line:Tarjan-SSCC\] is $O(n^2)$. 
Let us now deal with the more complex Lines \[line:Dijkstra\] and \[line:global-max\]. Let $r$ be the number of recursive calls that are made to , and at the $j$-th call let $T_j$ denote the vertices otherwise named $T$. Since the $(j+1)$-th recursive call does not involve vertices in $T_j$, we obtain $\sum_{j=1}^r|T_j| \leq n$. The loop at Line \[line:global-max\] is taken at most $n^2|T_j|$ times during the $j$-th call, which yields an overall contribution of $O(n^3)$. Likewise, since a basic shortest-path algorithm terminates within $O(n^2)$ steps and since it is called $|T_j|$ times during the $j$-th recursive call, Line \[line:Dijkstra\]’s overall contribution is $O(n^3)$. Up to focusing on a smaller neighborhood of $0$, let $p$ satisfy Assumption \[Assum2\]. Let $y$ be in the set of the transient states $T$, so there exist $x\notin T$ and $\gamma\in \Gamma_T(y,x)$ in the essential graph. By Assumption \[Assum2\] this implies $$0 < \liminf_{{\epsilon}\to 0}p_{\epsilon}(\gamma) \leq \liminf_{{\epsilon}\to 0}{\mathbb{P}}_{{\epsilon}}^y(\tau^+_x < \tau^+_y).$$ On the other hand, let $E$ be the essential class of $x$. Lemma \[lem:p-cong-max\] implies that $$\begin{aligned} {\mathbb{P}}^x(\tau^+_y < \tau^+_x) & \leq 1- {\mathbb{P}}^x(X_{\tau^+_{S\backslash E\cup\{x\}}} = x)\\ & = \sum_{z\notin E}{\mathbb{P}}^x(X_{\tau^+_{S\backslash E\cup\{x\}}} = z) \cong \sum_{z\notin E}\max_{t\in E}p(t,z).\end{aligned}$$ Since $E$ is an essential class, $\sum_{z\notin E}\max_{t\in E}p_{\epsilon}(t,z) \stackrel{{\epsilon}\to 0}{\longrightarrow} 0$, thus by Lemma \[lem:hsr\] $$\tilde{\mu}_{{\epsilon}}(y) \leq \frac{\tilde{\mu}_{{\epsilon}}(y)}{\mu_{{\epsilon}}(x)} = \frac{{\mathbb{P}}_{{\epsilon}}^x(\tau^+_y < \tau^+_x)}{{\mathbb{P}}_{{\epsilon}}^y(\tau^+_x < \tau^+_y)}\stackrel{{\epsilon}\to 0}{\longrightarrow} 0.$$ In the procedure underlying Theorem \[thm:stable-states\], only the states that are transient at some point during the run are deleted.
By Proposition \[prop:transient-vanish\] these are exactly the fully vanishing states. For all $x\in S$ let $q^x_{\epsilon}:= \sum_{T\in \mathcal{T}_x} \prod_{(z,t)\in T}p_{\epsilon}(z,t)$ and let $q := (q^y_{\epsilon})_{y\in S,{\epsilon}\in I}$. By irreducibility $q^z_{\epsilon}> 0$ for all $z\in S$ and ${\epsilon}\in I$, so let $\mu_{\epsilon}^z := \frac{q^z_{\epsilon}}{\sum_{y\in S}q^y_{\epsilon}}$ for all $z\in S$ and ${\epsilon}\in I$. Let us assume that $\beta^{y} \precsim \beta^{x}$ for all $y\in S$, so by finiteness of $S$ there exist positive $c$ and ${\epsilon}_0$ such that $\beta^y_{\epsilon}\leq c \cdot \beta^x_{\epsilon}$ for all $y\in S$ and ${\epsilon}< {\epsilon}_0$. For all $y\in S$ and ${\epsilon}< {\epsilon}_0$ we have $$q^x_{\epsilon}\geq \beta^x_{\epsilon}\geq \frac{\beta^y_{\epsilon}}{c} \geq \frac{1}{c \cdot |\mathcal{T}_y|} \cdot \sum_{T\in \mathcal{T}_y} \prod_{(z,t)\in T}p_{\epsilon}(z,t) = \frac{q^y_{\epsilon}}{c \cdot |\mathcal{T}_y|}.$$ Note that $|\mathcal{T}_y| \leq 2^{|S|^2}$ since a spanning tree of a graph is a subset of its arcs, so $$\mu_{\epsilon}^x = \frac{q^x_{\epsilon}}{\sum_{y\in S}q^y_{\epsilon}} \geq \frac{1}{c \cdot \sum_{y\in S}|\mathcal{T}_y|} \geq \frac{1}{c \cdot |S|\cdot 2^{|S|^2}}$$ which ensures that $\liminf_{{\epsilon}\to 0} \mu^x_{\epsilon}> 0$. By the Markov chain tree theorem $\mu\cdot p = \mu$, so $x$ is a stable state. Conversely, let us assume that $\neg(\beta^{y} \precsim \beta^{x})$ for some $y\in S$, so for all $c,{\epsilon}> 0$ there exists a positive $\eta < {\epsilon}$ such that $c \cdot \beta^x_{\eta} < \beta^y_{\eta}$. Let $c,{\epsilon}> 0$ and let a positive $\eta < {\epsilon}$ be such that $c \cdot 2^{|S|^2} \cdot \beta^x_{\eta} < \beta^y_{\eta}$, so $c \cdot \mu_\eta^x < \mu_\eta^y$. Since $\mu \leq 1$, this shows that $\liminf_{{\epsilon}\to 0} \mu^x_{\epsilon}= 0$. Let $G$ be the graph with arc $(x,y)$ if $p(x,y) > 0$.
Let $E'_1,\dots,E'_{k'}$ be the sink (aka bottom) strongly connected components of $G$, so a state is stable for $p$ iff it is stable for one of the $p\mid_{E'_i \times E'_i}$. Since the $p\mid_{E'_i \times E'_i}$ are irreducible perturbations, Lemma \[lem:gen-Young\] can be applied, and by Assumption \[Assum1\] the weights of the spanning trees are totally preordered, so there are stable states. For all $x,y\in S$ let $I_{xy}$ be the support of $p(x,y) : I\to[0,1]$. By Assumption \[Assum1\] the $I_{xy}$ are totally ordered by inclusion. Among these sets let $0\subsetneq I_1 \subsetneq \dots \subsetneq I_l \subsetneq I$ be the non-trivial subsets of $I$. Up to focusing on a smaller neighborhood of $0$ inside $I$, let us assume that $0$ is a limit point of $I_{1}$, all the ${I_{i+1}\setminus I_i }$, and $I\setminus I_{l}$. By Lemma \[lem:cong\].\[lem:cong8\] a state is stable for $p$ iff it is stable for $p\mid_{I_1}$, all the $p\mid_{I_{i+1}\setminus I_i }$, and $p\mid_{I\setminus I_l}$. These restrictions all satisfy the positivity assumption of Observation \[obs:gen-pos-ss\], whose underlying algorithm computes the stable states in $O(n^3)$. Since there are at most $n^2$ restrictions, stability is decidable in $O(n^5)$. By induction on $n := |\{p(x,y)\,\mid\, x\neq y \wedge p(x,y)\neq 0 \wedge \neg(0 < p(x,y))\}|$. If $n =0$, let $G$ be the graph with arc $(x,y)$ if $p(x,y) > 0$. Let $E'_1,\dots,E'_{k'}$ be the sink SCCs of $G$, so a state is stable for $p$ iff it is stable for one of the $p\mid_{E'_i \times E'_i}$. By decidability of $\precsim$ and since the $p\mid_{E'_i \times E'_i}$ are irreducible perturbations, Lemma \[lem:gen-Young\] allows us to compute their stable states. If $n > 0$ let $p(x,y)$ be a non-zero function with zeros, and let $J$ be its support. If $0$ is not a limit point of $J$ ($I\backslash J$), the stable states of $p$ are the stable states of $p_{I\backslash J }$ ($p_{J}$), which are computable by induction hypothesis. 
If $0$ is a limit point of both $J$ and $I\backslash J$, by Lemma \[lem:cong\].\[lem:cong8\] the stable states wrt $p$ are the states that are stable wrt both $p_{J}$ and $p_{I\backslash J}$, and we can use the induction hypothesis for both. By induction. More specifically, let us prove that these roots are preserved and reflected by outgoing scaling, essential collapse, and transient deletion. - Since the outgoing scaling divides all the coefficients by the same scale $f\in F$, the weights of the spanning trees are all divided by $f^{|S|-1}$, and the order between them is preserved. - Let $E$ be a (sink) SCC of the essential graph of $P$, and let $x,y \in E$. It is easy to see that a spanning tree rooted at $x$ can be modified (only within $E$) into a spanning tree rooted at $y$ that has the same weight. Since the arcs in $E$ do not contribute to the weight, the essential collapse is safe. - Let the sink SCCs of $P$ be singletons, and let $\{y\}$ not be one of those, so there exists a path from $y$ to a sink SCC $\{x\}$. Let $T$ be a spanning tree rooted at $y$. Following $T$, let $x'$ be the successor of $x$, so the weight of $(x,x')$ is less than $1$. Let us modify $T$ into $T'$ by letting $y$ lead to the new root $x$ by a path of weight $1$. The weight of $T'$ is greater than that of $T$, since the arc $(x,x')$, whose weight is less than $1$, is replaced by arcs of weight $1$. This shows that only essential vertices may be the roots of spanning trees of maximum weight. Moreover, let $T$ be a spanning tree of maximum weight, and let $x$ and $y$ be essential vertices such that following $T$ from $x$ leads to $y$ without visiting any other essential vertex. Then this path between $x$ and $y$ must have maximal weight among all paths from $x$ to $y$ that avoid other essential vertices. So the weights of maximal spanning trees after transient deletion correspond to the weights before deletion. [^1]: This implies that $I$ is infinite. $]0,1]$ and $\{\frac{1}{2^n}\,|\,n\in \mathbb{N}\}$ are typical examples of $I$.
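As a numerical sanity check on the Markov chain tree theorem invoked above ($\mu\cdot p = \mu$ for $\mu$ proportional to the spanning-tree weight sums $q^x$), the following brute-force sketch verifies the statement on a small chain; the 3-state transition matrix is made up for illustration and does not come from the text:

```python
import itertools

# Markov chain tree theorem: for an irreducible finite chain, the
# stationary distribution is proportional to
#   q^x = sum over spanning trees T rooted at x of prod_{(z,t) in T} p(z,t),
# where each non-root vertex z has exactly one outgoing arc (z, t) and
# every vertex reaches the root x.  Illustrative 3-state matrix:
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]
n = len(P)

def q(root):
    """Sum of the weights of all spanning trees directed toward `root`."""
    others = [v for v in range(n) if v != root]
    total = 0.0
    # Enumerate parent assignments; keep those where every vertex reaches root.
    for parents in itertools.product(range(n), repeat=len(others)):
        assign = dict(zip(others, parents))
        def reaches_root(v):
            seen = set()
            while v != root:
                if v in seen or assign[v] == v:
                    return False      # cycle or self-loop: not a tree
                seen.add(v)
                v = assign[v]
            return True
        if all(reaches_root(v) for v in others):
            w = 1.0
            for v in others:
                w *= P[v][assign[v]]  # weight of arc (v, parent of v)
            total += w
    return total

weights = [q(x) for x in range(n)]
mu = [w / sum(weights) for w in weights]
# mu is stationary: (mu . P)(x) == mu(x) for every state x
assert all(abs(sum(mu[z] * P[z][x] for z in range(n)) - mu[x]) < 1e-9
           for x in range(n))
```

Enumerating parent assignments is exponential in general, but it makes the tree-weight definition of $q^x$ explicit for small state spaces.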
--- author: - 'Jeffrey Z. Pan' - Nicholas Zufelt bibliography: - 'ref/attacks.bib' - 'ref/defenses.bib' - 'ref/Top\_sim.bib' - 'ref/ML.bib' - 'ref/background.bib' - 'ref/experiments.bib' title: On Intrinsic Dataset Properties for Adversarial Machine Learning ---
--- abstract: | In this work we explore the possible evolutionary track of the neutron star in the newly discovered Be/X-ray binary SXP 1062, which is believed to be the first X-ray pulsar associated with a supernova remnant. Although no cyclotron feature has been detected to indicate the strength of the neutron star’s magnetic field, we show that it may be $\ga 10^{14}\,$G. If so, SXP 1062 may belong to the class of accreting magnetars in binary systems. We attempt to reconcile the short age and long spin period of the pulsar, taking into account different initial parameters and spin-down mechanisms of the neutron star. Our calculated results show that spinning down to a period $\sim 1000\,$s within $10-40\,$kyr requires efficient propeller mechanisms. In particular, the model for angular momentum loss under energy conservation seems to be ruled out. author: - 'Lei Fu and Xiang-Dong Li' title: 'Could SXP 1062 be an Accreting Magnetar?' --- Introduction ============ As a subgroup of high-mass X-ray binaries (HMXBs), Be/X-ray binaries (BeXBs) consist of a neutron star (NS) and a Be companion star that shows emission lines and an infrared (IR) excess in its spectrum. The origin of the emission lines and the IR excess is attributed to the circumstellar discs, which are fed by material expelled from the rapidly rotating Be stars. The X-ray emission is believed to originate from accretion of matter from the circumstellar discs onto the NSs [see @rei11 for a review]. BeXBs are subdivided into persistent and transient sources according to their different X-ray properties. Transient systems are characterized by outbursting activities, in which the X-ray flux increases by $\sim 1-4$ orders of magnitude compared with the quiescent state, and the outbursts typically last about $0.2-0.3$ of an orbital period. These systems often have moderately eccentric ($e\ga 0.3$) and relatively narrow orbits ($P_{\rm orb} \la 100 \,{\rm days}$).
Persistent sources are relatively quiet systems with low X-ray luminosities ($L_{\rm X} \sim 10^{34}-10^{35}\,{\rm erg\,s^{-1}}$), the variability of which is less than an order of magnitude. They usually have slowly rotating NSs ($P \ga 200\,{\rm s}$), low-eccentricity ($e \la 0.2$), and relatively wide orbits ($P_{\rm orb} \ga 200 \, {\rm days}$) [@rei11]. Recently @hen12 reported a new BeXB SXP 1062 in the Wing of the Small Magellanic Cloud (SMC). This source was first discovered as a transient BeXB during the [*XMM-Newton*]{} and [*Chandra*]{} observations in 2010, and was not active during the [*ROSAT*]{} and [*ASCA*]{} observations of the SMC [@hab00; @yok03]. However, SXP 1062 seems to share some characteristics with persistent BeXBs: a relatively low intrinsic X-ray luminosity $L_{\rm X} \simeq 6.3(^{+0.7}_{-0.8})\times 10^{35} \, {\rm erg\, s^{-1}}$ (corresponding to an accretion rate of $\dot{M}=L_{\rm X}/\eta c^2\sim 6\times 10^{15}\,{\rm g\,s^{-1}}$ with energy conversion efficiency $\eta=0.1$, where $c$ is the speed of light), a slowly rotating NS with a period of $P\simeq 1062 \, {\rm s}$, a relatively flat light curve with sporadic fluctuations of less than an order of magnitude, and a probably long orbital period $P_{\rm orb}\sim 300\,{\rm days}$ derived from the @cor84 diagram. What makes this discovery most noticeable is that SXP 1062 is located in the center of a shell-like nebula, which is considered to be a supernova remnant (SNR) with an age of only $\sim 10-40$ kyr [@hab12; @hen12]. Thus SXP 1062 provides the first example of an X-ray pulsar associated with a SNR, and it challenges the traditional spin-down model of NSs because of its extraordinarily long spin period combined with a relatively young age. @hab12 measured the spin period change in SXP 1062 over an 18-day observation.
Their timing analysis shows that the NS in SXP 1062 has a very large average spin-down rate, with spin frequency derivative $\dot{\nu}\sim -2.6 \times 10^{-12}\,{\rm Hz\,s^{-1}}$ (or period derivative $\dot{P}\sim 100\rm\,s\,yr^{-1}$). If the NS has a normal magnetic field ($B\sim 10^{12}-10^{13}$ G), it is hard for it to spin down to a period of $\sim 1000$ s within a few $10^{4}$ years. Assuming in the extreme case that the NS has spun down with a constant spin frequency derivative $-2.6 \times 10^{-12}\,{\rm Hz\,s^{-1}}$ over its whole lifetime, @hab12 derived a lower limit of 0.5 s on the initial spin period of the NS. Since the observation lasted only 18 days (probably less than one tenth of the orbital period), the extraordinarily large spin-down rate is very unlikely to be sustained over the whole lifetime of SXP 1062, so the present value of the spin-down rate may be just a short-term one. @pop12 (hereafter PT12) suggest another possibility to reconcile the long spin period and short age of SXP 1062. Assuming that the NS is spinning at the equilibrium period, PT12 estimated the [*current*]{} magnetic field to be $B\la 10^{13}$ G according to the model of @sha12. Their calculations show that if the NS in SXP 1062 was born as a magnetar ($B>10^{14}$ G) it can be spun down to $\sim 1000$ s within a few $10^4$ yr. In this work we consider the proposed mechanisms that can account for the observed rapid spin-down in SXP 1062, if it is alternatively a magnetar with current field strength $\ga 10^{14}$ G. In Section 3 we point out that this possibility remains open according to current observations. Based on this, we investigate its spin-down evolution, taking into account various kinds of braking torques during the propeller stage, which are introduced in Section 2, to examine how the observations of SXP 1062 can constrain the possible spin-down mechanisms in NSs. We present our calculated results in Section 4, and discuss their possible implications in Section 5.
We conclude that the spin-down evolution is sensitive to the specific propeller mechanism rather than to the initial spin period of the NS. Spin-down models of NSs in HMXBs ================================ The ejector phase ----------------- Normally a newborn NS in a binary first appears as a radio pulsar (or ejector) after the supernova explosion [@lip92]. In this phase the spin-down of the NS is due to the loss of its rotational energy, dominated by magneto-dipole radiation and the outgoing flux of relativistic particles. If the NS’s companion is a high-mass star, the stellar wind matter from the companion within the gravitational radius $R_{\rm G}=2GM/{V^2}$ will be captured by the NS at a rate $\dot{M}=\pi R^2_{\rm G} \rho V$. Here $G$ is the gravitational constant, $M$ the mass of the NS, $\rho$ the density of the wind at $R_{\rm G}$, and $V$ the velocity of the NS relative to the stellar wind, i.e. $V=(V_{\rm orb}^2+V_{\rm w}^2)^{1/2}$, where $V_{\rm orb}$ and $V_{\rm w}$ are the orbital velocity of the NS and the wind velocity, respectively. The pressure of the outgoing radiation and particles is larger than that of the incoming matter at $R_{\rm G}$. The energy loss rate can be expressed as $\dot{E}=-{\mu}^2 {\Omega}^4(1+\sin^2 \alpha)/{c^3}$ [@spi06], where $\mu\equiv BR^3$ is the magnetic moment of the NS ($B$ and $R$ are the surface magnetic field and radius of the NS, respectively), $\Omega$ the angular velocity, and $\alpha$ the inclination angle between the magnetic and rotational axes. Thus the spin-down rate in the ejector phase is $$\dot{P}={4{\pi}^2 B^2 {R^6}(1+\sin^2 \alpha) \over IPc^3}\, ,$$ where $I$ is the moment of inertia of the NS. As the NS spins down and the outgoing pressure decreases, the transition to the supersonic propeller phase will occur when the two pressures are in balance.
The spin period of the NS at the transition point is $$P_{\rm ej}={2\pi \over c} \left[ {B^2 R^6(1+{\sin}^2\alpha) \over 4 \dot{M} V} \right]^{1/4}.$$ The propeller phase ------------------- Once the wind matter crosses the gravitational radius $R_{\rm G}$ the propeller phase starts. If the plasma enters the light cylinder the pulsar mechanism will switch off and the incoming matter will form a quasi-static atmosphere surrounding the NS. At this moment accretion does not occur since the magnetospheric radius $R_{\rm m}=(\mu^2/\dot{M}\sqrt{2GM})^{2/7}$ is larger than the corotation radius $R_{\rm co}\equiv (GM/{\Omega}^2)^{1/3}$ of the NS. The infalling material is stopped at the magnetosphere by the centrifugal barrier, which prevents it from accreting onto the NS. The ejected material will carry away angular momentum of the NS and decelerate its spin. This so-called propeller effect was first introduced by @ill75. @dav79 pointed out that, according to the Mach number ${\mathcal{M}} = \Omega R_{\rm m}/c_{\rm s}$ (here $c_{\rm s}\sim (GM/ R_{\rm m})^{1/2}$ is the sound velocity at $R_{\rm m}$) at the magnetosphere, the propeller phase can be subdivided into two cases: the supersonic propeller and the subsonic propeller. Accordingly, the above-mentioned propeller mechanism is related to the supersonic propeller, since the Mach number ${\mathcal{M}} >1$. This phase ends when $R_{\rm co} = R_{\rm m}$ (i.e., ${\mathcal{M}}=1$) and the corresponding spin period $$P_{\rm eq}=2^{11/14}\pi \mu^{6/7}\dot{M}^{-3/7} (GM)^{-5/7}\,$$ is called the equilibrium period. Further studies [@aro76; @els77] showed that, unless the material outside the magnetosphere is able to cool, accretion is unlikely to happen. Thus, even with $R_{\rm co} > R_{\rm m}$ the propeller stage will proceed as long as the energy deposition rate is larger than the energy loss rate of the surrounding shell, and will keep removing angular momentum from the NS.
Because the Mach number ${\mathcal{M}} <1$, this stage is called the subsonic propeller. This process will cease when the loss rate of rotational energy can no longer support the surrounding atmosphere against cooling; then the atmosphere will collapse and the NS enters the accretor stage. The spin period of the NS at this point is the so-called break period, given by [@dav81; @ikh01] $$P_{\rm br}\simeq 86.88\,\mu_{30}^{16/21}\dot{M}_{16}^{-5/7}(M/M_{\odot})^{-4/21} \,\rm s\,,$$ where $\mu_{30}=\mu/10^{30}\,{\rm G\,cm^3}$, and $\dot{M}_{16}=\dot{M}/10^{16}\,{\rm g\,s^{-1}}$. It should be noted that the supersonic propeller can occur in both wind-fed and disc-fed cases. However, there is no consensus on the angular momentum loss rate of a NS during the propeller phase [@dav79]. Here we adopt a general formulation of the spin-down torque as follows [@mor03], $$I\dot{\Omega} =-\dot{M} R_{\rm m}^2 \Omega_{\rm K}(R_{\rm m}) \left[ {\Omega \over \Omega_{\rm K}(R_{\rm m})}\right]^{\gamma}\,,$$ where $\gamma$ is a parameter ranging from $-1$ to 2, whose value reflects the various propeller mechanisms and spin-down efficiencies. For the supersonic propeller, $\gamma=-1$, 0, and 1. When $\gamma=-1$, the matter is assumed to be ejected with the escape velocity at $R_{\rm m}$, i.e., $v_{\rm esc}(R_{\rm m})=\sqrt{2GM/R_{\rm m}}$ [@ill75], and the spin-down torque is calculated from the energy budget. The energy loss rate is $I\Omega\dot{\Omega}=-(1/2)\dot{M}v^2_{\rm esc}(R_{\rm m}) =-\dot{M}[R_{\rm m}\Omega_{\rm K}(R_{\rm m})]^2$, where $\Omega_{\rm K}(R_{\rm m})$ is the Keplerian angular velocity at $R_{\rm m}$. When $\gamma=0$ and 1, the matter is assumed to be ejected at the escape velocity $v_{\rm esc}(R_{\rm m})$ [@dav73] and at the rotational velocity $R_{\rm m}\Omega$ [@sha75] of the magnetosphere at $R_{\rm m}$, respectively, and the torque is derived from the angular momentum budget.
The corresponding angular momentum loss rate is $I\dot{\Omega}=-\dot{M}R_{\rm m}(2GM/R_{\rm m})^{1/2} =-2^{1/2}\dot{M}R_{\rm m}^2 \Omega_{\rm K}(R_{\rm m})$ and $I\dot{\Omega}=-\dot{M}R^2_{\rm m}\Omega$, respectively. The value of $\gamma$ for the subsonic propeller phase is 2, in which the rotational energy of the NS is assumed to be dissipated at a rate of $I\Omega\dot{\Omega}= -\dot{M}R_{\rm m}^2\Omega^2_{\rm K}(R_{\rm m})[\Omega/\Omega_{\rm K}(R_{\rm m})]^3$, and the resulting torque is $I\dot{\Omega}=-\dot{M} R_{\rm m}^2 \Omega_{\rm K}(R_{\rm m}) [{\Omega/\Omega_{\rm K}(R_{\rm m})}]^2$. Thus the spin-down rate in the propeller stage can be summarized as $$\dot{P}={{(2\pi)^{\gamma-1}(GM)^{1-\gamma \over 2} \dot{M}R_{\rm m}^{1+3\gamma \over 2}}\over {I P^{\gamma -2}}}.$$ As an illustration, we consider a $1.4\,M_{\sun}$ NS with an initial spin period of 0.01 s, a surface magnetic field of $10^{12}\,$G, and an accretion rate of $10^{16}\,{\rm g\,s^{-1}}$. The corresponding characteristic spin periods are $P_{\rm ej}\simeq 0.24\,$s, $P_{\rm eq}\simeq 6.7\,$s, and $P_{\rm br}\simeq 81.5\,$s, respectively. The timescale of the supersonic propeller phase varies from 30 kyr to 20 Myr as $\gamma$ decreases from 1 to $-1$, and in the subsonic propeller phase, in which the spin-down rate is independent of $P$, the spin-down timescale is $\sim 90\,$kyr. In the above calculation we use $300\,{\rm km\,s^{-1}}$ as the relative velocity [see @rag98]. The accretor phase ------------------ Steady wind accretion onto the NS starts at $P>P_{\rm br}$. In this phase the spin period could be further changed since the wind matter possesses some angular momentum. However, both observations [@bil97] and numerical calculations [e.g., @fry88; @mat92; @anz95; @ruf99] have shown that the efficiency of angular momentum transfer in wind accretion is quite low, with alternating short-term spin-up and spin-down.
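The three characteristic periods quoted in the illustration above follow directly from the expressions for $P_{\rm ej}$, $P_{\rm eq}$, and $P_{\rm br}$; a minimal numerical sketch in cgs units, assuming $\sin^2\alpha = 1$ (the text does not fix $\alpha$):

```python
import math

# Characteristic spin periods for the illustrative NS of the text:
# M = 1.4 Msun, B = 1e12 G, R = 1e6 cm, Mdot = 1e16 g/s, V = 300 km/s.
# sin^2(alpha) = 1 is an assumption (needed to reproduce P_ej ~ 0.24 s).
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33   # cgs constants
M = 1.4 * Msun
B, R = 1e12, 1e6
mu = B * R**3                               # magnetic moment, G cm^3
Mdot, V, sin2a = 1e16, 300e5, 1.0

# End of the ejector phase
P_ej = (2*math.pi/c) * (B**2 * R**6 * (1 + sin2a) / (4*Mdot*V))**0.25
# Equilibrium period (end of the supersonic propeller)
P_eq = 2**(11/14) * math.pi * mu**(6/7) * Mdot**(-3/7) * (G*M)**(-5/7)
# Break period (end of the subsonic propeller)
P_br = 86.88 * (mu/1e30)**(16/21) * (Mdot/1e16)**(-5/7) * (M/Msun)**(-4/21)

print(P_ej, P_eq, P_br)   # ~0.24 s, ~6.7 s, ~81.5 s
```

The three formulas agree with the quoted values to the two significant figures given in the text.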
Thus one may expect that the present spin periods of wind-fed X-ray pulsars are not significantly different from the $P_{\rm br}$ achieved earlier. Recently @sha12 proposed a model of subsonic quasi-spherical accretion onto a slowly rotating NS in HMXBs with low X-ray luminosities ($L_{\rm X}<10^{36}\,{\rm erg\,s^{-1}}$). In this model the accreting matter settles down subsonically onto the rotating magnetosphere, forming an extended quasi-static shell around it. Angular momentum can be removed from or injected into the NS depending on the sign of the specific angular momentum of the falling matter. In the case of moderate coupling between the plasma and the magnetosphere, from the torques acting on the NS due to both the magnetosphere-plasma interaction and accretion, the rate of change of the spin period is derived to be (see also PT12) $$\dot{P}=-{P^2 \over 2\pi I}\left[ A\dot{M}^{(3+2n)/11}_{16} - C\dot{M}^{3/11}_{16} \right]\,,$$ where $A\sim 2.2\times 10^{32} K_1(B_{12}R_{6})^{1/11}V_{300}^{-4}P_{{\rm orb},300}^{-1}$, and $C\sim 5.4\times 10^{31} K_1(B_{12}R_{6})^{13/11} P_{1000}^{-1}$. Here $B_{12}=B/10^{12}\,{\rm G}$, $R_{6}=R/10^6\,{\rm cm}$, $P_{1000}=P/1000\,{\rm s}$, $P_{\rm orb,300}=P_{\rm orb}/300\,{\rm hr}$, $V_{300}=V/300\,{\rm km\,s^{-1}}$. The constants $K_1$ and $n$ are set to be 40 and 2, respectively. It is seen that there is a critical accretion rate at which $\dot{P}=0$. Estimate of the magnetic field ============================== The NS magnetic field is a critical parameter in the spin-down models. Before investigating the spin history of SXP 1062, we need information about its magnetic field strength. The cyclotron features in the X-ray spectra provide the most direct and accurate way to measure the magnetic field strengths of accreting NSs. Unfortunately, they have not been detected in SXP 1062.
Nevertheless, there are several other ways to estimate the NS magnetic field from its spin period and period derivative, though they are model dependent. One of the hints comes from the young age of the SNR associated with the NS. This requires that the ejector phase (usually much longer than the propeller phase) must have ended within a few $10^4$ years. Assuming that the magnetic field has changed little during this phase and that the initial spin period is much smaller than $P_{\rm ej}$, one can estimate the timescale of the ejector phase to be (PT12) $$\tau_{\rm ej}= {c^3IP_{\rm ej}^2\over16\pi^2B^2R^6}\sim 1.5\, \dot{M}_{16}^{-1/2}V_{300}^{-1/2}B_{12}^{-1}\,{\rm Myr}.$$ This value is about two orders of magnitude larger than the estimated age of SXP 1062, unless $B_{12}>100$. This means that SXP 1062 must have possessed a very strong magnetic field. PT12 further assumed that the NS in SXP 1062 is spinning at the equilibrium period as described in the model of @sha12, and derived the current magnetic field to be $B\la 10^{13}$ G using Eq. (7). Accordingly they suggest that the NS magnetic field must have been stronger in the past and then decayed to its present, normal value. It is noted that the model of @sha12 has quite a few parameters whose magnitudes are uncertain. For example, the value of $K_1$, which relates the poloidal ($B_p$) and toroidal ($B_{\phi}$) magnetic field components, is found to be $\sim 40$ in @sha12. This will result in $B_{\phi}\gg B_p$ during the accretor phase, and it is not known whether the magnetic field configuration can remain stable in this case [cf. @aly85; @wan95]. The extraordinarily large spin-down rate of SXP 1062 can be used to put a useful constraint on the magnetic field of the NS. As shown by many authors [@lyn74; @lip82; @bis91], the maximum spin-down torque exerted on a NS in either disc or spherical accretion is $$I\dot{\Omega}=-\kappa{\mu^2\over R_{\rm co}^3},$$ where $\kappa< 1$.
To account for the spin-down rate measured in SXP 1062, the NS magnetic field has to be $$B\simeq 3\times 10^{14}\kappa^{-1/2}M_{1.4}^{1/2}I_{45}^{1/2} R_6^{-3}(\dot{P}/100\,{\rm s\,yr}^{-1})^{1/2}\,{\rm G},$$ where $M_{1.4}=M/1.4M_{\sun}$, and $I_{45}=I/10^{45}\,{\rm g\,cm^2}$. The same result can be obtained if the spin-down torque in the subsonic propeller phase [@dav81] is used. Another efficient spin-down mechanism was proposed by @ill90. They argued that there could be outflows from the NS magnetosphere caused by heating by the hard X-ray emission of the NS if the X-ray luminosity falls in the range of $\sim 2 \times10^{34}\,{\rm erg\,s^{-1}} - 3\times 10^{36}\,{\rm erg\,s^{-1}}$. Compton scattering heats the accreted matter anisotropically, and some of the heated matter with a low density can flow up and form outflows that take the angular momentum away. The corresponding spin-down torque is $$I\dot{\Omega}=-\kappa{\chi\over 2\pi}\dot{M}_{\rm out}\Omega R_{\rm m}^2.$$ Here $\dot{M}_{\rm out}$ is the mass outflow rate (no larger than the mass transfer rate) and $\chi$ is the solid angle of the outflow. This gives the magnetic field to be $$B\simeq 3.6\times 10^{14}({\kappa\chi\over 2\pi})^{-7/8}I_{45}^{7/8}M_{1.4}^{1/4} R_6^{-3}({\dot{M}_{\rm out}\over 10^{16}\,{\rm g\,s}^{-1}})^{-3/8} ({\dot{P}\over 100\,{\rm s\,yr}^{-1}})^{7/8}({P\over 1062\,{\rm s}})^{-7/8}\,{\rm G}.$$ The above estimates show that SXP 1062 could be an accreting magnetar. Similar conclusions have also been drawn for other X-ray pulsars in HMXBs. @dor10 reported the spin history of the 685 s X-ray pulsar GX 301$-$2, and found it spinning down at a rate $\dot{\nu}\sim -10^{-13}\,{\rm Hz\,s^{-1}}$. Reig et al. (2012) showed that the measurements of the spin period (5560 s) of 4U 2206$+$54 imply a spin-down rate of $\dot{\nu}\sim -1.5(\pm 0.2)\times 10^{-14}\,{\rm Hz\,s^{-1}}$. Using the above spin-down mechanisms to explain these spin-down rates also leads to very strong magnetic fields ($>10^{14}$ G) in these NSs [see also @lip82].
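The $\sim 3\times 10^{14}$ G estimate from the maximum spin-down torque can be reproduced directly: combining $I\dot{\Omega}=-\kappa\mu^2/R_{\rm co}^3$ with $\Omega=2\pi/P$ and $R_{\rm co}^3=GMP^2/4\pi^2$ gives $B=[I\dot{P}GM/(2\pi\kappa R^6)]^{1/2}$, independent of $P$. A quick numerical sketch (cgs units, $\kappa=1$):

```python
import math

# Field strength implied by the maximum spin-down torque
#   I * dOmega/dt = -kappa * mu^2 / R_co^3,
# with Omega = 2*pi/P and R_co^3 = G*M*P^2/(4*pi^2);
# the spin period P cancels out of the resulting expression for B.
G, Msun, yr = 6.674e-8, 1.989e33, 3.156e7   # cgs constants
I, M, R, kappa = 1e45, 1.4 * 1.989e33, 1e6, 1.0
Pdot = 100.0 / yr                           # 100 s/yr expressed in s/s

B = math.sqrt(I * Pdot * G * M / (2 * math.pi * kappa * R**6))
print(B)   # ~3e14 G, matching the estimate in the text
```

For $\kappa < 1$ the required field scales as $\kappa^{-1/2}$, as in the expression above.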
@if12 suggested an alternative interpretation for the rapid spin-down in GX 301$-$2. They showed that if the accreting material is magnetized, the magnetic pressure in the accretion flow increases more rapidly than its ram pressure, and under certain conditions the magnetospheric radius $$R_{\rm mca}\simeq 1.5\times 10^{8}\alpha_{0.1}^{2/3}B_{12}^{6/13}R_6^{18/13} T_6^{-2/13}M_{1.4}^{1/13}\dot{M}_{16}^{-4/13}\,{\rm cm}$$ is considerably smaller than the traditional magnetospheric radius. Here $\alpha=0.1 \alpha_{0.1}$ is the efficiency parameter of Bohm diffusion, and $T=10^6T_6$ K is the plasma temperature at the magnetospheric boundary. The spin-down torque applied to the NS is found to be $$I\dot{\Omega}=-\frac{\kappa_m\mu^2}{(R_{\rm co}R_{\rm mca})^{3/2}},$$ where $\kappa_m$ is a dimensionless efficiency parameter for the magnetic viscosity coefficient, with $0<\kappa_m<1$. The above equation can explain the spin-down of GX 301$-$2 with a normal field of a few $10^{12}$ G [**if $\kappa_m\sim 0.1$.**]{} In the case of SXP 1062, it yields a magnetic field estimate of $$B\simeq 2\times 10^{14}\kappa_{0.1}^{-13/17}I_{45}^{13/17}M_{1.4}^{8/17} R_6^{-3}\dot{M}_{16}^{-6/17}T_6^{-3/17} ({\dot{P}\over 100\,{\rm s\,yr}^{-1}})^{13/17}({P\over 1062\,{\rm s}})^{-13/17}\,{\rm G},$$ where $\kappa_{0.1}=\kappa_{m}/0.1$. In the same way, @ikh12 estimated the magnetic field of SXP 1062 to be $\sim 4\times 10^{13}$ G by assuming $\kappa_m=1$. Even this limiting value is comparable to the quantum critical field $B_{\rm Q}=4.4\times 10^{13}$ G. According to the above arguments, in the following we assume that the current magnetic field of SXP 1062 is $\ga 10^{14}$ G. As for the evolution of the magnetic field, we consider two kinds of models. 
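The weak $B^{6/13}$ dependence of $R_{\rm mca}$ can be checked directly; a minimal sketch (function name and defaults are our own choices, all arguments in the fiducial units of the formula):

```python
def r_mca(alpha01=1.0, b12=1.0, r6=1.0, t6=1.0, m14=1.0, mdot16=1.0):
    """Magnetospheric radius (cm) for a magnetized accretion flow:
    R_mca ~ 1.5e8 * alpha_0.1^{2/3} * B_12^{6/13} * R_6^{18/13}
            * T_6^{-2/13} * M_1.4^{1/13} * Mdot_16^{-4/13} cm."""
    return (1.5e8 * alpha01**(2 / 3) * b12**(6 / 13) * r6**(18 / 13)
            * t6**(-2 / 13) * m14**(1 / 13) * mdot16**(-4 / 13))

# Raising B by a factor of 100 grows R_mca by only 100^(6/13) ~ 8.4,
# illustrating why R_mca stays well inside the traditional radius.
r_fiducial = r_mca()
r_strong_b = r_mca(b12=100.0)
```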
First we assume that the magnetic field was initially stronger and adopt a phenomenological model for the magnetic field decay [@dall12] $$\frac{{\rm d} B}{{\rm d} t}=-AB^{1+\alpha}=-\frac{B}{\tau_{\rm d}(B)},$$ where the field decay timescale $\tau_{\rm d}(B)=(AB^\alpha)^{-1}$, and $A$ and $\alpha$ are constants. The solution of the above equation in the case of $\alpha \neq 0$ is $$B=B_{\rm i}(1+\alpha t/\tau_{\rm d,i})^{-1/\alpha},$$ where $B_{\rm i}$ is the initial field strength and $\tau_{\rm d,i} =(AB_{\rm i}^\alpha)^{-1}$. @dall12 showed that, to be compatible with the observations of magnetar candidates, the magnetic field should decay on a timescale of $\sim 10^3$ yr for $B\sim 10^{15}$ G, with a decay index most likely within the range $1.5\la \alpha \la 1.8$. Here we adopt initial magnetic fields of $7\times 10^{14}$, $3\times 10^{14}$ and $10^{14}\, {\rm G}$, with $\alpha=1.6$ and $\tau_{\rm d,i}=10^3/B_{\rm i,15}^{\alpha}$ yr, where $B_{\rm i, 15}=B_{\rm i}/10^{15}$ G. On the other hand, the braking indices of several young radio pulsars have been measured and are all less than 3 [@lyne93; @lyne96; @kas94; @liv05; @liv06; @liv11; @wel11], suggesting that the NS magnetic fields may be increasing. In particular, the braking index of the high-field ($5\times 10^{13}$ G) radio pulsar PSR J1734$-$3333 was found to be $0.9\pm 0.2$ [@esp11], implying that this pulsar may soon have the rotational properties of a magnetar. In the second approach, we adopt a field growth model of the following form $$B=B_{\rm i}(1+t/\tau)^{\alpha},$$ with $B_{\rm i}=3\times 10^{12}$ G, $\tau=10^3$ yr, and $\alpha=1.45$, so that $B=8.5\times 10^{13}$ G and $6.3\times 10^{14}$ G at $t=10^4$ and $4\times 10^4$ yr, respectively. In Figure 1 we show the model evolution of the magnetic fields. Spin evolution ============== A newborn NS is usually rotating rapidly. However, @hab12 suggested that SXP 1062 could have been born with a period much larger than $0.01$ s. 
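Both closed-form field histories are straightforward to evaluate; a minimal sketch with the parameter values adopted above (for $t\gg\tau$ the growth law approaches $B_{\rm i}(t/\tau)^{\alpha}$, which gives the rounded values quoted in the text):

```python
def b_decay(t_yr, b_i, alpha=1.6):
    """Decay model B = B_i (1 + alpha*t/tau_di)^(-1/alpha), with
    tau_di = 1e3 / B_i15^alpha yr as adopted in the text (B in G, t in yr)."""
    tau_di = 1e3 / (b_i / 1e15)**alpha
    return b_i * (1 + alpha * t_yr / tau_di)**(-1 / alpha)

def b_growth(t_yr, b_i=3e12, tau_yr=1e3, alpha=1.45):
    """Growth model B = B_i (1 + t/tau)^alpha with the adopted defaults."""
    return b_i * (1 + t_yr / tau_yr)**alpha
```

With $B_{\rm i}=3\times10^{12}$ G, the growth model gives roughly $10^{14}$ G at $t=10^{4}$ yr and $6\times10^{14}$ G at $4\times10^{4}$ yr, consistent with the values quoted above.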
Some central compact objects (CCOs) in supernova remnants, which have spin periods ranging from $\sim 0.1$ to $\sim 0.5$ s [@zav00; @g05; @gh09], seem to support this point of view, since there is evidence that the spin periods of these sources are very close to the initial ones. Thus in our model we take 0.01 s, 0.5 s and 6.5 s as the initial period of the NS in order to examine whether it can significantly influence the spin-down evolution. We use the ultra-long initial spin period of 6.5 s because this value is larger than $P_{\rm ej}$ with $B=7 \times 10^{14}\,$G in the ejector phase, so that the NS will directly enter the supersonic propeller phase after the SN event. It was shown by @aro76 and @els77 that, for stable accretion to occur, the plasma at the base of the NS magnetosphere should become sufficiently cool, so that the magnetospheric boundary becomes unstable with respect to interchange instabilities. This can be realized only if the spin period of the star exceeds the break period $P_{\rm br}$, and the X-ray luminosity is larger than $$L_{\rm cr}=3\times 10^{36}B_{12}^{1/4} M_{1.4}^{1/2}R_6^{-1/8} \,{\rm erg\,s}^{-1}.$$ If $B\ga 10^{14}$ G, SXP 1062 should be in the subsonic propeller phase. However, it is not clear whether the picture of the subsonic propeller can be applied to BeXBs, for the following reasons. (1) The mass accretion in BeXBs is now believed to be triggered by Roche-lobe overflow of the Be discs, which are truncated by the NS through a tidal torque [@oka01; @oka02; @rei11], and thus deviates from the traditional Bondi accretion in supergiant HMXBs. This means that the NS in SXP 1062 is probably surrounded by a (quasi-)disc rather than a quasi-static, spherical atmosphere. (2) Even in the spherical wind-fed case, the spin period--orbital period correlation in BeXBs seems to be well accounted for by assuming that the NSs are spinning at the equilibrium periods described by Eq. (3) [@cor84; @wat89][^1]. 
Thus in our calculations we do not consider the subsonic propeller phase, and assume that the evolutionary sequence of the NS is ejector--supersonic propeller--accretor. We use different $\gamma$ to calculate the spin-down torque in the supersonic propeller phase, and take the equilibrium period (Eq. \[3\]) as the final period (i.e., the period does not change during the accretor phase unless the magnetic field changes). In Figures $2-4$ we show the calculated results corresponding to different initial spin periods of the NS. Here we take the NS mass to be $M=1.4 M_{\sun}$, the inclination angle $\alpha =90^{\circ}$, $I=10^{45}\,\rm g\,cm^{2}$, and $R=10^{6}\rm\,cm$. The relative wind velocity $V$ is set to be $300\,\rm km\,s^{-1}$, and the accretion rate is fixed to be $10^{16}\,\rm g\,s^{-1}$. The three thin lines (from top to bottom) describe the spin evolution with initial magnetic fields of $7\times 10^{14}$ G, $3\times 10^{14}$ G, and $10^{14}$ G undergoing field decay, respectively; the thick line is for the field growth model with an initial field of $3\times 10^{12}$ G. The solid, dashed, and dotted lines represent the ejector, propeller, and accretor phases, respectively. We notice that the time spent in the supersonic propeller phase is sensitive to the value of $\gamma$, which reflects the different spin-down mechanisms in the supersonic propeller phase mentioned before. In the case of $\gamma=-1$, where the spin-down torque is most inefficient, the NS has not evolved out of the supersonic propeller phase at the age of the SNR, even with a superstrong magnetic field. Our results are not sensitive to the initial spin period of the NS; thus even for the case of an ultra-long initial spin period there is no significant change in the final NS period. In other cases the NS can successfully reach $P_{\rm eq}$ when $t=10-40$ kyr. 
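For intuition only, the sensitivity to the braking-law exponent can be illustrated with a toy Euler integration of $\dot{\Omega}=-k\Omega^{n}$; the normalization $k$, the time span, and the units below are arbitrary choices for illustration, not the calibrated torques used in our calculations:

```python
import math

def spin_period_evolution(n, p0=0.01, k=1e-6, t_end=1e4, steps=100000):
    """Euler-integrate a toy braking law Omega_dot = -k * Omega**n and
    return the final spin period 2*pi/Omega.  All quantities are in
    arbitrary normalized units; k is NOT a calibrated physical torque."""
    omega = 2 * math.pi / p0
    dt = t_end / steps
    for _ in range(steps):
        omega -= k * omega**n * dt
    return 2 * math.pi / omega

# A steeper braking law (n = 2) spins the star down much further than
# n = 1.5 over the same time span, starting from the same short period.
p_n2 = spin_period_evolution(2.0)
p_n15 = spin_period_evolution(1.5)
```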
Since $P_{\rm eq}$ depends on the magnetic field, we can see that the magnetic field determines the final spin period which the NS can achieve, while the value of $\gamma$ determines the evolutionary timescale. In the field decay model, the spin period remains invariant once it reaches $P_{\rm eq}$, since we assume that during the accretor phase the long-term, net torque from the wind is small. In the field growth model, the spin period keeps increasing with $P_{\rm eq}$ in the final stage, since the increase of $B$ always breaks the instantaneous equilibrium and causes a spin-down torque. It is seen that $B\ga 10^{14}\,\rm G$ can fulfill the requirement of SXP 1062 in both models. This result favors the picture in which, during the supersonic propeller phase, matter is ejected at $R_{\rm m}$ with the escape velocity under angular momentum conservation, consistent with the numerical calculation by @wan85. Discussion and conclusion ========================= The newly found Be/X-ray binary SXP 1062 is believed to be the first X-ray pulsar associated with a SNR, and it shows a combination of young age and long spin period that cannot be explained by a typical NS. Previous studies [@hab12; @pop12] explored its possible origin invoking an initially long spin period or an ultra-strong magnetic field. Here we discuss the possibility that SXP 1062 is an accreting magnetar with $B\ga 10^{14}$ G, and examine in this case how the properties of the NS (i.e. initial spin period, magnetic field and its evolution) and the spin-down torques can be constrained. Other candidate accreting magnetars in binaries include 4U 2206$+$54 [@fin10; @rei12] and GX 301$-$2 [@dor10]. However, it is controversial whether they really possess ultra-strong magnetic fields. @wan10 reported the existence of two cyclotron absorption lines at $\sim 30$ and 60 keV in 4U 2206$+$54, and derived a magnetic field of $3.3\times 10^{12}$ G, although no sign of this feature has been detected in other observations. 
Observations by @la05 showed a cyclotron resonance scattering feature at $\sim 35-45$ keV in GX 301$-$2, suggesting a field strength of $4\times 10^{12}$ G. @if12 proposed a magnetic wind accretion model for GX 301$-$2 to account for the difference in the field strengths derived from the cyclotron lines and from the spin-down rates. In the case of SXP 1062, the magnetic field is found to be at least as strong as $\sim B_{\rm Q}$ in the model of @ikh12. For a dipole magnetic field of $\sim 10^{14}$ G, the electron cyclotron line would appear at $E > 1$ MeV, but a proton cyclotron line would appear at $E \sim 0.5(B/10^{14} {\rm G}) = 0.3$ keV. Although a line with this energy should be observable with [*XMM-Newton*]{} detectors, it is in a region affected by strong interstellar absorption [@rei12]. Currently, no significant lines have been detected in the persistent emission of magnetars [@mer08]. If SXP 1062 is or has been a magnetar, the association between the SNR and SXP 1062 may provide an opportunity to investigate the formation and evolution of magnetars. @vink06 showed that there is no evidence that magnetars are formed from rapidly rotating proto-neutron stars. The SNR associated with SXP 1062 is one of the faintest SNRs known in the SMC [@fil05; @fil08; @owen11]. This seems to be in line with the finding of @vink06 that their formation may not be accompanied by extraordinarily bright supernovae. However, it is known that the brightness of SNRs depends strongly on the density of the environment. Nevertheless, the age of the SNR can be used to set useful constraints on the timescale of magnetic field evolution, due either to field decay or growth. Since the long spin period is most likely to be reached during the propeller phase, the age of the SNR also plays a role in testing the efficiency of the spin-down torques in different propeller mechanisms. 
Our results seem to rule out the model with $\gamma=-1$, and prefer larger values of $\gamma$, which correspond to more efficient propeller spin-down. Recent 2- and 3-dimensional magnetohydrodynamic (MHD) simulations by @rom05 and @ust06 of disc-fed NSs suggest $\dot{\Omega}\propto -\Omega^{2}$ for propeller-driven outflows. @tor10 investigated the spin-down of magnetars rotating in the propeller regime with axisymmetric MHD simulations, and found $\dot{\Omega}\propto -\Omega^{1.5}$. It should be noted that the mass transfer rate is assumed to be constant throughout our calculations, but in reality it must have varied with the orbital motion of the NS. For instance, an eccentric orbit may result in alternation among the ejector, propeller and accretor phases. The sporadic outburst behavior will further complicate the spin-down evolution of the NS. This means that the calculated evolutionary sequence in our model and the values of $\gamma$ should be taken as an illustration and as lower limits, respectively. However, both the high spin-down rate and the young age of SXP 1062 provide strong evidence that the binary indeed harbors or harbored a magnetar, and an effective spin-down mechanism is required. We expect further observations to confirm the long-term spin behavior of SXP 1062. We thank an anonymous referee for helpful comments. This work was supported by the Natural Science Foundation of China under grant number 11133001, the Ministry of Science and Technology of China (973 Program 2009CB824800), and the Qinglan project of Jiangsu Province. Aly, J. J. 1985, , 143, 19 Arons, J., & Lea, S. M. 1976, , 207, 914 Anzer, U. & B[ö]{}rner, G. 1995, , 299, 62 Bildsten, L. et al. 1997, , 113, 367 Bisnovatyi-Kogan, G. S. 1991, , 245, 528 Corbet, R. H. D. 1984, , 141, 91 Dai, H.-L. & Li, X.-D., 2006, , 653, 1410 Dall’Osso, S., Granot, J., & Piran, T. 2012, , 422, 2878 Davidson, K. & Ostriker, J. P. 1973, , 179, 585 Davies, R. E., Fabian, A. 
C., & Pringle, J. E. 1979, , 186, 779 Davies, R. E. & Pringle, J. E. 1981, , 196, 209 Doroshenko, V., Santangelo, A., & Suleimanov, V. et al. 2010, , 515, 10 Elsner, R. F. & Lamb, F. K. 1977, , 215, 897 Espinoza, C. M., Lyne, A. G., Kramer, M., Manchester, R. N. & Kaspi, V. 2011, , 741, L13 Finger, M. H., Ikhsanov, N. R., Wilson-Hodge, C. A., & Patel, S. K. 2010, , 709, 1249 Filipovi[ć]{}, M. D., Haberl, F., & Winkler, P. F., et al. 2008, , 485, 63 Filipovi[ć]{}, M. D., Payne, J. L., & Reid, W. et al. 2005, , 364, 217 Francischelli, G. J. & Wijers, R. A. M. J. 2002, , 565, 471 Fryxell, B. A. & Taam, R. E. 1988, , 335, 862 Gotthelf, E. V. & Halpern, J. P., 2009, , 695, L35 Gotthelf, E. V., Halpern, J. P., & Seward F. D. 2005, , 627, 390 Haberl, F., Filipovi[ć]{}, M. D., & Pietsch, W. et al. 2000, , 142, 41 Haberl, F., Sturm, R., & Filipović, M. D. et al. 2012, , 537, L1 H[é]{}nault-Brunet, V., Oskinova, L. M., & Guerrero, M. A. et al. 2012, , 420, L13 Ikhsanov, N.R. 2001, , 368, L5 Ikhsanov, N.R. 2012, , 424, L39 Ikhsanov, N. R. & Finger, M. H. 2012, , 753, 1 Illarionov, A. F. & Kompaneets, D. 1990, , 247, 219 Illarionov, A. F. & Sunyaev, R. A. 1975, , 39, 185 Kaspi, V. M., Manchester, R. N., Siegman, B., Johnston, S., & Lyne, A. G. 1994, , 422, L83 La Barbera, A., Segreto, A., & Santangelo, A., et al. 2005, , 438, 617 Lipunov, V. M. 1982, Soviet Astronomy, 26, 537 Lipunov, V. M., 1992, Astrophysics of Neutron Stars, Berlin, Springer-Verlag Livingstone, M. A. & Kaspi, V. M. 2011, , 742, 31 Livingstone, M. A., Kaspi, V. M., & Gavriil, F. P. 2005, , 633, 1095 Livingstone, M. A., Kaspi, V. M., Gotthelf, E. V., & Kuiper, L. 2006, , 647, 1286 Lynden-Bell, D. & Pringle, J. E. 1974, , 168, 603 Lyne, A. G., Pritchard, R. S., & Smith, F. G. 1993, , 265, 1003 Lyne, A. G., Pritchard, R. S., Graham-Smith, F., & Camilo, F. 1996, , 381, 497 Matsuda, T., Ishii, T., & Sekino, N. et al. 1992, , 255, 183 Mereghetti, S. 2008, , 15, 225 Mori, K. & Ruderman, M. A. 
2003, , 592, L75 Okazaki, A. T., Bate, M. R., Ogilvie, G. I., & Pringle, J. E. 2002, , 337, 967 Okazaki, A. T. & Negueruela, I. 2001, , 377, 161 Owen, R. A., Filipovi[ć]{}, M. D., & Ballet, J., et al. 2011, , 530, A132 Popov, S. B. & Turolla, R. 2012, , 421, L127 Raguzova N. V. & Lipunov V. N. 1998, , 340, 85 Reig, P. 2011, , 321, 1 Reig, P., Torrejón, J. M., & Blay, P. 2012, , in press (arXiv:1203.1490) Romanova, M. M., Ustyugova, G. V., Koldoba, A. V., & Lovelace, R. V. E., 2005, , 635, L165 Ruffert, M. 1999, , 346, 861 Shakura, N. I. 1975, Sov. Astron. Lett., 1, 223 Shakura, N., Postnov K., & Kochetkova A. et al. 2012, , 420, 216 Spitkovsky, A. 2006, , 648, L51 Stella, L., White, N. E., & Rosner, R. 1986, , 308, 669 Toropina, O., Romanova, M., & Lovelace, R. V. E. 2010, in Proceedings of the 25th Texas Symposium on Relativistic Astrophysics. December 6-10, 2010. Heidelberg, Germany. Eds. F. M. Rieger, C. van Eldik and W. Hofmann. Published online at http://pos.sissa.it/cgi-bin/reader/conf.cgi?confid=123, id.232 Ustyugova, G. V., Koldoba, A. V., Romanova, M. M., & Lovelace, R. V. E. 2006, , 646, 304 Vink, J. & Kuiper, L. 2006, , 370, L14 Wang, W., 2010, , 520, A22 Wang, Y.-M. 1995, , 449, L153 Wang, Y.-M. & Robertson, J. A. 1985, , 151, 361 Weltevrede, P., Johnston, S., & Espinoza, C. M. 2011, , 411, 1917 Waters, L. B. F. M. & van Kerkwijk, M. H. 1989, , 223, 296 Yokogawa, J., Imanishi, K., & Tsujimoto, M. et al. 2003, , 55, 161 Zavlin, V. E., Pavlov, G. G., Sanwal, D., & Trümper, J., 2000, , 540, L25 [^1]: Additionally, population synthesis calculations by @dai06 showed that the distribution of the spin and orbital periods of X-ray pulsars in supergiant HMXBs can be roughly explained without requiring that an X-ray pulsar emerges after the subsonic propeller phase [see also @ste86].
--- abstract: 'By finding an orthogonal representation for a family of simple connected graphs called $\delta$-graphs, it is possible to show that $\delta$-graphs satisfy the delta conjecture. Extending the argument to graphs of the form $\overline{P_{\Delta(G)+2}\sqcup G}$, where $P_{\Delta(G)+2}$ is a path and $G$ is a simple connected graph, we find an orthogonal representation of $\overline{P_{\Delta(G)+2}\sqcup G}$ in $\mathbb{R}^{\Delta(G)+1}$. As a consequence we prove the delta conjecture.' author: - 'Pedro Díaz Navarro[^1]' date: 'June 2018' title: A Proof for Delta Conjecture --- [[**Key words:**]{} delta conjecture, simple connected graphs, minimum semidefinite rank, $\delta$-graph, C-$\delta$ graphs, orthogonal representation.]{}\ \ [[**MSC:**]{} 05C50, 05C76, 05C85, 68R05, 65F99, 97K30.]{} Introduction ============ A [*graph*]{} $G$ consists of a set of vertices $V(G)=\{1,2,\dots,n\}$ and a set of edges $E(G)$, where an edge is defined to be an unordered pair of vertices. The [*order*]{} of $G$, denoted $\vert G\vert$, is the cardinality of $V(G)$. A graph is [*simple*]{} if it has no multiple edges or loops. The [*complement*]{} of a graph $G(V,E)$ is the graph $\overline{G}=(V,\overline{E})$, where $\overline{E}$ consists of all those edges of the complete graph $K_{\vert G\vert}$ that are not in $E$. A matrix $A=[a_{ij}]$ is [*combinatorially symmetric*]{} when $a_{ij}=0$ if and only if $a_{ji}=0$. We say that $G(A)$ is the graph of a combinatorially symmetric matrix $A=[a_{ij}]$ if $V=\{1,2,\dots,n\}$ and $E=\{\{i,j\}: a_{ij}\ne0\}$. The main diagonal entries of $A$ play no role in determining $G$. Define $S(G,\F)$ as the set of all $n\times n$ matrices that are real symmetric if $\F=\Re$ or complex Hermitian if $\F=\C$ whose graph is $G$. The sets $S_+(G,\F)$ are the corresponding subsets of positive semidefinite (psd) matrices. 
The smallest possible rank of any matrix $A\in S(G,\F)$ is the [*minimum rank*]{} of $G$, denoted $\mr(G,\F)$, and the smallest possible rank of any matrix $A\in S_+(G,\F)$ is the [*minimum semidefinite rank*]{} of $G$, denoted $\mr_+(G)$ or $\msr(G)$. In 1996, the minimum rank among real symmetric matrices with a given graph was studied by Nylen [@PN]. It gave rise to the area of minimum rank problems, which led to the study of the minimum rank among complex Hermitian matrices and positive semidefinite matrices associated with a given graph. Many results can be found, for example, in [@FW2; @VH; @YL; @LM; @PN]. During the AIM workshop of 2006 in Palo Alto, CA, it was conjectured that for any graph $G$ and infinite field $\F$, $\mr(G,\F)\le |G|-\delta(G)$, where $\delta(G)$ is the minimum degree of $G$. It was shown that if $\delta(G)\le 3$ or $\delta(G)\ge |G|-2$ then this inequality holds. It can also be verified that if $|G|\le 6$ then $\mr(G,\F)\le |G|-\delta(G)$, and it was proven that any bipartite graph satisfies this conjecture. This conjecture is called the [*Delta Conjecture*]{}. If we restrict the study to matrices in $S_+(G,\F)$ then the delta conjecture reads $\msr(G)\le |G|-\delta(G)$. Some results on the delta conjecture can be found in [@AB; @RB1; @SY1; @SY], but the general problem remains unsolved. In this paper, by using a generalization of the argument in [@PD], we give an argument proving that the delta conjecture is true for any simple connected graph, which means that the delta conjecture is true. Graph Theory Preliminaries ========================== In this section we give definitions and results from graph theory which will be used in the remaining sections. Further details can be found in [@BO; @BM; @CH]. 
A [**graph**]{} [$G(V,E)$]{} is a pair [$(V(G),E(G)),$]{} where [$V(G)$]{} is the set of vertices and [$E(G)$]{} is the set of edges together with an [**incidence function**]{} $\psi(G)$ that associates with each edge of $G$ an unordered pair of (not necessarily distinct) vertices of $G$. The [**order**]{} of [$G$]{}, denoted [$|G|$]{}, is the number of vertices in [$G.$]{} A graph is said to be [**simple**]{} if it has no loops or multiple edges. The [**complement**]{} of a graph [$G(V,E)$]{} is the graph [$\overline{G}=(V,\overline{E}),$]{} where [$\overline{E}$]{} consists of all the edges that are not in [$E$]{}. A [**subgraph**]{} [$H=(V(H),E(H))$]{} of [$G=(V,E)$]{} is a graph with [$V(H)\subseteq V(G)$]{} and [$E(H)\subseteq E(G).$]{} An [**induced subgraph**]{} [$H$]{} of [$G$]{}, denoted G\[V(H)\], is a subgraph with [$V(H)\subseteq V(G)$]{} and [$E(H)=\{\{i,j\} \in E(G):i,j\in V(H)\}$]{}. Sometimes we denote the edge $\{i,j\}$ as $ij$. We say that two vertices of a graph $G$ are [**adjacent**]{}, denoted $v_i\sim v_j$, if there is an edge $\{v_i,v_j\}$ in $G$. Otherwise we say that the two vertices $v_i$ and $v_j$ are [**non-adjacent**]{} and we denote this by $v_i \not\sim v_j$. Let [$N(v)$]{} denote the set of vertices that are adjacent to the vertex [$v$]{} and let [$N[v]=\{v\}\cup N(v)$]{}. The [**degree**]{} of a vertex [$v$]{} in [$G,$]{} denoted [$\d_G(v),$]{} is the cardinality of [$N(v).$]{} If [$\d_G(v)=1,$]{} then [$v$]{} is said to be a [**pendant**]{} vertex of [$G.$]{} We use [$\delta(G)$]{} to denote the minimum degree of the vertices in [$G$]{}, whereas [$\Delta(G)$]{} will denote the maximum degree of the vertices in [$G$]{}. Two graphs $G(V,E)$ and $H(V',E')$ are identical, denoted $G=H$, if $V=V'$, $E=E'$, and $\psi_G=\psi_H$. 
Two graphs $G(V,E)$ and $H(V',E')$ are [**isomorphic**]{}, denoted by $G\cong H$, if there exist bijections $\theta:V\to V'$ and $\phi: E\to E' $ such that $\psi_G(e)=\{u,v\}$ if and only if $\psi_H(\phi(e))= \{\theta(u), \theta(v)\}$. A [**complete graph**]{} is a simple graph in which the vertices are pairwise adjacent. We will use [$nG$]{} to denote [$n$]{} copies of a graph [$G$]{}. For example, $3K_1$ denotes three isolated vertices $K_1$ while [$2K_2$]{} is the graph given by two disconnected copies of $K_2$. A [**path**]{} is a list of distinct vertices in which successive vertices are connected by edges. A path on [$n$]{} vertices is denoted by [$P_n.$]{} A graph [$G$]{} is said to be [**connected**]{} if there is a path between any two vertices of [$G$]{}. A [**cycle**]{} on [$n$]{} vertices, denoted [$C_n,$]{} is a path such that the beginning vertex and the end vertex are the same. A [**tree**]{} is a connected graph with no cycles. A graph $G(V,E)$ is said to be [**chordal**]{} if it has no induced cycles $C_n$ with $n\ge 4$. A [**component**]{} of a graph $G(V,E)$ is a maximal connected subgraph. A [**cut vertex**]{} is a vertex whose deletion increases the number of components. The [**union**]{} $G_1\cup G_2$ of two graphs $G_1(V_1,E_1)$ and $G_2(V_2,E_2)$ is the union of their vertex sets and edge sets, that is, $G_1\cup G_2=(V_1\cup V_2,E_1\cup E_2)$. When $V_1$ and $V_2$ are disjoint their union is called a [**disjoint union**]{} and denoted $G_1\sqcup G_2$. The Minimum Semidefinite Rank of a Graph ======================================== In this section we will establish some of the results for the minimum semidefinite rank ($\msr$) of a graph $G$ that we will be using in the subsequent sections. A [**positive definite**]{} matrix $A$ is a Hermitian $n\times n$ matrix such that $x^\star A x>0$ for all nonzero $x\in \C^n$. 
Equivalently, $A$ is an $n\times n$ Hermitian positive definite matrix if and only if all the eigenvalues of $A$ are positive ([@RC], p.250). An $n\times n$ Hermitian matrix $A$ such that $x^\star A x\ge 0$ for all $x\in \C^n$ is said to be [**positive semidefinite (psd)**]{}. Equivalently, $A$ is an $n\times n$ Hermitian positive semidefinite matrix if and only if $A$ has all eigenvalues nonnegative ([@RC], p.182). If $\overrightarrow{V}=\{\overrightarrow{v_1},\overrightarrow{v_2},\dots, \overrightarrow{v_n}\}\subset \Re^m$ is a set of column vectors then the matrix $ A^T A$, where $A= \left[\begin{array}{cccc} \overrightarrow{v_1} & \overrightarrow{v_2} &\dots& \overrightarrow{v_n} \end{array}\right]$ and $A^T$ represents the transpose matrix of $A$, is a psd matrix called the [**Gram matrix**]{} of $\overrightarrow{V}$. Let $G(V,E)$ be the graph associated with this Gram matrix. Then $V_G=\{v_1,\dots, v_n\}$ corresponds to the set of vectors in $\overrightarrow{V}$ and $E(G)$ corresponds to the nonzero inner products among the vectors in $\overrightarrow{V}$. In this case $\overrightarrow{V}$ is called an [**orthogonal representation**]{} of $G(V,E)$ in $\Re^m$. If such an orthogonal representation exists for $G$ then $\msr(G)\le m$. Some results about the minimum semidefinite rank of a graph are the following: [@VH]\[msrtree\] If $T $ is a tree then $\msr(T)= |T|-1$. [@MP3]\[msrcycle\] The cycle $C_n$ has minimum semidefinite rank $n-2$. \[res2\] [@MP3]  If a connected graph $G$ has a pendant vertex $v$, then $\msr(G)=\msr(G-v)+1$ where $G-v$ is obtained as an induced subgraph of $G$ by deleting $v$. [@PB] \[OS2\] If [$G$]{} is a connected, chordal graph, then [$\msr(G)=\cc(G).$]{} \[res1\] [@MP2] If a graph $G(V,E)$ has a cut vertex, so that $G=G_1\cdot G_2$, then $\msr(G)= \msr(G_1)+\msr(G_2)$. 
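The Gram-matrix construction can be illustrated with a short sketch (plain Python; the three vectors below are an ad hoc choice made for the example):

```python
def gram(vectors):
    """Gram matrix A^T A of a list of column vectors given as tuples."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    return [[dot(u, v) for v in vectors] for u in vectors]

def graph_of(vectors):
    """Edge set read off from the nonzero off-diagonal inner products."""
    n = len(vectors)
    g = gram(vectors)
    return {(i, j) for i in range(n) for j in range(i + 1, n) if g[i][j] != 0}

# Three vectors in R^2 forming an orthogonal representation of the path P_3,
# witnessing msr(P_3) <= 2 = |P_3| - 1, as the tree result above requires.
edges = graph_of([(1, 0), (1, 1), (0, 1)])
print(sorted(edges))  # [(0, 1), (1, 2)]
```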
Delta-Graphs and the Delta Conjecture ===================================== In [@PD] a family of graphs called $\delta$-graphs is defined, and it is shown that they satisfy the delta conjecture. \[ccpg\] Suppose that $G=(V,E)$ with $|G|=n \ge 4$ is simple and connected such that $\overline{G}=(V,\overline{E})$ is also simple and connected. We say that $G$ is a [**$\mathbf{\delta}$-graph**]{} if we can label the vertices of $G$ in such a way that 1. the induced graph of the vertices $v_1,v_2,v_3$ in $G$ is either $3K_1$ or $K_2 \sqcup K_1$, and 2. for $m\ge 4$, the vertex $v_m$ is adjacent to all the prior vertices $v_1,v_2,\dots,v_{m-1}$ except for at most $\dis{\left\lfloor\frac{m}{2}-1\right\rfloor}$ vertices. A second family of graphs also defined in [@PD] contains the complements of $\delta$-graphs. Suppose that a graph $G(V,E)$ with $|G|=n \ge 4$ is simple and connected such that $\overline{G}=(V,\overline{E})$ is also simple and connected. We say that $G(V,E)$ is a [**C-$\mathbf{\delta}$ graph**]{} if $\overline{G}$ is a $\delta$-graph. In other words, $G$ is a [**C-$\mathbf{\delta}$ graph**]{} if we can label the vertices of $G$ in such a way that 1. the induced graph of the vertices $v_1,v_2,v_3$ in $G$ is either $K_3$ or $P_3$, and 2. for $m\ge 4$, the vertex $v_m$ is adjacent to at most $\dis{\left\lfloor\frac{m}{2}-1\right\rfloor}$ of the prior vertices $v_1,v_2,\dots,v_{m-1}$. \[examplecp\] The Cartesian product $K_3\square P_4$ is a C-$\delta$ graph and its complement is a $\delta$-graph. With the labeling in the following picture we can verify the definition for both graphs. ![image](K3cpP4.png){height="40mm"} Note that we can label the vertices of $K_3\square P_4$ clockwise $v_1=(1,1), v_2=(1,2), v_3=(1,3),\dots, v_{12}=(3,4)$. The graph induced by $v_1,v_2,v_3$ is $P_3$. The vertex $v_4$ is adjacent to one prior vertex, namely $v_3$, in the induced subgraph of $K_3\square P_4$ given by $\{v_1,v_2,v_3,v_4\}$. 
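Condition 2 of the C-$\delta$ definition is mechanical to verify for a given labeling; the following sketch (our own helper, checking only condition 2, with adjacency sets chosen for illustration) confirms that a path labeled along its vertices satisfies the condition while a complete graph does not:

```python
def is_cdelta_order(adj, order):
    """Check condition 2 of the C-delta definition: in the given labeling,
    each vertex v_m (m >= 4, 1-indexed) is adjacent to at most
    floor(m/2 - 1) of the prior vertices.  `adj` maps vertex -> neighbour
    set; the induced-subgraph condition on v1, v2, v3 is not checked."""
    for m in range(4, len(order) + 1):
        prior = set(order[:m - 1])
        if len(adj[order[m - 1]] & prior) > m // 2 - 1:
            return False
    return True

# P_6 labeled along the path: each v_m sees exactly one prior vertex,
# and 1 <= floor(m/2 - 1) for every m >= 4, so the condition holds.
path6 = {i: {j for j in (i - 1, i + 1) if 0 <= j < 6} for i in range(6)}
# K_5 in any order fails already at v_4 (3 prior neighbours > 1).
k5 = {i: set(range(5)) - {i} for i in range(5)}
print(is_cdelta_order(path6, list(range(6))), is_cdelta_order(k5, list(range(5))))  # True False
```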
Also, the vertex $v_5$ is adjacent only to vertex $v_1$ in the induced subgraph of $K_3\square P_4$ given by $\{v_1,v_2,v_3,v_4, v_5\}$. Continuing the process through vertex $v_{12}$ we conclude that $K_3\square P_4$ is a C-$\delta$ graph. In the same way we conclude that its complement $\overline{K_3\square P_4}$ is a $\delta$-graph. \[lem2\] Let [$G(V,E)$]{} be a $\delta$-graph. Then the induced graph of [$\{v_1,v_2,v_3\}$]{} in [$G$]{}, denoted by $H$, has an orthogonal representation in [$\Re^{\Delta(\overline{G})+1}$]{} satisfying the following conditions: 1. the vectors in the orthogonal representation of $H$ can be chosen with nonzero coordinates, and 2. \[L1\]$\overrightarrow{v}\not\in \sp(\overrightarrow{u})$ for each pair of distinct vertices $u,v$ in $H$. \[main\] Let $G(V,E)$ be a $\delta$-graph. Then $$\msr(G)\le\Delta(\overline{G})+1=|G|-\delta(G)\label{mrsineq1}$$ The proofs of these two results can be found in [@PD] and [@PD1]. The argument of the proof is based on the construction of an orthogonal representation of pairwise linearly independent vectors for a $\delta$-graph $G$ in $\Re^{\Delta(\overline{G})+1}$. Since $\msr(G)$ is the minimum dimension in which we can obtain an orthogonal representation of a simple connected graph, the result is a direct consequence of this construction. A survey of $\delta$-graphs and upper bounds for their minimum semidefinite rank ============================================================================ Theorem \[main\] gives us a large family of graphs which satisfy the delta conjecture. Since the complement of a C-$\delta$ graph is usually a $\delta$-graph, it is enough to identify a C-$\delta$ graph: its complement, whenever it is simple and connected, is then a $\delta$-graph satisfying the delta conjecture. 
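The identity $\Delta(\overline{G})+1=|G|-\delta(G)$ in Theorem \[main\] follows from $\d_{\overline{G}}(v)=|G|-1-\d_G(v)$; a quick randomized sanity check (the random graphs are our own choice for the check):

```python
import itertools
import random

def min_deg(adj):
    return min(len(nbrs) for nbrs in adj.values())

def max_deg(adj):
    return max(len(nbrs) for nbrs in adj.values())

def complement(adj):
    """Complement of a simple graph given as vertex -> neighbour set."""
    vs = set(adj)
    return {v: vs - {v} - adj[v] for v in adj}

random.seed(0)
for _ in range(100):
    n = random.randint(4, 12)
    adj = {i: set() for i in range(n)}
    for i, j in itertools.combinations(range(n), 2):
        if random.random() < 0.5:
            adj[i].add(j)
            adj[j].add(i)
    # deg in the complement is n - 1 - deg, so max flips to min:
    assert max_deg(complement(adj)) + 1 == n - min_deg(adj)
```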
Some examples of C-$\delta$ graphs that can be found in [@PD] are the Cartesian products $K_n\square P_m$, $n\ge 3$, $m\ge 4$; the Möbius ladders $ML_{2n}$, $n\ge 3$; the supertriangles $T_n$, $n\ge 4$; the coronas $S_n\circ P_m$, $n\ge 2$, $m\ge 1$, where $S_n$ is a star and $P_m$ a path; cages like Tutte's $(3,8)$ cage, Heawood's $(3,6)$ cage and many others; the Blanusa snarks of types $1$ and $2$ with $26$, $34$, and $42$ vertices; and the generalized Petersen graphs $Gp1$ to $Gp16$. Upper bounds for the minimum semidefinite rank of some families of simple connected graphs ------------------------------------------------------------------------------------------ From the definition of a C-$\delta$ graph and Theorem \[main\] we can obtain upper bounds for the minimum semidefinite rank of the graph complements of C-$\delta$ graphs. It is enough to label the vertices of $G$ in such a way that the labeled sequence of vertices satisfies the definition: starting with the induced graph of $ \{v_1,v_2,v_3\}$, each newly added vertex $v_m$ is adjacent to at most $\lfloor\frac{m}{2}-1\rfloor $ of the prior vertices $v_1,v_2,\dots,v_{m-1}$. Then $G$ is a C-$\delta$ graph and its graph complement $\overline{G}(V,\overline{E})$ will have an orthogonal representation in $\Re^{\Delta(G)+1}$ whenever it is simple and connected. To illustrate the technique, consider the following examples. \[upper1\] If $G$ is Robertson's (4,5)-cage on 19 vertices then it is a 4-regular C-$\delta$ graph. Since $\Delta(G)=4$, $\msr(\overline{G})\le 5$. To see that this is a C-$\delta$ graph it is enough to label its vertices in the way shown in the next figure: ![image](cage45.png){height="50mm"} [Figure B.2 Robertson’s (4,5)-cage (19 vertices)]{} \[figA.1.2\] \[upper2\] If $G$ is the Platonic graph of the dodecahedron then it is a 3-regular C-$\delta$ graph. Since $\Delta(G)=3$, $\msr(\overline{G})\le 4$. 
To see that this is a C-$\delta$ graph it is enough to label its vertices in the way shown in the next figure: ![image](dodecahedron.png){height="60mm"} [Figure 3. Dodecahedron]{} \[figA.1.1\] The next table contains C-$\delta$ graphs $G$ taken from [@RW]; the upper bounds for $\msr(\overline{G})$ given by $\Delta(G)+1$ are found in [@PD].\

----------------------------- ------------------------------------------- ---------------- -------------------------------------
Family                        Graph $G$                                   $\vert G\vert$   $\msr(\overline{G})\le \Delta(G)+1$
Archimedean graphs            Cuboctahedron                               12               $4$
                              Icosidodecahedron                           30               $5$
                              Rhombicuboctahedron                         24               $5$
                              Rhombicosidodecahedron                      60               $6$
                              Snub cube                                   24               $6$
                              Snub dodecahedron                           60               $6$
                              Truncated cube                              24               $4$
                              Truncated cuboctahedron                     48               $4$
                              Truncated dodecahedron                      60               $4$
                              Truncated icosahedron                       60               $4$
                              Truncated icosidodecahedron                 120              $6$
                              Truncated tetrahedron                       12               $4$
                              Truncated octahedron                        24               $4$
Antiprisms                    $n$-antiprism, $n\ge 3$                     $2n$             $5$
                              $4$-antiprism                               $8$              $5$
                              $5$-antiprism                               $10$             $5$
Cages                         Balaban's $(3,10)$ cage                     70               $4$
                              Foster's $(5,5)$ cage                       30               $6$
                              Harries's $(3,10)$ cage                     70               $4$
                              Heawood's $(3,6)$ cage                      14               $4$
                              McGee's $(3,7)$ cage                        24               $4$
                              Petersen's $(3,5)$ cage                     10               $4$
                              Robertson's $(5,5)$ cage                    30               $6$
                              Robertson's $(4,5)$ cage                    19               $5$
                              The Harries-Wong $(3,10)$ cage              70               $4$
                              The $(4,6)$ cage                            26               $5$
                              Tutte's $(3,8)$ cage                        30               $4$
                              Wong's $(5,5)$ cage                         30               $4$
Blanusa snarks                Type 1: 26 vertices                         26               $4$
                              Type 2: 26 vertices                         26               $4$
                              Type 1: 34 vertices                         34               $4$
                              Type 2: 34 vertices                         34               $4$
                              Type 1: 42 vertices                         42               $4$
                              Type 2: 42 vertices                         42               $4$
Generalized Petersen graphs   Gp1                                         10               $4$
                              Gp2                                         12               $4$
                              Gp3                                         14               $4$
                              Gp4                                         16               $4$
                              Gp5                                         16               $4$
                              Gp6                                         18               $4$
                              Gp7                                         18               $4$
                              Gp8                                         20               $4$
                              Gp9                                         20               $4$
                              Gp10                                        20               $4$
                              Gp11                                        22               $4$
                              Gp12                                        22               $4$
                              Gp13                                        24               $4$
                              Gp14                                        24               $4$
                              Gp15                                        24               $4$
                              Gp16                                        24               $4$
Non-Hamiltonian cubic         Grinberg's graph                            44               $4$
                              Tutte's graph                               46               $4$
                              (38 vertices)                               38               $4$
                              (42 vertices)                               42               $4$
Platonic graphs               Cube                                        8                $4$
                              Dodecahedron                                20               $4$
Prisms                        $n$-prism, $n\ge 4$                         $2n$             $4$
                              $4$-prism                                   $8$              $4$
                              $5$-prism                                   $10$             $4$
Snarks                        Celmins-Swarf snark 1                       26               $4$
                              Celmins-Swarf snark 2                       26               $4$
                              Double Star snark                           30               $4$
                              Flower snark $J_7$                          28               $4$
                              Flower snark $J_9$                          36               $4$
                              Flower snark $J_{11}$                       44               $4$
                              Hypercube                                   $2^4$            $5$
                              Loupekine's snark 1 (Sn28)                  22               $4$
                              Loupekine's snark 2 (Sn29)                  22               $4$
                              The Biggs-Smith graph                       102              $4$
                              The Greenwood-Gleason graph                 16               $6$
                              The Szekeres snark                          50               $4$
                              Watkin's snark                              50               $4$
Miscellaneous regular graphs  Chvatal's graph                             12               $5$
                              Cubic graph with no perfect matching        $16$             $4$
                              Cubic identity graphs                       $12$             $4$
                              Folkman's graph                             20               $5$
                              Franklin's graph                            12               $4$
                              Herschel's graph                            11               $5$
                              Hypercube                                   $16$             $4$
                              Meredith's graph                            70               $4$
                              Mycielski's graph                           11               $6$
                              The Greenwood-Gleason graph                 $16$             $6$
                              The Goldner-Harary dual (truncated prism)   18               $4$
                              Tietze's graph                              11               $4$
----------------------------- ------------------------------------------- ---------------- -------------------------------------

Proof of Delta Conjecture ========================= In this section we give an argument proving that the Delta Conjecture is true for any simple graph, not necessarily connected, as a generalization of the result given in [@PD]. For that purpose we define a generalization of C-$\delta$ graphs called [**extended C-$\delta$ graphs**]{}. Previously, we established that the Delta Conjecture holds for $\delta$-graphs. 
The condition $2\le\Delta(G)\le\vert G\vert-2$ in the proof of \[main\] was given as a sufficient condition for the graph complement of a C-$\delta$ graph to be connected. We will see that the connectivity of a C-$\delta$ graph is not necessary in order to prove the Delta Conjecture using the result \[main\]. Hence, we can define a generalization of C-$\delta$ graphs in the following way. Given a simple graph $G$, let $G'=P_n\sqcup G$, with $n=2\Delta(G)+2$, be the disjoint union of the path $P_n$ and $G$; we call $G'$ an [**extended C-$\delta$ graph**]{}. All vertices of $G$ are connected with all vertices of $P_n$ in $\overline{G'}$. As a consequence, $\overline{G'}$ is a simple and connected graph. ![image](ecdgraph.png){height="50mm"} \[propo1\][ Let $G'=P_n\sqcup G$ be an extended C-$\delta$ graph. Then $\overline{G'}$ has an orthogonal representation in $\Re^{\Delta(G)+1}$. ]{} : Let $G'(V',E')$ be an extended C-$\delta$ graph. Then $G'=P_n\sqcup G$, $n=2\Delta(G)+2$, where $G(V,E)$ is a simple graph. Since $P_n$ is a C-$\delta$ graph, we can label its vertices so that if $v_1,\dots,v_n$ are its vertices, then $\overrightarrow{v_1},\dots,\overrightarrow{v_n}$ is an orthogonal representation of its graph complement $\overline{P_n}$ in $\Re^3$. But since $\Delta(G)\ge 2$ (because $G$ is connected and $\vert G\vert\ge 4$), we can also obtain an orthogonal representation of $\overline{P_n}$ in $\Re^{\Delta(G)+1}$, getting $2\Delta(G)+2$ vectors for $\overline{G'}$ in $\Re^{\Delta(G)+1}$ using the C-$\delta$ construction. Thus, in $\overline{G'}$, $v_{2\Delta(G)+2}$ is adjacent to all prior vertices $v_1,\dots,v_{2\Delta(G)+1}$ except at most $$\dis{\left\lfloor\frac{2\Delta(G)+2}{2}-1\right\rfloor}=\Delta(G)\ge 2$$ vertices; in fact, to all of them but one. Now, choose a vertex $v'$ in $\overline{G}$ and label it $v'=v_{2\Delta(G)+3}$. In $\overline{G'}$, $v_{2\Delta(G)+3}$ is adjacent to all of the vertices of $P_n$. As a consequence, $v_{2\Delta(G)+3}$ satisfies the delta construction in $G'$.
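The adjacency facts used in this step can be checked mechanically. Below is a small plain-Python sketch (the example graph $G=C_4$ is a hypothetical choice, not taken from the text): it builds $G'=P_n\sqcup G$ with $n=2\Delta(G)+2=6$, forms the complement, and verifies that in $\overline{G'}$ every vertex of $G$ is adjacent to every vertex of $P_n$, so that $\overline{G'}$ is connected.

```python
from collections import deque
from itertools import combinations

def complement(vertices, edges):
    """Edge set of the complement graph."""
    return {frozenset(p) for p in combinations(vertices, 2)} - edges

def is_connected(vertices, edges):
    """Breadth-first search connectivity test."""
    seen, todo = {vertices[0]}, deque([vertices[0]])
    while todo:
        u = todo.popleft()
        for v in vertices:
            if v not in seen and frozenset((u, v)) in edges:
                seen.add(v)
                todo.append(v)
    return len(seen) == len(vertices)

# Hypothetical example: G = C_4 (4-cycle), so Delta(G) = 2 and n = 2*Delta(G)+2 = 6.
G_v = ["g0", "g1", "g2", "g3"]
G_e = {frozenset(e) for e in [("g0", "g1"), ("g1", "g2"), ("g2", "g3"), ("g3", "g0")]}
n = 6
P_v = [f"p{i}" for i in range(n)]
P_e = {frozenset((f"p{i}", f"p{i + 1}")) for i in range(n - 1)}

# Disjoint union G' = P_n + G, then its complement.
Gp_v, Gp_e = P_v + G_v, P_e | G_e
comp_e = complement(Gp_v, Gp_e)

# In the complement, every vertex of G sees every vertex of P_n ...
assert all(frozenset((g, p)) in comp_e for g in G_v for p in P_v)
# ... hence the complement of G' is connected.
assert is_connected(Gp_v, comp_e)
```

The same check runs for any connected $G$ with $\vert G\vert\ge 4$ by swapping in its vertex and edge sets.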
If $Y_{2\Delta(G)+2}$ is the subgraph of $G'$ induced by $v_1,\dots,v_{2\Delta(G)+2}$, then $Y_{2\Delta(G)+3}=Y_{2\Delta(G)+2}\cup \{v_{2\Delta(G)+3}\}$ is simple and connected, and $\overline{Y}_{2\Delta(G)+3}$ can be constructed using the $\delta$-construction, because $v_{2\Delta(G)+3}$ is adjacent to all previous vertices $v_1,v_2,\dots,v_{2\Delta(G)+2}$ in $\overline{G'}$. Then it is adjacent to all previous vertices in $Y_{2\Delta(G)+2}$ except at most $\dis{\left\lfloor \frac{2\Delta(G)+3}{2}-1\right\rfloor}= \Delta(G)$. Now, by labeling the remaining vertices of $G'$ (which are vertices of $G$) in any order to obtain $v_{2\Delta(G)+4},\dots,v_{2\Delta(G)+2+\vert G\vert}$, we get a sequence of induced subgraphs of $\overline{G'}$: $$\overline{Y}_{2\Delta(G)+4}\subseteq \overline{Y}_{2\Delta(G)+5}\subseteq\dots\subseteq \overline{Y}_{2\Delta(G)+2+\vert G\vert}=\overline{G'}.$$ All of these induced subgraphs can be constructed using the $\delta$-construction. As a consequence, $\overline{Y}_{2\Delta(G)+2+\vert G\vert}=\overline{G'}$ can be constructed using the $\delta$-construction, which implies that there is an orthogonal representation $\overrightarrow{v}_1,\overrightarrow{v}_2,\dots,\overrightarrow{v}_{2\Delta(G)+2+\vert G\vert}$ of the vertices of $\overline{G'}$ in $\Re^{\Delta(G')+1}$. But $\Delta(G')=\Delta(G)$, since $G$ is an induced subgraph of $G'$ and $\Delta(G)\ge 2=\Delta(P_n)$. Hence we get the orthogonal representation of $\overline{G'}$ in $\Re^{\Delta(G)+1}$. Finally, if $\overrightarrow{v}_1,\overrightarrow{v}_2,\dots,\overrightarrow{v}_{2\Delta(G)+2+\vert G\vert}$ is this orthogonal representation of $\overline{G'}$ in $\Re^{\Delta(G)+1}$, take the vectors $\overrightarrow{v}_{2\Delta(G)+3},\overrightarrow{v}_{2\Delta(G)+4},\dots,\overrightarrow{v}_{2\Delta(G)+2+\vert G\vert}$. These vectors satisfy all the adjacency and orthogonality conditions of $\overline{G}$, because $\overline{G}$ is an induced subgraph of $\overline{G'}$.
As a consequence, $\overrightarrow{v}_{2\Delta(G)+3},\overrightarrow{v}_{2\Delta(G)+4},\dots,\overrightarrow{v}_{2\Delta(G)+2+\vert G\vert}$ is an orthogonal representation of $\overline{G}$ in $\Re^{\Delta(G)+1}$.$\square$

\[teo1\][If $G$ is a simple connected graph with $\vert G\vert \ge 4$, then $G$ satisfies the Delta Conjecture]{}. : Let $G$ be a simple connected graph. Since $\overline{G}$ can be seen as a component of an extended C-$\delta$ graph $G'= P_{2\Delta(\overline{G})+2} \sqcup \overline{G}$, by the proposition proved above $G$ has an orthogonal representation in $\Re^{\Delta(\overline{G})+1}=\Re^{\vert G\vert-\delta(G)}$, which implies that $\msr(G)\le \vert G\vert-\delta(G)$. As a consequence, the Delta Conjecture holds for any simple connected graph $G$ with $\vert G\vert\ge 4$.$\square$

Finally, by using extended C-$\delta$ graphs $G'=P_{2\Delta(G)+2}\sqcup G$ for all $G$ with $\vert G\vert \le 3$, and the technique described in the proof of the proposition above (or otherwise), it is easy to check that all simple connected graphs with $\vert G\vert \le 3$ satisfy the Delta Conjecture. As a consequence we have the following theorem: \[teo2\][Every simple graph satisfies the Delta Conjecture]{}. : From \[teo1\] we know that the Delta Conjecture holds for any simple graph $G$ with $\vert G\vert \ge 4$. Checking all cases for simple connected graphs $G$ with $\vert G\vert \le 3$, we complete the proof of the Delta Conjecture. $\square$

Conclusion
==========

In this paper we proved the Delta Conjecture as our main result. We also applied the technique for finding the minimum semidefinite rank of a C-$\delta$ graph to give a table of upper bounds for a large number of families of simple connected graphs. These upper bounds will be useful in the study of the minimum semidefinite rank of a graph. In the future, the techniques applied in this paper could be useful for solving other problems related to simple connected graphs and the minimum semidefinite rank.

Acknowledgment
==============

I would like to thank my advisor Dr. Sivaram Narayan for his guidance and suggestions on this research.
I also want to thank the mathematics departments of the University of Costa Rica and the Universidad Nacional Estatal a Distancia for their sponsorship during my dissertation research, and special thanks go to the mathematics department of Central Michigan University, where I did the research on C-$\delta$ graphs that was essential to the proof of the Delta Conjecture. [^1]: Escuela de Matemática, Universidad de Costa Rica
---
abstract: 'The physical properties of the so-called Ostriker isothermal, non-rotating filament have classically been used as a benchmark to interpret the stability of the filaments observed in nearby clouds. However, such a static picture seems to contrast with the more dynamical state observed in different filaments. In order to explore the physical conditions of filaments under realistic conditions, in this work we theoretically investigate how the equilibrium structure of a filament changes in a rotating configuration. To do so, we solve the hydrostatic equilibrium equation assuming either uniform or differential rotation. We obtain a new set of equilibrium solutions for rotating and pressure-truncated filaments. These new equilibrium solutions are found to present both radial and projected column density profiles shallower than their Ostriker-like counterparts. Moreover, and for rotational periods similar to those found in the observations, the centrifugal forces present in these filaments are also able to sustain large amounts of mass (larger than the mass attained by the Ostriker filament) without necessarily being unstable. Our results indicate that further analysis of the physical state of star-forming filaments should take into account rotational effects as stabilizing agents against gravity.'
author:
- |
  S. Recchi$^{1}$[^1], A. Hacar$^{1}$ [^2] and A. Palestini$^{2}$ [^3]\
  $^{1}$ Department of Astrophysics, Vienna University, Türkenschanzstrasse 17, A-1180, Vienna, Austria\
  $^{2}$ MEMOTEF, Sapienza University of Rome, Via del Castro Laurenziano 9, 00161 Rome, Italy
date: 'Received; accepted'
title: On the equilibrium of rotating filaments
---

stars: formation – ISM: clouds – ISM: kinematics and dynamics – ISM: structure

Introduction {#sec:intro}
============

Although observations of filaments within molecular clouds have been reported for decades (e.g.
Schneider & Elmegreen 1979), only recently has their presence been recognized as a unique characteristic of the star-formation process. The latest Herschel results have revealed the direct connection between filaments, dense cores and stars in all kinds of environments along the Milky Way, from low-mass, nearby clouds (Andr[é]{} et al. 2010) to the most distant, high-mass star-forming regions (Molinari et al. 2010). As a consequence, characterizing the physical properties of these filaments has emerged as key to our understanding of the origin of stars within molecular clouds. The large majority of observational papers (Arzoumanian et al. 2011; Palmeirim et al. 2013; Hacar et al. 2013) use the classical “Ostriker” profile (Ostriker 1964) as a benchmark to interpret observations. More specifically, if the estimated linear mass of an observed filament is larger than the value obtained for the Ostriker filament ($\simeq$ 16.6 M$_\odot$ pc$^{-1}$ for T=10 K), the filament is assumed to be unstable. Analogously, density profiles flatter than the Ostriker profile are generally interpreted as a sign of collapse. However, it is worth recalling the assumptions and limitations of this model: $(i)$ filaments are assumed to be isothermal, $(ii)$ they are not rotating, $(iii)$ they are isolated, $(iv)$ they can be modeled as cylindrical structures with infinite length, $(v)$ their support against gravity comes solely from thermal pressure. An increasing number of observational results suggest, however, that none of the above assumptions can be considered strictly valid. In a first paper (Recchi et al. 2013, hereafter Paper I) we relaxed hypothesis $(i)$ and considered equilibrium structures of non-isothermal filaments. Concerning hypothesis $(ii)$, and after the pioneering work of Robe (1968), there has been a number of publications devoted to the study of the equilibrium and stability of rotating filaments (see e.g. Hansen et al.
1976; Inagaki & Hachisu 1978; Robe 1979; Simon et al. 1981; Veugelen 1985; Horedt 2004; Kaur et al. 2006; Oproiu & Horedt 2008). However, this body of knowledge has not recently been used to constrain the properties of observed filaments in molecular clouds. In this work we aim to explore the effects of rotation on the interpretation of the physical state of filaments during the formation of dense cores and stars. Moreover, we emphasize the role of envelopes in the determination of density profiles, an aspect often overlooked in the recent literature. The paper is organised as follows. In Sect. \[sec:obs\] we review the observational evidence suggesting that star-forming filaments are rotating. In Sect. \[sec:rotfil\] we study the equilibrium configuration of rotating filaments, and the results of our calculations are discussed and compared with available observations. Finally, in Sect. \[sec:conc\] some conclusions are drawn. Observational signs of rotation in filaments {#sec:obs} ============================================ Since the first millimeter studies of nearby clouds it has been well known that star-forming filaments present complex motions both parallel and perpendicular to their main axis (e.g. Loren 1989; Uchida et al. 1991). Recently, Hacar & Tafalla (2011) have shown that the internal dynamical structure of the so-called velocity-coherent filaments is dominated by the presence of local motions, typically characterized by velocity gradients of the order of 1.5–2.0 km s$^{-1}$ pc$^{-1}$, similar to those found inside dense cores (e.g. Caselli et al. 2002). Comparing the structure of both density and velocity perturbations along the main axis of different filaments, Hacar & Tafalla (2011) identified the periodicity of different longitudinal modes as the streaming motions leading to the formation of dense cores within these objects.
These authors also noticed the presence of distinct and non-parallel components with amplitudes similar to those of their longitudinal counterparts. Interpreted as rotational modes, these perpendicular motions would correspond to a maximum angular frequency $\omega$ of about 6.5 $\cdot$ 10$^{-14}$ s$^{-1}$. Taking these values as characteristic of the rotational frequency in Galactic filaments, the detection of such rotational levels raises the question of whether they could potentially influence the stability of these objects.[^4] To estimate the dynamical relevance of rotation we can take the total kinetic energy per unit length as equal to $\mathcal{T}=\frac{1}{2} \omega^2 R_c^2 M_{lin}$, where $R_c$ is the external radius of the cylinder and $M_{lin}$ its linear mass. The total gravitational energy per unit length is $W=G{M_{lin}}^2$, hence the ratio $\mathcal{T}/W$ is $$\frac{\mathcal{T}}{W} \simeq 0.65 \left( \frac{\omega}{6.5 \cdot 10^{-14}\,{\rm s}^{-1}} \right)^2 \left(\frac{R_c}{0.15 \,{\rm pc}} \right)^2 \left(\frac{M_{lin}}{16.6\, {\rm M}_{\odot} \,{\rm pc}^{-1}} \right)^{-1}.$$ Clearly, for nominal values of $\omega$, $R_c$ and $M_{lin}$ the total kinetic energy associated with rotation is significant; thus rotation is dynamically important. The equilibrium configuration of rotating, non-isothermal filaments {#sec:rotfil} =================================================================== In order to calculate the density distribution of rotating, non-isothermal filaments, we extend the approach already used in Paper I, which we briefly recall here. The starting equation is that of hydrostatic equilibrium with rotation: $\nabla P = \rho (g + \omega^2 r)$.
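As a quick cross-check of the $\mathcal{T}/W$ estimate above, the dimensional factors can be evaluated directly; a minimal sketch in SI units (the physical constants are standard values, not taken from the text):

```python
# Order-of-magnitude check of the T/W ratio for the nominal values quoted above.
G = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]
pc = 3.086e16            # parsec [m]
Msun = 1.989e30          # solar mass [kg]

omega = 6.5e-14          # angular frequency [s^-1]
Rc = 0.15 * pc           # external filament radius [m]
Mlin = 16.6 * Msun / pc  # linear mass [kg m^-1]

# T = (1/2) omega^2 Rc^2 Mlin and W = G Mlin^2, both per unit length.
ratio = 0.5 * omega**2 * Rc**2 * Mlin / (G * Mlin**2)
print(f"T/W = {ratio:.2f}")  # close to the 0.65 coefficient quoted in the text
```

The small difference with respect to the quoted coefficient comes only from rounding in the adopted constants.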
We introduce the normalization: $$\label{eq:normalization} \rho = \theta \rho_0,\;\;T=\tau T_0,\;\; r=Hx,\;\; \Omega=\sqrt{\frac{2}{\pi G \rho_0}} \omega.$$ Here, $\rho_0$ and $T_0$ are the central density and temperature, respectively, $H=\sqrt{\frac{2 k T_0}{\pi G \rho_0 \mu m_H}}$ is a length scale and $\Omega$ is a normalized frequency. Simple steps transform the hydrostatic equilibrium equation into: $$\theta\tau^\prime+\tau\theta^\prime=\theta\left(\Omega^2 x - 8 \frac{\int_0^x {\tilde x} \theta d{\tilde x}}{x}\right). \label{eq:start}$$ Calling now $I=\int_0^x {\tilde x} \theta d{\tilde x}$, we clearly have $I^\prime =\theta x$. Solving the above equation for $I$, we obtain $8I=\Omega^2 x^2 -\tau^\prime x - \tau x \frac{\theta^\prime}{\theta}$. Upon differentiating this expression with respect to $x$ and rearranging, we obtain: $$\theta^{\prime\prime}=\frac{\left(\theta^\prime\right)^2}{\theta} -\theta^\prime\left[\frac{\tau^\prime}{\tau}+\frac{1}{x}\right]- \frac{\theta}{\tau}\left[\tau^{\prime\prime}+\frac{\tau^\prime}{x}+ 8 \theta -2 \Omega^2 -2 x \Omega\Omega^\prime\right]. \label{eq:basic}$$ As a consistency check, for $\Omega=0$ we recover the equation already used in Paper I. This second-order differential equation, together with the boundary conditions $\theta(0)=1$, $\theta^\prime(0)=-\tau^\prime(0)$ (see Paper I), can be integrated numerically to obtain equilibrium configurations of rotating and non-isothermal filaments. This expression is more convenient than classical Lane-Emden type equations (see e.g. Robe 1968; Hansen et al. 1976) for the problem at hand. Notice also that the normalization of $\omega$ differs from the more conventional assumption $\eta^2=\omega^2/4 \pi G \rho_0$ (Hansen et al. 1976).
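Eq. \[eq:basic\] can be integrated with any standard ODE solver. Below is a minimal numerical sketch (assuming `scipy` is available) for the isothermal, uniformly rotating case $\tau=1$, $\Omega=$ constant. In this normalization the $\Omega=0$ run must reproduce the Ostriker profile $\theta=(1+x^2)^{-2}$, which provides a built-in check; the $1/x$ term is handled by starting slightly off axis with the series expansion $\theta\simeq 1+\frac{1}{2}(\Omega^2-4)x^2$:

```python
from scipy.integrate import quad, solve_ivp

def theta_profile(Omega, x_max=6.0, x0=1e-6):
    """Integrate Eq. (eq:basic) for tau = 1 (isothermal), constant Omega:
    theta'' = theta'^2/theta - theta'/x - theta*(8*theta - 2*Omega^2)."""
    def rhs(x, y):
        th, dth = y
        return [dth, dth**2 / th - dth / x - th * (8.0 * th - 2.0 * Omega**2)]
    # Series start near the axis avoids the 1/x singularity at x = 0.
    y0 = [1.0 + 0.5 * (Omega**2 - 4.0) * x0**2, (Omega**2 - 4.0) * x0]
    return solve_ivp(rhs, (x0, x_max), y0, rtol=1e-9, atol=1e-12,
                     dense_output=True).sol

# Omega = 0 must recover the Ostriker profile theta = (1 + x^2)^(-2).
ostriker = theta_profile(0.0)
assert abs(ostriker(5.0)[0] - (1.0 + 5.0**2) ** -2) < 1e-5

# Normalized linear mass (integral of x*theta) for Omega = 0.5 truncated at
# x = 3, relative to the Ostriker value 0.45; cf. Table 1 of the text.
sol = theta_profile(0.5)
m_rot = quad(lambda t: t * sol(t)[0], 1e-6, 3.0)[0]
m_ost = quad(lambda t: t / (1.0 + t**2) ** 2, 0.0, 3.0)[0]  # = 0.45 exactly
print(m_rot / m_ost)  # rotation supports more mass: the ratio exceeds 1
```

The asymptotic value $\theta\rightarrow\Omega^2/4$ discussed below can be read off the same solutions by extending `x_max`.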
Uniformly rotating filaments {#subsec:rotfils} ---------------------------- ![Logarithm of the normalized density $\theta$ as a function of $x$ for various models of isothermal filaments with different normalized angular frequencies. The model with $\Omega=0$ corresponds to the Ostriker profile with $\rho\propto r^{-4}$ at large radii.[]{data-label="fig:denrot"}](den_rot.eps){width="6cm"} If we set $\tau$ and $\Omega$ constant in Eq. \[eq:basic\], we can obtain equilibrium solutions for isothermal, uniformly rotating filaments. We have checked that our numerical results reproduce the main features of this kind of cylinder, already known in the literature, namely: - Density inversions take place for $\Omega^2>0$ as the centrifugal, gravitational and pressure-gradient forces battle to maintain mechanical equilibrium. Density oscillations occur in other equilibrium distributions of polytropes (see Horedt 2004 for a very comprehensive overview). Noticeably, the equilibrium solution of uniformly rotating cylindrical polytropes with polytropic index $n=1$ depends on the (oscillating) zeroth-order Bessel function $J_0$ (Robe 1968; see also Christodoulou & Kazanas 2007). Solutions for rotating cylindrical polytropes with $n>1$ maintain this oscillating character, although they cannot be expressed analytically. As evident in Fig. \[fig:denrot\], in the case of isothermal cylinders (corresponding to $n \rightarrow \infty$), the frequency of oscillations is zero for $\Omega=0$, corresponding to the Ostriker profile. This frequency increases with the angular frequency $\Omega$. - For $\Omega>2$, $\rho^\prime(0)>0$, due to the fact that, in this case, the effective gravity $g+\omega^2r$ is directed outwards. For $\Omega<2$, $\rho^\prime(0)<0$. If $\Omega=2$, there is perfect equilibrium between centrifugal and gravitational forces (Keplerian rotation) and the density is constant (see also Inagaki & Hachisu 1978). - The density tends asymptotically to the value $\Omega^2/4$.
This also implies that the integrated mass per unit length $\Pi =\int_0^\infty 2 \pi x \theta(x) dx$ diverges for $\Omega^2>0$. Rotating filaments must thus be pressure truncated. This limit of $\theta$ for large values of $x$ is essentially the reason why density oscillations arise for $\Omega \neq 2$. The limit cannot be reached smoothly, i.e. the density gradient cannot tend to zero. If the density gradient tends to zero, so does the pressure gradient. In this case there must asymptotically be a perfect equilibrium between gravity and the centrifugal force (Keplerian rotation) but, as we have noticed above, this equilibrium is possible only if $\Omega=2$. Thanks to the density oscillations, $\nabla P$ does not tend to zero and perfect Keplerian rotation is never attained. Notice moreover that the divergence of the linear mass is a consequence of the fact that the centrifugal force also diverges for $x \rightarrow \infty$. All these features can be recognized in Fig. \[fig:denrot\], where the logarithm of the normalized density $\theta$ is plotted as a function of the filament radius $x$ for models with various angular frequencies $\Omega$, ranging from 0 (the non-rotating Ostriker filament) to 1. Hansen et al. (1976) performed a stability analysis of uniformly rotating isothermal cylinders, based on a standard linear perturbation of the hydrodynamical equations. They noticed that, beyond the point where the first density inversion occurs, the system behaves differently compared to the non-rotating case. Dynamically unstable oscillation modes appear and the cylinder tends to form spiral structures. Notice that a more extended stability analysis, not limited to isothermal or uniformly rotating cylinders, has recently been performed by Freundlich et al. (2014; see also Breysse et al. 2014). Even in its simplest form, the inclusion of rotation has interesting consequences for the interpretation of the physical state of filaments.
As discussed in Paper I, the properties of the Ostriker filament (Stod[ó]{}lkiewicz 1963; Ostriker 1964), in particular its radial profile and linear mass, are classically used to discern the stability of these structures. According to the Ostriker solution, an infinite and isothermal filament in hydrostatic equilibrium presents an internal density distribution that tends to $\rho (r) \propto r^{-4}$ at large radii and a linear mass M$_{Ost}\simeq16.6$ M$_{\odot}$ pc$^{-1}$ at 10 K. As shown in Fig. \[fig:denrot\], and owing to the effects of the centrifugal force, the radial profile of a uniformly rotating filament in equilibrium ($\Omega>0$) can be much shallower than in the Ostriker-like configuration (i.e. $\Omega=0$). Such a departure from the Ostriker profile translates into a variation of the linear mass that can be supported by these rotating systems. For comparison, estimates of the linear masses for different rotating filaments in equilibrium, truncated at normalized radii x=3 and x=10, are presented in Tables \[table1\] and \[table2\], respectively. In these tables, the temperature profile is the linear function $\tau(x)=1+Ax$. In particular, the case $A=0$ refers to isothermal filaments, whereas if $A>0$ the temperature increases outwards.[^5] As can be seen there, the linear mass of a rotating filament can easily exceed the critical linear mass of its Ostriker-like counterpart without necessarily being unstable. It is also instructive to express the above models in physical units in order to interpret observations in nearby clouds. For typical filaments similar to those found in Taurus (Hacar & Tafalla 2011; Palmeirim et al. 2013; Hacar et al. 2013), with central densities of $\sim 5\cdot 10^4$ cm$^{-3}$, one obtains $\Omega\simeq 0.5$ according to Eq. \[eq:normalization\].
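This conversion can be verified directly from Eq. \[eq:normalization\]; a minimal sketch in cgs units (the mean molecular weight $\mu=2.33$ is an assumed value, not stated explicitly in the text):

```python
import math

# Check of Omega = sqrt(2 / (pi G rho_0)) * omega for a Taurus-like filament.
G = 6.674e-8              # gravitational constant, cgs [cm^3 g^-1 s^-2]
m_H = 1.6726e-24          # hydrogen mass [g]
mu = 2.33                 # assumed mean molecular weight (not from the text)
n_c = 5e4                 # central number density [cm^-3]
rho0 = mu * m_H * n_c     # central mass density [g cm^-3]

omega = 6.5e-14           # angular frequency [s^-1]
Omega = math.sqrt(2.0 / (math.pi * G * rho0)) * omega
period_Myr = 2.0 * math.pi / omega / 3.156e13   # 1 Myr ~ 3.156e13 s

print(f"Omega ~ {Omega:.2f}, rotation period ~ {period_Myr:.1f} Myr")
```

For these inputs the normalized frequency comes out near 0.5 and the rotation period near 3 Myr, consistent with the values quoted in the text; the exact figure shifts slightly with the adopted $\mu$.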
Assuming a temperature of 10 K, and from Tables \[table1\] and \[table2\] (case $A=0$), this rotation level leads to an increase in the linear mass, from $\sim$ 17.4 M$_\odot$ pc$^{-1}$ if the filament is truncated at radius x=3, up to $\sim$ 112 M$_\odot$ pc$^{-1}$ for a truncation radius of x=10. Here, it is worth noticing that a normalized frequency of $\Omega\simeq 0.5$, or $\omega \sim 6.5 \cdot 10 ^{-14}$ s$^{-1}$, corresponds to a rotation period of $\sim$ 3.1 Myr. With probably less than one revolution in their entire lifetimes ($\tau\sim$ 1–2 Myr), the centrifugal forces inside such slowly rotating filaments can then provide non-negligible support against gravitational collapse, being able to sustain larger masses than in the case of an isothermal and static Ostriker-like filament. Differentially rotating filaments {#subsec:diffrotfils} --------------------------------- ![Logarithm of the normalized density $\theta$ as a function of $x$ for various models of filaments with different rotation laws.[]{data-label="fig:dendiffrot"}](den_diffrot.eps){width="6cm"} As can be noticed in Fig. \[fig:denrot\], a distinct signature of the centrifugal forces acting within rotating filaments is the presence of secondary peaks (i.e. density inversions) in their radial density distribution at large radii. Such density inversions could dynamically detach the outer layers of the filament from its central region, eventually leading to the mechanical breakup of these structures. In Sect. \[subsec:rotfils\] we assumed that the filaments rotate uniformly, like solid bodies. However, our limited information concerning the rotation profiles of real filaments invites us to explore other rotation configurations.
  $\Omega$   $A=0$   $A=0.02$   $A=0.1$   $A=0.5$
  ---------- ------- ---------- --------- ---------
  0.1        1.006   1.015      1.049     1.167
  0.5        1.166   1.176      1.213     1.330
  0.8        1.553   1.561      1.593     1.676
  1.0        2.108   2.108      2.111     2.117

  : Normalized linear masses at $x=3$ compared to the Ostriker filament with the same truncation radius, with M$_{Ost}(x\le 3)= 14.9$ M$_{\odot}$ pc$^{-1}$, as a function of $\Omega$ and $A$.[]{data-label="table1"}

  $\Omega$   $A=0$   $A=0.02$   $A=0.1$   $A=0.5$
  ---------- ------- ---------- --------- ---------
  0.1        1.015   1.039      1.137     1.623
  0.2        1.075   1.102      1.212     1.730
  0.3        1.287   1.309      1.415     1.951
  0.4        2.533   2.321      2.063     2.379
  0.5        7.019   6.347      4.377     3.234
  0.6        10.37   10.53      9.398     4.988
  0.7        12.29   12.77      13.78     8.399
  0.8        14.96   15.14      16.59     13.84
  0.9        20.05   19.39      19.43     20.22
  1.0        26.22   25.70      23.71     25.95

  : Similar to Table \[table1\] but for linear masses at $x=10$, with M$_{Ost}(x\le 10)= 16.4$ M$_{\odot}$ pc$^{-1}$.[]{data-label="table2"}

For the sake of simplicity, we have investigated the equilibrium configuration of filaments presenting differential rotation, assuming that $\Omega$ varies linearly with the filament radius $x$. For illustrative purposes, we choose two simple laws: $\Omega_1(x)=x/10$ and $\Omega_2(x)=1-x/10$, both attaining the typical frequency $\Omega=0.5$ at $x=5$. The first of these laws assumes that the filament rotates faster at larger radii but presents no rotation on the axis, resembling a shear motion. Opposite to it, the second one assumes that the filament presents its maximum angular speed on the axis and that the rotation decreases radially outwards. The density profiles resulting from these two models are shown in Fig. \[fig:dendiffrot\] for normalized radii x$\le$ 10. For comparison, we also overplot there the density profile obtained with a constant frequency $\Omega=0.5$ (see Sect. \[subsec:rotfils\]). For these models we are assuming A=0, i.e. isothermal configurations.
Clearly, the law $\Omega_1(x)$ displays a radial profile with even stronger oscillations than the model with uniform rotation. As mentioned above, oscillations are prone to dynamical instabilities. In this case, instabilities start occurring at the minimum of the density distribution, here located at $x \simeq 4.45$. Conversely, these density oscillations are suppressed in rotating filaments that obey a law like $\Omega_2(x)$. It is however worth noticing that this last rotational law fails to satisfy the Solberg-H[ø]{}iland criterion for stability against axisymmetric perturbations (Tassoul 1978; Endal & Sofia 1978; Horedt 2004). Stability can be discussed by evaluating the first-order derivative $\frac{d}{dx}[x^4 \Omega^2_2(x)]$, which is positive for $x \in (0, 20/3) \cup (10, +\infty)$ and negative for $x \in (20/3, 10)$. We must therefore either consider that this filament is unstable at large radii, or assume it to be pressure-truncated at radii smaller than x=20/3 $\simeq$ 6.7. As mentioned above, we cannot exclude the hypothesis that rotation indeed induces instability and fragmentation of the original filament, separating the central part (at radii x$\simlt$ 4.45 for $\Omega=\Omega_1(x)$ and x$\simlt$ 6.7 for $\Omega=\Omega_2(x)$) from the outer mantle, which might subsequently break into smaller units. This (speculative) picture would be consistent with the bundle of filaments observed in B213 (Hacar et al. 2013). For comparison, the mass per unit length attained by the model with $\Omega=\Omega_1(x)$ at $x<4.45$ (which corresponds to $\sim$ 0.2 pc for $T=10$ K and $n_c \sim 5\cdot 10^4$ cm$^{-3}$) is equal to 0.99 M$_{Ost}$, whereas the mass outside this minimum is equal to 22.7 M$_{Ost}$, i.e. there is enough mass to form many other filaments.
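The quoted sign pattern of the Solberg-H[ø]{}iland derivative is easy to verify: $\frac{d}{dx}[x^4\Omega_2^2(x)] = x^3\,(1-x/10)\,(4-3x/5)$, which vanishes at $x=20/3$ and $x=10$. A finite-difference check in plain Python:

```python
# f(x) = x^4 * Omega_2(x)^2 with Omega_2(x) = 1 - x/10.
def f(x):
    return x**4 * (1.0 - x / 10.0) ** 2

def dfdx(x, h=1e-6):
    """Central finite difference of f."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

assert dfdx(3.0) > 0    # inside (0, 20/3): criterion satisfied
assert dfdx(8.0) < 0    # inside (20/3, 10): criterion violated
assert dfdx(12.0) > 0   # beyond x = 10
```

The sampled points fall in the three intervals quoted in the text, confirming the stated sign changes.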
Non-isothermal and rotating filaments {#subsec:nonisofils} ------------------------------------- ![Logarithm of the normalized density $\theta$ as a function of $x$ for various models of uniformly rotating filaments with $\Omega=0.5$ and different temperature slopes $A$.[]{data-label="fig:denlinrot"}](den_lin_rot.eps){width="6cm"} As demonstrated in Paper I, the presence of internal temperature gradients within filaments can offer additional support against gravity. Under realistic conditions, these thermal effects should then be considered in combination with the different rotational modes in the study of the stability of these objects. The numerical solutions obtained for the equilibrium configuration of filaments with $\Omega=0.5$ and various values of $A$ are plotted in Fig. \[fig:denlinrot\]. Notice that Fig. 5 of Palmeirim et al. (2013) suggests a rather shallow dust temperature gradient, with a value of $A$ of the order of 0.02 (green curve in Fig. \[fig:denlinrot\]). However, as discussed in Paper I, the gas temperature profile could be steeper than the dust one, so it is useful to consider larger values of $A$ as well. Fig. \[fig:denlinrot\] shows that the asymptotic behaviour of the solution does not depend on $A$: $\theta(x)$ always tends to $\Omega^2/4$ for $x\rightarrow \infty$. By looking at Eq. \[eq:basic\], it is clear that the same asymptotic behaviour holds for a wide range of reasonable temperature and frequency profiles: whenever $\tau^{\prime\prime}$, $\tau^\prime/x$ and $\Omega\Omega^\prime x$ tend to zero for $x\rightarrow \infty$ (a condition that holds for a linearly increasing $\tau(x)$ and for constant $\Omega$), the asymptotic value of $\theta(x)$ is $\Omega^2/4$. It is easy to see that a temperature law that becomes asymptotically constant also fulfils this condition if the angular frequency is constant. Figure \[fig:denlinrot\] also shows that density oscillations are damped in the presence of positive temperature gradients.
This was expected, as more pressure is provided to the external layers to counteract the effect of the centrifugal force. Since density inversions are dynamically unstable, positive temperature gradients must thus be seen as a stabilizing mechanism in filaments. Our numerical calculations indicate, in addition, that the inclusion of temperature variations also increases the amount of mass that can be supported in rotating filaments. This effect is again quantified in Tables \[table1\] and \[table2\] for truncation radii of $x=3$ and $x=10$, respectively, compared to the linear mass obtained for an Ostriker profile at the same radius. As can be seen there, the expected linear masses are always larger than in the isothermal and non-rotating filaments, although the exact value depends on the combination of $\Omega$ and $A$, owing to the variation in the position of the secondary density peaks relative to the truncation radius. Derived column densities for non-isothermal, rotating filaments: isolated vs. embedded configurations ----------------------------------------------------------------------------------------------------- ![Column density, as a function of the normalized impact parameter $\chi$, for filaments characterized by three different rotation laws: increasing outwards ($\Omega=x/10$), decreasing outwards ($\Omega=1-x/10$) and constant ($\Omega=0.5$). The filament is embedded in a cylindrical molecular cloud with a radius five times that of the filament. The column density of the Ostriker filament (case $p=4$) and the one obtained for a Plummer-like model with $\rho\sim r^{-2}$ (case $p=2$) are also shown for comparison.[]{data-label="fig:dcolcyl"}](dcol_cyl.eps){width="6cm"} ![Same as Fig.
\[fig:dcolcyl\] but for a filament embedded in a slab, with half-thickness five times the radius of the filament.[]{data-label="fig:dcolslab"}](dcol_slab.eps){width="6cm"} In addition to their radial profiles, we have also calculated the column density profiles produced by the non-isothermal, rotating filaments in equilibrium presented in the previous sections, since the column density is a critical parameter for comparison with observations. For the case of isolated filaments, the total column density at different impact parameters $\chi$ can be calculated directly by integrating (either analytically or numerically) their density profiles along the line of sight. As a general rule, if the volume density $\rho$ is proportional to $r^{-p}$, then the column density $\Sigma (\chi)$ is proportional to $\chi^{1-p}$. This result holds not only for Ostriker filaments (see also Appendix \[sec:a2\]) and more general Plummer-like profiles (e.g. see Eq. 1 in Arzoumanian et al. 2011), but also for the new rotating, non-isothermal configurations explored in this paper. Recent observations seem to indicate that the filaments typically found in molecular clouds present column density profiles with $\Sigma(\chi) \sim \chi^{-1}$, i.e. $p\simeq 2$ (see Arzoumanian et al. 2011; Palmeirim et al. 2013), a value that we use for comparison hereafter. An aspect often underestimated in the literature is the influence of the filament envelope on the determination of column density profiles. In particular, if a filament is embedded in (and pressure-truncated by) a large molecular cloud, the line of sight also intercepts some cloud material whose contribution to the column density could be non-negligible (see also Appendix \[sec:a2\]), as previously suggested by different observational and theoretical studies (e.g. Stepnik et al. 2003; Juvela et al. 2012). In order to quantify the influence of the ambient gas on the derived column densities, here we consider two prototypical cases: 1.
The filament is embedded in a co-axial cylindrical molecular cloud with radius $R_{m}$. 2. The filament is embedded in a sheet with half-thickness $R_{m}$. Note that, if the filament is not located in the plane of the sky, the quantity that enters the calculation of the column density is not $R_{m}$ itself, but $R_{m}'=R_{m}/\cos\beta$, where $\beta$ is the angle between the axis of the filament and this plane. ![Fractional contribution of filament and envelope to the total column density. The model shown here corresponds to the blue line of Fig. \[fig:dcolcyl\]: the rotation profile is $\Omega=1-x/10$ and the filament is surrounded by a cylindrical envelope with R$_m$/R$_c$=5.[]{data-label="fig:fil_env"}](fil_env.eps){width="6cm"} Following the results presented in Sect. \[subsec:rotfils\]-\[subsec:nonisofils\], we have investigated the observational properties of three representative filaments in equilibrium obeying different rotation laws, namely $\Omega_1(x)=x/10$, $\Omega_2(x)=1-x/10$ and $\Omega_3(x)=0.5$, covering both differential and uniform rotational patterns. The contribution of the envelope to the observed column densities is determined by its depth relative to the truncation radius of the filament, as well as by its shape. To illustrate this behaviour, we have first assumed that these filaments are pressure-truncated at $x=3$ (a conservative estimate). Moreover, we have considered these filaments to be embedded in the two different cloud configurations presented before, that is a slab and a cylinder, both with extensions $R_{m}$ corresponding to five times the radius of the filament (i.e. R$_{m}$/R$_{c}=5$). In both cases, we have assumed that the density of the envelope is constant and equal to the filament density at its truncation radius, i.e. at $x=3$.
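As a quick numerical sanity check of the $\Sigma(\chi)\propto\chi^{1-p}$ scaling quoted above for isolated filaments, the line-of-sight integral can be evaluated directly. The sketch below is purely illustrative: the idealized power-law profiles and the integration cut-off `zmax` are assumptions of the example, not one of the equilibrium models computed in this paper.

```python
# Illustrative check that a volume density rho ~ r^{-p}, integrated along the
# line of sight, yields a column density Sigma(chi) ~ chi^{1-p}.
import numpy as np

def column_density(chi, p, zmax=1.0e4, n=2_000_001):
    """Sigma(chi) = int_{-zmax}^{zmax} (chi^2 + z^2)^(-p/2) dz, trapezoidal rule."""
    z = np.linspace(-zmax, zmax, n)
    f = (chi**2 + z**2) ** (-p / 2.0)
    dz = z[1] - z[0]
    return dz * (f.sum() - 0.5 * (f[0] + f[-1]))

for p in (2, 4):
    s1, s2 = column_density(1.0, p), column_density(2.0, p)
    slope = np.log(s2 / s1) / np.log(2.0)  # measured log-log slope
    print(f"p = {p}: measured slope {slope:.3f} (expected {1 - p})")
```

For $p=2$ and $p=4$ the measured log-log slopes approach $-1$ and $-3$, respectively, as stated in the text.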
The recovered column densities for the models presented above, as a function of the impact parameter $\chi$, for the cylindrical and slab geometries are shown in Figs. \[fig:dcolcyl\] and \[fig:dcolslab\], respectively. In both cases, the impact parameter $\chi$ is measured in units of $H$. The results are compared with the expected column densities of two infinite filaments described by an Ostriker-like profile (case $p=4$) and a Plummer-like profile with $\rho \propto r^{-2}$ at large radii (case $p=2$), as suggested by observations. From these comparisons it is clear that all the explored configurations present shallower profiles than the expected column density of their equivalent Ostriker-like filament. This is due to the constant value of the density in the envelope, which tends to wash out the density gradient present in the filament if the envelope radius is large. Moreover, the column densities expected for embedded filaments described by rotation laws like $\Omega_1(x)$ and $\Omega_3(x)$ (the latter only if the filament is embedded in a slab) exhibit a radial dependence even shallower than these $p=2$ models at large impact parameters. The relative contribution of filament and envelope is outlined in Fig. \[fig:fil\_env\]. The model shown there corresponds to the blue line of Fig. \[fig:dcolcyl\]: the rotation profile is $\Omega=1-x/10$ and the filament is surrounded by a cylindrical envelope with R$_m$/R$_c$=5. As expected, at larger projected radii the observed radial profiles are entirely determined by the total column density of the cloud. Finally, it is important to remark that the expected column density profiles for the models presented above and, particularly, their agreement with these shallow Plummer-like profiles with $p=2$, depend significantly on the selection of the truncation radius R$_{c}$ and the extent of the filament envelope R$_{m}$. This fact is illustrated in Fig.
\[fig:cyl\_models\], which explores the expected slope of the observed column density profiles for pressure-truncated, isothermal filaments following a rotation law like $\Omega_2(x)=1-x/10$, under different configurations of both their truncation and cloud radii. These slopes were calculated as the average of the local slope of the column density at impact parameters $\chi\le R_{c}$, that is, where our models are sensitive to the distinct contributions of both filaments and envelopes. As expected, the larger the cloud depth compared to the filament, the flatter the expected profile. Within the range of values explored in the figure, multiple combinations of the R$_{c}$ and R$_{m}$ parameters present slopes consistent with a power-law dependence with $p=2$. Although less prominently, a few additional combinations can also be obtained in the case of filaments with rotation laws like $\Omega_1(x)=x/10$ or $\Omega_3(x)=0.5$ (not shown here). Unless the rotational state of a filament is known and the contribution of the cloud background is properly evaluated, this degeneracy between the parameters defining the cloud geometry and the relative weights of the filament and its envelope makes any stability analysis based solely on the radial mass distribution inconclusive. Conclusions {#sec:conc} =========== The results presented in this paper explore whether the inclusion of different rotational patterns affects the stability of gaseous filaments similar to those observed in nearby clouds. Our numerical results show that, even in configurations involving slow rotation, the presence of centrifugal forces has a stabilizing effect, effectively sustaining large amounts of gas against the gravitational collapse of these objects. These centrifugal forces, however, promote the formation of density inversions that are dynamically unstable at large radii, causing the inner parts of these rotating filaments to detach from their outermost layers.
To prevent the formation of these instabilities, as well as the asymptotic increase of their linear masses at large radii, any equilibrium configuration for these rotating filaments would require them to be pressure-truncated at relatively low radii. In order to allow a proper comparison with observations, we have also computed the expected column density profiles for different pressure-truncated, rotating filaments in equilibrium. To reproduce their profiles under realistic conditions, we have also considered these filaments to be embedded in a homogeneous cloud with different geometries. According to our calculations, the predicted column density profiles for such rotating filaments and their envelopes tend to be much shallower than those expected for Ostriker-like filaments, resembling the results found in observations of nearby clouds. Unfortunately, we found that different combinations of rotating configurations and envelopes could reproduce these observed profiles, complicating this comparison. To conclude, the stability of an observed filament cannot be judged by a simple comparison between observations and the predictions of the Ostriker profile. We have shown in this paper that density profiles much flatter than the Ostriker profile, and linear masses significantly larger than the canonical value of $\simeq$ 16.6 M$_\odot$ pc$^{-1}$, can be obtained for rotating filaments in equilibrium surrounded by an envelope. Detailed descriptions of the filament kinematics and rotational state, in addition to the analysis of the projected column density distributions, are therefore needed to evaluate the stability and physical state of these objects. Acknowledgements {#acknowledgements .unnumbered} ================ This publication is supported by the Austrian Science Fund (FWF). We wish to thank the referee, Dr Chanda J. Jog, for the careful reading of the paper and for the very useful report.
Andr[é]{}, P., Men’shchikov, A., Bontemps, S., et al. 2010, A&A, 518, L102 Arzoumanian, D., Andr[é]{}, P., Didelon, P., et al. 2011, A&A, 529, L6 Breysse, P. C., Kamionkowski, M., & Benson, A. 2014, MNRAS, 437, 2675 Caselli, P., Benson, P. J., Myers, P. C., & Tafalla, M. 2002, ApJ, 572, 238 Christodoulou, D. M., & Kazanas, D. 2007, arXiv:0706.3205 Endal, A. S., & Sofia, S. 1978, ApJ, 220, 279 Freundlich, J., Jog, C. J., & Combes, F. 2014, A&A, 564, A7 Hacar, A., & Tafalla, M. 2011, A&A, 533, A34 Hacar, A., Tafalla, M., Kauffmann, J., & Kovacs, A. 2013, A&A, 554, A55 Hansen, C. J., Aizenman, M. L., & Ross, R. L. 1976, ApJ, 207, 736 Horedt, G. P. 2004, Polytropes - Applications in Astrophysics and Related Fields, Astrophysics and Space Science Library, 306 Inagaki, S., & Hachisu, I. 1978, PASJ, 30, 39 Juvela, M., Malinen, J., & Lunttila, T. 2012, A&A, 544, A141 Kaur, A., Sood, N. K., Singh, L., & Singh, K. D. 2006, Ap&SS, 301, 89 Loren, R. B. 1989, ApJ, 338, 925 Molinari, S., Swinyard, B., Bally, J., et al. 2010, A&A, 518, L100 Oproiu, T., & Horedt, G. P. 2008, ApJ, 688, 1112 Ostriker, J. 1964, ApJ, 140, 1056 Palmeirim, P., Andr[é]{}, P., Kirk, J., et al. 2013, A&A, 550, A38 Recchi, S., Hacar, A., & Palestini, A. 2013, A&A, 558, A27 (Paper I) Robe, H. 1968, Annales d’Astrophysique, 31, 549 Robe, H. 1979, A&A, 75, 14 Schneider, S., & Elmegreen, B. G. 1979, ApJS, 41, 87 Simon, S. A., Czysz, M. F., Everett, K., & Field, C. 1981, American Journal of Physics, 49, 662 Stepnik, B., Abergel, A., Bernard, J.-P., et al. 2003, A&A, 398, 551 Stod[ó]{}lkiewicz, J. S. 1963, Acta Astronomica, 13, 30 Tassoul, J.-L. 1978, Princeton Series in Astrophysics (Princeton: Princeton University Press) Uchida, Y., Fukui, Y., Minoshima, Y., Mizuno, A., & Iwata, T. 1991, Nature, 349, 140 Veugelen, P.
1985, Ap&SS, 109, 45 On the column density of filaments embedded in molecular clouds {#sec:a2} =============================================================== In this appendix we derive a formula to calculate the column density of filaments embedded in large molecular clouds. To this end, let us first consider the general case of an isothermal filament described by the Ostriker solution $\theta_i(x)=\left[1+x^2\right]^{-2}$. If we call $z$ the (normalized) distance between the plane of the sky containing the filament axis and a generic parallel plane, then the distance between the point $(\chi,z)$ (where $\chi$ is the normalized impact parameter) and the axis is simply $x=\sqrt{\chi^2+z^2}$. As is well known, if we assume that the filament extends to infinity, the column density is: $$\begin{aligned} \Sigma(\chi)&=\int_{-\infty}^\infty \theta_i(\chi,z)dz=\int_{-\infty}^\infty \frac{dz}{(1+z^2+\chi^2)^2}\notag\\ &=\frac{1}{2}\frac{\pi}{(\chi^2+1)^{3/2}}. \end{aligned}$$ However, the cylinder could be embedded in a more extended cloud, with radius $R_m$. If we take for simplicity the cloud to be coaxial with the filament, the situation is shown in Fig. \[fig:cden\_scheme\]. ![Section of the filament (with radius $R_c$), embedded in a (cylindrical, co-axial) molecular cloud with radius $R_m$.[]{data-label="fig:cden_scheme"}](cden.ps){width="9cm"} Based on this figure (and due to the symmetry of the problem), we can write the column density as: $$\Sigma(\chi)=2\int_{0}^{z_o} \theta(\chi,z)dz= 2\int_{z_i}^{z_o} \theta_b dz + 2 \int_0^{z_i}\frac{dz}{(1+z^2+\chi^2)^2}.$$ Here we have defined (see also Fig. \[fig:cden\_scheme\]): $$z_o=\sqrt{R_m^2-\chi^2},\;\;\;\;z_i=\sqrt{R_c^2-\chi^2},\;\;\;\; \theta_b=\theta(R_c),$$ and assumed that the density of the molecular cloud is constant and equal to $\theta(R_c)$.
The result is: $$\begin{aligned} \Sigma(\chi)&=2\theta_b (z_o-z_i)+\frac{z_i}{(\chi^2+1)(R_c^2+1)} +\frac{\tan^{-1} \sqrt{\frac{R_c^2-\chi^2}{\chi^2+1}}}{(\chi^2+1)^{3/2}}, \notag\\ &=2\frac{\sqrt{R_m^2-\chi^2}-\sqrt{R_c^2-\chi^2}}{(1+R_c^2)^2}+ \frac{\sqrt{R_c^2-\chi^2}}{(\chi^2+1)(R_c^2+1)} +\notag\\& +\frac{\tan^{-1} \sqrt{\frac{R_c^2-\chi^2}{\chi^2+1}}}{(\chi^2+1)^{3/2}}. \end{aligned}$$ It is easy to see that, in the limit of $R_c$ (and $R_m$) tending to infinity, we recover the column density profile found above for the infinite cylinder. Another possibility is to assume that the cylinder is immersed in a slab of gas with half-thickness $R_m$. The derivation of the column density remains the same; the only difference is that $z_o$ is now fixed (equal to $R_m$) and no longer depends on $\chi$. For filaments whose profiles are determined numerically (like the ones found in Sect. \[sec:rotfil\]) the integral: $$\int_0^{z_i}\theta(\chi,z)dz,$$ (where as usual $\chi$ and $z$ are related to $x$ by $x=\sqrt{\chi^2+z^2}$) must be calculated numerically. The contribution to the column density due to the surrounding molecular cloud remains unaltered. [^1]: [email protected] [^2]: [email protected] [^3]: [email protected] [^4]: It is worth stressing that if the filament forms an angle $\beta \neq 0$ with the plane of the sky, an observed radial velocity gradient $\frac{\Delta V_r}{\Delta r}$ corresponds to a real gradient that is $\frac{1}{\cos \beta}$ times larger. [^5]: In Paper I, we considered two types of temperature profiles as a function of the filament radius, i.e. $\tau_1(x)=1+Ax$ and $\tau_2(x)=[1+(1+B)x]/(1+x)$, whose constants defined their respective temperature gradients as functions of the normalized radius. Both cases are based on observations. In this paper we will only consider the linear law $\tau=\tau_1(x)$; results obtained with the asymptotically constant law are qualitatively the same.
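The closed-form column density derived in Appendix \[sec:a2\] for a filament embedded in a co-axial cylindrical cloud can be verified against a direct numerical integration of the same density distribution. The sketch below is illustrative only; the values of $R_c$, $R_m$ and $\chi$ are arbitrary samples, not fits to any data.

```python
# Illustrative check of the embedded-filament column density formula
# (Ostriker interior theta_i = (1+x^2)^-2 plus a constant-density envelope).
import math

def sigma_analytic(chi, Rc, Rm):
    """Closed-form Sigma(chi) from the appendix, chi < Rc < Rm assumed."""
    zo = math.sqrt(Rm**2 - chi**2)
    zi = math.sqrt(Rc**2 - chi**2)
    theta_b = (1.0 + Rc**2) ** -2
    return (2.0 * theta_b * (zo - zi)
            + zi / ((chi**2 + 1.0) * (Rc**2 + 1.0))
            + math.atan(math.sqrt((Rc**2 - chi**2) / (chi**2 + 1.0)))
              / (chi**2 + 1.0) ** 1.5)

def sigma_numeric(chi, Rc, Rm, n=200_000):
    """Direct line-of-sight integration of the same density distribution."""
    zo = math.sqrt(Rm**2 - chi**2)
    zi = math.sqrt(Rc**2 - chi**2)
    theta_b = (1.0 + Rc**2) ** -2
    total = 2.0 * theta_b * (zo - zi)          # constant-density envelope part
    dz = zi / n                                 # filament part, midpoint rule
    for k in range(n):
        z = (k + 0.5) * dz
        total += 2.0 * dz / (1.0 + chi**2 + z**2) ** 2
    return total

for chi in (0.0, 0.5, 1.5):
    a, b = sigma_analytic(chi, 3.0, 15.0), sigma_numeric(chi, 3.0, 15.0)
    print(f"chi = {chi}: analytic {a:.6f}, numeric {b:.6f}")
```

In the limit $R_c, R_m \rightarrow \infty$ the same routine recovers the infinite-cylinder result $\frac{\pi}{2}(\chi^2+1)^{-3/2}$ quoted at the beginning of the appendix.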
--- abstract: 'We theoretically map out the ground state phase diagram of interacting dipolar fermions in a one-dimensional lattice. Using bosonization theory in the weak coupling limit at half filling, we show that one can construct a rich phase diagram by changing the angle between the lattice orientation and the polarization direction of the dipoles. In the strong coupling limit, at a general filling factor, we employ a variational approach and find the emergence of Wigner crystal phases. The structure factor provides clear signatures of the particle ordering in the Wigner crystal phases.' author: - 'Theja N. De Silva[^1]' title: 'Phase diagram of two-component dipolar fermions in one-dimensional optical lattices' --- I. Introduction =============== The recent experimental progress in creating degenerate cold polar atoms/molecules with large dipole moments has attracted considerable attention due to the rich quantum mechanical phenomena they can exhibit [@edp1; @edp2; @edp3; @edp4; @edp5; @edp6; @edp7; @edp8]. In contrast to the contact interaction, the long-range, anisotropic dipole-dipole interaction between dipolar molecules offers promising directions for exploring novel and strongly correlated many-body physics. The experimental exploration of dipolar physics started with the observation of Bose-Einstein condensation of $^{52}$Cr and $^{164}$Dy magnetic dipolar atoms [@edp1; @edp7]. Later, showing promising indications of creating quantum degenerate mixtures of dipolar molecules, a dense gas of $^{40}$K$^{87}$Rb and a dual-species Bose-Einstein condensate of $^{87}$Rb and $^{133}$Cs were realized experimentally [@edp2; @edp8]. The realization of dipolar molecules in an optical lattice [@add] and the first creation of a quantum degenerate dipolar Fermi gas of $^{161}$Dy have just been reported [@dy].
For molecules with permanent electric or magnetic dipole moments, the range of the dipole-dipole interaction can be much larger than typical optical lattice spacings. Optical lattices provide rich tunable ingredients, such as geometry, dimensionality, and interactions, so that one can engineer novel many-body states [@tut]. These states include various superfluid states such as the $p_x +i p_y$ and $d$-wave superfluid phases, supersolid phases, vortex lattices, various Wigner crystal phases, and charge-density wave and spin-density wave phases [@tdp1; @tdp2; @tdp3; @tdp4; @tdp5; @tdp6; @tdp7; @tdp8; @tdp9; @tdp10; @tdp11; @tdp12; @tdp13; @tdp14; @tdp15; @tdp16; @tdp17; @tdp18; @tdp19; @tdp20; @tdp21; @n1; @n2; @n3; @n4; @n5; @n6; @n7; @n8; @n9; @n10]. Further, cold polar molecules in optical lattices provide a platform for novel spin models and possible applications in quantum computing [@sm; @qc]. The physics of non-polar atoms with only contact interactions in optical lattices can be reasonably described by the Hubbard model [@jak]. In the Hubbard model, the atom-atom interaction is approximated by an on-site interaction $U$. However, as the dipole-dipole interaction is long-range, experiments with polar atoms or molecules fall outside the range of validity of the Hubbard model. A natural extension of the Hubbard model comes from including long-range, off-site interactions between the molecules. One-dimensional many-body phenomena, such as the breakdown of Fermi liquid theory and spin-charge separation, can be understood in the framework of bosonization theory [@book1; @book2; @rev3]. The bosonization theory is valid asymptotically at small momenta and low energies. In this Letter, we study the phase diagram of two-component, one-dimensional lattice fermions. While we use bosonization theory in the weak coupling limit, a variational approach is employed in the strong coupling limit to study the possible Wigner crystal states.
Using bosonization theory at half filling, we show that one can achieve a rich phase diagram by changing the polarization direction with respect to the lattice orientation. The weak coupling phase diagram includes spin-density wave, charge-density wave, singlet superfluid and triplet superfluid phases. In the strong coupling limit, at smaller filling factors, we find that the long-range interaction induces a Wigner crystal phase. The structure factor, or density-density correlation function, which can be measured using Bragg scattering experiments, provides clear signatures of the Wigner crystal phase. The Letter is organized as follows. In section II, we discuss the effective lattice model for dipolar fermions in one dimension. In section III, we present the bosonization theory for weakly interacting fermions in the presence of a long-range off-site interaction. Assuming that the dipoles are polarized along the applied field and taking the angle between the lattice direction and the applied field as a free parameter, the weak coupling ground state phase diagram at half filling is presented in section IV. Section V discusses the effect of inter-chain coupling in realistic experimental settings. In section VI, we consider the strong coupling limit and use a variational approach to study the possible Wigner crystal state away from half filling. Finally, a summary is provided in section VII. II. The model ============= We consider a system of two-component electric or magnetic dipoles confined in a one-dimensional optical lattice oriented along the $x$-direction.
The Hamiltonian for the fermionic atoms in the optical lattice is given by $$\begin{aligned} H = \sum_\sigma \int dx \psi^\dagger_\sigma (x)\biggr[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V_0(x)\biggr]\psi_\sigma(x) +\frac{1}{2}V_{ci}\int dx \psi^\dagger_\uparrow (x)\psi^\dagger_\downarrow (x)\psi_\downarrow(x)\psi_\uparrow(x) \\ \nonumber + \frac{1}{2}\sum_{\alpha,\beta,\gamma,\delta}\int dx dx^\prime \psi^\dagger_\alpha (x)\psi^\dagger_\beta(x^\prime)\tilde{V}_{dd}(x-x^\prime)\psi_\gamma(x^\prime)\psi_\delta(x),\label{model}\end{aligned}$$ where $\psi^\dagger_\sigma(x) [\psi_\sigma(x)]$ is a fermion field operator that creates (annihilates) a Fermi atom with mass $m$ and pseudo-spin $\sigma = \uparrow, \downarrow$ at position $x$. Here the pseudo-spin $\sigma$ refers to the two hyperfine states of the atom. The optical lattice potential provided by the counter-propagating lasers is $V_0(x) = V_0\sin^2(kx)$, with amplitude $V_0$ and wavevector $k = 2\pi/\lambda$, where $\lambda$ is the laser wavelength corresponding to a lattice period $d = \lambda/2$. The s-wave contact interaction is $V_{ci} = 4\pi\hbar^2a_s/m$, with s-wave scattering length $a_s$, and the effective one-dimensional dipole-dipole interaction $\tilde{V}_{dd}(x)$ is related to the three-dimensional dipole-dipole interaction, $$\begin{aligned} V_{dd}(r) = D^2 \frac{1-3 \cos^2 \theta_d}{r^3}\label{dd3D}\end{aligned}$$ where $\theta_d$ is the angle between the 1D lattice in the $x$-direction and the dipole moments of the atoms, aligned along the applied homogeneous electric or magnetic field in the $x$-$z$ plane. The strength of the dipole-dipole interaction is $D^2 = d_0^2/(4 \pi \epsilon_0)$ and $D^2 = \mu_0 d_0^2/(4 \pi)$ for electric and magnetic dipoles, respectively. Here $\epsilon_0$ is the electric permittivity, $\mu_0$ is the magnetic permeability, and $d_0$ is the dipole moment.
For a tight one-dimensional geometry, the level spacing in the transverse direction is much larger than the energy per particle in the axial direction $x$. Integration of the dipole-dipole interaction in Eq. (\[dd3D\]) over the transverse direction leads to the effective one-dimensional dipolar interaction [@tdp6; @tdp9] $$\begin{aligned} \tilde{V}_{dd}(x) = -D^2 \frac{1+3 \cos (2\theta_d)}{x^3}.\label{dd1D}\end{aligned}$$ The single-atom energy eigenstates are Bloch states, and localized Wannier functions are superpositions of Bloch states. For a deep optical lattice with atoms trapped in the lowest vibrational states $w(x) = e^{-x^2/(2l^2)}/\sqrt{l\pi^{1/2}}$ with $l = \sqrt{\hbar/(m\omega)}$, the field operators $\psi_{\sigma}$ can be expanded as $\psi_\sigma = \sum_ic_{i\sigma} w(x-x_i)$. The oscillator length $l$ is defined through $\hbar \omega = \sqrt{4E_RV_0}$. Here $\omega$ is the oscillation frequency, obtained using the harmonic approximation around the minima of the optical potential wells at each lattice site. The recoil energy is $E_R = \hbar^2k^2/(2m)$. In terms of the new fermionic operators $c_{i\sigma}$ at lattice site $i$, the effective lattice Hamiltonian for the polar fermionic system reads [@tut; @tdp14], $$\begin{aligned} H = -t \sum_{\langle ij \rangle,\sigma} (c_{i\sigma}^\dagger c_{j\sigma} + h. c) + U \sum_i n_{i\uparrow}n_{i\downarrow} + \sum_{i,r} V_{ir} n_{i+r}n_{i}.\label{model2}\end{aligned}$$ Here $t = \int dx w^\ast(x-x_i)[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}+ V_0(x)]w(x-x_j)$ is the hopping matrix element between neighboring sites $i$ and $j$, $U = 4\pi\hbar^2a_s \int dx |w(x)|^4/m$ is the on-site interaction of two atoms at site $i$, and $V_{ir} = \int dx dx^\prime |w(x-x_i)|^2 \tilde{V}_{dd}(x-x^\prime) |w(x^\prime-x_r)|^2$ is the off-site interaction of two atoms at sites $i$ and $r$.
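It is instructive to evaluate the closed forms of these matrix elements in recoil units. The sketch below assumes a lattice depth $V_0 = s E_R$ (so $\hbar\omega = 2\sqrt{s}\,E_R$) and uses the Gaussian-Wannier expressions $t = e^{-\pi^2 V_0/(2\hbar\omega)}\hbar\omega/2$ and $U \propto 1/l \propto s^{1/4}$; the sample depths are arbitrary and serve only to illustrate the exponential sensitivity of $t$ to the laser intensity:

```python
# Sketch (recoil units E_R; lattice depth s = V0/E_R is an assumed parameter)
# of how the Hubbard parameters scale with lattice depth: t is exponentially
# sensitive to laser intensity, while U grows only as s^(1/4).
import math

def t_over_ER(s):
    """t = exp(-pi^2 V0/(2 hbar w)) * hbar w / 2, with hbar w = 2 sqrt(s) E_R."""
    hw = 2.0 * math.sqrt(s)                       # hbar*omega in units of E_R
    return math.exp(-math.pi**2 * s / (2.0 * hw)) * hw / 2.0

def U_relative(s, s_ref=5.0):
    """U ~ 1/l ~ omega^{1/2} ~ s^{1/4}, normalized to its value at s_ref."""
    return (s / s_ref) ** 0.25

for s in (5.0, 10.0, 20.0):
    print(f"s = {s:4.1f}: t/E_R = {t_over_ER(s):.2e}, U/U(s=5) = {U_relative(s):.2f}")
```

Between $s=5$ and $s=20$ the hopping drops by roughly two orders of magnitude while $U$ grows only by a factor $\sqrt{2}$, in line with the statement below that the tunneling energy is exponentially sensitive to the laser intensity whereas the interactions are only weakly sensitive.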
Apart from this “direct”-like off-site density-density interaction term, an “exchange”-like spin-spin interaction term is also present for dipolar gases [@kaden]. Assuming that dc electric and microwave fields in realistic experimental setups allow one to tune the “direct”-like interactions to be dominant [@alex], we neglect the spin-spin interaction term here. In terms of $\omega$, $V_0$, and $l$, the parameters read $t = e^{-\pi^2 V_0/(2\hbar\omega)}\hbar\omega/2$, $U = 4\pi\hbar^2a_s/(\sqrt{2\pi} ml)$, and $V_{ir} = -V [1+3 \cos(2 \theta_d)]/(|i-r|^3)$. Notice that the tunneling energy is exponentially sensitive to the laser intensity, whereas the interactions are only weakly sensitive. The on-site interaction can be either repulsive or attractive, depending on the sign of the scattering length $a_s$. Furthermore, the off-site interaction $V_{ir}$ can be adjusted to be positive (repulsive) or negative (attractive) by changing the direction of the applied field. Here $i-r \neq 0$ is a discrete variable that represents the lattice points. We consider both repulsive and attractive regimes, under the assumption that a purely attractive regime is achievable as a metastable state, as has been experimentally demonstrated for one-dimensional bosonic Cs atoms [@cs]. Perhaps the residual small spin-spin interaction term restores mechanical stability in the attractive regime. III. Bosonization theory ======================== For the asymptotic low-energy properties and the weak coupling regime of the system, the continuum limit is a good approximation.
We use standard bosonization techniques [@bosT] to map the Hamiltonian onto the continuum limit by introducing continuous fermion fields $c_{i\sigma}/\sqrt{d} \rightarrow \psi_{L\sigma}(x) + \psi_{R\sigma}(x)$ with $$\begin{aligned} \psi_{\eta\sigma}(x) = \frac{U_{\eta\sigma}}{\sqrt{2\pi\alpha}} e^{i\eta k_F x}e^{i/\sqrt{2} [\eta (\phi_n + \sigma \phi_\sigma)-\theta_n-\sigma \theta_\sigma]}.\label{e5}\end{aligned}$$ The fermion operator $\psi_{\eta\sigma}^\dagger(x)$ creates a Fermi atom of pseudo-spin $\sigma$ on the branch $\eta = R, L = \pm 1$ of the linearized spectrum $E(k) = v_F(\eta k-k_F)$, where $v_F = 2dt\sin(k_Fd)$ is the Fermi velocity. Here $R$ and $L$ refer to right movers and left movers, respectively. The parameter $\alpha$ is the standard bosonization short-distance cut-off, which is on the order of the lattice constant $d$. The Fermi wavevector is $k_F = \pi n/(2d)$, with particle density $n$. In the continuum limit, $x = jd$ and the length of the chain $L = Nd$ is kept finite, while we consider the limits $d \rightarrow 0$ and number of atoms $N \rightarrow \infty$. The discrete variable $j$ above labels the lattice points. The fields representing particle ($\nu =n$) and spin ($\nu = \sigma$) fluctuations are $\phi_\nu$ and $\theta_\nu$. They satisfy the commutator $[\phi_\mu(x), \theta_\nu(x^\prime) ] = -i\pi/2 \delta_{\mu,\nu}sgn(x-x^\prime)$. The Hermitian Klein factors $U_{\eta\sigma}$ satisfy the anticommutation relation $\{U_{\eta\sigma}, U_{\eta^\prime\sigma^\prime} \} = 2 \delta_{\eta\eta^\prime}\delta_{\sigma \sigma^\prime}$.
Introducing the velocities $v_\nu$ of the particle ($n$) and spin ($\sigma$) sectors and the Gaussian couplings $K_\nu$, and following the standard procedure, the 1D particle system can be represented by the sine-Gordon model [@sam1; @joha], $$\begin{aligned} H = \sum_{\nu =n, \sigma} \frac{v_\nu}{2\pi}\int_0^L dx \biggr[K_\nu (\partial_x\theta_\nu)^2 + \frac{1}{K_\nu}(\partial_x\phi_\nu)^2\biggr] \\ \nonumber + \frac{2g_{1\perp}}{(2\pi\alpha)^2}\int_0^L dx \cos[\sqrt{8} \phi_\sigma(x)] \\ \nonumber + \frac{2g_{3\perp}}{(2\pi\alpha)^2}\int_0^L dx \cos[q\sqrt{8} \phi_n(x) + \delta x] \\ \nonumber + \frac{2g_{3\parallel}}{(2\pi\alpha)^2}\int_0^L dx \cos[q\sqrt{8} \phi_n(x)+ \delta x]\cos[q\sqrt{8} \phi_\sigma(x)].\label{BH}\end{aligned}$$ Here we use the standard notation, $$\begin{aligned} v_\nu &=& v_F[(1+y_{4\nu}/2)^2-(y_\nu/2)^2]^{1/2} \\ \nonumber K_\nu &=& \biggr[\frac{1+y_{4\nu}/2+y_\nu/2}{1+y_{4\nu}/2-y_\nu/2}\biggr]^{1/2} \\ \nonumber g_\nu &=& g_{1\parallel}-g_{2\parallel} \mp g_{2\perp} \\ \nonumber g_{2\nu} &=& g_{2\parallel}\pm g_{2\perp} \\ \nonumber g_{4\nu} &=& g_{4\parallel} \pm g_{4\perp} \\ \nonumber y_\nu &=& g_\nu/(\pi v_F) ,\label{e7}\end{aligned}$$ where the upper sign refers to the particle sector ($n$) and the lower sign to the spin sector ($\sigma$). In standard bosonization language, the coupling constants $g_{i\parallel}$ and $g_{i\perp}$, with $i =1,...,4$, refer to the low-energy scattering processes of the interaction. The coupling $g_1$ couples two fermions on opposite sides of the Fermi surface, and the particles switch sides after the interaction. This process is called backward scattering or $2k_F$ scattering. The coupling $g_2$ couples two fermions on opposite sides of the Fermi surface which stay on the same side after the scattering. This process is called forward scattering. Notice that the effect of $g_2$ is included in the first term of Eq. (6) [@joha].
The coupling between two fermions on the same side of the Fermi surface is denoted by the coupling constant $g_4$. The subscripts $\parallel$ and $\perp$ refer to scattering between fermions with parallel and anti-parallel spins, respectively. The scattering processes corresponding to the coupling constants $g_{3\perp}$ and $g_{3\parallel}$ occur only in the presence of the lattice. These are the well-known umklapp processes, where momentum is conserved up to a reciprocal lattice vector. The parameters $\delta$ and $q$ control the filling factor $n = N/L$. In this section we treat the half-filling case, where $\delta =0$ and $q=1$ so that $n = 1$. In the weak coupling limit of our model in Eq. (6), all the scattering amplitudes can be written as follows [@sam1; @sam2; @sam3]. The amplitudes of the backward scattering are $g_{1\perp} = Ud +2d\sum_xV_x \cos(2k_Fx)$ and $g_{1\parallel} = 2d\sum_xV_x \cos(2k_Fx)$. Notice that we use $V_{ir} \rightarrow V_x$ to represent the discrete variable $|i - r| \rightarrow dx$. The amplitudes of the forward scattering are $g_{2\perp} = Ud - 2d\sum_xV_x \cos(2k_Fx)$ and $g_{2\parallel} =-g_{1\parallel}$. The amplitudes of the umklapp scattering are $g_{3\perp} = g_{1\perp}$ and $g_{3\parallel} = g_{1\parallel}$. The amplitudes of the remaining scattering processes are $g_{4\perp}= g_{2\perp}$ and $g_{4\parallel}= g_{2\parallel}$. In the weak coupling case, the velocities and the Gaussian couplings in the particle and spin sectors are $v_\nu K_\nu = v_F$, $v_n/K_n = v_F - g_n/\pi$, and $v_\sigma/K_\sigma = v_F - g_\sigma/\pi$. IV. Phase diagram ================= In the absence of the umklapp processes and in the limit $g_{1\perp}\rightarrow 0$, the Hamiltonian is quadratic. In this limit, the various correlation functions corresponding to different quantum phases can be easily calculated.
These correlation functions show non-universal power-law decay, with exponents depending on the Gaussian couplings $K_n$ and $K_\sigma$ [@joha]. However, in the presence of umklapp processes and a non-zero $g_{1\perp}$, one has to treat the quantum phase transitions using renormalization group techniques. ![The phase diagram of one-dimensional polarized dipolar fermions in the weak coupling limit. The angle $\theta_d$ is the polarization angle with respect to the 1D lattice orientation. We set the filling factor to $n = 1$ and include the long-range dipolar interaction up to 100 lattice sites. The abbreviated phases are SDW: spin-density wave, CDW: charge-density wave, TSF: triplet superfluid, and SSF: singlet superfluid.[]{data-label="pd"}](PDHF.eps){width="\columnwidth"} ![The effect of the interaction range. Panel (a) shows the boundary line between a charge-density wave phase and a spin-density wave phase for $m = 50$ (black) and $m = 1$ (gray). Panel (b) shows the boundary line between a spin-density wave phase and a triplet superfluid phase for $m = 50$ (black) and $m = 1$ (gray). Notice that $m =1$ represents only the nearest-neighbor interaction.[]{data-label="range"}](range.eps){width="\columnwidth"} In the present section, we consider the weak coupling limit at half filling. In the weak coupling limit, the scaling dimension of the $g_{3\parallel}$ term is always higher than that of the other non-linear terms in our model [@sam1; @joha]. Therefore, we set $g_{3\parallel} =0$ in the present study. The effect of $g_{1\perp}$ and $g_{3\perp}$ is taken into account using renormalization group (RG) equations, as has been done in Ref. [@sam1].
Changing the cut-off $\alpha \rightarrow e^{dl} \alpha$ with $l = \ln L$, the one-loop RG equations are given by $$\begin{aligned} \frac{dy_{\nu0}(l)}{dl} &=& -y^2_{\nu\phi}(l) \\ \nonumber \frac{dy_{\nu \phi}(l)}{dl} &=& -y_{\nu 0}(l) y_{\nu \phi}(l)\label{RGF}\end{aligned}$$ where $y_{n 0}(0) = 2 (K_n -1)$, $y_{\sigma 0}(0) = 2 (K_\sigma -1)$, $y_{n \phi}(0) = g_{3\perp}/(\pi v_n)$, and $y_{\sigma \phi}(0) = g_{1\perp}/(\pi v_\sigma)$. Notice that we consider the weak coupling limit at half-filling. In these limits, the RG equations for the particle and spin sectors are decoupled [@joha]. These equations determine the RG flow diagrams as presented in FIG. 2 of Ref. [@sam1]. The RG equations for the velocities in these limits are trivial, and the velocities have no effect on the scaling dimensions. Following the same arguments as in Ref. [@sam3], the weak coupling phase diagram at half-filling can be extracted from the RG flow diagram. For the spin-gap transition ($\nu =\sigma$), Eq. (8) gives $$\begin{aligned} y_{\sigma 0}(l) = \frac{y_{\sigma 0}(0)}{l\, y_{\sigma 0}(0)+1}.\label{e9}\end{aligned}$$ This shows that the spin gap opens when $y_{\sigma 0}(l) < 0$. In the weak coupling limit, where $g_\nu/(\pi v_F) \ll 1$, the Gaussian coupling $K_\nu = [1-g_\nu/(\pi v_\nu)]^{-1/2}$ can be approximated as $K_\nu \simeq 1+ g_\nu/(2 \pi v_\nu)$. In this limit, the condition $y_{\sigma 0}(l) < 0$ translates into $g_\sigma < 0$. Therefore, the phase boundary between the charge-density wave (CDW) and the spin-density wave (SDW) phases at half-filling is determined by $U = 4V[1 + 3\cos (2 \theta_d)] \sum_{m =1}(-1)^m/m^3$. The sum over $m$ controls the range of the long-range interaction. For example, $m =1$ represents only the nearest neighbor interaction. On the other hand, the condition for the charge gap is $g_{3\perp} > |g_n|$. This condition gives two possible phase boundaries; one is $g_{3\perp} = -g_n < 0$ and the other is $g_{3\perp} = g_n > 0$.
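The qualitative content of the one-loop equations above can be illustrated by direct numerical integration. The sketch below (illustrative initial conditions, not values taken from the Letter) shows the two regimes of the flow: for $y_{\nu 0}(0) > |y_{\nu\phi}(0)|$ the non-linear coupling scales to zero (gapless side), while for $y_{\nu 0}(0) < -|y_{\nu\phi}(0)|$ both couplings run to strong coupling (gapped side).

```python
# Illustrative Euler integration of the one-loop RG equations
#   dy0/dl  = -yphi^2,   dyphi/dl = -y0 * yphi
# with y0(0) = 2(K - 1) and yphi(0) = g_perp/(pi*v); the numbers below
# are arbitrary weak-coupling choices, not fitted to the model.

def rg_flow(y0, yphi, l_max=50.0, dl=1e-3):
    """Integrate the flow up to l_max, stopping if couplings grow to O(1)."""
    steps = int(l_max / dl)
    for _ in range(steps):
        y0_new = y0 - yphi**2 * dl
        yphi_new = yphi - y0 * yphi * dl
        y0, yphi = y0_new, yphi_new
        if abs(y0) > 1.0 or abs(yphi) > 1.0:
            break  # flow has left the weak-coupling regime (gapped side)
    return y0, yphi

# Gapless side: y0(0) > |yphi(0)|  ->  yphi renormalizes to zero.
y0_a, yphi_a = rg_flow(0.3, 0.1)
# Gapped side: y0(0) < -|yphi(0)|  ->  both couplings run to strong coupling.
y0_b, yphi_b = rg_flow(-0.3, 0.1)
```

Along the flow the combination $y_{\nu 0}^2 - y_{\nu\phi}^2$ is conserved, which is what separates the two behaviors.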
As there is no continuous symmetry breaking in one dimension, these transitions, associated with the opening of a charge gap, are not conventional ordering transitions: the correlation functions retain power law decay. These Berezinskii-Kosterlitz-Thouless type transitions are due to the SU(2) and hidden SU(2) symmetries in the particle (charge) sector [@bkt1; @bkt2]. The boundary between the CDW phase and the singlet superfluid (SSF) phase is given by the conditions $U < 0 $ and $\sum_x V[1+3 \cos (2\theta_d)]\cos(2k_Fx) =0$. At half-filling, this condition translates into $U < 0$ and $\cos (2\theta_d) = -1/3$. The phase boundary between the SDW phase and the triplet superfluid (TSF) phase is determined by the conditions $U > 0$ and $U = -4V [1+3 \cos(2\theta_d)]\sum_{m=1} (-1)^m/m^3$. Similar to the quadratic Hamiltonian [@joha], a Gaussian type transition takes place between two gapped phases when $y_{n\phi} = 0$ and $y_{n0} <0$. Since the non-linear term vanishes on this Gaussian transition line, this transition between the SSF and TSF phases does not emerge from the RG equations [@sam1]. Instead, the scaling dimensions on the Gaussian line determine the transition at $g_n < 0$ on the $g_{3\perp} = 0$ line. Therefore, the phase boundary between the SSF and TSF phases is given by $U = 4V [1+3 \cos(2\theta_d)] \sum_{m =1}(-1)^m/m^3$ and $\cos(2\theta_d) < -1/3$. The resulting phase diagram in the $U-\theta_d$ plane for $2V =1 $ is shown in FIG. \[pd\]. All the phases in the rich phase diagram in FIG. \[pd\] can be constructed experimentally just by fixing the on-site interaction (i.e., fixing the laser intensity of the counter-propagating lasers) and then changing the polarization direction with respect to the lattice orientation. However, the interactions have to be weak and the filling factor must be unity. By controlling the total number of particles in experiments, the filling factor at the center of the lattice can be set to unity.
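The boundary expressions above are straightforward to evaluate numerically; note that the full long-range sum converges to $\sum_{m\geq1}(-1)^m/m^3 = -\tfrac{3}{4}\zeta(3) \approx -0.9015$. A short sketch (the function names and the sample values $V$, $\theta_d$ are illustrative choices, not from the Letter):

```python
import math

def alternating_sum(n_max=100000, p=3):
    """Partial sum of sum_{m=1}^{n_max} (-1)^m / m^p; for p = 3 this
    converges rapidly to -(3/4)*zeta(3) ~ -0.9015."""
    return sum((-1)**m / m**p for m in range(1, n_max + 1))

def cdw_sdw_boundary(theta_d, V=0.5, n_max=100000):
    """U on the CDW-SDW boundary at half-filling, per the weak-coupling
    formula U = 4V[1 + 3cos(2*theta_d)] * sum_m (-1)^m/m^3."""
    return 4.0 * V * (1.0 + 3.0 * math.cos(2.0 * theta_d)) * alternating_sum(n_max)

# Angle at which the off-site dipolar coupling changes sign, cos(2*theta) = -1/3:
theta_c = 0.5 * math.acos(-1.0 / 3.0)   # ~0.955 rad, i.e. ~54.7 degrees
```

On the line $\theta_d = \theta_c$ the factor $1+3\cos(2\theta_d)$ vanishes, so the CDW-SDW boundary value of $U$ goes to zero there, consistent with the CDW-SSF condition quoted above.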
In-situ density imaging or Bragg spectroscopy can be used to distinguish the charge-density wave and the spin-density wave phases. The superfluid phases can be detected via pair correlation measurements using noise spectroscopy [@alt]. Notice that the boundary between the singlet superfluid and charge-density wave phases does not depend on the range of the interaction. However, all of the other boundaries depend on the range of the long-range dipolar interaction. The shift of the boundaries due to the long-range part of the interaction is shown in FIG. \[range\] for a fixed $V = 0.5$. The interaction strength $V$ is always positive, so the qualitative features of the phase diagram do not change with $V$.

V. The effect of inter-chain coupling
=====================================

In the field of cold-atomic physics, one-dimensional systems are realized by creating an array of many 1D tubes. Even though the tunneling between tubes is absent for well-separated chains, the long-range dipolar-dipolar interaction can still cause coupling between tubes. In the presence of inter-chain coupling, we must modify our original Hamiltonian in Eq. (1) by adding the inter-chain interaction term, $$\begin{aligned} H_I = \frac{1}{2}\sum_{\alpha,\beta,\gamma,\delta,j, j^\prime}\int dx dx^\prime \psi^\dagger_{j,\alpha} (x)\psi^\dagger_{j^\prime,\beta}(x^\prime)V_{dd}(x-x^\prime)\psi_{j,\gamma}(x^\prime)\psi_{j^\prime,\delta}(x),\label{e10}\end{aligned}$$ where $j \neq j^\prime$ is the chain index. Generalizing the 1D Wannier function $w(x) \rightarrow w(x, y) = e^{-[x^2 + (y-j R)^2]/(2l^2)}/\sqrt{l\pi^{1/2}}$, the inter-chain coupling can be approximated by $V_\perp = 2 D^2 \sin^2\theta_d/R^2$, where we assume that the neighboring chain is a distance $R$ away in the $y$-direction. In the absence of the lattice, the planar array of 1D tubes has been studied using bosonization theory [@tubes].
It has been shown that the inter-chain interaction is irrelevant, except when $\theta_d \simeq \theta_c$, where the long-range interaction vanishes along the lattice in the $x$-direction. When $\theta_d \simeq \theta_c$, the 1D system approaches the boundary between the CDW and SSF phases, and the long-range positive interaction between neighboring chains induces a type of CDW phase in the transverse direction [@tubes]. Even in the presence of a lattice, the inter-chain coupling can induce an inter-chain CDW phase. As a result, the intra-chain CDW-SSF phase boundary shifts toward the CDW phase, allowing the SSF phase to stabilize over the CDW phase in a larger region of the phase diagram.

VI. The strong coupling limit
=============================

As has been shown above, for any positive on-site and off-site interactions, the system produces an insulating phase at half-filling. For any commensurate filling factor away from half-filling, the umklapp scattering is an irrelevant perturbation [@umk]. For the 1D Hubbard model without the off-site interactions, the system remains in a metallic phase at any filling factor away from half-filling. However, in the presence of long-range interactions, when the average particle spacing $1/n$ is comparable to the range of the interaction, atoms may form a self-organized pattern known as a Wigner crystal phase. The transition into this insulating phase occurs at the Luttinger parameter $K_n = n^2$.

![The variational parameter $\eta$ for the quarter filling case.[]{data-label="eta"}](eta.eps){width="\columnwidth"}

For a continuous 1D system, an interaction of the form $1/r^\beta$ has been studied in bosonization theory by treating the long-range forward scattering as a perturbation [@tsu]. This study shows that for $\beta > 0$, the forward scattering is an irrelevant perturbation. For $\beta = 1$, the Fourier transform of the interaction has a logarithmic divergence.
In this limit, the bosonization theory predicts the existence of quasi-Wigner crystal phases with $4k_F$ density correlations [@tsu]. At low filling factors, these $4k_F$ correlations are dominant over the $2k_F$ Friedel density correlations. The study in Ref. [@tsu] is based on a perturbative approach. Therefore, the existence of a quasi-Wigner crystal at larger interactions for $\beta = 3$ is not ruled out. Indeed, by investigating the structure factor using an exact diagonalization method, the existence of Wigner crystals in the strong coupling limit and at lower filling factors has been verified in Ref. [@tdp14]. This verification can be justified by the conditions that the dipolar-dipolar interaction satisfies $V_{dd}(r) > 0$, $V_{dd}(r) \rightarrow 0$ as $r \rightarrow \infty$, and $V_{dd}(r+1) + V_{dd}(r-1) \geq 2 V_{dd}(r)$ for any $r >1$. Since the bosonization techniques rely on a linear band dispersion, low energies, and long wavelengths, our approach above does not predict the Wigner crystal phase even away from half-filling. In order to study the possible existence of a Wigner crystal phase at low filling factors, we use a variational approach. Let us set $\theta_d =0$ so that the dipolar-dipolar interaction is purely repulsive. We consider the strong coupling regime where $U$, $V_{ir} \gg t$. As our motivation is to study whether Wigner crystal phases are favorable due to the long-range interaction, we consider the limit $U \rightarrow \infty$. In this limit, both double occupancy and mixing of different spin configurations are eliminated, so that we can neglect the local interaction term and suppress the spin index of the operators in Eq. (\[model2\]). In other words, the Wigner crystal phase associated with charge ordering in this limit can be described as spinless fermions on the lattice.
In Fourier space, the resulting Hamiltonian reads, $$\begin{aligned} H = \sum_k \epsilon_k c^\dagger_k c_k + \frac{1}{2L}\sum_q V(q) n_q n_{-q},\label{e11}\end{aligned}$$ where $\epsilon_k = -2t\cos(kd)$, $n_q = \sum_k c^\dagger_{k+q}c_k$, and $V(q) = 2(qd) K_1(dq)$ is the Fourier transform of the off-site interaction of the form $V_{i-j} = V [L/\pi \sin(\pi |i-j|/L)]^{-3}$. Here $K_1(x)$ is the first-order modified Bessel function of the second kind [@ino]. The open boundary conditions in realistic cold atom systems may cause edge localization phenomena if the number of lattice sites is small [@obc]. However, in the presence of a large number of lattice sites, we believe that these effects are absent. Therefore, we assume that the optical lattice obeys periodic boundary conditions and introduce a chord distance between sites $i$ and $j$ to implement the periodic boundary conditions. We consider commensurate filling factors of the form $n = N/L = 1/s$, so that one dipolar particle occupies each unit cell of size $s$ in the Wigner crystal phase. Following Ref. [@vale], we take our variational wave function in the form $|\psi(\eta) \rangle = \exp [-\eta \hat{T}] |\psi_0 \rangle$, where $\hat{T} = -(1/t) \sum_k \epsilon_k c^\dagger_kc_k$, $\eta$ is the variational parameter, and $$\begin{aligned} |\psi_0 \rangle = \prod_{k \in RBZ} \frac{1}{N_k}(c^\dagger_k + c^\dagger_{k+Q})|0\rangle.\label{e12}\end{aligned}$$ Here $Q = 2\pi/s$, $|0\rangle$ is the vacuum state, and $RBZ$ stands for the reduced Brillouin zone. This wave function is analogous to the well-known Gutzwiller wave function [@gutz], which is used to explain the metal-Mott-insulator transition at half-filling. Similar to the suppression of doubly occupied states in the Gutzwiller wave function, the exponential operator in front of our variational wave function suppresses high kinetic energy states.
The normalization factor is $N_k^2 = \exp[-2\eta \epsilon_k/t] + \exp[-2\eta \epsilon_{k+Q}/t] \equiv A_k^2 + B_k^2$. Notice that $|\psi_0 \rangle$ is proportional to the classical ground state of the Wigner crystal phase in the absence of tunneling between sites. The variational parameter $\eta$ is determined by minimizing the ground state energy $E_g = \langle\psi(\eta)|H|\psi(\eta)\rangle \equiv \langle \hat{KE} \rangle + \langle \hat{V} \rangle$, where the first term is the kinetic energy and the second term is the off-site interaction energy. By converting the sums into integrals over the RBZ, the variational kinetic and interaction energies take the form, $$\begin{aligned} \langle \hat{KE} \rangle &=& \int_{RBZ} \frac{dk}{2\pi}\frac{\epsilon_k A_k^2+\epsilon_{k+Q} B_k^2}{N_k^2} \\ \nonumber \langle \hat{V} \rangle &=& \int_{FBZ} \frac{dq}{4\pi}S(q) V(q). \label{e13}\end{aligned}$$ The structure factor $S(q) = \langle n_q n_{-q} \rangle$ above has the form $$\begin{aligned} S(q) &=& \biggr(\frac{Q}{2\pi}\biggr)^2 + \biggr(\frac{Q}{2\pi}\biggr) - \int_{RBZ} \frac{dk}{2\pi} \frac{A_k^2 A_{k-q}^2 + B_k^2 B_{k-q}^2}{N_k^2 N_{k-q}^2} - \int_{RBZ} \frac{dk}{\pi} \frac{A_k^2 B_k^2}{N_k^4}.\label{e14}\end{aligned}$$ Notice that the $q$ sum in the interaction energy runs over the entire Brillouin zone (FBZ), including $q =0$. This is different from electronic systems, where the $q =0$ term is omitted due to the divergence of the interaction [@wigner]. The positive background charge in the electronic lattice ensures the cancellation of this divergence. As a demonstration, the variational parameter $\eta$ for the quarter filling case ($n = 1/2$) is shown for different values of the interaction strength in FIG. \[eta\]. Notice that the variational parameter reaches the classical Wigner crystal limit ($\eta = 0$) for larger interaction strengths, while it reaches the liquid phase value ($\eta \rightarrow \infty$) in the opposite limit.
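The variational kinetic energy above can be checked numerically in its two limits at quarter filling ($n = 1/2$, $Q = \pi$, lattice constant $d = 1$): at $\eta = 0$ the equal weights on $k$ and $k+Q$ make the band energies cancel and $\langle \hat{KE} \rangle = 0$, while for $\eta \rightarrow \infty$ the weight collapses onto the lower band and $\langle \hat{KE} \rangle \rightarrow -2t/\pi$. A sketch (our own discretization, not code from the paper):

```python
import numpy as np

t, Q = 1.0, np.pi                          # quarter filling: n = 1/2, s = 2
k = np.linspace(-Q / 2, Q / 2, 20001)      # reduced Brillouin zone grid

def kinetic_energy(eta):
    """<KE>(eta) = integral over the RBZ of (eps_k A^2 + eps_{k+Q} B^2)/N^2 / 2pi."""
    eps_k = -2.0 * t * np.cos(k)
    eps_kQ = -2.0 * t * np.cos(k + Q)
    A2 = np.exp(-2.0 * eta * eps_k / t)
    B2 = np.exp(-2.0 * eta * eps_kQ / t)
    integrand = (eps_k * A2 + eps_kQ * B2) / (A2 + B2)
    # uniform-grid Riemann estimate of (1/2pi) * integral over the RBZ
    return integrand.mean() * Q / (2.0 * np.pi)

ke0 = kinetic_energy(0.0)       # classical crystal limit: exactly 0 at Q = pi
ke_inf = kinetic_energy(20.0)   # liquid-like limit: approaches -2t/pi
```

The $\eta \rightarrow \infty$ value $-2t/\pi$ agrees with the free-fermion kinetic term $-\frac{2t}{\pi}\sin(Q/2)$ of the liquid state energy used later, as it should.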
As our variational wave function always represents the Wigner crystal phase, our approach does not allow us to study the phase transition between the Wigner crystal and liquid phases. The qualitative behavior of $\eta$ is similar for other filling factors; however, as the filling factor decreases, the variational parameter $\eta$ increases. A justification of the phase transition from a liquid phase to a Wigner crystal phase is provided at the end of this section. As the density-density correlation function is related to the structure factor, the evolution of the structure factor in FIG. \[Sfactor\] shows how the particle modulation builds up as one increases the interaction. We have shown the results for two filling factors, $n =1/2$ and $n = 1/4$, corresponding to $Q = \pi$ and $Q = \pi/2$, respectively. The qualitative behavior for other filling factors is the same. The reduction of the structure factor at higher interaction is due to the transfer of some of its weight to the Bragg peak at $q = Q$. The weight transferred to the Bragg peak ($I_s$) can be calculated using $I_s = n-[S(Q)-n^2]$, where $n = Q/(2\pi)$. This peak intensity as a function of the interaction for the quarter filling case is shown in FIG. \[Bpeak\]. As one expects, the peak intensity goes to zero for non-interacting systems. Since our variational approach is valid only for the Wigner crystal phase, the peak intensity is always non-zero for any finite interaction. However, if the Wigner crystal phase is absent, then the peak intensity must be zero. Experimentally, these Bragg peaks can be probed by measuring the structure factor using Bragg scattering or imaging techniques [@bs1; @bs2; @bs3; @bs4; @bs5; @bs6].
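The Bragg weight $I_s = n-[S(Q)-n^2]$ can likewise be sanity-checked against Eq. (\[e14\]) in the two limits at quarter filling: at $\eta = 0$ one finds $S(Q) = n^2$, so the full weight $I_s = n$ sits in the peak, while for large $\eta$ one finds $S(Q) \rightarrow n^2 + n$ and $I_s \rightarrow 0$. A numerical sketch (our own discretization, illustrative only):

```python
import numpy as np

t, Q = 1.0, np.pi                          # quarter filling: n = 1/2
n = Q / (2.0 * np.pi)
k = np.linspace(-Q / 2, Q / 2, 4001)       # reduced Brillouin zone grid

def weights(eta, kk):
    """A_k^2, B_k^2 and N_k^2 for the variational state."""
    A2 = np.exp(-2.0 * eta * (-2.0 * t * np.cos(kk)) / t)
    B2 = np.exp(-2.0 * eta * (-2.0 * t * np.cos(kk + Q)) / t)
    return A2, B2, A2 + B2

def S_of_q(eta, q):
    """Structure factor per Eq. (14), with uniform-grid RBZ integrals."""
    A2, B2, N2 = weights(eta, k)
    A2q, B2q, N2q = weights(eta, k - q)
    term1 = ((A2 * A2q + B2 * B2q) / (N2 * N2q)).mean() * Q / (2.0 * np.pi)
    term2 = ((A2 * B2) / N2**2).mean() * Q / np.pi
    return n**2 + n - term1 - term2

I_s0 = n - (S_of_q(0.0, Q) - n**2)    # classical crystal: I_s = n
I_sL = n - (S_of_q(10.0, Q) - n**2)   # liquid-like limit: I_s -> 0
```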
![The weight of the Bragg peak ($I_s$) for the quarter filling case.[]{data-label="Bpeak"}](Bpeak.eps){width="\columnwidth"}

Since the Bragg peak at $q = Q$ corresponds to the periodic ordering of the Wigner crystal phase, the average density distribution of the lattice can be written as $n(x) = n + I_s \cos (Q x)$. This quantity for different interaction parameters is shown in FIG. \[den\]. As can be seen from the figure for both the $n = 1/2$ and $n = 1/4$ filling factors, higher interactions enhance the peak structure, showing the crystalline structure in the density distribution. This periodic density order can be probed by using a currently available experimental technique known as quantum gas microscopy [@nate; @mark; @imman]. As we discussed in Sec. V, for more realistic experimental setups, one has to consider the inter-chain interaction in the form given in Eq. (\[e10\]). For $\theta_d =0$ the intra-tube interaction is repulsive; however, fermions in different tubes can attract or repel depending on their dipole moment alignment and the tube separation. For attractive inter-tube interactions, the system forms a clustered Wigner crystal phase [@n7]. This phase is coherent, and the Wigner crystals in the two tubes are locked to each other. On the other hand, for repulsive inter-tube interactions, the Wigner crystal phase is in an incoherent state. As we mentioned before, our variational wave function always represents a Wigner crystal phase. In order to justify the phase transition between a liquid state and a Wigner crystal phase, here we compare the energies of the Wigner crystal phase and the liquid phase perturbatively.
Taking the unperturbed wave function as a free particle state, the liquid state energy in first order perturbation theory is given by $$\begin{aligned} E_l &=& -\frac{2t}{\pi}\sin(Q/2) + \frac{1}{2L}\biggr[V(0)\biggr(\frac{Q}{2\pi}\biggr)^2-\sum_q\frac{V(q)}{2\pi}(Q-q)\biggr].\label{lse}\end{aligned}$$ By comparing the liquid state energy $E_l$ and the Wigner crystal state energy $E_g$, we find that the liquid state is favorable for small $V$ values and that the phase transition between these states takes place at a finite $V$ value for all filling factors. For example, for the quarter filling case, we find the phase transition at $V = 4.74 t$.

VII. Summary
============

We have studied dipolar fermions in a one-dimensional lattice using bosonization theory and a variational approach. In the weak coupling limit at half-filling, the bosonization theory predicts the appearance of several quantum phases as one changes the polarization direction of the dipoles relative to the one-dimensional lattice orientation. The quantum phase diagram includes charge-density wave, spin-density wave, singlet superfluid, and triplet superfluid phases. In the strong coupling limit at lower filling factors, our variational method predicts the emergence of a Wigner crystal phase due to the long-range interaction. The structure factor and the density distribution clearly indicate the existence of a Wigner crystal at larger interactions. The entire rich phase diagram resulting from the competition between the kinetic energy and the on-site and off-site long-range interactions can be detected by using currently available experimental techniques.

VIII. Acknowledgments
=====================

We thank Erik Weiler for carefully reading the manuscript.

T. Lahaye, T. Koch, B. Frohlich, M. Fattori, J. Metz, A. Griesmaier, S. Giovanazzi, and T. Pfau, Nature **448**, 672 (2007). K.-K. Ni, S. Ospelkaus, M. H. G. de Miranda, A. Pe’er, B. Neyenhuis, J. J. Zirbel, S. Kotochigova, P. S. Julienne, D. S.
Jin, and J. Ye, Science **322**, 231 (2008). J. Stuhler, A. Griesmaier, T. Koch, M. Fattori, T. Pfau, S. Giovanazzi, P. Pedri, and L. Santos, Phys. Rev. Lett. **95**, 150406 (2005). Emily Altiere, Donald P. Fahey, Michael W. Noel, Rachel J. Smith, and Thomas J. Carroll, Phys. Rev. A **84**, 053431 (2011). K.-K. Ni, S. Ospelkaus, D. J. Nesbitt, J. Ye, D. S. Jin, Physical Chemistry Chemical Physics **11**, 9626 (2009). D. Wang, B. Neyenhuis, M. H. G. de Miranda, K.-K. Ni, S. Ospelkaus, D. S. Jin, and J. Ye, Phys. Rev. A **81**, 061404(R) (2010). Mingwu Lu, Seo Ho Youn, and Benjamin L. Lev, Phys. Rev. Lett. **104**, 063001 (2010). D. J. McCarron, H. W. Cho, D. L. Jenkin, M. P. Köppinger, and S. L. Cornish, Phys. Rev. A **84**, 011603(R) (2011). Amodsen Chotia, Brian Neyenhuis, Steven A. Moses, Bo Yan, Jacob P. Covey, Michael Foss-Feig, Ana Maria Rey, Deborah S. Jin, and Jun Ye, Phys. Rev. Lett. **108**, 080405 (2012). Mingwu Lu, Nathaniel Q. Burdick, and Benjamin L. Lev, Phys. Rev. Lett. **108**, 215301 (2012). For tutorials on ultracold dipolar gases in optical lattices, see for example C. Trefzger, C. Menotti, B. Capogrosso-Sansone, and M. Lewenstein, J. Phys. B: At. Mol. Opt. Phys. **44**, 193001 (2011) and M. A. Baranov, Physics Reports **464**, 71 (2008). K. Goral, L. Santos, and M. Lewenstein, Phys. Rev. Lett. **88**, 170406 (2002). G. Pupillo et al., Phys. Rev. Lett. **100**, 050402 (2008). D.W. Wang, M.D. Lukin, and E. Demler, Phys. Rev. Lett. **97**, 180413 (2006). C. Trefzger, C. Menotti, and M. Lewenstein, Phys. Rev. Lett. **103**, 035304 (2009). S. Zollner, G. M. Bruun, C. J. Pethick, and S. M. Reimann, Phys. Rev. Lett. **107**, 035301 (2011). F. Deuretzbacher, J. C. Cremon, and S. M. Reimann, Phys. Rev. A **81**, 063616 (2010). R. Citro, E. Orignac, S. DePalo, and M. L. Chiofalo, Phys. Rev. A **75**, 051602(R) (2007). A. S. Arkhipov, G. E. Astrakharchik, A. V. Belikov, and Y. E. Lozovik, JETP Lett. **82**, 39 (2005). S. Sinha and L. Santos, Phys. Rev. Lett.
**99**, 140406 (2007). R.-Z. Qiu, S.-P. Kou, Z.-X. Hu, X. Wan, and S. Yi, Phys. Rev. A **83**, 063633 (2011). S. Yi, T. Li, and C. P. Sun, Phys. Rev. Lett. **98**, 260405 (2007). A. E. Golomedov, G. E. Astrakharchik, and Yu. E. Lozovik, Phys. Rev. A **84**, 033615 (2011). B. Capogrosso-Sansone, C. Trefzger, M. Lewenstein, P. Zoller, and G. Pupillo, Phys. Rev. Lett. **104**, 125301 (2010). Zhihao Xu and Shu Chen, Phys. Rev. A **85**, 033606 (2012). S. G. Bhongale, L. Mathey, Shan-Wen Tsai, Charles W. Clark, Erhai Zhao, Phys. Rev. Lett. **108**, 145301 (2012). Tomasz Sowinski, Omjyoti Dutta, Philipp Hauke, Luca Tagliacozzo, and Maciej Lewenstein, Phys. Rev. Lett. **108**, 115301 (2012). Liang He and Walter Hofstetter, Phys. Rev. A **83**, 053629 (2011). K. Mikelsons and J. K. Freericks, Phys. Rev. A **83**, 043609 (2011). Chungwei Lin, Erhai Zhao, and W. Vincent Liu, Phys. Rev. B **81**, 045115 (2010); Phys. Rev. B **83**, 119901 (2011). M. M. Parish and F. M. Marchetti, Phys. Rev. Lett. **108**, 145304 (2012). F. M. Marchetti and M. M. Parish, preprint arXiv:1207.4068. N. R. Cooper and G. V. Shlyapnikov, Phys. Rev. Lett. **103**, 155302 (2009). A. Pikovski, M. Klawunn, G. V. Shlyapnikov, and L. Santos, Phys. Rev. Lett. **105**, 215302 (2010). N. T. Zinner, B. Wunsch, D. Pekker, and D.-W. Wang, Phys. Rev. A **85**, 013603 (2012). M. Klawunn, J. Duhme, and L. Santos, Phys. Rev. A **81**, 013604 (2010). B. Wunsch, N. T. Zinner, I. B. Mekhov, S.-J. Huang, D.-W. Wang, and E. Demler, Phys. Rev. Lett. **107**, 073201 (2011). P. Lecheminant and H. Nonne, Phys. Rev. B **85**, 195121 (2012). Michael Knap, Erez Berg, Martin Ganahl, and Eugene Demler, Phys. Rev. B **86**, 064501 (2012). Kai Sun, Congjun Wu, and S. Das Sarma, Phys. Rev. B **82**, 075105 (2010). Y. Yamaguchi, T. Sogo, T. Ito, T. Miyakawa, Phys. Rev. A **82**, 013643 (2010). N. T. Zinner, G. M. Bruun, Eur. Phys. J. D **65**, 133 (2011). A. Micheli, G. K. Brennen, P. Zoller, Nature Physics **2**, 341-347 (2006). D.
DeMille, Phys. Rev. Lett. **88**, 067901 (2002). D. Jaksch, C. Bruder, J. I. Cirac, C. W. Gardiner, and P. Zoller, Phys. Rev. Lett. **81**, 3108 (1998). T. Giamarchi, *Quantum Physics in One Dimension* (Oxford University Press, 2004). V. J. Emery, in *Highly Conducting One Dimensional Solids*, edited by J. T. Devreese et al. (Plenum, New York, 1979), p. 327. M. A. Cazalilla, J. Phys. B: At. Mol. Opt. Phys. **37**, S1 (2004). Kaden R. A. Hazzard, Alexey V. Gorshkov, and Ana Maria Rey, Phys. Rev. A **84**, 033608 (2011). Alexey V. Gorshkov, Salvatore R. Manmana, Gang Chen, Jun Ye, Eugene Demler, Mikhail D. Lukin, and Ana Maria Rey, Phys. Rev. Lett. **107**, 115301 (2011). Elmar Haller, Mattias Gustavsson, Manfred J. Mark, Johann G. Danzl, Russell Hart, Guido Pupillo, Hanns-Christoph Nagerl, Science **325**, 1224 (2009). F. D. M. Haldane, J. Phys. C **14**, 2585 (1981). Masaaki Nakamura, Phys. Rev. B **61**, 16377 (2000). M. Tsuchiizu and A. Furusaki, Phys. Rev. Lett. **88**, 056402 (2002). S. Capponi, D. Poilblanc, and T. Giamarchi, Phys. Rev. B **61**, 13410 (2000). Johannes Voit, Phys. Rev. B **45**, 4027 (1992). C. N. Yang and S. C. Zhang, Mod. Phys. Lett. B **4**, 759 (1990). T. Giamarchi and H. J. Schulz, Phys. Rev. B **39**, 4620 (1989). E. Altman, E. Demler, and M. D. Lukin, Phys. Rev. A **70**, 013603 (2004). Yi-Ping Huang and Daw-Wei Wang, Phys. Rev. A **80**, 053610 (2009). M. Brech, J. Voit and H. Buttner, Europhys. Lett. **12**, 289 (1990). Yasumasa Tsukamoto and Norio Kawakami, J. Phys. Soc. Jpn. **69**, 149 (2000). Hitoshi Inoue and Kiyohide Nomura, J. Phys. A: Math. Gen. **39**, 2161 (2006). Ricardo A. Pinto, Masudul Haque, Sergej Flach, Phys. Rev. A **79**, 052118 (2009). B. Valenzuela, S. Fratini, and D. Baeriswyl, Phys. Rev. B **68**, 045112 (2003). M. C. Gutzwiller, Phys. Rev. Lett. **10**, 159 (1963). H. J. Schulz, Phys. Rev. Lett. **71**, 1864 (1993). M. Weidemuller, A. Hemmerich, A. Gorlitz, T. Esslinger and T. W. Hänsch, Phys. Rev. Lett.
**75**, 4583 (1995). G. Raithel, G. Birkl, A. Kastberg, W. D. Phillips and S. L. Rolston, Phys. Rev. Lett. **78**, 630 (1997). H. Miyake et al., Phys. Rev. Lett. **107**, 175302 (2011). F. Gerbier et al., Phys. Rev. Lett. **95**, 050404 (2005). S. Folling et al., Nature **434**, 481 (2005). C.-L. Hung et al., New J. Phys. **13**, 075019 (2011). Nathan Gemelke, Xibo Zhang, Chen-Lung Hung and Cheng Chin, Nature **460**, 995 (2009). Waseem S. Bakr, Jonathon I. Gillen, Amy Peng, Simon Folling and Markus Greiner, Nature **462**, 74 (2009). Jacob F. Sherson, Christof Weitenberg, Manuel Endres, Marc Cheneau, Immanuel Bloch and Stefan Kuhr, Nature **467**, 68 (2010). [^1]: Corresponding author. Tel.: +1 607 777 3853, Fax: +1 607 777 2546\ E-mail address: [email protected]
---
abstract: 'We give a summary of results for dimensions of spaces of cuspidal Siegel modular forms of degree 2. These results together with a list of dimensions of the irreducible representations of the finite groups ${{\rm GSp}}(4,{\mathbbm{F}}_p)$ are then used to produce bounds for dimensions of spaces of newforms with respect to principal congruence subgroups of odd square-free level.'
address: 'Department of Mathematics, Fordham University, Bronx, NY 10458'
author:
- Jeffery Breeding II
title: Dimensions of spaces of Siegel cusp forms of degree 2
---

[^1]

Introduction
============

The classical theory of the passage of modular forms to automorphic representations is the starting point for its extension to Siegel modular forms of higher degree. The ${{\rm GL}}(1)$ case is famously described in Tate’s thesis [@Tate]. Let ${\mathbb{A}}$ denote the adèles of the rational numbers ${\mathbbm{Q}}$. Let $\chi_N$ be a Dirichlet character mod $N$. This character can be associated to a continuous character $$\omega: {{\rm GL}}(1,{\mathbbm{Q}}) \backslash {{\rm GL}}(1,{\mathbb{A}})\longrightarrow{\mathbbm{C}}^\times,$$ which can be written in terms of local components $$\omega=\otimes_v\omega_v,$$ where $\omega_v$ is a character of ${{\rm GL}}(1,{\mathbbm{Q}}_v)$. The global $L$-function of $\omega$ is constructed from an analysis of the local components $\omega_v$. Many Dirichlet characters are associated to a single such character $\omega$, but among them is a unique primitive one. Similarly, classical modular forms $f\in \mathcal{S}^1_k(\Gamma(N))$ can be associated to automorphic representations $\pi$ of ${{\rm GL}}(2,{\mathbb{A}})$, which can be written in terms of local components $$\pi=\otimes_v\pi_v,$$ where $\pi_v$ is a representation of ${{\rm GL}}(2,{\mathbbm{Q}}_v)$. $\pi$ can be realized in the action of ${{\rm GL}}(2,{\mathbb{A}})$ by right translation on a certain space of functions on ${{\rm GL}}(2,{\mathbbm{Q}})\backslash{{\rm GL}}(2,{\mathbb{A}})$.
Again, many modular forms are associated to a single such representation $\pi$, but among them is a unique primitive $f$ known as a [*newform*]{}. This passage has been studied in great detail. The seminal work for the ${{\rm GL}}(2)$ theory is the book of Jacquet and Langlands [@JL]. A survey of the passage has been written by Kudla [@Kud]. The reader is also encouraged to consult the works of Bump [@Bump], Diamond and Im [@DiaIm], and Gelbart [@Gelb]. In the higher degree case, we can also associate an eigenform $f$ of degree $g$ to an automorphic representation of ${\rm GSp}(2g,\mathbb{A})=G(\mathbb{A})$. In particular, a useful approach to finding dimensions of spaces of Siegel cusp forms of degree 2 is to investigate the representation theory of ${{\rm GSp}}(4)$. Results in this group’s representation theory can be translated to results on spaces of cusp forms. Again, many cusp forms are associated to a single such representation $\pi$, but among them is a unique primitive form known as a [*newform*]{}. These cuspidal automorphic representations $\pi$ can be written in terms of local components $\pi_v$, where $v$ is a place of ${\mathbbm{Q}}$. The local components of the automorphic representation in turn give rise to local components of the cusp form. One can find the dimensions of certain spaces where some of these local components live. The dimensions tell us essentially how many choices we have for the local factors of the representation and therefore the number of choices of local vectors. The number of associated automorphic representations is then the same as the dimension of the space of newforms. The paper is organized as follows. After definitions and notations are established, we give dimension formulas for certain spaces of cuspidal Siegel modular forms that already exist in the literature. 
For automorphic representations associated to cuspidal Siegel modular forms with respect to principal congruence subgroups, we describe a descent to the representations of the finite group ${{\rm GSp}}(4,{\mathbbm{F}}_p)$. The dimensions of the irreducible representations of this finite group were computed by the author [@JB]. These dimensions are then used to find bounds for dimensions of certain spaces of newforms using existing results on dimensions of spaces of fixed vectors in the local components of certain representations associated with Siegel cusp forms. The author wishes to thank his advisor Ralf Schmidt for his guidance and Alan Roche for helpful notes on the supercuspidal case in the theorem below.

Definitions and notations
=========================

Define the general symplectic group ${{\rm GSp}}(4)$ as ${{\rm GSp}}(4):=\{g\in{{\rm GL}}(4): {}^tgJg=\lambda J\ \text{for some}\ \lambda\neq0\},$ where $J= \begin{pmatrix} &&&1\\ &&1&\\ &-1&&\\ -1&&& \end{pmatrix}$. The scalar $\lambda$ will be denoted by $\lambda(g)$ and called the *multiplier* of $g$. The set of all $g\in{{\rm GSp}}(4)$ such that $\lambda(g)=1$ is the subgroup ${{\rm Sp}}(4)$. Any $g\in {{\rm GSp}}(4)$ can be uniquely written as $$g=\begin{pmatrix} 1 & & & \\ & 1 & & \\ & & \lambda(g) & \\ & & & \lambda(g) \\ \end{pmatrix}\,\cdot\, g',$$ with $g'\in{{\rm Sp}}(4)$. The [*Siegel upper half plane of degree 2*]{} is the set of $2\times2$ symmetric matrices with complex entries that have a positive definite imaginary part, i.e., $$\mathcal{H}_2=\{Z\in M_2({\mathbbm{C}}) : {}^tZ=Z, {\rm Im}(Z)>0\}.$$ The symplectic group ${{\rm Sp}}(4,{\mathbbm{R}})$ acts on $\mathcal{H}_2$ by $$\begin{pmatrix} A&B\\ C&D\\ \end{pmatrix}\cdot Z=(AZ+B)(CZ+D)^{-1}.$$ Let $\Gamma'$ be a discrete subgroup of ${{\rm Sp}}(4,{\mathbbm{R}})$.
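The defining relation ${}^tgJg=\lambda J$ and the decomposition $g = {\rm diag}(1,1,\lambda,\lambda)\cdot g'$ can be checked numerically for sample matrices. A small sketch (the diagonal test matrix is an arbitrary illustrative choice, not one used in the paper):

```python
import numpy as np

# The antidiagonal symplectic form J from the definition of GSp(4).
J = np.array([[0, 0, 0, 1],
              [0, 0, 1, 0],
              [0, -1, 0, 0],
              [-1, 0, 0, 0]], dtype=float)

def multiplier(g):
    """Return lambda(g) if t(g) J g = lambda * J, else None."""
    M = g.T @ J @ g
    lam = M[0, 3]                       # candidate multiplier
    return lam if np.allclose(M, lam * J) else None

# A diagonal element diag(a, b, lam/b, lam/a) lies in GSp(4); here lam = 6.
g = np.diag([2.0, 3.0, 2.0, 3.0])
lam = multiplier(g)

# Peel off the multiplier: g = diag(1,1,lam,lam) * g' with g' in Sp(4).
g_prime = np.diag([1.0, 1.0, 1.0 / lam, 1.0 / lam]) @ g
```

By construction `g_prime` satisfies the same relation with multiplier $1$, i.e., it lies in ${\rm Sp}(4)$.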
A [*Siegel cusp form of weight $k$*]{} is a holomorphic function $f:\mathcal{H}_2\to{\mathbbm{C}}$ such that for all $\gamma=\begin{pmatrix} A&B\\ C&D\\ \end{pmatrix}\in\Gamma'$ and $Z\in\mathcal{H}_2$, $$f(\gamma\cdot Z)={\rm det}(CZ+D)^k f(Z)$$ with Fourier series expansion $$f(Z)=\sum_{S>0} a_S e^{2\pi i \langle S,Z\rangle},$$ where the series ranges over positive definite half-integral $2\times2$ matrices $S$ (integral diagonal and half-integral off-diagonal entries) and $\langle S,Z\rangle= {\rm tr}(SZ)$.

Siegel cusp forms and dimension formulas
========================================

The dimensions of spaces of cusp forms of weight $k\geq 4$ have been found for certain subgroups of the modular group $\Gamma={{\rm Sp}}(4,{\mathbbm{Z}})$, and we summarize these results here. In fact, Hashimoto [@Hash] has given a general formula for ${\rm dim}\, \mathcal{S}_k^2(\Gamma')$ using the Selberg Trace Formula, although it is not explicit. Dimensions of spaces of cusp forms with respect to the following subgroups have been determined using various methods, and we summarize those results below.

- $\Gamma = {{\rm Sp}}(4,{\mathbbm{Z}})$.

- $\Gamma_0(N) = \{ g\in{{\rm Sp}}(4,{\mathbbm{Z}}) : g\equiv \begin{pmatrix} A&B\\ & D\\ \end{pmatrix} ({\rm mod}\, N)\}$.

- $K(N) = {{\rm Sp}}(4,{\mathbbm{Q}})\cap\left\{\begin{pmatrix} {\mathbbm{Z}}& {\mathbbm{Z}}& {\mathbbm{Z}}& N^{-1}{\mathbbm{Z}}\\ N{\mathbbm{Z}}& {\mathbbm{Z}}& {\mathbbm{Z}}& {\mathbbm{Z}}\\ N{\mathbbm{Z}}& {\mathbbm{Z}}& {\mathbbm{Z}}& {\mathbbm{Z}}\\ N{\mathbbm{Z}}& N{\mathbbm{Z}}& N{\mathbbm{Z}}& {\mathbbm{Z}}\\ \end{pmatrix}\right\}$.

- $\Gamma(N) = \{g\in{{\rm Sp}}(4,{\mathbbm{Z}}) : g\equiv I ({\rm mod}\, N)\}$.
Dimensions of $\mathcal{S}_k(\Gamma)$ ------------------------------------- A formula for the dimension of $\mathcal{S}_k(\Gamma)$ was computed by Eie [@Eie] using the Selberg trace formula: $${\rm dim}\, \mathcal{S}_k(\Gamma)=C(k,2)\int_\mathcal{F} \sum_M K_M(Z,\overline{Z})^{-k}({\rm det} Y)^{k-3}dXdY,$$ where $$C(k,2)=2^{-2}(2\pi)^{-3}\left(\prod_{i=0}^1 \Gamma\left(k-\dfrac{1-i}{2}\right)\right)\left( \prod_{i=0}^1 \Gamma\left(k-2+\dfrac{i}{2}\right)\right)^{-1}$$ and $\mathcal{F}$ is a fundamental domain on $\mathcal{H}_2$ for ${{\rm Sp}}(4,{\mathbbm{Z}})$. Eie then finds the following dimension formula by determining the contribution from the conjugacy classes of regular elliptic elements in ${{\rm Sp}}(4,{\mathbbm{Z}})$ using Weyl’s character formula for representations of ${{\rm GL}}(2,{\mathbbm{C}})$, obtaining $${\rm dim}\, \mathcal{S}_k(\Gamma)=N_1+N_2+N_3+N_4,$$ where $N_1 = \left\{ \begin{array}{ll} 2^{-7}\cdot 3^{-3}\cdot & [1131,229,-229,-1131,427,-571,123,-203,203,\\ &-123,571,-427] \\ &{\rm for}\, k\equiv [0,1,2,3,4,5,6,7,8,9,10,11]\, {\rm (mod\, 12)} \end{array} \right.$ $N_2=\left\{ \begin{array}{ll} 5^{-1} & {\rm for}\, k\equiv 0\, {\rm (mod\, 5)},\\ -5^{-1} & {\rm for}\, k\equiv 3\, {\rm (mod\, 5)},\\ 0 & {\rm otherwise} \end{array} \right.$ $N_3 = \left\{ \begin{array}{ll} 2^{-5}\cdot 3^{-3}\cdot & [17k-294,-25k+325,-25k+254,17k-261,17k-86,\\ &-k+53,-k-42,-7k+91,-7k+2,-k-27,-k+166,\\ &17k-181] \\ & {\rm for}\, k\equiv [0,1,2,3,4,5,6,7,8,9,10,11]\, {\rm (mod\, 12)} \end{array} \right.$ $N_4 = \left\{ \begin{array}{ll} 2^{-7}\cdot 3^{-3}\cdot 5^{-1}\cdot (2k^3+96k^2-52k-3231) & {\rm for}\, k\, {\rm even}\\ 2^{-7}\cdot 3^{-3}\cdot 5^{-1}\cdot (2k^3-114k^2+2018k-9051) & {\rm for}\, k\, {\rm odd.} \end{array} \right.$ $\mathcal{S}_k(\Gamma)$ is zero–dimensional for $k<10$. The values for $k=10,11,...,20$ are given in the following table. 
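The piecewise formula can be evaluated in exact rational arithmetic. The sketch below is our transcription of the four terms $N_1,\dots,N_4$ (not code from [@Eie]); the four rational contributions always sum to an integer, and the resulting values agree with the table that follows:

```python
from fractions import Fraction as F

# Numerators of N1 for k = 0, 1, ..., 11 (mod 12), transcribed from the text.
N1_NUM = [1131, 229, -229, -1131, 427, -571, 123, -203, 203, -123, 571, -427]

def dim_Sk_Sp4Z(k):
    """dim S_k(Sp(4,Z)) for k >= 4, via Eie's N1 + N2 + N3 + N4."""
    n1 = F(N1_NUM[k % 12], 2**7 * 3**3)
    n2 = F(1, 5) if k % 5 == 0 else (F(-1, 5) if k % 5 == 3 else F(0))
    n3_num = [17*k - 294, -25*k + 325, -25*k + 254, 17*k - 261, 17*k - 86,
              -k + 53, -k - 42, -7*k + 91, -7*k + 2, -k - 27, -k + 166,
              17*k - 181][k % 12]
    n3 = F(n3_num, 2**5 * 3**3)
    if k % 2 == 0:
        n4 = F(2*k**3 + 96*k**2 - 52*k - 3231, 2**7 * 3**3 * 5)
    else:
        n4 = F(2*k**3 - 114*k**2 + 2018*k - 9051, 2**7 * 3**3 * 5)
    total = n1 + n2 + n3 + n4
    assert total.denominator == 1  # the four rational terms sum to an integer
    return int(total)
```

In particular, this reproduces the vanishing for $4\leq k<10$ and the tabulated values for $k=10,\dots,20$.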
  $k$                                   10   11   12   13   14   15   16   17   18   19   20
  ------------------------------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
  ${\rm dim}\, \mathcal{S}_k(\Gamma)$   1    0    1    0    1    0    2    0    2    0    3

Dimensions of $\mathcal{S}_k(\Gamma_0(p))$
------------------------------------------

Dimensions of $\mathcal{S}_k(\Gamma_0(p))$ have been computed by Hashimoto [@Hash] for weights $k\geq 5$ and by Poor and Yuen [@PYGamma0] for weight $k=4$. Poor and Yuen use vanishing theorems and a restriction technique to compute dimensions of $\mathcal{S}_k(\Gamma_0(p))$ for $k=2, 3, 4$ and small primes $p$. For weight 1, Ibukiyama and Skoruppa [@IbSk] have shown that $\mathcal{S}_1(\Gamma_0(N))=\{0\}$ for all positive integers $N$. For weight 4 and small primes $p\leq13$, the dimensions of $\mathcal{S}_4(\Gamma_0(p))$ are

  $p$                                        2   3   5   7   11   13
  ------------------------------------------ --- --- --- --- ---- ----
  ${\rm dim}\, \mathcal{S}_4(\Gamma_0(p))$   0   1   1   3   7    11

Dimensions of $\mathcal{S}_k(K(p))$
-----------------------------------

In [@PYPara], Poor and Yuen discuss Siegel modular cusp forms of weight two for the paramodular group $K(p)$ for primes $p<600$ and give a table of dimensions for weight 4 paramodular forms of small prime level. Ibukiyama computed the dimensions for weights $k\geq 5$ using the Selberg trace formula [@IbRel] and for weights 3, 4 [@IbDim]. In the case of weight $4$, prime level $p\geq 5$ paramodular cusp forms, Ibukiyama determined the dimension formula $${\rm dim}\, \mathcal{S}_4(K(p))=\frac{p^2}{576}+\frac{p}{8}-\frac{143}{576}+\left(\frac{p}{96}-\frac{1}{8} \right)\left(\frac{-1}{p} \right)+\frac{1}{8}\left(\frac{2}{p}\right)+\frac{1}{12}\left(\frac{3}{p}\right)+\frac{p}{36}\left(\frac{-3}{p} \right).$$ For weight 4 paramodular cusp forms, the dimensions for small prime level spaces are given in the following table.
  $p$                                 2   3   5   7   11   13   17   19
  ----------------------------------- --- --- --- --- ---- ---- ---- ----
  ${\rm dim}\, \mathcal{S}_4(K(p))$   0   0   0   1   1    2    2    3

Dimensions of $\mathcal{S}_k(\Gamma(p))$
----------------------------------------

Dimensions of Siegel cusp forms of degree 2 on the principal congruence subgroup $\Gamma(p)$ have been computed by several authors; see [@Mor], [@Tsu], [@Yam]. Let $N=p_1\dots p_n$, where $p_1<\dots<p_n$ are distinct odd primes, and let $$M=\prod_{i=1}^n(1-p_i^{-2})(1-p_i^{-4}).$$ Then the dimension of Siegel cusp forms of degree 2, weight $k\geq 4$, and level $N$ is ${\rm dim}\, \mathcal{S}_k(\Gamma(N))=$ $$N^7 2^{-5}3^{-1}\left(N^32^{-5}3^{-2}5^{-1}(2k-2)(2k-3)(2k-4)-N\cdot2^{-1}3^{-1}(2k-3)+1\right)\cdot M.$$ In particular, if $N=p$ is prime, the dimension is $${\rm dim}\, \mathcal{S}_k(\Gamma(p))=\dfrac{\left((2k^3-9k^2+13k-6)p^3+(180-120k)p+360\right)p(p^4-1)(p^2-1)}{2^8 3^3 5}.$$ For convenience, we compute the dimensions of some of these spaces with this formula.

  $p$                                      2   3    5      7        11         13          17
  ---------------------------------------- --- ---- ------ -------- ---------- ----------- ------------
  ${\rm dim}\, \mathcal{S}_4(\Gamma(p))$   0   15   5655   199500   20683575   112567455   1687834800

  $k$                                      4      5       6       7       8        9        10
  ---------------------------------------- ------ ------- ------- ------- -------- -------- --------
  ${\rm dim}\, \mathcal{S}_k(\Gamma(3))$   15     76      200     405     709      1130     1686

  $k$                                      4      5       6       7       8        9        10
  ---------------------------------------- ------ ------- ------- ------- -------- -------- --------
  ${\rm dim}\, \mathcal{S}_k(\Gamma(5))$   5655   18980   43680   83005   140205   218530   321230

We find that the dimension of the space of cusp forms of weight 4 of the smallest odd square-free level is quite large: $${\rm dim}\, \mathcal{S}_4(\Gamma(15))=403,977,600.$$

Bounds for dimensions of spaces of newforms
===========================================

We now give bounds for dimensions of spaces of newforms $\mathcal{S}_k^{new}(\Gamma(p))$.
The idea is to look at the possible dimensions of spaces of $\Gamma(p)$-fixed vectors in the local component $\pi_p$ of an associated automorphic representation. The dimension of the space of newforms $\mathcal{S}_k^{new}(\Gamma(p))$ of weight $k\geq 4$ and odd prime level $p$ is bounded below by $$\frac{\left((2k^3-9k^2+13k-6)p^3+(180-120k)p+360\right)p(p-1)^2}{34560}$$ and bounded above by $$\begin{cases} \dfrac{6k^3-27k^2-k+82}{12} & \text{if $p=3$,} \\ \\ \dfrac{\left((2k^3-9k^2+13k-6)p^3+(180-120k)p+360\right)p(p^4-1)}{17280} &\text{if $p\neq3$.} \end{cases}$$ The case where the local component is non-supercuspidal is discussed in [@JB] and its argument is omitted here. In the case where the local component is supercuspidal, we use the work of Morris [@Morris1], [@Morris2] and of Moy and Prasad [@MoyPrasad]. Let $\Gamma>\Gamma_1$ be a congruence subgroup of $G={{\rm GSp}}(4,{\mathbbm{Q}}_p)$. Let $\pi$ be an irreducible smooth supercuspidal representation of $G$. Suppose $\pi^{\Gamma_1}\neq 0$. Then $\pi|_\Gamma$ contains a cuspidal representation $\rho$ of $\Gamma/\Gamma_1\cong{{\rm GSp}}(4,{\mathbbm{F}}_p)$. Then $\pi$ contains an extension $\tilde{\rho}$ of $\rho$ to $Z\Gamma$, where $Z$ is the center, and we have ${\rm Hom}_{Z\Gamma}(\tilde{\rho},\pi|_{Z\Gamma})\neq 0$. By Frobenius reciprocity, $${\rm Hom}_G({\rm ind}_{Z\Gamma}^G \tilde{\rho},\pi)\cong {\rm Hom}_{Z\Gamma}(\tilde{\rho},\pi|_{Z\Gamma}).$$ Since ${\rm ind}_{Z\Gamma}^G\tilde{\rho}$ is irreducible, it must be isomorphic to $\pi$.
Now consider the decomposition of ${\rm ind}_{Z\Gamma}^G\tilde{\rho}|_{\Gamma_1}$ using Mackey’s restriction formula, $${\rm ind}_{Z\Gamma}^G\tilde{\rho}|_{\Gamma_1}\cong\bigoplus_{x\in Z\Gamma\backslash G/\Gamma_1} {\rm ind}_{Z\Gamma^x\cap\Gamma_1}^{\Gamma_1} (\tilde{\rho}^x|_{Z\Gamma^x\cap\Gamma_1}),$$ where $Z\Gamma^x=x^{-1}Z\Gamma x$, and $\tilde{\rho}^x$ is the representation of $Z\Gamma^x$ defined by $$\tilde{\rho}^x(x^{-1}hx) = \tilde{\rho}(h)$$ for $h\in Z\Gamma.$ We now want to find when ${\rm ind}_{Z\Gamma^x\cap\Gamma_1}^{\Gamma_1} (\tilde{\rho}^x|_{Z\Gamma^x\cap\Gamma_1})$ contains the trivial representation of $\Gamma_1$, i.e., when $${\rm Hom}_{\Gamma_1}({\rm ind}_{Z\Gamma^x\cap\Gamma_1}^{\Gamma_1}(\tilde{\rho}^x), \textbf{1}_{\Gamma_1})\neq 0$$ or, equivalently, when $${\rm Hom}_{Z\Gamma^x\cap\Gamma_1}(\tilde{\rho}^x|_{Z\Gamma^x\cap\Gamma_1},\textbf{1}_{Z\Gamma^x\cap\Gamma_1})\neq 0.$$ Consider ${\rm ind}_{Z\Gamma}^G\tilde{\rho}|_{\Gamma_1}$. If this contains the trivial representation, then ${\rm ind}_{Z\Gamma}^G\tilde{\rho}|_\Gamma$ contains an irreducible representation, say $\tau$, such that $\tau|_{\Gamma_1}\supset\textbf{1}_{\Gamma_1}$. This implies that $\tau$ is trivial on $\Gamma_1$. So $\pi$ contains $\rho$ and $\tau$. The general theory implies that $\rho,\tau$ intertwine, i.e., there exists $x\in G$ such that $${\rm Hom}_{\Gamma^x\cap\Gamma}(\rho^x,\tau)\neq 0.$$ This implies $x\in Z\Gamma$. So, by the cuspidality of $\rho$ and using representatives for $\Gamma\backslash G/\Gamma$, we have $\rho\cong\tau$. It follows that $\pi^{\Gamma_1}=\tilde{\rho}|_{\Gamma_1}$.
In particular, $${\rm dim}\, \pi^{\Gamma_1}={\rm dim}\, \tilde{\rho}={\rm dim}\, \rho.$$ Thus, by considering spaces of $\Gamma(p)$-fixed vectors, the dimensions of the nontrivial irreducible representations of ${{\rm GSp}}(4,{\mathbbm{F}}_p)$ can be used to find bounds for the number of associated automorphic representations, i.e., to find bounds for the dimension of the space of newforms. The dimensions of the nontrivial irreducible representations of ${{\rm GSp}}(4,{\mathbbm{F}}_q)$, determined in [@JB], are given in the following table.

  Notation     Dimension              Notation      Dimension
  ------------ ---------------------- ------------- ------------------------
  $a_1(p)$     $(p^2+1)(p+1)^2$       $a_{10}(p)$   $(p^2+1)(p+1)$
  $a_2(p)$     $p(p^2+1)(p+1)$        $a_{11}(p)$   $p(p^2+1)$
  $a_3(p)$     $p^2(p^2+1)$           $a_{12}(p)$   $(p^2+1)(p-1)$
  $a_4(p)$     $p^4$                  $a_{13}(p)$   $\frac{1}{2}p(p+1)^2$
  $a_5(p)$     $p^4-1$                $a_{14}(p)$   $\frac{1}{2}p(p^2+1)$
  $a_6(p)$     $p^2(p^2-1)$           $a_{15}(p)$   $\frac{1}{2}p(p-1)^2$
  $a_7(p)$     $(p^2-1)^2$            $a_{16}(p)$   $p^2+1$
  $a_8(p)$     $p(p^2+1)(p-1)$        $a_{17}(p)$   $p^2-1$
  $a_9(p)$     $(p^2+1)(p-1)^2$

We may exclude the dimensions $a_{16}(p)$ and $a_{17}(p)$ from our considerations because they are dimensions of $\Gamma(p)$-fixed vectors of representations that are not unitary. The theorem follows. We note that this method also applies to finding bounds for dimensions of Siegel cusp forms of higher degree $g$. However, one must determine the dimensions of the irreducible representations of the group ${{\rm GSp}}(2g,{\mathbbm{F}}_p)$ in the higher degree case. The dimension of the space of newforms of weight 4, level 3 is 1. Moreover,

- $\mathcal{S}_4^{new}(\Gamma(3))=\mathcal{S}_4(\Gamma_0(3))$.

- All Siegel cusp forms of weight 4, level 3 are Saito-Kurokawa lifts.
- The associated automorphic representation’s component at $p=3$ is isomorphic to the non-supercuspidal representation $\tau(T,\nu^{-1/2}\sigma)$, a constituent of $\nu\times1_{{\mathbbm{Q}}_p^\times}\rtimes\nu^{-1/2}\sigma$, where $\nu$ is the valuation of ${\mathbbm{Q}}_p$. First note that ${\rm dim}\, \mathcal{S}_4(\Gamma(3))=15$. By our theorem, we have $$\dfrac{3}{32}\leq{\rm dim}\, \mathcal{S}_4^{new}(\Gamma(3))\leq\dfrac{5}{2}.$$ Moreover, we can determine the dimension exactly in this case. Let $f$ be an eigenform of weight 4 and level 3 and let $\pi=\pi_f=\otimes_v \pi_v$ be its associated automorphic representation. Since $f$ has level 3, $\pi_p$ is spherical for finite primes $p\neq3$ and $\pi_3$ has a non-trivial finite-dimensional subspace of $\Gamma(3)$-fixed vectors. The only solution of the Diophantine equation $$\sum_{n=1}^{15}c_na_n(3)={\rm dim}\, \mathcal{S}_4(\Gamma(3))=15$$ is $c_{14}=1$ and $c_i=0$ for $i\neq14$. This means that there is only one automorphic representation associated to this space. Hence, the dimension of the space of newforms is 1. Furthermore, the irreducible representations of ${{\rm GSp}}(4,{\mathbbm{F}}_3)$ that have dimension $a_{14}(3)$ are the non-cuspidal representations ${\rm Ind}(\theta_{11})_a, {\rm Ind}(\theta_{12})_b$, and their twists, see [@JB]. These representations descend from the non-supercuspidal representations $\tau(T,\nu^{-1/2}\sigma)$ or $L(\nu^{1/2}{{\rm St}}_{{{\rm GL}}(2)},\nu^{-1/2}\sigma)$. To determine which one it is, we note that $\tau(T,\nu^{-1/2}\sigma)$ has a non-zero $\Gamma_0(3)$-fixed vector and $L(\nu^{1/2}{{\rm St}}_{{{\rm GL}}(2)},\nu^{-1/2}\sigma)$ does not, see [@ST]. Also, $L(\nu^{1/2}{{\rm St}}_{{{\rm GL}}(2)},\nu^{-1/2}\sigma)$ has a non-zero $K(3)$-fixed vector and $\tau(T,\nu^{-1/2}\sigma)$ does not. So the correct local component can be identified if we know ${\rm dim}\, \mathcal{S}_4(\Gamma_0(3))$ or ${\rm dim}\, \mathcal{S}_4(K(3))$.
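The uniqueness of the solution $c_{14}=1$ can be confirmed by brute force. Below is a short sketch (the $a_n(3)$ values are computed from the dimension table; the enumeration helper is ours, purely illustrative):

```python
# Dimensions a_1(3), ..., a_15(3) of the relevant irreducible representations
# of GSp(4, F_3), from the table above (a_16 and a_17 are excluded as
# non-unitary).
p = 3
dims = [
    (p**2 + 1) * (p + 1)**2,   # a_1
    p * (p**2 + 1) * (p + 1),  # a_2
    p**2 * (p**2 + 1),         # a_3
    p**4,                      # a_4
    p**4 - 1,                  # a_5
    p**2 * (p**2 - 1),         # a_6
    (p**2 - 1)**2,             # a_7
    p * (p**2 + 1) * (p - 1),  # a_8
    (p**2 + 1) * (p - 1)**2,   # a_9
    (p**2 + 1) * (p + 1),      # a_10
    p * (p**2 + 1),            # a_11
    (p**2 + 1) * (p - 1),      # a_12
    p * (p + 1)**2 // 2,       # a_13
    p * (p**2 + 1) // 2,       # a_14
    p * (p - 1)**2 // 2,       # a_15
]

def solutions(target, ds):
    """All multisets of entries of ds (with repetition) summing to target,
    reported as tuples of 1-based indices n with c_n > 0."""
    def rec(i, left, picked):
        if left == 0:
            yield tuple(picked)
            return
        for j in range(i, len(ds)):
            if ds[j] <= left:
                yield from rec(j, left - ds[j], picked + [j + 1])
    yield from rec(0, target, [])

sols = list(solutions(15, dims))  # only the single representation a_14
```

Only $a_{14}(3)=15$ and $a_{15}(3)=6$ are at most 15, and no combination of copies of 6 reaches 15, so the search returns the single solution $c_{14}=1$.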
From [@PYGamma0] and [@PYPara] we have ${\rm dim}\, \mathcal{S}_4(\Gamma_0(3))=1$ and ${\rm dim}\, \mathcal{S}_4(K(3))=0$. So $\tau(T,\nu^{-1/2}\sigma)$, a Saito-Kurokawa lifting, is the local component. Similarly, we also have bounds for dimensions of newforms of odd square-free level. Let $N=p_1\dots p_n$, where $p_1<\dots<p_n$ are distinct odd primes, and let $$M=\prod_{i=1}^n(1-p_i^{-2})(1-p_i^{-4}).$$ The dimension of the space of newforms $\mathcal{S}_k^{new}(\Gamma(N))$ of weight $k\geq 4$ and odd square-free level $N$ is bounded below by $$\dfrac{N^7 2^{-5}3^{-1}\left(N^32^{-5}3^{-2}5^{-1}(2k-2)(2k-3)(2k-4)-N\cdot2^{-1}3^{-1}(2k-3)+1\right)}{\sum_{i=1}^n (p_i^2+1)(p_i+1)^2}\cdot M.$$ The dimension is bounded above by $$\dfrac{N^7 2^{-5}3^{-1}\left(N^32^{-5}3^{-2}5^{-1}(2k-2)(2k-3)(2k-4)-N\cdot2^{-1}3^{-1}(2k-3)+1\right)}{6+\sum_{i=2}^n (p_i^2-1)}\cdot M$$ if $3|N$ or by $$\dfrac{N^7 2^{-5}3^{-1}\left(N^32^{-5}3^{-2}5^{-1}(2k-2)(2k-3)(2k-4)-N\cdot2^{-1}3^{-1}(2k-3)+1\right)}{\sum_{i=1}^n (p_i^2-1)}\cdot M$$ if $3\nmid N$.

J. Breeding II, [*Irreducible non-cuspidal characters of ${{\rm GSp}}(4,{\mathbbm{F}}_q)$*]{}, Ph.D. thesis, University of Oklahoma, Norman, OK, 2011.

D. Bump, [*Automorphic Forms and Representations*]{}, Cambridge University Press, Cambridge, UK, 1997.

F. Diamond and J. Im, [*Modular forms and modular curves*]{}, Seminar on Fermat’s Last Theorem, pp. 39–133, CMS Conf. Proc., 17, [*Amer. Math. Soc.*]{}, Providence, RI, 1995.

M. Eie, [*Contributions from conjugacy classes of regular elliptic elements in ${{\rm Sp}}(n,{\mathbbm{Z}})$ to the dimension formula*]{}, Trans. Amer. Math. Soc. 285 (1984), no. 1, 403–410.

S. Gelbart, [*Automorphic forms on adèle groups*]{}, Annals of Mathematics Studies, No. 83. Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo, 1975.

K.
Hashimoto, [*The dimension of the spaces of cusp forms on the Siegel upper half-plane of degree two. I*]{}, J. Fac. Sci. Univ. Tokyo Sect. IA Math. **30** (1983), no. 2, 403–488.

T. Ibukiyama, [*On relations of dimensions of automorphic forms of ${{\rm Sp}}(2,{\mathbbm{R}})$ and its compact twist ${{\rm Sp}}(2)$. I*]{}, Automorphic forms and number theory (Sendai 1983), 7–30, Adv. Stud. Pure Math., 7, North-Holland, Amsterdam, 1985.

T. Ibukiyama, [*Dimension formulas of Siegel modular forms of weight 3 and supersingular abelian surfaces*]{}, Siegel Modular Forms and Abelian Varieties, Proceedings of the 4th Spring Conference on Modular Forms and Related Topics, 2007, pp. 39–60.

T. Ibukiyama and N.-P. Skoruppa, [*A vanishing theorem for Siegel modular forms of weight one*]{}, Abh. Math. Sem. Univ. Hamburg **77** (2007), 229–235.

H. Jacquet and R. Langlands, [*Automorphic forms on GL(2)*]{}, Lecture Notes in Mathematics, Vol. 114, Springer-Verlag, Berlin-New York, 1970.

S. Kudla, [*From modular forms to automorphic representations*]{}, An introduction to the Langlands program (Jerusalem, 2001), pp. 131–151, Birkhäuser Boston, Boston, MA, 2003.

Y. Morita, [*An explicit formula for the dimension of spaces of Siegel modular forms of degree two*]{}, J. Fac. Sci. Univ. Tokyo Sect. IA Math. **21** (1974), 167–248.

L. Morris, [*Tamely ramified supercuspidal representations*]{}, Ann. Sci. École Norm. Sup. (4) **29** (1996), no. 5, 639–667.

L. Morris, [*Level zero **G**-types*]{}, Compositio Math. **118** (1999), no. 2, 135–157.

A. Moy and G. Prasad, [*Jacquet functors and unrefined minimal **K**-types*]{}, Comment. Math. Helv. **71** (1996), no. 1, 98–121.

C. Poor and D. Yuen, [*Dimensions of cusp forms for $\Gamma_0(p)$ in degree two and small weights*]{}, Abh. Math. Sem. Univ. Hamburg **77** (2007), 59–80.

C. Poor and D. S. Yuen, [*Paramodular Cusp Forms*]{}, arXiv:0912.0049v1 (2009).

P. Sally and M.
Tadić, [*Induced representations and classifications for GSp(2, F) and Sp(2, F)*]{}, Mém. Soc. Math. France (N.S.) No. 52 (1993), 75–133.

J. Tate, [*Fourier analysis in number fields and Hecke’s zeta-functions*]{}, Ph.D. thesis, Princeton University, Princeton, NJ, 1950.

R. Tsushima, [*On the spaces of Siegel cusp forms of degree two*]{}, Amer. J. Math. **104** (1982), no. 4, 843–885.

T. Yamazaki, [*On Siegel modular forms of degree two*]{}, Amer. J. Math. **98** (1976), no. 1, 39–53.
--- abstract: 'We report a study of the decay $D^0 \rightarrow \bar{K}^0\pi^-e^+\nu_{e}$ based on a sample of $2.93~\mathrm{fb}^{-1}$ of $e^+e^-$ annihilation data collected at the center-of-mass energy of 3.773 GeV with the BESIII detector at the BEPCII collider. The total branching fraction is determined to be $\mathcal{B}(D^0\rightarrow \bar{K}^0\pi^-e^+\nu_{e})=(1.434\pm0.029({\rm stat.})\pm0.032({\rm syst.}))\%$, which is the most precise to date. From a detailed analysis of the involved dynamics, we find that this decay is dominated by the $K^{*}(892)^-$ contribution and present an improved measurement of its branching fraction, $\mathcal{B}(D^0\rightarrow K^{*}(892)^-e^+\nu_e)=(2.033\pm0.046({\rm stat.})\pm0.047({\rm syst.}))\%$. We further determine, for the first time, the hadronic form-factor ratios $r_{V}=V(0)/A_1(0)=1.46\pm0.07({\rm stat.})\pm0.02({\rm syst.})$ and $r_{2}=A_2(0)/A_1(0)=0.67\pm0.06({\rm stat.})\pm0.01({\rm syst.})$. In addition, we observe a significant $\bar{K}^0\pi^-$ $S$-wave component accounting for $(5.51\pm0.97({\rm stat.})\pm0.62({\rm syst.}))\%$ of the total decay rate.' author: - | M. Ablikim$^{1}$, M. N. Achasov$^{10,d}$, S.  Ahmed$^{15}$, M. Albrecht$^{4}$, M. Alekseev$^{56A,56C}$, A. Amoroso$^{56A,56C}$, F. F. An$^{1}$, Q. An$^{53,42}$, Y. Bai$^{41}$, O. Bakina$^{27}$, R. Baldini Ferroli$^{23A}$, Y. Ban$^{35}$, K. Begzsuren$^{25}$, D. W. Bennett$^{22}$, J. V. Bennett$^{5}$, N. Berger$^{26}$, M. Bertani$^{23A}$, D. Bettoni$^{24A}$, F. Bianchi$^{56A,56C}$, E. Boger$^{27,b}$, I. Boyko$^{27}$, R. A. Briere$^{5}$, H. Cai$^{58}$, X. Cai$^{1,42}$, A. Calcaterra$^{23A}$, G. F. Cao$^{1,46}$, S. A. Cetin$^{45B}$, J. Chai$^{56C}$, J. F. Chang$^{1,42}$, W. L. Chang$^{1,46}$, G. Chelkov$^{27,b,c}$, G. Chen$^{1}$, H. S. Chen$^{1,46}$, J. C. Chen$^{1}$, M. L. Chen$^{1,42}$, P. L. Chen$^{54}$, S. J. Chen$^{33}$, X. R. Chen$^{30}$, Y. B. Chen$^{1,42}$, W. Cheng$^{56C}$, X. K. Chu$^{35}$, G. Cibinetto$^{24A}$, F. Cossio$^{56C}$, H. L.
Dai$^{1,42}$, J. P. Dai$^{37,h}$, A. Dbeyssi$^{15}$, D. Dedovich$^{27}$, Z. Y. Deng$^{1}$, A. Denig$^{26}$, I. Denysenko$^{27}$, M. Destefanis$^{56A,56C}$, F. De Mori$^{56A,56C}$, Y. Ding$^{31}$, C. Dong$^{34}$, J. Dong$^{1,42}$, L. Y. Dong$^{1,46}$, M. Y. Dong$^{1,42,46}$, Z. L. Dou$^{33}$, S. X. Du$^{61}$, P. F. Duan$^{1}$, J. Fang$^{1,42}$, S. S. Fang$^{1,46}$, Y. Fang$^{1}$, R. Farinelli$^{24A,24B}$, L. Fava$^{56B,56C}$, F. Feldbauer$^{4}$, G. Felici$^{23A}$, C. Q. Feng$^{53,42}$, M. Fritsch$^{4}$, C. D. Fu$^{1}$, Q. Gao$^{1}$, X. L. Gao$^{53,42}$, Y. Gao$^{44}$, Y. G. Gao$^{6}$, Z. Gao$^{53,42}$, B.  Garillon$^{26}$, I. Garzia$^{24A}$, A. Gilman$^{49}$, K. Goetzen$^{11}$, L. Gong$^{34}$, W. X. Gong$^{1,42}$, W. Gradl$^{26}$, M. Greco$^{56A,56C}$, L. M. Gu$^{33}$, M. H. Gu$^{1,42}$, Y. T. Gu$^{13}$, A. Q. Guo$^{1}$, L. B. Guo$^{32}$, R. P. Guo$^{1,46}$, Y. P. Guo$^{26}$, A. Guskov$^{27}$, Z. Haddadi$^{29}$, S. Han$^{58}$, X. Q. Hao$^{16}$, F. A. Harris$^{47}$, K. L. He$^{1,46}$, F. H. Heinsius$^{4}$, T. Held$^{4}$, Y. K. Heng$^{1,42,46}$, Z. L. Hou$^{1}$, H. M. Hu$^{1,46}$, J. F. Hu$^{37,h}$, T. Hu$^{1,42,46}$, Y. Hu$^{1}$, G. S. Huang$^{53,42}$, J. S. Huang$^{16}$, X. T. Huang$^{36}$, X. Z. Huang$^{33}$, Z. L. Huang$^{31}$, T. Hussain$^{55}$, W. Ikegami Andersson$^{57}$, W. Imoehl$^{22}$, M, Irshad$^{53,42}$, Q. Ji$^{1}$, Q. P. Ji$^{16}$, X. B. Ji$^{1,46}$, X. L. Ji$^{1,42}$, H. L. Jiang$^{36}$, X. S. Jiang$^{1,42,46}$, X. Y. Jiang$^{34}$, J. B. Jiao$^{36}$, Z. Jiao$^{18}$, D. P. Jin$^{1,42,46}$, S. Jin$^{33}$, Y. Jin$^{48}$, T. Johansson$^{57}$, N. Kalantar-Nayestanaki$^{29}$, X. S. Kang$^{34}$, M. Kavatsyuk$^{29}$, B. C. Ke$^{1}$, I. K. Keshk$^{4}$, T. Khan$^{53,42}$, A. Khoukaz$^{50}$, P.  Kiese$^{26}$, R. Kiuchi$^{1}$, R. Kliemt$^{11}$, L. Koch$^{28}$, O. B. Kolcu$^{45B,f}$, B. Kopf$^{4}$, M. Kuemmel$^{4}$, M. Kuessner$^{4}$, A. Kupsc$^{57}$, M. Kurth$^{1}$, W. Kühn$^{28}$, J. S. Lange$^{28}$, P.  Larin$^{15}$, L. Lavezzi$^{56C}$, S. Leiber$^{4}$, H. 
Leithoff$^{26}$, C. Li$^{57}$, Cheng Li$^{53,42}$, D. M. Li$^{61}$, F. Li$^{1,42}$, F. Y. Li$^{35}$, G. Li$^{1}$, H. B. Li$^{1,46}$, H. J. Li$^{1,46}$, J. C. Li$^{1}$, J. W. Li$^{40}$, K. J. Li$^{43}$, Kang Li$^{14}$, Ke Li$^{1}$, Lei Li$^{3}$, P. L. Li$^{53,42}$, P. R. Li$^{46,7}$, Q. Y. Li$^{36}$, T.  Li$^{36}$, W. D. Li$^{1,46}$, W. G. Li$^{1}$, X. L. Li$^{36}$, X. N. Li$^{1,42}$, X. Q. Li$^{34}$, Z. B. Li$^{43}$, H. Liang$^{53,42}$, Y. F. Liang$^{39}$, Y. T. Liang$^{28}$, G. R. Liao$^{12}$, L. Z. Liao$^{1,46}$, J. Libby$^{21}$, C. X. Lin$^{43}$, D. X. Lin$^{15}$, B. Liu$^{37,h}$, B. J. Liu$^{1}$, C. X. Liu$^{1}$, D. Liu$^{53,42}$, D. Y. Liu$^{37,h}$, F. H. Liu$^{38}$, Fang Liu$^{1}$, Feng Liu$^{6}$, H. B. Liu$^{13}$, H. L Liu$^{41}$, H. M. Liu$^{1,46}$, Huanhuan Liu$^{1}$, Huihui Liu$^{17}$, J. B. Liu$^{53,42}$, J. Y. Liu$^{1,46}$, K. Y. Liu$^{31}$, Ke Liu$^{6}$, L. D. Liu$^{35}$, Q. Liu$^{46}$, S. B. Liu$^{53,42}$, X. Liu$^{30}$, Y. B. Liu$^{34}$, Z. A. Liu$^{1,42,46}$, Zhiqing Liu$^{26}$, Y.  F. Long$^{35}$, X. C. Lou$^{1,42,46}$, H. J. Lu$^{18}$, J. G. Lu$^{1,42}$, Y. Lu$^{1}$, Y. P. Lu$^{1,42}$, C. L. Luo$^{32}$, M. X. Luo$^{60}$, P. W. Luo$^{43}$, T. Luo$^{9,j}$, X. L. Luo$^{1,42}$, S. Lusso$^{56C}$, X. R. Lyu$^{46}$, F. C. Ma$^{31}$, H. L. Ma$^{1}$, L. L.  Ma$^{36}$, M. M. Ma$^{1,46}$, Q. M. Ma$^{1}$, X. N. Ma$^{34}$, X. Y. Ma$^{1,42}$, Y. M. Ma$^{36}$, F. E. Maas$^{15}$, M. Maggiora$^{56A,56C}$, S. Maldaner$^{26}$, S. Malde$^{51}$, Q. A. Malik$^{55}$, A. Mangoni$^{23B}$, Y. J. Mao$^{35}$, Z. P. Mao$^{1}$, S. Marcello$^{56A,56C}$, Z. X. Meng$^{48}$, J. G. Messchendorp$^{29}$, G. Mezzadri$^{24A}$, J. Min$^{1,42}$, T. J. Min$^{33}$, R. E. Mitchell$^{22}$, X. H. Mo$^{1,42,46}$, Y. J. Mo$^{6}$, C. Morales Morales$^{15}$, N. Yu. Muchnoi$^{10,d}$, H. Muramatsu$^{49}$, A. Mustafa$^{4}$, S. Nakhoul$^{11,g}$, Y. Nefedov$^{27}$, F. Nerling$^{11,g}$, I. B. Nikolaev$^{10,d}$, Z. Ning$^{1,42}$, S. Nisar$^{8}$, S. L. Niu$^{1,42}$, X. Y. Niu$^{1,46}$, S. L. 
Olsen$^{46}$, Q. Ouyang$^{1,42,46}$, S. Pacetti$^{23B}$, Y. Pan$^{53,42}$, M. Papenbrock$^{57}$, P. Patteri$^{23A}$, M. Pelizaeus$^{4}$, J. Pellegrino$^{56A,56C}$, H. P. Peng$^{53,42}$, Z. Y. Peng$^{13}$, K. Peters$^{11,g}$, J. Pettersson$^{57}$, J. L. Ping$^{32}$, R. G. Ping$^{1,46}$, A. Pitka$^{4}$, R. Poling$^{49}$, V. Prasad$^{53,42}$, H. R. Qi$^{2}$, M. Qi$^{33}$, T. Y. Qi$^{2}$, S. Qian$^{1,42}$, C. F. Qiao$^{46}$, N. Qin$^{58}$, X. S. Qin$^{4}$, Z. H. Qin$^{1,42}$, J. F. Qiu$^{1}$, S. Q. Qu$^{34}$, K. H. Rashid$^{55,i}$, C. F. Redmer$^{26}$, M. Richter$^{4}$, M. Ripka$^{26}$, A. Rivetti$^{56C}$, M. Rolo$^{56C}$, G. Rong$^{1,46}$, Ch. Rosner$^{15}$, A. Sarantsev$^{27,e}$, M. Savrié$^{24B}$, K. Schoenning$^{57}$, W. Shan$^{19}$, X. Y. Shan$^{53,42}$, M. Shao$^{53,42}$, C. P. Shen$^{2}$, P. X. Shen$^{34}$, X. Y. Shen$^{1,46}$, H. Y. Sheng$^{1}$, X. Shi$^{1,42}$, J. J. Song$^{36}$, W. M. Song$^{36}$, X. Y. Song$^{1}$, S. Sosio$^{56A,56C}$, C. Sowa$^{4}$, S. Spataro$^{56A,56C}$, F. F.  Sui$^{36}$, G. X. Sun$^{1}$, J. F. Sun$^{16}$, L. Sun$^{58}$, S. S. Sun$^{1,46}$, X. H. Sun$^{1}$, Y. J. Sun$^{53,42}$, Y. K Sun$^{53,42}$, Y. Z. Sun$^{1}$, Z. J. Sun$^{1,42}$, Z. T. Sun$^{1}$, Y. T Tan$^{53,42}$, C. J. Tang$^{39}$, G. Y. Tang$^{1}$, X. Tang$^{1}$, M. Tiemens$^{29}$, B. Tsednee$^{25}$, I. Uman$^{45D}$, B. Wang$^{1}$, B. L. Wang$^{46}$, C. W. Wang$^{33}$, D. Wang$^{35}$, D. Y. Wang$^{35}$, H. H. Wang$^{36}$, K. Wang$^{1,42}$, L. L. Wang$^{1}$, L. S. Wang$^{1}$, M. Wang$^{36}$, Meng Wang$^{1,46}$, P. Wang$^{1}$, P. L. Wang$^{1}$, W. P. Wang$^{53,42}$, X. F. Wang$^{1}$, Y. Wang$^{53,42}$, Y. F. Wang$^{1,42,46}$, Y. Q. Wang$^{16}$, Z. Wang$^{1,42}$, Z. G. Wang$^{1,42}$, Z. Y. Wang$^{1}$, Zongyuan Wang$^{1,46}$, T. Weber$^{4}$, D. H. Wei$^{12}$, P. Weidenkaff$^{26}$, S. P. Wen$^{1}$, U. Wiedner$^{4}$, M. Wolke$^{57}$, L. H. Wu$^{1}$, L. J. Wu$^{1,46}$, Z. Wu$^{1,42}$, L. Xia$^{53,42}$, X. Xia$^{36}$, Y. Xia$^{20}$, D. Xiao$^{1}$, Y. J. Xiao$^{1,46}$, Z. J. 
Xiao$^{32}$, Y. G. Xie$^{1,42}$, Y. H. Xie$^{6}$, X. A. Xiong$^{1,46}$, Q. L. Xiu$^{1,42}$, G. F. Xu$^{1}$, J. J. Xu$^{1,46}$, L. Xu$^{1}$, Q. J. Xu$^{14}$, X. P. Xu$^{40}$, F. Yan$^{54}$, L. Yan$^{56A,56C}$, W. B. Yan$^{53,42}$, W. C. Yan$^{2}$, Y. H. Yan$^{20}$, H. J. Yang$^{37,h}$, H. X. Yang$^{1}$, L. Yang$^{58}$, R. X. Yang$^{53,42}$, S. L. Yang$^{1,46}$, Y. H. Yang$^{33}$, Y. X. Yang$^{12}$, Yifan Yang$^{1,46}$, Z. Q. Yang$^{20}$, M. Ye$^{1,42}$, M. H. Ye$^{7}$, J. H. Yin$^{1}$, Z. Y. You$^{43}$, B. X. Yu$^{1,42,46}$, C. X. Yu$^{34}$, J. S. Yu$^{20}$, J. S. Yu$^{30}$, C. Z. Yuan$^{1,46}$, Y. Yuan$^{1}$, A. Yuncu$^{45B,a}$, A. A. Zafar$^{55}$, Y. Zeng$^{20}$, B. X. Zhang$^{1}$, B. Y. Zhang$^{1,42}$, C. C. Zhang$^{1}$, D. H. Zhang$^{1}$, H. H. Zhang$^{43}$, H. Y. Zhang$^{1,42}$, J. Zhang$^{1,46}$, J. L. Zhang$^{59}$, J. Q. Zhang$^{4}$, J. W. Zhang$^{1,42,46}$, J. Y. Zhang$^{1}$, J. Z. Zhang$^{1,46}$, K. Zhang$^{1,46}$, L. Zhang$^{44}$, S. F. Zhang$^{33}$, T. J. Zhang$^{37,h}$, X. Y. Zhang$^{36}$, Y. Zhang$^{53,42}$, Y. H. Zhang$^{1,42}$, Y. T. Zhang$^{53,42}$, Yang Zhang$^{1}$, Yao Zhang$^{1}$, Yu Zhang$^{46}$, Z. H. Zhang$^{6}$, Z. P. Zhang$^{53}$, Z. Y. Zhang$^{58}$, G. Zhao$^{1}$, J. W. Zhao$^{1,42}$, J. Y. Zhao$^{1,46}$, J. Z. Zhao$^{1,42}$, Lei Zhao$^{53,42}$, Ling Zhao$^{1}$, M. G. Zhao$^{34}$, Q. Zhao$^{1}$, S. J. Zhao$^{61}$, T. C. Zhao$^{1}$, Y. B. Zhao$^{1,42}$, Z. G. Zhao$^{53,42}$, A. Zhemchugov$^{27,b}$, B. Zheng$^{54}$, J. P. Zheng$^{1,42}$, W. J. Zheng$^{36}$, Y. H. Zheng$^{46}$, B. Zhong$^{32}$, L. Zhou$^{1,42}$, Q. Zhou$^{1,46}$, X. Zhou$^{58}$, X. K. Zhou$^{53,42}$, X. R. Zhou$^{53,42}$, X. Y. Zhou$^{1}$, Xiaoyu Zhou$^{20}$, Xu Zhou$^{20}$, A. N. Zhu$^{1,46}$, J. Zhu$^{34}$, J.  Zhu$^{43}$, K. Zhu$^{1}$, K. J. Zhu$^{1,42,46}$, S. Zhu$^{1}$, S. H. Zhu$^{51}$, X. L. Zhu$^{44}$, Y. C. Zhu$^{53,42}$, Y. S. Zhu$^{1,46}$, Z. A. Zhu$^{1,46}$, J. Zhuang$^{1,42}$, B. S. Zou$^{1}$, J. H. 
Zou$^{1}$\ (BESIII Collaboration)\ title: 'Study of the decay $D^0\rightarrow \bar{K}^0\pi^-e^+\nu_e$' --- Introduction ============ Studies of semileptonic (SL) decay modes of charm mesons provide valuable information on the weak and strong interactions in mesons composed of heavy quarks [@physrept494]. The semileptonic partial decay width is related to the product of the hadronic form factor, which describes the strong interaction in the initial and final hadrons, and the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements $|V_{cs}|$ and $|V_{cd}|$, which parametrize the mixing between the quark flavors in the weak interaction [@prl10_531]. The couplings $|V_{cs}|$ and $|V_{cd}|$ are tightly constrained by the unitarity of the CKM matrix. Thus, detailed studies of the dynamics of the SL decays allow measurements of the hadronic form factors, which are important for calibrating the theoretical calculations of the involved strong interaction. The relative simplicity of the theoretical description of the SL decay $D\rightarrow \bar{K}\pi e^+ \nu_e$ [@chargeneutral] makes it an ideal place to study the $\bar{K}\pi$ system, and to further determine the hadronic transition form factors. Measurements of $\bar{K}\pi$ resonant and non-resonant amplitudes in the decay $D^+\rightarrow K^-\pi^+e^+\nu_e$ have been reported by the CLEO [@prd74_052001], BABAR [@prd83_072001] and BESIII [@prd94_032001] collaborations. In these studies a nontrivial $S$-wave component is observed along with the dominant $P$-wave one. A study of the dynamics in the isospin-symmetric mode $D^0\rightarrow \bar{K}^0\pi^-e^+\nu_e$ will provide complementary information on the $\bar{K}\pi$ system.
Furthermore, the form factors in the $D\rightarrow Ve^+\nu_{e}$ transition, where $V$ refers to a vector meson, have been measured in the decays $D^+\rightarrow \bar{K}^{*0}e^+\nu_e$ [@prd74_052001; @prd83_072001; @prd94_032001], $D\rightarrow \rho e^+\nu_e$ [@prl110_131802] and $D^+\rightarrow \omega e^+\nu_e$ [@prd92_071101], while the form factors in $D^0\rightarrow K^{*}(892)^-e^+\nu_e$ have not yet been measured. Therefore, the study of the dynamics in the decay $D^0\rightarrow K^{*}(892)^- e^+\nu_e$ provides essential additional information on the family of $D\rightarrow V e^+\nu_e$ decays. In this paper, an improved measurement of the absolute branching fraction (BF) and the first measurement of the form factors of the decay $D^0\rightarrow \bar{K}^0\pi^-e^+\nu_e$ are reported. These measurements are performed using an $e^+e^-$ annihilation data sample corresponding to an integrated luminosity of $2.93~\mathrm{fb}^{-1}$ produced at $\sqrt{s}=3.773$ GeV with the BEPCII collider and collected with the BESIII detector [@Ablikim:2009aa]. BESIII Detector and Monte Carlo Simulation ========================================== The BESIII detector is a cylindrical detector with a solid-angle coverage of 93% of $4\pi$. The detector consists of a helium-gas-based main drift chamber (MDC), a plastic scintillator time-of-flight (TOF) system, a CsI(Tl) electromagnetic calorimeter (EMC), a superconducting solenoid providing a 1.0 T magnetic field and a muon counter. The charged-particle momentum resolution is 0.5% at a transverse momentum of 1${\,\unit{GeV}/c}$. The photon energy resolution in the EMC is 2.5% in the barrel and 5.0% in the end-caps at energies of 1 GeV. More details about the design and performance of the detector are given in Ref. [@Ablikim:2009aa]. A [geant4]{}-based [@geant4] simulation package, which includes the geometric description of the detector and the detector response, is used to determine signal detection efficiencies and to estimate potential backgrounds.
The production of the $\psi(3770)$, initial state radiation production of the $\psi(2S)$ and $J/\psi$, and the continuum processes $e^+e^-\rightarrow \tau^+\tau^-$ and $e^+e^-\rightarrow q\bar{q}$ ($q=u$, $d$ and $s$) are simulated with the event generator [kkmc]{} [@kkmc]. The known decay modes are generated by [evtgen]{} [@nima462_152] with the branching fractions set to the world-average values from the Particle Data Group [@pdg16], while the remaining unknown decay modes are modeled by [lundcharm]{} [@lundcharm]. The generation of simulated signals $D^0\rightarrow \bar{K}^0\pi^-e^+\nu_e$ incorporates knowledge of the form factors, which are obtained in this work. Analysis ======== The analysis makes use of both “single-tag” (ST) and “double-tag” (DT) samples of $D$ decays. The single-tag sample is reconstructed in one of the final states listed in Table \[tab:numST\], which are called the tag decay modes. Within each ST sample, a subset of events is selected where the other tracks in the event are consistent with the decay $D^0\rightarrow \bar{K}^0\pi^-e^+\nu_e$. This subset is referred to as the DT sample. For a specific tag mode $i$, the ST and DT event yields are expressed as $$N^{i}_{\rm ST}=2N_{D^0\bar{D}^0}\mathcal{B}^i_{\rm ST}\epsilon^i_{\rm ST}, ~~~ N^{i}_{\rm DT}=2N_{D^0\bar{D}^0}\mathcal{B}^i_{\rm ST}\mathcal{B}_{\rm SL}\epsilon^i_{\rm DT},$$ where $N_{D^0\bar{D}^0}$ is the number of $D^0\bar{D}^0$ pairs, $\mathcal{B}^i_{\rm ST}$ and $\mathcal{B}_{\rm SL}$ are the BFs of the $\bar{D}^0$ tag decay mode $i$ and the $D^0$ SL decay mode, $\epsilon^i_{\rm ST}$ is the efficiency for finding the tag candidate, and $\epsilon^i_{\rm DT}$ is the efficiency for simultaneously finding the tag $\bar{D}^0$ and the SL decay.
The BF for the SL decay is given by $$\mathcal{B}_{\rm SL}=\frac{N_{\rm DT}}{\sum_i N^{i}_{\rm ST}\times\epsilon^i_{\rm DT}/\epsilon^i_{\rm ST}}=\frac{N_{\rm DT}}{N_{\rm ST}\times\epsilon_{\rm SL}}, \label{eq:branch}$$ where $N_{\rm DT}$ is the total yield of DT events, $N_{\rm ST}$ is the total ST yield, and $\epsilon_{\rm SL}=(\sum_i N^{i}_{\rm ST}\times\epsilon^i_{\rm DT}/\epsilon^i_{\rm ST})/\sum_i N^{i}_{\rm ST}$ is the average efficiency of reconstructing the SL decay, weighted by the measured yields of tag modes in data. Selection criteria for photons, charged pions and charged kaons are the same as those used in Ref. [@prl118_112001]. To reconstruct a $\pi^0$ candidate in the decay mode $\pi^0\rightarrow \gamma\gamma$, the invariant mass of the candidate photon pair must be within $(0.115,~0.150)$ GeV$/c^2$. To improve the momentum resolution, a kinematic fit is performed to constrain the $\gamma\gamma$ invariant mass to the nominal $\pi^0$ mass [@pdg16]. The $\chi^2$ of this kinematic fit is required to be less than 20. The fitted $\pi^0$ momentum is used for reconstruction of the $\bar{D}^0$ tag candidates. The ST $\bar{D}^0$ decays are identified using the beam constrained mass, $$M_{\rm BC}=\sqrt{(\sqrt{s}/2)^2-|\vec {p}_{\bar D^0}|^2},$$ where $\vec {p}_{\bar D^0}$ is the momentum of the $\bar{D}^0$ candidate in the rest frame of the initial $e^+e^-$ system. To improve the purity of the tag decays, the energy difference $\Delta E=\sqrt{s}/2-E_{\bar{D}^0}$ for each candidate is required to be within approximately $\pm3\sigma_{\Delta E}$ around the fitted $\Delta E$ peak, where $\sigma_{\Delta E}$ is the $\Delta E$ resolution and $E_{\bar{D}^0}$ is the reconstructed $\bar{D}^0$ energy in the initial $e^+e^-$ rest frame. The explicit $\Delta E$ requirements for the three ST modes are listed in Table \[tab:numST\]. The distributions of the variable $M_{\rm BC}$ for the three ST modes are shown in Fig. \[fig:tag\_md0\]. 
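As an illustration of how the beam-constrained mass separates signal from combinatorial background, the $M_{\rm BC}$ formula above can be evaluated directly. The following is a minimal sketch in natural units ($c=1$); the candidate momentum used below is a hypothetical value, not taken from data:

```python
import math

def m_bc(sqrt_s, p_d0):
    """Beam-constrained mass M_BC = sqrt((sqrt(s)/2)^2 - |p_D0bar|^2),
    with energies and momenta in GeV (natural units, c = 1)."""
    return math.sqrt((sqrt_s / 2.0) ** 2 - p_d0 ** 2)

# A hypothetical D0bar candidate with |p| = 0.25 GeV/c at sqrt(s) = 3.773 GeV
# falls inside the signal regions listed in Table [tab:numST]:
print(round(m_bc(3.773, 0.25), 3))  # -> 1.87
```

Because the beam energy replaces the (less well measured) candidate energy, correctly reconstructed $\bar{D}^0$ candidates cluster near the nominal $D^0$ mass regardless of momentum resolution.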
Maximum likelihood fits to the $M_{\rm BC}$ distributions are performed. The signal shape is derived from the convolution of the MC-simulated signal template function with a double-Gaussian function to account for the resolution difference between MC simulation and data. An ARGUS function [@plb241_278] is used to describe the combinatorial background shape. For each tag mode, the ST yield is obtained by integrating the signal function over the $D^0$ signal region specified in Table \[tab:numST\]. In addition to the combinatorial background, there are also small wrong-sign (WS) peaking backgrounds in the ST $\bar{D}^0$ samples, which come from the doubly Cabibbo-suppressed decays $\bar{D}{}^0\rightarrow K^-\pi^+$, $K^-\pi^+\pi^0$ and $K^-\pi^+\pi^+\pi^-$. The $\bar{D}{}^0\rightarrow K^0_SK^-\pi^+$, $K^0_S\rightarrow \pi^+\pi^-$ decay shares the same final state as the WS background of $\bar{D}^0\rightarrow K^-\pi^+\pi^+\pi^-$. The sizes of these WS peaking backgrounds are estimated from simulation, and are subtracted from the corresponding ST yields. The background-subtracted ST yields are listed in Table \[tab:numST\]. The total ST yield summed over all three ST modes is $N_{\rm ST}=(2277.2\pm2.3)\times10^{3}$, where the uncertainty is statistical only.

Decay Mode & $\Delta E$ (GeV) & Signal Region (GeV/$c^2$) & $N_{\rm ST}$ ($\times 10^3$)\
$K^+\pi^-$ & \[$-$0.025, 0.028\] & \[$1.860, 1.875$\] & $540.2\pm0.8$\
$K^+\pi^-\pi^-\pi^+$ & \[$-$0.020, 0.023\] & \[$1.860, 1.875$\] & $701.1\pm1.7$\
$K^+\pi^-\pi^0$ & \[$-$0.044, 0.066\] & \[$1.858, 1.875$\] & $1035.9\pm1.3$\

\[tab:numST\]

![(Color online) The $M_{\rm BC}$ distributions for the three ST modes.
The points are data, the (red) solid curves are the projection of the sum of all fit components and the (blue) dashed curves are the projection of the background component of the fit.[]{data-label="fig:tag_md0"}](MD0.eps){width="\linewidth"} Candidates for the SL decay $D^0\rightarrow \bar{K}^0\pi^-e^+\nu_e$ are selected from the remaining tracks recoiling against the ST $\bar{D}^0$ mesons. The $\bar{K}^0$ meson is reconstructed as a $K^0_S$. The $K^0_S$ mesons are reconstructed from two oppositely charged tracks and the invariant mass of the $K^0_S$ candidate is required to be within $(0.485,~0.510)$ GeV$/c^2$. For each $K_S^0$ candidate, a fit is applied to constrain the two charged tracks to a common vertex, and this $K^0_S$ decay vertex is required to be separated from the interaction point by more than twice the standard deviation of the measured flight distance. A further requirement is that there must only be two other tracks in the event and that they must be of opposite charge. The electron hypothesis is assigned to the track that has the same charge as that of the kaon on the tag side. For electron particle identification (PID), the specific ionization energy losses measured by the MDC, the time of flight, and the shower properties from the electromagnetic calorimeter (EMC) are used to construct likelihoods for electron, pion and kaon hypotheses ($\mathcal{L}_e$, $\mathcal{L}_\pi$ and $\mathcal{L}_K$). The electron candidate must satisfy $\mathcal{L}_{e} > 0.001$ and $\mathcal{L}_e/(\mathcal{L}_e+\mathcal{L}_{\pi}+\mathcal{L}_K)>0.8$. Additionally, the EMC energy of the electron candidate has to be more than 70% of the track momentum measured in the MDC ($E/p>0.7c$). The energy loss due to bremsstrahlung is partially recovered by adding the energy of the EMC showers that are within 5$^{\circ}$ of the electron direction and not matched to other particles [@bes3electronSL]. 
The pion hypothesis is assigned to the remaining charged track, which must satisfy the same criteria as in Ref. [@prl118_112001]. The background from $D^0\rightarrow \bar{K}^0\pi^+\pi^-$ decays reconstructed as $D^0\rightarrow \bar{K}^0\pi^-e^+\nu_e$ is rejected by requiring the $\bar{K}^0\pi^-e^+$ invariant mass ($M_{\bar K^0\pi^-e^+}$) to be less than 1.80 GeV/$c^2$. The backgrounds associated with fake photons are suppressed by requiring the maximum energy of any unused photon ($E_{\gamma\,{\rm max}}$) to be less than 0.25 GeV. The energy and momentum carried by the neutrino are denoted by $E_{\rm miss}$ and $\vec{p}_{\rm miss}$, respectively. They are calculated from the energies and momenta of the tag ($E_{\bar{D}^0}$, $\vec{p}_{\bar{D}^0}$) and the measured SL decay products ($E_{\rm SL}=E_{\bar{K}^0}+E_{\pi^-}+E_{e^+}$, $\vec{p}_{\rm SL}=\vec{p}_{\bar{K}^0}+\vec{p}_{\pi^-}+\vec{p}_{e^+}$) using the relations $E_{\rm miss}=\sqrt{s}/2-E_{\rm SL}$ and $\vec{p}_{\rm miss}=\vec{p}_{D^0}-\vec{p}_{\rm SL}$ in the initial $e^+e^-$ rest frame. Here, the momentum $\vec{p}_{D^0}$ is given by $\vec{p}_{D^0}=-\hat{p}_{\rm tag}\sqrt{(\sqrt{s}/2)^2-m^2_{\bar{D}^0}},$ where $\hat{p}_{\rm tag}$ is the momentum direction of the ST $\bar{D}^0$ and $m_{\bar{D}^0}$ is the nominal $\bar{D}^0$ mass [@pdg16]. Information on the undetected neutrino is obtained by using the variable $U_{\rm miss}$ defined by $$U_{\rm miss} \equiv E_{\rm miss}-|\vec{p}_{\rm miss}| .$$ The $U_{\rm miss}$ distribution is expected to peak at zero for signal events. Figure \[fig:formfactor\](a) shows the $U_{\rm miss}$ distribution of the accepted candidate events for $D^0\rightarrow \bar{K}^{0}\pi^-e^+\nu_e$ in data. To obtain the signal yield, an unbinned maximum likelihood fit to the $U_{\rm miss}$ distribution is performed.
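The missing energy-momentum bookkeeping above can be written out directly. The following is an illustrative sketch (not BESIII analysis code), with all quantities in the $e^+e^-$ rest frame and natural units:

```python
import math

M_D0 = 1.86484  # nominal D0 mass in GeV/c^2 (world-average value)

def u_miss(sqrt_s, e_sl, p_tag_dir, p_sl):
    """U_miss = E_miss - |p_miss| for a semileptonic candidate.
    p_tag_dir: unit vector along the ST D0bar momentum;
    p_sl: 3-momentum of the K0bar pi- e+ system (GeV/c)."""
    e_miss = sqrt_s / 2.0 - e_sl
    p_d0_mag = math.sqrt((sqrt_s / 2.0) ** 2 - M_D0 ** 2)
    p_d0 = [-u * p_d0_mag for u in p_tag_dir]        # p_D0 = -p_tag direction
    p_miss = [a - b for a, b in zip(p_d0, p_sl)]
    return e_miss - math.sqrt(sum(c * c for c in p_miss))
```

For a perfectly reconstructed signal event the missing four-momentum is that of a massless neutrino, so $U_{\rm miss}=0$ by construction; resolution effects spread signal events into a narrow peak around zero.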
In the fit, the signal is described with a shape derived from the simulated signal events convolved with a Gaussian function, where the width of the Gaussian function is determined by the fit. The background is described by using the shape obtained from the MC simulation. The yield of DT $D^0\rightarrow \bar{K}^0\pi^-e^+\nu_e$ events is determined to be $3131\pm64({\rm stat.})$. The backgrounds from the non-$D^0$ and non-$K_S^0$ decays are estimated by examining the ST candidates in the $M_{\rm BC}$ sideband, defined in the range $(1.830, 1.855)$ GeV/$c^2$, and the SL candidates in the $K^0_S$ sidebands, defined in the ranges $(0.450, 0.475)$ GeV/$c^2$ or $(0.525, 0.550)$ GeV/$c^2$ in data, respectively. The yield of this type of background is estimated to be $19.4\pm5.3$. After subtracting these background events, we evaluate the number of the signal DT events to be $N_{\rm DT}=3112\pm64({\rm stat.})$. The detection efficiency $\varepsilon_{\rm SL}$ is estimated to be $(9.53\pm0.01)\%$, and the BF of $D^0\rightarrow \bar{K}^{0}\pi^-e^+\nu_e$ is determined as $\mathcal B({D^0\rightarrow \bar{K}^{0}\pi^-e^+\nu_e})=(1.434\pm0.029({\rm stat.}))\%$. Due to the double tag technique, the BF measurement is insensitive to the systematic uncertainty in the ST efficiency. The uncertainties due to the pion and electron tracking efficiencies are estimated to be 0.5% [@prd92_072012] and the uncertainties due to their PID efficiencies are estimated to be 0.5% [@prd92_072012], where the tracking and PID uncertainties are conservatively estimated to account for the possible differences of the momentum spectra in Ref. [@prd92_072012]. The uncertainty due to the $\bar{K}^0$ reconstruction is 1.5% [@prl118_112001]. The uncertainty due to the $E/p$ requirement is 0.4% [@prd94_032001]. The uncertainty associated with the $E_{\gamma\,{\rm \max}}$ requirement is estimated to be 0.4% by analyzing the DT $D^0\bar{D}^0$ events where both $D$ mesons decay to hadronic final states. 
The uncertainty due to the modeling of the signal in simulated events is estimated to be 0.8% by varying the input form factor parameters by $\pm 1\sigma$ as determined in this work. The uncertainty associated with the fit of the $U_{\rm miss}$ distribution is estimated to be 0.7% by varying the fitting ranges and the shapes which parametrize the signal and background. The uncertainty associated with the fit of the $M_{\rm BC}$ distributions used to determine $N_{\rm ST}$ is 0.5% and is evaluated by varying the bin size, fit range and background distributions. Further systematic uncertainties are assigned due to the statistical precision of the simulation (0.2%), the background subtraction (0.2%), and the input BF of the decay $K^0_S\rightarrow \pi^+ \pi^-$ (0.1%). The systematic uncertainty contributions are summed in quadrature, and the total systematic uncertainty on the BF measurement is 2.2% of the central value.

$D^0\rightarrow \bar{K}^{0}\pi^- e^+\nu_{e}$ Decay rate formalism
=================================================================

The differential decay width of $D^0\rightarrow \bar{K}^{0}\pi^- e^+\nu_{e}$ can be expressed in terms of five kinematic variables: the square of the invariant mass of the $\bar{K}^0\pi^-$ system ($m_{\bar{K}^0\pi^-}^2$), the square of the invariant mass of the $e^+\nu_e$ system ($q^2$), the angle between the $\bar{K}^0$ and the $D^0$ direction in the $\bar{K}^0\pi^-$ rest frame ($\theta_{\bar{K}^0}$), the angle between the $\nu_{e}$ and the $D^0$ direction in the $e^+\nu_e$ rest frame ($\theta_e$), and the acoplanarity angle between the two decay planes ($\chi$).
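Of the five variables listed above, $m^2_{\bar{K}^0\pi^-}$ and $q^2$ are simple Lorentz invariants built from pairwise sums of four-momenta. A minimal sketch of how they are formed (the four-vectors below are hypothetical, chosen only for illustration):

```python
def minv2(p):
    """Invariant mass squared of a four-vector (E, px, py, pz), natural units."""
    e, px, py, pz = p
    return e * e - px * px - py * py - pz * pz

def add4(p, q):
    """Component-wise sum of two four-vectors."""
    return tuple(a + b for a, b in zip(p, q))

# Hypothetical four-momenta (GeV) of the K0bar and pi- in some frame:
p_k0 = (0.75, 0.10, 0.00, 0.55)
p_pi = (0.30, -0.05, 0.10, 0.20)
m2_kpi = minv2(add4(p_k0, p_pi))   # plays the role of m^2_{K0bar pi-}
```

The two helicity angles and $\chi$ additionally require boosts into the $\bar{K}^0\pi^-$ and $e^+\nu_e$ rest frames, which are omitted here for brevity.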
Neglecting the mass of the $e^+$, the differential decay width of $D^0\rightarrow \bar{K}^{0}\pi^- e^+\nu_{e}$ can be expressed as [@prd46_5040] $$\begin{aligned} d^5\Gamma&=&\frac{G^2_F|V_{cs}|^2}{(4\pi)^6m^3_{D^0}}X\beta\, \mathcal{I}(m_{\bar{K}^0\pi^-}^2, q^2, \theta_{\bar{K}^0}, \theta_e, \chi) \nonumber \\ && \times\, dm_{\bar{K}^0\pi^-}^2\,dq^2\,d\cos\theta_{\bar{K}^0}\,d\cos\theta_e\,d\chi,\end{aligned}$$ where $X=p_{\bar{K}^{0}\pi^-}m_{D^0}$ and $\beta=2p^{*}/m_{\bar{K}^{0}\pi^-}$; here $p_{\bar{K}^{0}\pi^-}$ is the momentum of the $\bar{K}^{0}\pi^-$ system in the $D^0$ rest frame and $p^*$ is the momentum of the $\bar{K}^{0}$ in the $\bar{K}^{0}\pi^-$ rest frame. The Fermi coupling constant is denoted by $G_F$. The angular dependence of the decay density $\mathcal{I}$ is given by $$\begin{aligned} \mathcal{I}&=&\mathcal{I}_1+\mathcal{I}_2\cos 2\theta_e+\mathcal{I}_3\sin^2\theta_e\cos 2\chi+\mathcal{I}_4\sin 2\theta_e\cos\chi \nonumber\\ &+&\mathcal{I}_5\sin\theta_e\cos\chi+\mathcal{I}_6\cos\theta_e+\mathcal{I}_7\sin\theta_e\sin\chi \nonumber \\ &+&\mathcal{I}_8\sin 2\theta_e\sin\chi+\mathcal{I}_9\sin^2\theta_e\sin 2\chi, \label{eq:Ifunc}\end{aligned}$$ where $\mathcal{I}_{1,...,9}$ depend on $m_{\bar{K}^{0}\pi^-}^2$, $q^2$ and $\theta_{\bar{K}^0}$ [@prd46_5040] and can be expressed in terms of three form factors, $\mathcal{F}_{1,2,3}$. The form factors can be expanded into partial waves, including $S$-wave ($\mathcal{F}_{10}$), $P$-wave ($\mathcal{F}_{i1}$) and $D$-wave ($\mathcal{F}_{i2}$) components, to show their explicit dependences on $\theta_{\bar{K}^0}$. Analyses of the decay $D^+\rightarrow K^+\pi^-e^+\nu_e$ with much higher statistics, performed by the BABAR [@prd83_072001] and BESIII [@prd94_032001] collaborations, do not observe a $D$-wave component, and hence it is not considered in this analysis.
Consequently, the form factors can be written as $$\mathcal{F}_1=\mathcal{F}_{10}+\mathcal{F}_{11}\cos\theta_{\bar{K}^0},\quad \mathcal{F}_2=\frac{1}{\sqrt{2}}\mathcal{F}_{21},\quad \mathcal{F}_3=\frac{1}{\sqrt{2}}\mathcal{F}_{31}, \label{eq:F1}$$ where $\mathcal{F}_{11}$, $\mathcal{F}_{21}$ and $\mathcal{F}_{31}$ are related to the helicity basis form factors $H_{0,\pm}(q^2)$ [@prd46_5040; @RevModPhys67_893]. The helicity form factors can in turn be related to the two axial-vector form factors, $A_1(q^2)$ and $A_2(q^2)$, as well as the vector form factor $V(q^2)$. The $A_{1,2}(q^2)$ and $V(q^2)$ are all taken in the simple pole form $A_{1,2}(q^2)=A_{1,2}(0)/(1-q^2/M^2_A)$ and $V(q^2)=V(0)/(1-q^2/M^2_V)$, with pole masses $M_V=M_{D_s^*(1^-)}=2.1121$ GeV/$c^2$ [@pdg16] and $M_A=M_{D_s^*(1^+)}=2.4595$ GeV/$c^2$ [@pdg16]. The form factor $A_1(q^2)$ is common to all three helicity amplitudes. Therefore, it is natural to define the two form factor ratios $r_V=V(0)/A_1(0)$ and $r_2=A_2(0)/A_1(0)$ at $q^2=0$. The amplitude of the $P$-wave resonance $\mathcal{A}(m)$ is expressed as [@prd94_032001; @prd83_072001] $$\mathcal{A}(m)=\frac{m_0\Gamma_0(p^*/p^*_0)}{m_0^2-m^2_{\bar{K}^{0}\pi^-}-im_0\Gamma(m_{\bar{K}^{0}\pi^-})}\frac{B(p^*)}{B(p^*_0)},$$ where $B(p)=\frac{1}{\sqrt{1+R^2p^2}}$ with $R=3.07$ GeV$^{-1}$ [@prd94_032001] and $\Gamma \left(m_{\bar{K}^{0}\pi^-}\right)=\Gamma_0\left(\frac{p^*}{p^*_0}\right)^3\frac{m_0}{m_{\bar{K}^{0}\pi^-}}\left[\frac{B\left(p^*\right)}{B\left(p^*_0\right)}\right]^2$, with $p^*_0$ the momentum of the $\bar{K}^0$ at the pole mass $m_0$ of the resonance. The normalization constant is $\alpha=\sqrt{3\pi \mathcal{B}_{K^*}/(p^*_0\Gamma_0)}$, with $\mathcal{B}_{K^*}=\mathcal{B}(K^{*}(892)^-\rightarrow \bar{K}^0\pi^-)$.
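Under the simple-pole ansatz above, the $q^2$ dependence of the form factor ratios follows directly from the two pole masses. A small illustrative sketch (the point $q^2=0.5$ GeV$^2$ is arbitrary, and $A_1(0)$ is normalized to 1 for the comparison):

```python
def pole_ff(f0, q2, m_pole):
    """Simple-pole form factor f(q^2) = f(0) / (1 - q^2 / M_pole^2)."""
    return f0 / (1.0 - q2 / m_pole ** 2)

M_V, M_A = 2.1121, 2.4595   # pole masses in GeV/c^2, as quoted in the text
r_V = 1.46                  # fitted central value of V(0)/A1(0)
q2 = 0.5                    # GeV^2, an arbitrary illustrative point
# Because M_V < M_A, the ratio V(q^2)/A1(q^2) grows with q^2:
ratio = pole_ff(r_V, q2, M_V) / pole_ff(1.0, q2, M_A)
```

This illustrates why the ratios $r_V$ and $r_2$ are quoted specifically at $q^2=0$: away from that point the pole factors no longer cancel.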
The $S$-wave related $\mathcal{F}_{10}$ is described by [@prd94_032001; @prd83_072001] $$\mathcal{F}_{10}=p_{\bar{K}^{0}\pi^-}m_{D^0}\frac{1}{1-\frac{q^2}{m^2_A}}\mathcal{A}_S(m),$$ where the term $\mathcal{A}_S(m)$ corresponds to the mass-dependent $S$-wave amplitude, and the same expression $\mathcal{A}_S(m)=r_SP(m)e^{i\delta_S(m)}$ as in Refs. [@prd94_032001; @prd83_072001] is adopted, in which $P(m)=1+xr_S^{(1)}$ with $x=\sqrt{\left(\frac{m}{m_{\bar{K}^0}+m_{\pi^-}}\right)^2-1}$, and $\delta_S(m)=\delta^{1/2}_{\rm BG}$ with $\cot(\delta^{1/2}_{\rm BG})=1/(a^{1/2}_{\rm S,BG}p^*)+b^{1/2}_{\rm S,BG}p^*/2$. ![ (Color online) (a) Fit to the $U_{\rm miss}$ distribution of the SL candidate events. Projections onto the five kinematic variables (b) $M_{\bar{K}^0\pi^-}$, (c) $q^2$, (d) $\cos\theta_{e^+}$, (e) $\cos\theta_{\bar{K}^0}$, and (f) $\chi$ for $D^0\rightarrow \bar{K}^0\pi^-e^+\nu_e$. The dots with error bars are data, the (red) curves/histograms are the fit results, and the shaded histograms are the simulated background.[]{data-label="fig:formfactor"}](FF.eps){width="\linewidth"} An unbinned five-dimensional maximum likelihood fit to the distributions of $m_{\bar{K}^0\pi^-}$, $q^2$, $\cos\theta_{e^+}$, $\cos\theta_{\bar{K}^0}$, and $\chi$ for the $D^0\rightarrow \bar{K}^{0}\pi^- e^+\nu_{e}$ events within $-0.10<U_{\rm miss}<0.15$ GeV is performed in a similar manner to Ref. [@prd94_032001]. The projected distributions of the fit onto the fitted variables are shown in Figs. \[fig:formfactor\](b-f). In this fit, the parameters $r_V$, $r_2$, $m_0$, $\Gamma_0$, $r_S$ and $a^{1/2}_{\rm S,BG}$ are left free, while $r_S^{(1)}$ and $b^{1/2}_{\rm S,BG}$ are fixed to $0.08$ and $-0.81$ (GeV/$c$)$^{-1}$, respectively, due to the limited statistics, based on the analysis of $D^+\rightarrow K^+\pi^-e^+\nu_e$ at BESIII [@prd94_032001]. The fit results are summarized in Table \[tab:FitResults\].
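The $S$-wave phase parametrization above can be evaluated numerically. A sketch using the fitted $a^{1/2}_{\rm S,BG}$ and the fixed $b^{1/2}_{\rm S,BG}$ values quoted in this work (the $p^*$ value below is illustrative):

```python
import math

def delta_s(p_star, a=1.58, b=-0.81):
    """S-wave background phase delta_BG^{1/2}, from the effective-range form
    cot(delta) = 1/(a p*) + b p*/2, with p* in GeV/c."""
    cot = 1.0 / (a * p_star) + b * p_star / 2.0
    return math.atan2(1.0, cot)   # maps cot(delta) to delta in (0, pi)

phase = delta_s(0.1)  # radians, at an illustrative p* of 0.1 GeV/c
```

The `atan2` form keeps the phase in $(0,\pi)$ even where the cotangent changes sign, which a naive `atan(1/cot)` would not.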
The goodness of fit is estimated by using the $\chi^2/{\rm ndof}$, where ${\rm ndof}$ denotes the number of degrees of freedom. The $\chi^2$ is calculated from the comparison between the measured and expected numbers of events in the five-dimensional space of the kinematic variables $m_{\bar{K}^0\pi^-}$, $q^2$, $\cos\theta_{e^{+}}$, $\cos\theta_{\bar{K}^0}$, and $\chi$, which are initially divided into 2, 2, 3, 3, and 3 bins, respectively. The bins are set with different sizes, so that they contain sufficient numbers of signal events for a credible $\chi^2$ calculation. Each five-dimensional bin is required to contain at least ten events; otherwise, it is combined with an adjacent bin. The $\chi^2$ value is calculated as $$\chi^2=\displaystyle{\sum_i^{\rm N_{\rm bin}}\frac{(n_i^{\rm data}-n_i^{\rm fit})^2}{n_i^{\rm fit}}},$$ where $N_{\rm bin}$ is the number of bins, $n_i^{\rm data}$ denotes the measured number of events in the $i$-th bin, and $n_i^{\rm fit}$ denotes the expected number of events in the $i$-th bin. The ${\rm ndof}$ is the number of bins minus the number of fit parameters minus 1. The $\chi^2/{\rm ndof}$ obtained is 96.3/98, which indicates a good fit quality. The fit procedure is validated using a large simulated sample of inclusive events, where the pull distribution of each fitted parameter is found to be consistent with a normal distribution.
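The goodness-of-fit computation described above reduces to a short loop over the merged five-dimensional bins. An illustrative sketch with made-up bin contents (not the actual bins of this analysis):

```python
def chi2_ndof(n_data, n_fit, n_params):
    """Pearson chi^2 summed over bins, with
    ndof = (number of bins) - (number of fit parameters) - 1."""
    chi2 = sum((d - f) ** 2 / f for d, f in zip(n_data, n_fit))
    return chi2, len(n_data) - n_params - 1

# Made-up bin contents; 6 floated parameters, as in the nominal fit:
c2, ndof = chi2_ndof([12, 10, 8, 11, 9, 10, 13, 10], [10] * 8, 6)
```

The $\geq 10$ events-per-bin requirement quoted above is what justifies the Gaussian approximation underlying this Pearson form.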
Variable & Value\
$M_{K^{*}(892)^-}$ (MeV/$c^2$) & $891.7\pm0.6\pm0.2$\
$\Gamma_{K^{*}(892)^-}$ (MeV) & $48.4\pm1.5\pm0.5$\
$r_S$ (GeV$^{-1}$) & $-11.21\pm1.03\pm1.15$\
$a^{1/2}_{\rm S,BG}$ (GeV/$c$)$^{-1}$ & $1.58\pm0.22\pm0.18$\
$r_V$ & $1.46\pm0.07\pm0.02$\
$r_2$ & $0.67\pm0.06\pm0.01$\

\[tab:FitResults\]

Parameter & $E_{\gamma\,{\rm max}}$ & $E/p$ & $f$ & ${\rm Tracking\&PID}$ & $D$-wave & $r_S^{(1)}$ & $b^{1/2}_{\rm S,BG}$ & ${\rm Total}$\
$M_{K^{*}(892)^-}$ & 0.00 & 0.01 & 0.00 & 0.00 & 0.01 & 0.01 & 0.01 & 0.02\
$\Gamma_{K^{*}(892)^-}$ & 0.52 & 0.95 & 0.23 & 0.04 & 0.12 & 0.08 & 0.12 & 1.12\
$r_S$ & 4.45 & 1.85 & 2.58 & 0.24 & 0.76 & 8.57 & 1.26 & 10.27\
$a^{1/2}_{\rm S,BG}$ & 7.66 & 3.52 & 1.36 & 0.26 & 0.87 & 0.11 & 7.78 & 11.59\
$r_V$ & 0.34 & 0.83 & 0.37 & 0.57 & 0.12 & 0.29 & 0.42 & 1.21\
$r_2$ & 0.95 & 0.27 & 0.30 & 0.02 & 0.27 & 0.03 & 0.60 & 1.22\
$f_{K^{*}(892)^-}$ & 0.52 & 0.22 & 0.28 & 0.03 & 0.10 & 0.07 & 0.16 & 0.66\
$f_{S-{\rm wave}}$ & 8.89 & 3.81 & 4.72 & 0.54 & 1.81 & 1.09 & 2.54 & 11.27\

\[tab:Syserr\]

The fit fraction of each component is determined as the ratio of the decay intensity of the specific component to the total decay intensity. The fractions of the $S$-wave and $P$-wave ($K^{*}(892)^-$) components are found to be $f_{S-{\rm wave}}=(5.51\pm0.97({\rm stat.}))\%$ and $f_{K^{*}(892)^-}=(94.52\pm0.97({\rm stat.}))\%$, respectively. The systematic uncertainties of the fitted parameters and of the fractions of the $S$-wave and $K^{*}(892)^-$ components are defined as the differences between the fit results under nominal conditions and those obtained after changing a variable or a condition by an amount corresponding to the estimated uncertainty in its determination.
The systematic uncertainties due to the $E_{\gamma\,{\rm max}}$ and $E/p$ requirements are estimated by using the alternative requirements $E_{\gamma\,{\rm max}}<0.20$ GeV and $E/p>0.75$, respectively. The systematic uncertainty due to the background fraction ($f$) is estimated by varying its value by $\pm 10\%$, which is the difference of the background fractions in the selected ST $\Delta E$ regions between data and MC simulation. The systematic uncertainties arising from the requirements placed on the charged pion, the electron and the $K^0_S$ are estimated by varying the pion/electron tracking and PID efficiencies, and the $K_S^0$ detection efficiency, by $\pm0.5\%$, $\pm0.5\%$ and $\pm1.5\%$, respectively. The systematic uncertainty due to neglecting a possible contribution from the $D$-wave component is estimated by incorporating the $D$-wave component in Eq. (\[eq:F1\]). The systematic uncertainties in the fixed parameters $r_S^{(1)}$ and $b^{1/2}_{\rm S,BG}$ are estimated by varying their nominal values by $\pm1\sigma$. All of the variations mentioned above result in differences of the fitted parameters and the extracted fractions of the $S$-wave and $K^{*}(892)^-$ components from those obtained under the nominal conditions. These differences are assigned as the systematic uncertainties and summarized in Table \[tab:Syserr\], where the total systematic uncertainty is obtained by adding all contributions in quadrature.

Summary
=======

In summary, using $2.93~\mathrm{fb}^{-1}$ of data collected at $\sqrt{s}=3.773$ GeV by the BESIII detector, the absolute BF of $D^0\rightarrow \bar{K}^0\pi^-e^+\nu_{e}$ is measured to be $\mathcal{B}(D^0\rightarrow \bar{K}^0\pi^-e^+\nu_{e})=(1.434\pm0.029({\rm stat.})\pm0.032({\rm syst.}))\%$, which is significantly more precise than the current world-average value [@pdg16].
The first analysis of the dynamics of $D^0\rightarrow \bar{K}^0\pi^-e^+\nu_{e}$ decay is performed and the $S$-wave component is observed with a fraction $f_{S-{\rm wave}}=(5.51\pm0.97({\rm stat.})\pm0.62({\rm syst.}))\%$, leading to $\mathcal{B}[D^0\rightarrow (\bar{K}^0\pi^-)_{S-{\rm wave}}e^+\nu_e]=(7.90\pm1.40({\rm stat.})\pm0.91({\rm syst.}))\times 10^{-4}$. The $P$-wave component is observed with a fraction of $f_{K^{*}(892)^-}=(94.52\pm0.97({\rm stat.})\pm0.62({\rm syst.}))\%$ and the corresponding BF is given as $\mathcal{B}(D^0\rightarrow K^{*-}e^+\nu_e)=(2.033\pm0.046({\rm stat.})\pm0.047({\rm syst.}))\%$. It is consistent with, and more precise than, the result from the CLEO collaboration [@prl95_181802]. In addition, the form factor ratios of the $D^0\rightarrow K^{*}(892)^-e^+\nu_{e}$ decay are determined to be $r_V=1.46\pm0.07({\rm stat.})\pm0.02({\rm syst.})$ and $r_2=0.67\pm0.06({\rm stat.})\pm0.01({\rm syst.})$. They are consistent with the measurements from the FOCUS collaboration [@plb607_67] using the decay $D^0\rightarrow \bar{K}^0\pi^-\mu^+\nu_{\mu}$ within uncertainties, but with significantly improved precision. The BESIII collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11335008, 11425524, 11505010, 11625523, 11635010, 11735014, 11775027; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics (CCEPP); Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts Nos. U1532257, U1532258, U1732263; CAS Key Research Program of Frontier Sciences under Contracts Nos. 
QYZDJ-SSW-SLH003, QYZDJ-SSW-SLH040; 100 Talents Program of CAS; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; German Research Foundation DFG under Contracts Nos. Collaborative Research Center CRC 1044, FOR 2359; Istituto Nazionale di Fisica Nucleare, Italy; Koninklijke Nederlandse Akademie van Wetenschappen (KNAW) under Contract No. 530-4CDP03; Ministry of Development of Turkey under Contract No. DPT2006K-120470; National Science and Technology fund; The Swedish Research Council; U. S. Department of Energy under Contracts Nos. DE-FG02-05ER41374, DE-SC-0010118, DE-SC-0010504, DE-SC-0012069; University of Groningen (RuG) and the Helmholtzzentrum fuer Schwerionenforschung GmbH (GSI), Darmstadt. This paper is also supported by Beijing municipal government under Contract Nos. KM201610017009, 2015000020124G064, CIT&TCD201704047. [99]{} M. Antonelli $et$ $al$., Phys. Rep. [**494**]{}, 197 (2010). N. Cabibbo, Phys. Rev. Lett. [**10**]{}, 531 (1963); M. Kobayashi and T. Maskawa, Prog. Theor. Phys. [**49**]{}, 652 (1973). Throughout this paper, mesons with no explicit charge denote either a charged or neutral meson; the charge-conjugate modes are implied, unless explicitly noted. M. R. Shepherd $et$ $al$. (CLEO Collaboration), Phys. Rev. D [**74**]{}, 052001 (2006); Phys. Rev. D [**81**]{}, 112001 (2006). P. del Amo Sanchez $et$ $al$. (BABAR Collaboration), Phys. Rev. D [**83**]{}, 072001 (2011). M. Ablikim $et$ $al$. (BESIII Collaboration), Phys. Rev. D [**94**]{}, 032001 (2016). S. Dobbs $et$ $al$. (CLEO Collaboration), Phys. Rev. Lett. [**110**]{}, 131802 (2013). M. Ablikim $et$ $al$. (BESIII Collaboration), Phys. Rev. D [**92**]{}, 071101(R) (2015). M. Ablikim $et~al.$ (BESIII Collaboration), Nucl. Instrum. Methods Phys. Res., Sect. A [**614**]{}, 345 (2010). S. Agostinelli $et~al.$ (GEANT4 Collaboration), Nucl. Instrum. Methods Phys. Res., Sect. A [**506**]{}, 250 (2003). S. Jadach, B. F. L. Ward and Z. Was, Comput. Phys. Commun. 
[**130**]{}, 260 (2000); Phys. Rev. D [**63**]{}, 113009 (2001). D. J. Lange, Nucl. Instrum. Methods Phys. Res., Sect. A [**462**]{}, 152 (2001); R. G. Ping, Chin. Phys. C [**32**]{}, 599 (2008). M. Tanabashi $et~al.$ (Particle Data Group), Phys. Rev. D [**98**]{}, 030001 (2018). J. C. Chen, G. S. Huang, X. R. Qi, D. H. Zhang and Y. S. Zhu, Phys. Rev. D [**62**]{}, 034003 (2000). M. Ablikim $et~al.$ (BESIII Collaboration), Phys. Rev. Lett. [**118**]{}, 112001 (2017). H. Albrecht $et~al.$ (ARGUS Collaboration), Phys. Lett. B [**241**]{}, 278 (1990). M. Ablikim $et~al.$ (BESIII Collaboration), Phys. Rev. D [**92**]{}, 112008 (2015); Phys. Rev. Lett. [**115**]{}, 221805 (2015). M. Ablikim $et~al.$ (BESIII Collaboration), Phys. Rev. D [**92**]{}, 072012 (2015). C. L. Y. Lee, M. Lu and M. B. Wise, Phys. Rev. D [**46**]{}, 5040 (1992). J. D. Richman and P. R. Burchat, Rev. Mod. Phys. [**67**]{}, 893 (1995). T. E. Coan $et~al.$ (CLEO Collaboration), Phys. Rev. Lett. [**95**]{}, 181802 (2005). J. M. Link $et~al.$ (FOCUS Collaboration), Phys. Lett. B [**607**]{}, 67 (2005).
--- abstract: | [ We present analytical results on the two-loop anomalous dimensions of currents for baryons containing two heavy quarks, $J = [Q^{iT}C\Gamma\tau Q^j]\Gamma^{'}q^k\varepsilon_{ijk}$, with arbitrary Dirac matrices $\Gamma$ and $\Gamma^{'}$, in the framework of NRQCD in the leading order over both the relative velocity of the heavy quarks and the inverse heavy quark mass. It is shown that in this approximation the anomalous dimensions do not depend on the Dirac structure of the current under consideration. ]{} ---

[**Two-loop anomalous dimensions for currents of baryons with two heavy quarks in NRQCD.** ]{}\
V.V. Kiselev, A.I. Onishchenko\
[State Research Center of Russia “Institute for High Energy Physics”]{}\
[*Protvino, Moscow region, 142284 Russia*]{}\
Fax: +7-095-2302337\
E-mail: [email protected]

Introduction
============

A necessary feature of QCD applications to various fields of particle physics is the study of the scale dependence of operators, as governed by the renormalization group (RG). In the present paper we investigate the RG properties of currents for baryons with two heavy quarks in the framework of Non-Relativistic Quantum Chromodynamics (NRQCD) [@1],[@2] and its dimensionally regularized version [@3]. In the two-loop approximation we analytically calculate the anomalous dimensions of the currents associated with the ground-state baryons containing two heavy quarks[^1]. The dependence of QCD operators and matrix elements on the relative velocity $v$ of the heavy quarks inside the hadron, as well as on the inverse heavy quark mass $1/M_Q$, can be systematically treated in the framework of justified effective expansions in QCD. So, we apply the expansion in $1/M_Q$, as developed in Heavy Quark Effective Theory (HQET) [@4; @5; @6], for operators corresponding to the interaction of the heavy quarks with the light quark.
For the heavy-heavy subsystem, the power tool is the NRQCD expansion in both the relative velocity and the inverse mass. Here we consider the leading order in both $v$ and $1/M_Q$, which can serve as a good approximation for the anomalous dimensions of the currents under consideration. The anomalous dimensions of composite operators can be used in QCD sum rules [@7], which allow one to evaluate the masses of these baryons together with their residues in terms of basic non-perturbative QCD parameters. For example, when calculating the two-point correlators of baryonic currents in the Operator Product Expansion (OPE) in NRQCD, we have to insert the anomalous dimensions, obtained here in the static approximation, to relate the result to QCD. This procedure is caused by the different ultraviolet behaviour of loop corrections in full QCD and in the effective theory. The latter contains divergences absent in QCD, since it is constructed so as to provide the correct infrared properties of local QCD fields. The regularized quantities of the effective theory depend on the normalization point through the RG equations with the corresponding anomalous dimensions. The ambiguity in the initial conditions of these differential equations is eliminated by matching to full QCD at a scale, which is generally chosen to be the heavy quark mass. The latter procedure means that, using the effective theory, we can systematically take into account virtualities greater than the heavy quark mass. So, the knowledge of the two-loop anomalous dimensions is also important when one discusses the matching of the baryonic currents, obtained in this approximation, with the corresponding currents in full QCD. Our analysis in this paper is close to that presented in [@9], devoted to baryons with a single heavy quark[^2]. While being very similar, these analyses also have some differences, which we would like to stress.
The main technical obstacle in the calculations is that the kinetic term is thought to be a necessary ingredient of the quark propagator for the evaluation of RG quantities in NRQCD, unlike in HQET, $$\frac{1}{k_0+i\varepsilon}\longrightarrow \frac{1}{k_0-\frac{{\bf k}^2}{2m}+i\varepsilon}.$$ If a hard cut-off is used $(\mu\ll m)$, we can easily see that such NRQCD calculations can be performed just like in HQET, since $k^0\gg {\bf k}^2/m$ in the ultraviolet regime. However, if dimensional regularization is used, the high energy modes $(k > m)$ are not explicitly suppressed and they give non-vanishing contributions. This can be seen from the fact that the behavior of the NRQCD propagator changes at energies greater than the mass. In spite of this, one would like to use dimensional regularization because it keeps all of the QCD symmetries and, moreover, the calculations are technically simpler. The difference between NRQCD and HQET can be explicitly seen by considering the effective Lagrangian, derived at tree level in the $1/m$-expansion: $$\begin{aligned} \label{nrqcd} {\cal L}_{\rm NRQCD} &=& \psi^\dagger \left(i D^0+\frac{\boldsymbol{D}^2}{2 m} \right)\psi + \frac{1}{8 m^3}\,\psi^\dagger\boldsymbol{D}^4\psi- \frac{g_s}{2 m}\,\psi^\dagger\boldsymbol{\sigma}\cdot \boldsymbol{B}\psi \nonumber\\ &&\hspace*{-1.5cm} -\,\frac{g_s}{8 m^2}\,\psi^\dagger\left(\boldsymbol{D}\cdot\boldsymbol{E}- \boldsymbol{E}\cdot\boldsymbol{D}\right)\psi- \frac{i g_s}{8 m^2}\,\psi^\dagger\boldsymbol{\sigma}\cdot\left( \boldsymbol{D}\times\boldsymbol{E}-\boldsymbol{E}\times\boldsymbol{D}\right) \psi \nonumber\\[0.4cm] &&\hspace*{-1.5cm} + O(1/m^3)+\,\,\mbox{antiquark terms}\,+ {\cal L}_{\rm light}\end{aligned}$$ For a single heavy quark interacting at low virtualities $D\sim \Lambda_{QCD}$, the kinetic term is suppressed and can be treated perturbatively. This results in the HQET prescription for the heavy quark propagator.
However, in the heavy-heavy system there is the Coulomb-like interaction, wherein $D^0\sim \boldsymbol{D}^2/m\sim\alpha_s^2 m$. Therefore, we must include the kinetic term in the initial “free” Lagrangian of NRQCD, so that the loop corrections in $\alpha_s$ turn out to be different in HQET and NRQCD. Nevertheless, the physical reason to distinguish these effective theories is still the Coulomb-like corrections near the production threshold, which should have no influence on the ultraviolet properties. We note that the question is, in a sense, analogous to that in the theory of massive gauge fields in spontaneously broken theories, where the explicit introduction of a mass seems to destroy the good RG properties of massless vector fields (the question was resolved by appropriate redefinitions of fields, owing to the surviving gauge invariance). Several authors have addressed a similar problem of NRQCD in connection with matching calculations [@10], and recently an appealing solution has been proposed [@11]: it is claimed that the matching in NRQCD using dimensional regularization should be performed just as in HQET, namely the kinetic term must be treated as a perturbation vertex: $$\frac{1}{k_0-\frac{{\bf k}^2}{2m}+i\varepsilon} = \frac{1}{k_0}+\frac{{\bf k}^2}{2m(k_0)^2}+...\label{2}$$ The derivation is based on an appropriate redefinition of the heavy quark field [@11]: $$\begin{aligned} Q&\longrightarrow & [1-\frac{D_{\perp}^2}{8m^2}-\frac{g\sigma_{\alpha\beta}G^{\alpha\beta}}{16m^2}+ \frac{D_{\perp}^{\alpha}(iv\cdot D)D_{\alpha\perp}}{16m^3}+\frac{gv_{\lambda}D_{\perp\alpha}G^{\alpha\lambda}} {16m^3}\\ && -i\frac{\sigma_{\alpha\beta}D_{\perp}^{\alpha}(iv\cdot D) D_{\perp}^{\beta}}{16m^3}-i\frac{gv_{\lambda}\sigma_{\alpha\beta}D_{\perp}^{ \alpha}G^{\beta\lambda}}{16m^3}]Q,\nonumber\end{aligned}$$ where the $\sigma$ matrices are projected by $P_v\sigma P_v$, $P_v = \frac{1+\hat v}{2}$ and $D_{\perp}^{\mu} = D^{\mu}-v^{\mu}v\cdot D$.
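The expansion (\[2\]) is simply the geometric series in the kinetic-term insertion ${\bf k}^2/2m$. A minimal symbolic check (a Python/sympy sketch of ours, with `k2` standing for ${\bf k}^2$; the $i\varepsilon$ prescription is irrelevant for the algebra):

```python
import sympy as sp

k0, k2, m = sp.symbols('k0 k2 m', positive=True)

x = k2/(2*m)                      # the kinetic-term insertion k**2/(2m)
exact = 1/(k0 - x)                # NRQCD static propagator (i*eps dropped)

# First three terms of the geometric series, Eq. (2)
approx = sum(x**n / k0**(n + 1) for n in range(3))

# The remainder starts at order (1/m)**3
diff = sp.simplify(exact - approx)
assert sp.limit(diff * m**2, m, sp.oo) == 0
```

Each further insertion of the kinetic vertex costs one more power of $1/m$, which is what makes the prescription consistent with the $1/m$-expansion of the Lagrangian.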
This substitution converts the HQET Lagrangian into the NRQCD one, so that the loop renormalization of perturbative terms is the same. Here, we propose to use the same prescription for the heavy quark propagator as in (\[2\]), not only in the matching procedure, but also for the calculation of the anomalous dimensions of the NRQCD currents in the $\overline{\rm MS}$ renormalization scheme. To support this point let us consider the matching procedure in some detail. The matching condition can be written down as $$Z_{J,QCD}^{-1}Z_{2,QCD}^{on-shell}Z_{V,QCD}^{h.m.}\Gamma_{QCD}^{'} = C_0 Z_{2,NRQCD}^{on-shell}Z_{J,NRQCD}^{-1}\Gamma_{NRQCD}^{(0)},\label{3}$$ $$Z_{V,QCD}^{h.m}\Gamma_{QCD}^{'} = \Gamma_{QCD}^{(0)},$$ where $Z_V^{h.m.}$ denotes the poles associated with the hard momenta region for the bare one-particle irreducible vertex $\Gamma_{QCD}^{(0)}$ in full QCD, $Z_{J,QCD}$ and $Z_{J,NRQCD}$ are the renormalization constants of the currents in QCD and NRQCD, respectively, $Z_{2,QCD}$ and $Z_{2,NRQCD}$ include the renormalization of wave functions, and, finally, $\Gamma_{NRQCD}^{(0)}$ denotes the bare vertex in NRQCD. At this stage we use prescription (\[2\]) for treating the heavy quark propagators. On the other hand, one can write the following identity $$\Gamma_{QCD} = Z_{J,QCD}^{-1}Z_{2,QCD}^{\overline{\rm MS}}Z_{V,QCD}^{h.m.}Z_{V,QCD}^{s.m.}\Gamma_{QCD}^{''}, \label{4}$$ where we have collected all divergences in $Z$-factors and use the convention of (\[2\]) for the expansion of heavy quark propagators in powers of the kinetic term. $Z_{V,QCD}^{s.m.}$ denotes the contribution from the small momenta region. When calculating the contribution from the small momenta, we have to set the external legs off shell, in order to exclude the contribution from the infrared region, as was done in the matching.
To proceed further, let us introduce the following definitions $$\begin{aligned} Z_{2,QCD}^{on-shell} &=& Z_{2,QCD}^{\overline{\rm MS}}Z_{inf.r.},\\ Z_{2,NRQCD}^{on-shell} &=& Z_{2,NRQCD}^{\overline{\rm MS}}Z_{inf.r.},\end{aligned}$$ where $Z_{inf.r.}$ is the contribution to the wave-function renormalization from the infrared region, which is the same in both theories. Using these notations and the fact that $Z_{2,NRQCD}^{on-shell} = 1$, we can rewrite Eq. (\[4\]) as $$\Gamma_{QCD} = Z_{J,QCD}^{-1}Z_{2,QCD}^{on-shell}Z_{V,QCD}^{h.m.}Z_{V,QCD}^{s.m.}Z_{2,NRQCD}^{ \overline{\rm MS}}\Gamma_{QCD}^{''}. \label{5}$$ Now we can easily see from Eqs. (\[3\]) and (\[5\]) that the NRQCD anomalous dimensions in the $\overline{\rm MS}$ renormalization scheme can be computed in two ways: either from the matching condition (\[3\]) or using the HQET Feynman rules and setting the external legs off shell in order to avoid the infrared divergences. We have explicitly checked this conjecture at one loop for the heavy-heavy vector current; however, for full confidence such a check in the two-loop approximation would be desirable. So, in our approach we exploit the same reasoning for the calculation of RG quantities and work in the leading order of this expansion. Moreover, it is theoretically sound, because in the $\overline{\rm MS}$ renormalization scheme used, the anomalous dimensions of currents do not depend on the masses of particles. The following fact also supports our claim: the values of the Wilson coefficients, calculated in the matching procedure, are directly connected to the anomalous dimensions of the operators multiplying these coefficients in the Lagrangian. And, finally, the high energy behavior in an effective theory with several scales does not depend on the relative weight of the lower scales. Thus, we only need $$m\gg |{\bf p}|, E, \Lambda_{QCD},$$ and it does not matter what the relations among $|\bf p|$, $E$ and $\Lambda_{QCD}$ are.
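The step from Eq. (\[4\]) to Eq. (\[5\]) is pure bookkeeping in the $Z$-factors. A minimal symbolic sketch of ours (Python/sympy, with the $Z$'s treated as commuting symbols):

```python
import sympy as sp

ZJ, Z2q_ms, Z2n_ms, Zinf, Zhm, Zsm, G = sp.symbols(
    'Z_J Z_2qMS Z_2nMS Z_inf Z_hm Z_sm Gamma', positive=True)

# On-shell = MS-bar times the common infrared piece, and Z_{2,NRQCD}^{on-shell} = 1
Z2q_os = Z2q_ms * Zinf            # Z_{2,QCD}^{on-shell}
Zinf_val = 1 / Z2n_ms             # from Z2n_ms * Zinf = 1

rhs4 = ZJ**-1 * Z2q_ms * Zhm * Zsm * G            # right-hand side of Eq. (4)
rhs5 = ZJ**-1 * Z2q_os * Zhm * Zsm * Z2n_ms * G   # right-hand side of Eq. (5)

assert sp.simplify(rhs5.subs(Zinf, Zinf_val) - rhs4) == 0
```

The check only uses the factorization of the on-shell constants and the triviality of the NRQCD on-shell wave-function renormalization.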
So, in our calculations we use the HQET propagators for the heavy quarks, setting the quark momenta so as to avoid infrared divergences. As will be explained in detail below, the two-loop contribution to the anomalous dimensions of the currents under consideration consists of three parts. The first corresponds to the set of graphs wherein the two-loop contributions are associated with one of the heavy-light subsystems. For this contribution we use the result of [@9]. Then there is the subset of two-loop graphs associated with the heavy-heavy system. The expression for this contribution is a generalization of the one obtained in [@12]. And, finally, there are the irreducible contributions, where the two loops connect all three quark lines. This contribution is calculated in this paper. We evaluate the two-loop diagrams with the use of a package written by us in MATHEMATICA and the recurrence relations in HQET [@13]. This paper is organized as follows. In Section 2 we discuss the choice of currents for the baryons with two heavy quarks and comment on the renormalization properties of the composite operators under consideration. In Section 3 we make some remarks on the anomalous dimensions and present the one-loop results. In Section 4 we discuss general features of the two-loop renormalization procedure and present our two-loop results. We work in the $\overline{\rm MS}$ renormalization scheme throughout the paper. As concerns the treatment of $\gamma_5$, we will show that the final expression does not depend on the scheme used. Section 5 contains our conclusions.
Baryonic Currents
=================

The currents of baryons with two heavy quarks $\Xi_{cc}^{\diamond}$, $\Xi_{bb}^{\diamond}$ and $\Xi^{\prime \diamond}_{bc}$, where $\diamond$ denotes different charges depending on the light quark charge, are associated with the spin-parity quantum numbers $j^P_d=1^+$ and $j^P_d=0^+$ for the heavy diquark system with the symmetric and antisymmetric flavor structure, respectively. Adding the light quark to the heavy quark system, one obtains $j^P=\frac{1}{2}^+$ for the $\Xi^{\prime \diamond}_{bc}$ baryons and the pair of degenerate states $j^P=\frac{1}{2}^+$ and $j^P=\frac{3}{2}^+$ for the baryons $\Xi_{cc}^{\diamond}$, $\Xi_{bc}^{\diamond}$, $\Xi_{bb}^{\diamond}$ and $\Xi_{cc}^{*\diamond}$, $\Xi_{bc}^{*\diamond}$, $\Xi_{bb}^{*\diamond}$. The structure of baryon currents with two heavy quarks is generally chosen as $$J = [Q^{iT}C\Gamma\tau Q^j]\Gamma^{'}q^k\varepsilon_{ijk}.$$ Here $T$ denotes transposition, $C$ is the charge conjugation matrix with the properties $C\gamma_{\mu}^TC^{-1} = -\gamma_{\mu}$ and $C\gamma_5^TC^{-1} = \gamma_5$, $i,j,k$ are colour indices and $\tau$ is a matrix in flavor space. The effective static field of the heavy quark is denoted by $Q$. To obtain the corresponding NRQCD currents one has to perform the above-mentioned redefinition of the local field. But since we are working in the leading order in both the relative velocity of the heavy quarks and their inverse masses, this local redefinition does not change the structure of the currents. Here, unlike the case of baryons with a single heavy quark, there is only one independent current component $J$ for each of the ground state baryon currents.
They equal $$\begin{aligned} J_{\Xi^{\prime \diamond}_{QQ^{\prime}}} &=& [Q^{iT}C\tau\gamma_5 Q^{j\prime}]q^k\varepsilon_{ijk},\nonumber\\ J_{\Xi_{QQ}^{\diamond}} &=& [Q^{iT}C\tau\boldsymbol{\gamma} Q^j]\cdot\boldsymbol{\gamma}\gamma_5 q^k\varepsilon_{ijk},\\ J_{\Xi_{QQ}^{*\diamond}} &=& [Q^{iT}C\tau\boldsymbol{\gamma} Q^j]q^k\varepsilon_{ijk}+\frac{1}{3}\boldsymbol{\gamma} [Q^{iT}C\boldsymbol{\gamma} Q^j]\cdot\boldsymbol{\gamma} q^k\varepsilon_{ijk},\nonumber\end{aligned}$$ where $J_{\Xi_{QQ}^{*\diamond}}$ satisfies the spin-3/2 condition $\boldsymbol{\gamma} J_{\Xi_{QQ}^{*\diamond}} = 0$. The flavor matrix $\tau$ is antisymmetric for $\Xi^{\prime \diamond}_{bc}$ and symmetric for $\Xi_{QQ}^{\diamond}$ and $\Xi_{QQ}^{*\diamond}$. The currents written down in Eq. (6) are taken in the rest frame of the hadrons. The corresponding expressions in a general frame moving with a velocity four-vector $v^{\mu}$ can be obtained by the substitution $\boldsymbol{\gamma}\to \gamma_{\perp}^{\mu}=\gamma^{\mu}-\hat vv^{\mu}$. Now we would like to comment on the renormalization properties of these currents. Since there is only one light-quark leg in this problem, all the $\gamma$ matrices appearing in the calculations will stay on a single side of our composite operators, without touching their Dirac structure. This leads to the fact that the anomalous dimensions of all our currents in this approximation are the same, i.e. they do not depend on the $\Gamma$-matrices in (4). From this reasoning, we can also conclude that the result does not depend on the $\gamma_5$ scheme used.

Common notations in renormalization
===================================

The local operators $O_0$ composed of bare physical fields contain ultraviolet divergences, which can be absorbed by the renormalization factors $Z_O$, being a series in powers of the coupling constant, so that $O=Z_O O_0$ remains finite as the regularization is removed.
In dimensional regularization, using the $\overline{\rm MS}$-scheme of subtractions in $D=4-2\epsilon$ dimensions [@tHVt], $Z_O$ is expanded in inverse powers of $\epsilon$, so that $$Z=1+\sum_{m=1}^\infty\sum_{k=1}^m\left(\frac{\alpha_s}{4\pi}\right)^m \frac1{\epsilon^k}Z_{m,k}=1+\sum_{k=1}^\infty\frac1{\epsilon^k}Z_k.$$ The dependence on the dimensionful subtraction point $\mu$ defines the anomalous dimension of the renormalized operator $O$: $$\gamma=\frac{d\ln Z(\alpha(\mu),a;\epsilon)}{d\ln (\mu)},$$ where $a$ is the renormalized gauge parameter in the general covariant gauge (with a gluon propagator proportional to $-g_{\mu\nu}+(1-a)k_\mu k_\nu/k^2$) and $\alpha(\mu)$ is the renormalized coupling constant in four-dimensional space, so that $$\begin{aligned} \label{conn} \alpha_0=\alpha(\mu)\mu^{2\epsilon}Z_\alpha(\alpha(\mu),a;\epsilon),\qquad a_0=aZ_3(\alpha(\mu),a;\epsilon),\end{aligned}$$ and the corresponding $Z_{\{\alpha,3\}}$-factors determine the anomalous dimensions, which are generally denoted by $\{-\beta,-\delta\}$, respectively. The $\gamma$-quantities are finite at $D\to 4$, so we define the coefficients of the series $$\label{series} \gamma=\sum_{m=1}^\infty\left(\frac{\alpha_s}{4\pi}\right)^m\gamma^{(m)}.$$ One can check that [@PaTa] $$\label{anom} \gamma=-2\frac{\partial Z_{1}}{\partial\ln\alpha_s},$$ and for $k>0$ $$\label{consistency} -2\frac{\partial Z_{k+1}}{\partial\ln\alpha_s} =\left(\gamma-\beta\frac\partial{\partial\ln\alpha_s} -\delta\frac\partial{\partial\ln a}\right)Z_{k}.$$ The latter provides a consistency condition, while the former yields a simple extraction of the anomalous dimensions to two-loop accuracy $$\label{andim12} \gamma^{(1)}=-2Z_{1,1}\quad\mbox{and}\quad\gamma^{(2)}=-4Z_{2,1}.$$

One-loop result
---------------

Consider the one-loop renormalization of currents of baryons with two heavy quarks.
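The extraction rules (\[andim12\]) follow directly from Eq. (\[anom\]) once $Z_1$ is written as a power series in $\alpha_s/4\pi$. A minimal symbolic check (our Python/sympy sketch):

```python
import sympy as sp

a_s, Z11, Z21 = sp.symbols('alpha_s Z11 Z21')

# Simple-pole part of Z through two loops: Z_1 = (a/4pi) Z11 + (a/4pi)^2 Z21
Z1 = (a_s/(4*sp.pi))*Z11 + (a_s/(4*sp.pi))**2*Z21

# Eq. (anom): gamma = -2 dZ_1/d ln(alpha_s) = -2 alpha_s dZ_1/d alpha_s
gamma = sp.expand(-2*a_s*sp.diff(Z1, a_s))

g1 = gamma.coeff(a_s, 1)*(4*sp.pi)        # coefficient of (alpha_s/4pi)
g2 = gamma.coeff(a_s, 2)*(4*sp.pi)**2     # coefficient of (alpha_s/4pi)**2
assert g1 == -2*Z11 and g2 == -4*Z21      # Eq. (andim12)
```

The factor $m$ brought down by the $\ln\alpha_s$ derivative is what turns $-2Z_{2,1}$ into $-4Z_{2,1}$ at two loops.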
In the $\overline{\rm MS}$-scheme with $D=4-2\epsilon$ space-time dimensions we have the following renormalization constants for the bare quark fields (defined so that $q_0=Z_q^{1/2}q$ and $Q_0=Z_Q^{1/2}Q$): $$\label{ZqQ} Z_q=1-a_0\frac{g_0^2C_F}{(4\pi)^2\epsilon},\qquad Z_Q=1+(3-a_0)\frac{g_0^2C_F}{(4\pi)^2\epsilon},$$ where we use the usual definitions for $SU(N_c)$, i.e. $C_F=(N_c^2-1)/2N_c$, $C_A=N_c$, $C_B=(N_c+1)/2N_c$, and $T_F=1/2$, with $N_c=3$ in QCD and $N_F$ being the number of light quarks. One-loop $\overline{\rm MS}$-results for the factors $Z_\alpha$ and $Z_3$ have been given e.g. in [@PaTa]: $$\begin{aligned} Z_\alpha&=&1-\frac{\alpha_s}{4\pi\epsilon} \left[\frac{11}3C_A-\frac43T_FN_F\right],\label{zal}\\ Z_3&=&1+\frac{\alpha_s}{4\pi\epsilon} \left[\frac{13-3a}6C_A-\frac43T_FN_F\right].\label{z3}\end{aligned}$$ The bare current is renormalized by the factor $Z_J$: $$\label{renormj} J_0=(Q_0^TC\Gamma\tau Q_0)\Gamma'q_0=Z_QZ^{1/2}_qZ_VJ=Z_JJ,$$ which straightforwardly means that $$\label{gammaj} \gamma_J=2\gamma_Q+\gamma_q+\gamma_V,$$ i.e. the anomalous dimension of the full current $J$ is the sum of three terms given by the renormalization of the light and heavy quark fields and the renormalization of the vertex. For the vertex $(Q_0^TC\Gamma\tau Q_0)\Gamma'q_0$ we find $$\label{zgam1} Z_V=1+\frac{\alpha_sC_B}{4\pi\epsilon}(3a-3),$$ which results in $$\label{an1} \gamma^{(1)}_V=-2C_B(3a-3).$$ The one-loop anomalous dimensions $\gamma_q^{(1)}$ and $\gamma_Q^{(1)}$ are equal to $$\label{gamma1legs} \gamma_q^{(1)}=C_Fa,\qquad\gamma_Q^{(1)}=C_F(a-3).$$ Thus, the one-loop anomalous dimension of the baryonic current is given by $$\label{an11} \gamma_J=\frac{\alpha_s}{4\pi}\Big(-2C_B(3a-3)+3C_F(a-2)\Big) +O(\alpha_s^2).$$

Two-loop calculations
=====================

In this section we perform the two-loop renormalization of the baryon current with two heavy quarks in the $\overline{\rm MS}$-scheme, restricting ourselves to the Feynman gauge.
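Before turning to two loops, the one-loop result (\[an11\]) admits a simple cross-check: for $N_c=3$ one has $C_F=2C_B$, the gauge-parameter dependence cancels, and $\gamma_J^{(1)}=-4$, the SU(3) value quoted in Sect. 4. A minimal sketch in Python/sympy (our tool, not part of the original calculation):

```python
import sympy as sp

N, a = sp.symbols('N a', positive=True)

C_F = (N**2 - 1)/(2*N)     # fundamental Casimir
C_B = (N + 1)/(2*N)        # colour factor of the antisymmetric diquark

# One-loop anomalous dimension of the baryonic current, Eq. (an11)
gamma1 = -2*C_B*(3*a - 3) + 3*C_F*(a - 2)

# For N = 3 one has C_F = 2 C_B, the gauge parameter a cancels, and gamma = -4
g3 = sp.simplify(gamma1.subs(N, 3))
assert g3 == -4
```

For generic $N$ the result remains gauge dependent, as expected for a colour non-singlet composite operator.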
The two-loop anomalous dimensions of the quark fields are given by [@13; @Tara; @Jone; @Gime; @JiMu]: $$\label{gammaq} \gamma^{(2)}_q=C_F\left(\frac{17}2C_A-2T_FN_F-\frac32C_F\right),\qquad \gamma^{(2)}_Q=C_F\left(-\frac{38}3C_A+\frac{16}3T_FN_F\right).$$ Since the baryonic currents are renormalized multiplicatively in the effective theory[^3], the Dirac structure of the vertex repeats that of the Born term. Technically, we perform the calculations in terms of the bare coupling and gauge parameter, so that to isolate the two-loop contribution to the anomalous dimension we also need the one-loop result, in which the one-loop expressions must be rewritten in terms of the renormalized quantities $\alpha_s$ and $a$; this adds a contribution to the corresponding $\alpha_s^2/\epsilon$ term. The procedure described leads to the relations: $$\label{zvrel} Z_{1,1}=V_{1,1},\qquad Z_{2,2}=V_{2,2},\qquad Z_{2,1}=V_{2,1}-V_{1,1}V_{1,0}.$$ As expected, $Z_{2,1}$ has to include the one-loop contributions. In the Introduction we described three subgroups of two-loop diagrams for the vertex, which can be expressed as $$V_0=2V^{(hl)}_0+V^{(hh)}_0+V^{(ir)}_0,$$ whose evaluation is presented in the rest of this section.

The heavy-light subsystem.
--------------------------

As for the evaluation of the bare proper vertex $V^{(hl)}$ of the composite operator $(qQ)$ with a massless quark field $q$ and the effective static heavy quark field $Q$, we can easily see that the result does not depend on the Dirac structure of the vertex.
For this reason, in our calculations we have used the result of [@9], where this vertex was calculated to two-loop order in the Feynman gauge $(a = 1)$ with the use of the algorithm developed in [@13]: $$\begin{aligned} V_{1,1}^{(hl)} = C_Ba,&&\quad V_{1,0}^{(hl)} = 0,\\ V_{2,2}^{(hl)} = C_B(\frac{1}{2}C_B - C_A),&&\quad V_{2,1}^{(hl)} = -C_B(C_B(1-4\zeta (2))-C_A(1-\zeta (2))).\nonumber\end{aligned}$$ Then from the relations (\[zvrel\]) one can calculate the coefficients $Z_{n,k}$, which determine the two-loop anomalous dimension for the subset of heavy-light graphs $$\gamma_{(hl)}^{(2)} = C_B^2(4-16\zeta (2))-C_BC_A(4-4\zeta (2)).$$ It is worth noting that this expression was calculated for the antisymmetric baryonic color configuration $q^iQ^jQ^k\epsilon_{ijk}$, unlike the case of the colour-singlet mesonic configuration $\bar q^iQ^j\delta_j^i$. The expression for the latter case can be obtained by the substitution $C_B\to C_F$, which reconstructs the required results, as was checked by the authors of [@9].

The heavy-heavy subsystem.
--------------------------

To evaluate this contribution we have used the results of [@12], where the expression for the anomalous dimension of the NRQCD mesonic vector current was presented: $$\begin{aligned} \gamma_J^M &=& 2\gamma_Q+\gamma_{(hh)} = \frac{d\ln Z_J}{d\ln\mu}\\ &=& -C_F(2C_F+3C_A)\frac{\pi^2}{6}\Bigl(\frac{\alpha_s}{\pi}\Bigr)^2 + O(\alpha_s^3).\nonumber\end{aligned}$$ In our case we face a similar problem, but with a different color structure. Thus, following [@12], we can consider the matching of the QCD vector current with the antisymmetric color structure onto the NRQCD one. Unlike the meson case with its colour-singlet structure, the QCD vector current with the antisymmetric color structure need not be conserved, so we allow for its renormalization.
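Returning for a moment to the heavy-light coefficients quoted above, the bookkeeping of Eq. (\[zvrel\]) can be repeated symbolically; a minimal sketch of ours in Python/sympy:

```python
import sympy as sp

C_B, C_A = sp.symbols('C_B C_A')
z2 = sp.zeta(2)            # = pi**2/6

# Heavy-light vertex poles in the Feynman gauge (a = 1), quoted from [9]
V11, V10 = C_B, 0
V21 = -C_B*(C_B*(1 - 4*z2) - C_A*(1 - z2))

Z21 = V21 - V11*V10        # Eq. (zvrel); V10 = 0, so Z21 = V21 here
gamma_hl = sp.expand(-4*Z21)

expected = C_B**2*(4 - 16*z2) - C_B*C_A*(4 - 4*z2)
assert sp.simplify(gamma_hl - expected) == 0
```

Since $V_{1,0}^{(hl)}=0$ in the Feynman gauge, the one-loop cross term drops out and $\gamma_{(hl)}^{(2)}$ comes entirely from $V_{2,1}^{(hl)}$.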
In terms of the on-shell matrix elements, the matching equation can be written down as[^4] $$Z_{2,QCD}Z_{J,QCD}^{-1}\Gamma_{QCD} = C_0Z_{2,NRQCD}Z_{J,NRQCD}^{-1}\Gamma_{NRQCD} + O(v^2),$$ where $Z_{J,QCD}$ has the following expression [@9]: $$\begin{aligned} Z_{J,QCD} &=& 1-\frac{C_BC_F}{\epsilon^2}\biggl(\frac{\alpha_s}{4\pi}\biggr)^2+\frac{1}{ \epsilon}((C_B-C_F)\biggl(\frac{\alpha_s}{4\pi}\biggr)+\nonumber\\ && (-\frac{1}{4}C_B(-17C_A+3C_B +4(1+N_F)T_F)\\ && +\frac{1}{4}C_F(-17C_A+3C_F+4(1+N_F)T_F))\biggl(\frac{\alpha_s} {4\pi}\biggr)^2)\nonumber\end{aligned}$$ The anomalous dimension of the NRQCD current obtained in this way may be used in the calculation of the anomalous dimensions of the baryonic currents with two heavy quarks, as it does not depend on the Dirac structure of the vertex. The contributions of the different two-loop diagrams with the antisymmetric color structure of the vertex $q^iQ^jQ^k\epsilon_{ijk}$, in the notations of [@12], are shown in the Appendix. To obtain the anomalous dimension $\gamma_{(hh)}^{(2)}$ of the composite operator under consideration one has to perform the following steps: 1\) sum all of these contributions, including the one-loop term multiplied by the two-loop QCD on-shell wave function renormalization constant [@14], $Z_{J,QCD}^{-1}$ and the one-loop NRQCD-current renormalization constant; 2\) perform the one-loop renormalization of the coupling and mass. After these manipulations, the coefficient of $\frac{1}{\epsilon}$ multiplied by $-4$ gives the sum $\gamma_{(hh)}^{(2)}+2\gamma_Q^{(2)}$. For the two-loop anomalous dimension $\gamma_{(hh)}^{(2)}$ in the heavy-heavy subsystem we find the following result: $$\begin{aligned} \gamma_{(hh)}^{(2)} &=& -\frac{4}{3}C_B((-19+6\pi^2)C_A+4(\pi^2C_B+2N_FT_F)).\end{aligned}$$

The light-heavy-heavy irreducible vertex
----------------------------------------

In this case one needs to calculate the three-quark irreducible vertex $V_0^{(ir)}$. There are 8 diagrams at two-loop order.
We show four of them in Fig. 1; the other four can be obtained by exchanging the two heavy quark legs. We set the heavy quarks off shell in order to avoid any infrared singularities. Using partial fractioning of the integrand in the momentum integrals and the recurrence relations of [@13], we arrive at the following expressions for the diagrams depicted in Fig. 1: $$\begin{aligned} V_{0}^{(ir)[1]} &=& 2\cdot C_B^2\biggl(\frac{\alpha_s}{4\pi}\biggr)^2 \biggl(\frac{1}{2}\frac{1}{\epsilon^2} - (1+\frac{\pi^2}{3})\frac{1}{\epsilon} + \frac{216 + 35\pi^2 - 48\psi^{(2)}(1) - 96\psi^{(2)}(2)}{36}\biggr),\\ V_{0}^{(ir)[2]} &=& 0,\\ V_{0}^{(ir)[3]} &=& 2\cdot C_B^2\biggl(\frac{\alpha_s}{4\pi}\biggr)^2 \biggl(-\frac{1}{\epsilon^2} - \frac{2}{\epsilon} - 4 - \frac{\pi^2}{6}\biggr),\\ V_{0}^{(ir)[4]} &=& 2\cdot C_B^2\biggl(\frac{\alpha_s}{4\pi}\biggr)^2 \biggl(-\frac{1}{\epsilon^2} + \frac{2}{\epsilon} - 4 - \frac{3\pi^2}{2}\biggr),\end{aligned}$$ where $\psi^{(n)}(z) = d^n\psi (z)/dz^n$, $\psi (z) = \Gamma^{'}(z)/\Gamma (z)$ and the factor of 2 accounts for the contributions of the remaining four reflected diagrams not included in Fig. 1. For the $Z$-factors and the anomalous dimension we obtain $$\begin{aligned} Z_{22}^{(ir)} &=& -3\cdot C_B^2,\\ \gamma_{(ir)}^{(2)} &=& -4Z_{2,1}^{(ir)} = 8\cdot C_B^2(1+\frac{\pi^2}{3}).\end{aligned}$$

Anomalous dimension combined
----------------------------

Now we are ready to calculate the anomalous dimension of the baryonic currents with two heavy quarks. As already noted, it does not depend on the Dirac structure of the current under consideration. Collecting the results for the heavy-light, heavy-heavy and irreducible light-heavy-heavy vertices, we find $$\begin{aligned} \gamma_V^{(2)} &=& -\frac{4}{3}C_B((-13+30\zeta (2))C_A + 6(-2+6\zeta (2))C_B + 8N_FT_F).\end{aligned}$$ And, finally, to obtain the full two-loop result for the anomalous dimension one has to add the anomalous dimensions of the heavy and light quarks.
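The pole bookkeeping for the irreducible diagrams can be repeated symbolically. A minimal sketch of ours in Python/sympy, using only the quoted pole parts (finite parts and the overall $(\alpha_s/4\pi)^2$ factor are dropped):

```python
import sympy as sp

C_B, eps = sp.symbols('C_B epsilon')
pi2 = sp.pi**2

# Pole parts of the four irreducible diagrams of Fig. 1; the factor 2
# already counts the four mirror diagrams with the heavy legs exchanged
V = [2*C_B**2*(sp.Rational(1, 2)/eps**2 - (1 + pi2/3)/eps),
     sp.Integer(0),
     2*C_B**2*(-1/eps**2 - 2/eps),
     2*C_B**2*(-1/eps**2 + 2/eps)]

terms = [sp.expand(v) for v in V]
V22 = sum(t.coeff(eps, -2) for t in terms)
V21 = sum(t.coeff(eps, -1) for t in terms)

assert V22 == -3*C_B**2                                   # Z_{2,2}^{(ir)}
assert sp.simplify(-4*V21 - 8*C_B**2*(1 + pi2/3)) == 0    # gamma^{(2)}_{(ir)}
```

Since this subset starts at two loops (there is no one-loop irreducible piece), $Z_{2,1}^{(ir)}$ coincides with the summed $1/\epsilon$ coefficient.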
The result is $$\begin{aligned} \gamma_J^{(2)} &=& \frac{1}{6}(-48(-2+6\zeta (2))C_B^2+C_A((104-240\zeta (2))C_B-101C_F)\\ && -64C_BN_FT_F+C_F(-9C_F+52N_FT_F)).\nonumber\end{aligned}$$ With this formula we finish our analytical calculations. Specializing to the SU(3) group of QCD, we get $$\begin{aligned} \gamma^{(1)} & = & -4 ,\\ \gamma^{(2)} & = & -\frac{254}{9}-\frac{152\pi^2}{9}+\frac{20}{9}N_F\approx -194.909+2.222 N_F,\end{aligned}$$ which indicates a rather strong sensitivity of these currents to the choice of reference scale $\mu$.

Conclusion
==========

We have calculated the two-loop anomalous dimensions of NRQCD baryonic currents with two heavy quarks in the leading order in both the relative velocity of the heavy quarks and the inverse heavy quark mass. It is shown that the results do not depend on the Dirac structure of the currents or on the $\gamma_5$ prescription used in the calculations. These results will be useful for the derivation of QCD sum rules for baryons with two heavy quarks in the same static approximation in both the leading and next-to-leading orders. We plan to address this problem in the near future. This work is in part supported by the Russian Foundation for Basic Research, grants 96-02-18216 and 96-15-96575. [\*\*]{} , Phys.Lett. B167, 437 (1986). , Phys.Rev. D51, 1125 (1995);\ Phys.Rev. D55, 5853 (1997). , CERN-TH/7-315, hep-ph/9711391 , DESY-98-080, hep-ph/9807375;\ [*D. Ebert, R.N. Faustov, V.O. Galkin, A.P. Martynenko, V.A. Saleev*]{}, Z.Phys. C76, 111 (1997). , DESY-98-079, hep-ph/9807354. , Phys.Rev. D49, 555 (1994);\ [*V.V. Kiselev, A.K. Likhoded, M.V. Shevlyagin*]{}, Phys.Lett. B332, 411 (1994);\ [*A.V. Berezhnoy, V.V. Kiselev, A.K. Likhoded*]{}, Z.Phys. A356, 89 (1996);\ [*A.V. Berezhnoy, V.V. Kiselev, A.K. Likhoded*]{}, Phys.Atom.Nucl. 59, 870 (1996);\ [*A.V. Berezhnoy, V.V. Kiselev, A.K. Likhoded and A.I. Onishchenko*]{}, Phys.Rev. D57, 4385 (1998). , Yad.Fiz. 45, 463 (1987);\ Sov.J.Nucl.Phys. 45, 292 (1987).\ [*H.D.
Politzer and M.B. Wise*]{}, Phys.Lett. B206, 681 (1988); Phys.Lett. B208, 504 (1988).\ [*N. Isgur and M.B. Wise*]{}, Phys.Lett. B232, 113 (1989); Phys.Lett. B237, 527 (1990).\ [*E. Eichten and B. Hill*]{}, Phys.Lett. B234, 511 (1990)\ [*H. Georgi*]{}, Phys.Lett. B240, 447 (1990). , Nucl.Phys. B339, 253 (1990). , Phys.Rep. 245, 259 (1994). , Nucl.Phys. B147, 385 (1979);\ B147, 488 (1979). , Phys.Rev. D54 (1996) 3447. , Phys.Rev. D55, 4129 (1997).\ [*M. Luke and M.J. Savage*]{}, Phys.Rev. D57, 413 (1998).\ [*B. Grinstein and I.Z. Rothstein*]{}, Phys.Rev. D57, 78 (1998). , Phys.Rev. D56, 230 (1997). , Phys.Rev.Lett. 80, 2531 (1998);\ [*M. Beneke, A. Signer, V.A. Smirnov*]{}, Phys.Rev.Lett. 80, 2535 (1998). , Phys.Lett. B267, 105 (1991). , Nucl.Phys. B44, 189 (1972); B50, 318 (1972).\ [*G.’t Hooft*]{}, Nucl.Phys. B61, 455 (1973); B62, 444 (1973).\ [*P. Breitenlohner and D. Maison*]{}, Commun.Math.Phys. 52, 11,39,55 (1977) , “Renormalization for the practitioner”,\ Lecture Notes in Physics 194 (Springer, Berlin, 1984) , Theor.Math.Phys. 41, 26 (1979). , Nucl.Phys. B75, 531 (1974). , Nucl.Phys. B375, 582 (1992). , Phys.Lett. B257, 409 (1991). , Z.Phys. C52, 111 (1991).

Appendix
========

In this appendix we present the generalization of the expressions for the hard contributions to the diagrams of Fig. 1 of [@12], with the antisymmetric color structure of the vertex, evaluated at the threshold $q^2 = 4m^2$.
Below you can find the coefficients of $(\alpha_s/\pi)^2(e^{\gamma_E}m_Q^2/(4\pi\mu^2))^{-2\epsilon}$ $$\begin{aligned} D_1 &=& C_B^2 \bigg[~\frac{9}{32\epsilon^2}-(\frac{27}{64} + \frac{5\pi^2}{24})\frac{1}{\epsilon} - \frac{81}{128}-\frac{133\pi^2}{96} - \frac{5\pi^2\ln 2}{12} -\frac{35\zeta(3)}{8}~\bigg],\\ D_2 &=& C_BC_F \bigg[~-\frac{3}{16\epsilon^2} - \frac{43}{32}\frac{1}{\epsilon} +\frac{733}{192}+\frac{971\pi^2}{576}~\bigg],\\ D_3 &=& C_BC_A \bigg[~\frac{15}{32\epsilon^2} - (\frac{5}{64}+\frac{\pi^2}{16})\frac{1}{\epsilon} + \frac{715}{384}-\frac{319\pi^2}{576}-\frac{\pi^2\ln 2}{8}- \frac{21\zeta(3)}{16}~\bigg],\\ D_4 &=& C_B(C_A-2C_B) \bigg[~(\frac{3}{16}-\frac{\pi^2}{16})\frac{1}{\epsilon} -\frac{39}{32}-\frac{251\pi^2}{1152}-\frac{3\pi^2\ln 2}{8}- \frac{31\zeta(3)}{16}~\bigg],\\ D_5 &=& C_B(C_A-2C_F) \bigg[~-\frac{9}{32\epsilon^2}-\frac{19}{64}\frac{1}{\epsilon} + \frac{761}{384}+\frac{1157\pi^2}{1152}+\frac{\pi^2\ln 2}{6}- \frac{3\zeta(3)}{4}~\bigg],\\ D_6 &=& C_BT_FN_F \bigg[~-\frac{1}{8\epsilon^2}+\frac{5}{48}\frac{1}{\epsilon}-\frac{355}{288}- \frac{5 \pi^2}{48}~\bigg],\\ D_7 &=& C_BC_A \bigg[~\frac{19}{128\epsilon^2} - \frac{53}{768}\frac{1}{\epsilon} + \frac{6787}{4608}+\frac{95\pi^2}{768}~\bigg],\\ D_8 &=& C_BC_A \bigg[~\frac{1}{128\epsilon^2} + \frac{1}{768}\frac{1}{\epsilon} + \frac{361}{4608}+\frac{5\pi^2}{768}\bigg],\\ D_9 &=& C_BT_F \bigg[~-\frac{1}{4\epsilon^2} + \frac{13}{48}\frac{1}{\epsilon} -\frac{145}{96}+\frac{5}{72}~\bigg].\end{aligned}$$ [^1]: We do not consider the problems concerning the spectroscopy, decays and production mechanisms of baryons with two heavy quarks. This can be found in [@A],[@B] and [@C], correspondingly. [^2]: We generally accept a set of basic notations used in [@9]. [^3]: See discussion in ref.[@9]. 
[^4]: Since the matching coefficient contains only short-distance effects, the matching can be done by comparing the matrix elements of these currents over a free quark-antiquark pair of on-shell quarks at a small relative velocity.
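As a closing arithmetic cross-check, the SU(3) values of $\gamma^{(2)}$ quoted in Sect. 4 follow from the general formula for $\gamma_J^{(2)}$; a minimal sketch of ours in Python/sympy:

```python
import sympy as sp

N_F = sp.Symbol('N_F')
z2 = sp.pi**2/6            # zeta(2)

# SU(3) colour factors: C_F, C_A, C_B, T_F
C_F, C_A, C_B, T_F = sp.Rational(4, 3), 3, sp.Rational(2, 3), sp.Rational(1, 2)

# General two-loop result gamma_J^(2) from Sect. 4
gamma2 = sp.Rational(1, 6)*(-48*(-2 + 6*z2)*C_B**2
                            + C_A*((104 - 240*z2)*C_B - 101*C_F)
                            - 64*C_B*N_F*T_F
                            + C_F*(-9*C_F + 52*N_F*T_F))

expected = -sp.Rational(254, 9) - 152*sp.pi**2/9 + sp.Rational(20, 9)*N_F
assert sp.simplify(gamma2 - expected) == 0
assert abs(float(expected.subs(N_F, 0)) + 194.909) < 0.01
```

The large negative constant confirms the strong scale sensitivity noted in the text.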
--- author: - 'G. Bihain' - 'R.-D. Scholz' - 'J. Storm' - 'O. Schnurr' date: 'Received 25 June 2013; accepted 10 July 2013 ' title: 'An overlooked brown dwarf neighbour (T7.5 at $d$$\sim$5 pc) of the Sun and two additional T dwarfs at about 10 pc[^1]' --- [Although many new brown dwarf (BD) neighbours have recently been discovered thanks to new sky surveys in the mid- and near-infrared (MIR, NIR), their numbers are still more than five times lower than those of stars in the same volume.]{} [Our aim is to detect and classify new BDs to eventually complete their census in the immediate solar neighbourhood.]{} [We combined multi-epoch data from sky surveys at different wavelengths to detect BD neighbours of the Sun by their high proper motion (HPM). We concentrated on relatively bright MIR ($w2$$<$13.5) BD candidates from the Wide-field Infrared Survey Explorer (WISE) expected to be so close to the Sun that they may also be seen in older NIR (Two Micron All Sky Survey (2MASS), DEep Near-Infrared Survey (DENIS)) or even red optical (Sloan Digital Sky Survey (SDSS) $i$- and $z$-band, SuperCOSMOS Sky Surveys (SSS) $I$-band) surveys. With low-resolution NIR spectroscopy we classified the new BDs and estimated their distances and velocities.]{} [We have discovered the HPM ($\mu$$\sim$470 mas/yr) T7.5 dwarf WISE J0521$+$1025, which, at $d$=5.0$\pm$1.3 pc from the Sun, is the nearest known T dwarf in the northern sky, and two early T dwarfs, WISE J0457$-$0207 (T2) and WISE J2030$+$0749 (T1.5), with proper motions of $\sim$120 and $\sim$670 mas/yr and distances of 12.5$\pm$3.1 pc and 10.5$\pm$2.6 pc, respectively. The latter was independently discovered and also classified as a T1.5 dwarf by Mace and coworkers. All three show thin disc kinematics.
They may have been overlooked in the past owing to overlapping images and because of problems with matching objects between different surveys and measuring their proper motions.]{}

Introduction
============

The progress in discovering brown dwarfs (BDs) with ever cooler temperatures, corresponding to the four spectral classes (M, L, T, and Y), is closely connected with the shift of all-sky surveys to longer wavelengths, from the optical to the near- and mid-infrared (NIR, MIR). As BDs change their spectral types as they cool during their lifetime (Burrows et al. [@burrows01]), the majority of BDs in the solar neighbourhood, with typical ages of several Gyr, are expected to be T- and Y-type BDs. This has now been confirmed by the latest observations. Updating the stellar and substellar census within 8 pc from the Sun after the recently completed MIR WISE survey (Wide-field Infrared Survey Explorer; Wright et al. [@wright10]), Kirkpatrick et al. ([@kirkpatrick12]) listed 3 L-type, 22 T-type, and 8 Y-type objects. The last class was only recently established by Cushing et al. ([@cushing11]); it consists exclusively of WISE discoveries and will certainly be filled with many more. The WISE survey detected 7$+$1 new T and L dwarfs, respectively, in this volume, whereas former NIR surveys, the Two Micron All Sky Survey (2MASS; Skrutskie et al. [@skrutskie06]) and the DEep Near-Infrared Survey (DENIS; Epchtein et al. [@epchtein97]), contributed 8$+$1 and 1$+$1 T and L dwarfs, respectively. Six T dwarfs were found by other surveys, according to their discovery names listed in Kirkpatrick et al. ([@kirkpatrick12]). ![image](22141_f1.jpg){width="12.2cm"} Because of the small number density of L dwarfs and the optical faintness of T dwarfs, none of the L/T discoveries from the Sloan Digital Sky Survey (SDSS) with its ongoing data releases (e.g. Abazajian et al. [@abazajian09], Aihara et al.
[@aihara11]) fall into the 8 pc sample; one peculiar L6p/T7.5 binary, SDSS J1416$+$1348AB (Bowler, Liu & Dupuy [@bowler10], Scholz [@scholz10a], Burgasser, Looper & Rayner [@burgasser10a]), appeared to be missing from that census according to the information given in the DwarfArchives (Gelino, Kirkpatrick & Burgasser [@gelino12]). However, the new accurate trigonometric parallax of this binary determined by Dupuy & Liu ([@dupuy12]) placed it at 9.11 pc, clearly outside the 8 pc horizon. Only the nearest ($d$=3.626 pc) early T dwarf binary, $\varepsilon$ Indi Ba,Bb (Scholz et al. [@scholz03], McCaughrean et al. [@mjm04]), was originally discovered in the optical as an unresolved high proper motion (HPM) object, using two $I$-band photographic Schmidt plates with an epoch difference of several years that were scanned within the SuperCOSMOS Sky Surveys (SSS; Hambly et al. [@hambly01]). Also clearly seen on photographic Schmidt plates is the unresolved pair WISE J1049$-$5319AB of two late-L dwarfs detected at the record-breaking distance of only 2 pc (Luhman [@luhman13], Mamajek [@mamajek13]). Kirkpatrick et al. ([@kirkpatrick12]) found that there are currently about six times more stars than BDs within 8 pc. They also expressed their expectation that this factor will decrease with time as new discoveries are catalogued, and Luhman ([@luhman13]) provided the first evidence that these expectations are justified. His discovery was based on an HPM survey taking advantage of the WISE data obtained in different seasons (with a mission lifetime of 13 months) and a subsequent comparison with other surveys. Note that Luhman’s object was possibly overlooked in previous BD searches using 2MASS and DENIS, and even photographic Schmidt plates, because of image crowding and the resulting problems with the cross-matching of measured objects from different surveys.
Our BD search is also based on the identification of HPM objects; we first use WISE colour criteria and magnitude cuts and then check the candidates for shifted counterparts in other surveys with different epochs. This allowed us to detect two very nearby ($d$$\sim$5 pc) late T dwarfs (Scholz et al. [@scholz11]) when the preliminary WISE data release first became available. Now we have used the WISE All-Sky data release with similar selection criteria and have paid special attention to possible mismatches with other surveys, which may prevent us from finding the correct counterparts. Three newly found nearby BDs, one of which is a previously overlooked close neighbour, are presented in this paper. Candidate selection and cross-identification {#Cselpm} ============================================ We used the WISE All-Sky source catalogue with a mean observing epoch in the first half of 2010 for the selection of bright MIR candidates with colours typical of T dwarfs and hints on their possible HPM according to their cross-identification with 2MASS (epoch $\sim$2000) sources: - Candidates were selected to have \[$w1$$-$$w2$$>$1.5 (later than $\sim$T5) and $w2$$<$13.5\] or \[0.5$<$$w1$$-$$w2$$<$1.5 ($\sim$T0-T5) and $w2$$<$12.5\], aiming at nearby ($d$$<$15 pc) T or Y dwarfs according to Figs. 1 and 29 in Kirkpatrick et al. ([@kirkpatrick11]). - To reduce crowding effects, only point sources outside the Galactic plane ($|b|$$>$5$\degr$) were included. - To exclude extragalactic sources, only those with $w2$$-$$w3$$<$2.5 were considered (see Wright et al. [@wright10]). - Only objects without a 2MASS counterpart (within 3 arcsec) or with a counterpart’s separation between 1 arcsec and 3 arcsec were selected as potential HPM candidates. 
With the first two conditions we relied on the WISE MIR photometry of point sources, which may however be affected by saturation for the brightest objects and by overlapping background objects not resolved by WISE, and effectively excluded most of the earlier-type BDs and stars from our target list. As we applied a relatively bright WISE magnitude cut, we expected to see these objects also in 2MASS if they were not as cool as Y dwarfs. Therefore, our fourth condition was aimed at finding either HPM objects with $\mu$$>$0.3 arcsec/yr or with 0.1$<$$\mu$$<$0.3 arcsec/yr given the WISE-2MASS epoch difference of about ten years. However, we considered the 2MASS counterparts with 1-3 arcsec shifts as suspicious and wanted to visually inspect the corresponding WISE sources for alternative HPM counterparts outside the search radius of 3 arcsec. About 2000 candidates were found with the above conditions. With the help of the IRSA Finder Charts tool[^2], we were able to inspect all these candidates to identify HPM objects. These were then checked for known objects in DwarfArchives (Gelino, Kirkpatrick & Burgasser [@gelino12]) and SIMBAD[^3]. Although most of the 2000 initial candidates were rejected as ghosts/stripes, reddened or extended/diffuse objects, we found some variable stars (e.g. a new Galactic Nova; Scholz et al. [@scholz12b]) and many previously known BD and stellar neighbours of the Sun: more than 40 T dwarfs, about 20 L dwarfs, but also about 20 M dwarfs and earlier-type stars. Among about ten new candidates, we selected three with photometrically estimated distances of less than about 10 pc and moderately low declinations for spectroscopic follow-up (see Sect. \[NIRsp\]) with the Large Binocular Telescope (LBT) (other early T-type and red L-type candidates were placed in different observing programmes and will be published elsewhere).
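The four selection conditions can be summarised as a simple filter. The sketch below is our illustration with hypothetical argument names, not the actual catalogue query:

```python
def is_candidate(w1, w2, w3, glat, sep_2mass):
    """WISE T/Y-dwarf candidate cuts as described in the text.

    w1, w2, w3 : WISE magnitudes; glat : Galactic latitude [deg];
    sep_2mass  : separation to the nearest 2MASS source [arcsec], None if absent.
    """
    late_t = (w1 - w2 > 1.5) and (w2 < 13.5)          # later than ~T5
    early_t = (0.5 < w1 - w2 < 1.5) and (w2 < 12.5)   # ~T0-T5
    if not (late_t or early_t):
        return False
    if abs(glat) <= 5.0:      # avoid the crowded Galactic plane
        return False
    if w2 - w3 >= 2.5:        # exclude extragalactic sources
        return False
    # HPM suspects: no 2MASS counterpart within 3", or a shifted (1"-3") one;
    # with ~10 yr between 2MASS and WISE, 3" corresponds to mu ~ 0.3 arcsec/yr
    return sep_2mass is None or 1.0 < sep_2mass < 3.0
```

For example, the WISE All-Sky photometry of WISE J0521$+$1025 ($w1$=14.098, $w2$=12.286, $w3$=10.306) passes the late-T branch of the colour cuts.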
We matched them with 2MASS and also with later WISE observations, and two could be identified in other NIR/optical surveys as well (Table \[table:1\]). Finally, we used the recently measured positions of our targets on the LBT acquisition images (Sect. \[NIRsp\]) calibrated with the PPMXL (Röser, Demleitner & Schilbach [@roeser10]) to confirm the proper motions and improve their accuracy. **WISE J052126.29$+$102528.4**\ (hereafter WISE J0521$+$1025) - For this late T candidate ($w1$$-$$w2$=$+$1.8) the WISE catalogue lists a 2MASS counterpart separated by 1.4 arcsec. This is obviously a background object that is also visible in the DSS (Fig. \[fig1\_fc\]). However, the brighter 2MASS object north of it appears blue in the NIR and has no optical counterpart, already indicating, on the basis of the 2MASS data alone, an HPM T-type BD candidate. Both objects are flagged in 2MASS as deblended in $J$ and $K_s$, and as the astrometry may also be affected, we measured the 2MASS position of the blue object visually using the ESO Skycat tool. We also found a second epoch in the WISE 3-band cryo data (Table \[table:1\]). **WISE J045746.08$-$020719.2**\ (hereafter WISE J0457$-$0207) - In this case, the 2MASS counterpart shifted by 1.6 arcsec is not seen in the optical (Fig. \[fig2\_fc\]) and is moderately red, $J$$-$$K_s$=$+$0.9, consistent with an early T dwarf with a relatively small proper motion. The colours $w1$$-$$w2$=$+$1.0 and $J$$-$$w2$=$+$2.5 agree with this classification. In addition, this object is detected by DENIS and by the Galactic Clusters Survey (GCS) within the UKIRT Infrared Deep Sky Survey (UKIDSS)[^4]. Later we found another detection in the WISE 3-band cryo data (Table \[table:1\]). **WISE J203042.79$+$074934.7**\ (hereafter WISE J2030$+$0749) - No 2MASS counterpart ($<$3 arcsec) was listed for this one, but the finding charts in Fig. \[fig3\_fc\] show a clear HPM object with growing separation from 2MASS to the older DSS IR images.
From the SSS we found three $I$-band positions, and the object was also detected in the SDSS $iz$ bands (Table \[table:1\]). Its colours ($i$$-$$z$=$+$4.6, $J$$-$$K_s$=$+$0.9, $J$$-$$w2$=$+$2.1, $w1$$-$$w2$=$+$0.8) fit a T2 dwarf (Hawley et al. [@hawley02], Kirkpatrick et al. [@kirkpatrick11]). However, there is only one T2 dwarf listed in Hawley et al. ([@hawley02]) that has $i$$-$$z$=$+$4.2, whereas the average values of $<$T2 and $>$T2 dwarfs are generally smaller and reach $i$$-$$z$=$+$4.0 only for the latest-given class of T6 dwarfs. From WISE post-cryo single exposures we determined an additional mean position at a later epoch (Table \[table:1\]). ![Finding charts as in Fig. \[fig1\_fc\] for WISE J0457$-$0207. []{data-label="fig2_fc"}](22141_f2.jpg){width="9.2cm"} ![Finding charts as in Fig. \[fig1\_fc\] for WISE J2030$+$0749. []{data-label="fig3_fc"}](22141_f3.jpg){width="9.2cm"} Param. J0521$+$1025 J0457$-$0207 J2030$+$0749 ---------------------------- ------------------ ------------------ -------------------- 05 21 26.349 04 57 46.114 20 30 42.897 $+$10 25 27.41 $-$02 07 19.59 $+$07 49 34.44 ep 2012.773 2012.770 2012.855 $\alpha$ 05 21 26.2967 04 57 46.0884 20 30 42.7986 $\delta$ $+$10 25 28.494 $-$02 07 19.239 $+$07 49 34.741 ep 2010.175 2010.156 2010.332 $\alpha$ 05 21 26.3165 04 57 46.1024 20 30 42.8069 $\delta$ $+$10 25 28.439 $-$02 07 19.186 $+$07 49 34.602 ep 2010.701 2010.682 2010.830 $\alpha$ n/a 04 57 46.0785 n/a $\delta$ n/a $-$02 07 19.202 n/a ep n/a 2010.019 n/a $\alpha$ n/a n/a 20 30 42.357 $\delta$ n/a n/a $+$07 49 35.64 ep n/a n/a 2000.748 $\alpha$ 05 21 26.147 04 57 46.022 20 30 42.357 $\delta$ $+$10 25 32.74 $-$02 07 17.95 $+$07 49 35.83 ep 2000.118 1998.707 2000.444 $\alpha$ n/a 04 57 46.038 n/a $\delta$ n/a $-$02 07 18.34 n/a ep n/a 1998.953 n/a $I$ $\alpha$ n/d n/d 20 30 42.149 $I$ $\delta$ n/d n/d $+$07 49 36.18 $I$ ep 1995.874 2001.862 1995.654 $I$ $\alpha$ n/a n/a 20 30 42.051 $I$ $\delta$ n/a n/a $+$07 49 37.35 $I$ ep n/a n/a 
1993.545 $\mu_{\alpha}\cos{\delta}$ $+$232$\pm$9 $+$82$\pm$9 $+$653$\pm$6 $\mu_{\delta}$ $-$418$\pm$6 $-$97$\pm$8 $-$138$\pm$16 $I$ n/d n/d $\sim$19.5 $I$ n/a n/a $\sim$18.9$\pm$0.3 $i$ n/a n/a 21.810$\pm$0.140 $z$ n/a n/a 17.195$\pm$0.014 $J$ n/a 14.879$\pm$0.12 n/a $J$ 15.262 14.897$\pm$0.040 14.227$\pm$0.029 $H$ 15.222$\pm$0.103 14.198$\pm$0.046 13.435$\pm$0.033 $K_s$ 14.665 14.022$\pm$0.055 13.319$\pm$0.039 $H$ n/a 14.190$\pm$0.003 n/a $K$ n/a 13.975$\pm$0.003 n/a $w1$ 14.098$\pm$0.031 13.391$\pm$0.026 12.956$\pm$0.025 $w2$ 12.286$\pm$0.026 12.443$\pm$0.025 12.122$\pm$0.025 $w3$ 10.306$\pm$0.085 11.020$\pm$0.114 10.964$\pm$0.110 0.246 (T7) 0.509 (T2) 0.599 (T0/T1) 0.155 (T7/T8) 0.798 (T2/T3) 0.859 (T2) 0.084 (T7/T8) 0.488 (T3) 0.595 (T2) T7.5 T2 T1.5 T7.5 T2 T1.5 $d$ 5.0$\pm$1.3 12.5$\pm$3.1 10.5$\pm$2.6 $v_{tan}$ 11$\pm$3 8$\pm$2 33$\pm$8 : Positions (J2000), proper motions \[mas/yr\], photometry \[mag\], spectral indices/types, distances \[pc\], and tangential velocities \[km/s\][]{data-label="table:1"} ![image](22141_f4.jpg){width="14.0cm"} ![image](22141_f5.jpg){width="14.0cm"} ![image](22141_f6.jpg){width="14.0cm"} Near-infrared spectroscopic classification {#NIRsp} ========================================== Our three targets were observed with the LBT NIR spectrograph LUCI 1 (Mandel et al. [@mandel08]; Seifert et al. [@seifert10]; Ageorges et al. [@ageorges10]) in long-slit spectroscopic mode with the $HK$ (200 lines/mm + order separation filter) and $zJHK$ gratings (210 lines/mm + $J$ filter). The dwarf WISE J0521$+$1025 was observed on 2012-Oct-09 with total integration times of 40 min in $HK$ and 20 min in $J$, WISE J0457$-$0207 and WISE J2030$+$0749 on 2012-Oct-08 and 2012-Nov-08, respectively, but both with only 16 min ($HK$) and 10 min ($J$). As in Scholz et al. ([@scholz11; @scholz12a]), central wavelengths were chosen at 1.835 $\mu$m ($HK$) and 1.25 $\mu$m ($J$) yielding a coverage of 1.38–2.26 and 1.18–1.33 $\mu$m, respectively. 
The slit width was always 1 arcsec, corresponding to a spectral resolving power of $R$=$\lambda$/$\Delta$$\lambda$$\approx$4230, 940, and 1290 at $\lambda$$\approx$1.24, 1.65, and 2.2 $\mu$m, respectively. Observations consisted of individual exposures of 60 s in $HK$ (75 s for WISE J0521$+$1025) and 150 s in $J$, while shifting the target along the slit in an ABBA pattern until the total integration time was reached. For more details and a description of the spectroscopic data reduction we refer the reader to Scholz et al. ([@scholz11; @scholz12a]). Note that the wavelength coverage given above is not wide enough at the blue and red ends to compute spectrophotometric colours in the 2MASS system (using spectral response curves from Cohen et al. [@cohen03]). The $J$ band is also too narrow to compute spectral indices for classifying T dwarfs according to Burgasser et al. ([@burgasser06]), so that only $HK$ indices can be used. In Figs. \[fig4\_spec\], \[fig5\_spec\], and \[fig6\_spec\], we show $J$- and $HK$-band spectra normalised at 1.2-1.3 $\mu$m and 1.52-1.61 $\mu$m, respectively. The $J$-band spectrum of WISE J0521$+$1025 fits best to that of a T8 standard, but is more similar to T7/T7.5 in the $HK$ band, with a better fit to T7.5 at 1.7 $\mu$m (Fig. \[fig4\_spec\]). Except for the $H$ band, we note good agreement with Ross 458C (discovered by Goldman et al. [@goldman10] and Scholz [@scholz10b]) observed with the same instrument (Fig. \[fig5\_spec\]), including the K I doublet (at 1.24/1.25 $\mu$m) in the $J$ band and the high peak in the $K$ band. Because of these features, Ross 458C was characterised as a young (low surface gravity) and super-solar metallicity T8 dwarf by Burgasser et al. ([@burgasser10b]), whereas Burningham et al. ([@burningham11]) typed it as T8.5p. We visually classified WISE J0521$+$1025 as T7.5 in good agreement with the measured spectral indices in the $HK$ band (Table \[table:1\]) as defined in Burgasser et al.
([@burgasser06]). The spectra of WISE J0457$-$0207 (with a remarkably high $K$-band peak that cannot be explained by uncertainties of the flux calibration) and WISE J2030$+$0749 are of earlier ($\sim$T2) type (Fig. \[fig6\_spec\]), fitting in parts better to the T1, T2, or T3 standard. As standards are single, this may indicate possible close binary components with different types or peculiarities related to age or metallicity. The extreme $i$$-$$z$ index of WISE J2030$+$0749 makes this object even more interesting. Visually we classified WISE J0457$-$0207 as T2 and WISE J2030$+$0749 as T1.5 and adopted these types consistent with those obtained from spectral indices. Using mean absolute WISE magnitudes of single T7.5 and T1/T2 dwarfs from Dupuy & Liu ([@dupuy12]), we estimated distances of 5.0$\pm$1.3 pc for WISE J0521$+$1025, 12.5$\pm$3.1 pc for WISE J0457$-$0207, and 10.5$\pm$2.6 pc for WISE J2030$+$0749. Conclusions {#Sconc} =========== We have discovered three new BDs close to the Sun in an HPM search using MIR, NIR, and optical surveys: WISE J0457$-$0207 has a relatively small proper motion for an object at the 10 pc horizon (cf. Fig. 1 in Scholz et al. [@scholz11]) not detectable in the past because of similar 2MASS and DENIS epochs. WISE J2030$+$0749, with similar 2MASS and SDSS epochs, was previously not associated with its SSS measurement, whereas WISE J0521$+$1025 was probably overlooked in previous BD and HPM searches because of problems matching partly blended images in different surveys. Using NIR spectroscopy with LBT/LUCI we classified WISE J0521$+$1025 as a new T7.5 dwarf at a distance of about 5 pc. It is currently the nearest T dwarf in the northern hemisphere and may also be the closest free-floating neighbour of its spectral sub-class. 
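The spectrophotometric distances and the tangential velocities in Table \[table:1\] follow from the standard distance modulus and from $v_{tan} = 4.74\,\mu\,d$. A short sketch of this arithmetic (the absolute magnitude $M_{w2}\approx 13.8$ used for the T7.5 example is our illustrative placeholder, not the actual Dupuy & Liu mean value):

```python
import math

def photometric_distance(m_app, m_abs):
    """Distance in pc from the distance modulus m - M = 5 log10(d) - 5."""
    return 10 ** ((m_app - m_abs + 5.0) / 5.0)

def v_tan(mu_ra_cosdec, mu_dec, d_pc):
    """Tangential velocity [km/s]; proper motion components in mas/yr,
    distance in pc. The factor 4.74 converts 1 arcsec/yr at 1 pc to km/s."""
    mu_arcsec = math.hypot(mu_ra_cosdec, mu_dec) / 1000.0
    return 4.74 * mu_arcsec * d_pc

# w2 = 12.286 and an assumed M_w2 of 13.8 reproduce the ~5 pc of WISE J0521+1025
print(round(photometric_distance(12.286, 13.8), 1))

# Proper motions and distances from Table 1:
print(round(v_tan(232, -418, 5.0)))   # WISE J0521+1025: ~11 km/s
print(round(v_tan(82, -97, 12.5)))    # WISE J0457-0207: ~8 km/s
print(round(v_tan(653, -138, 10.5)))  # WISE J2030+0749: ~33 km/s
```

The last three lines reproduce the tangential velocities listed in Table \[table:1\].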
The dwarfs WISE J0457$-$0207 and WISE J2030$+$0749 lie, according to their T2 and T1.5 types, slightly beyond 10 pc, but may still fall in the 10 pc sample given their error bars, if they are not unresolved binaries. The latter was independently discovered by Mace et al. ([@mace13]), who also classified it as a T1.5 dwarf. However, they did not mention its large proper motion, proximity, and very red $i$$-$$z$ colour from the SDSS. The small tangential velocities of all three new BDs are typical of the Galactic thin disc population. They are promising targets for trigonometric parallax programmes and adaptive optics observations. The authors thank Jochen Heidt, Barry Rothberg, and all observers at the LBT for assistance during the preparation and execution of LUCI observations, Adam Burgasser for providing template spectra at http://pono.ucsd.edu/$\sim$adam/browndwarfs/spexprism, and the anonymous referee for a quick and helpful report and Victor J. Sanchez Bejar for some important hints. This research has made use of the WFCAM Science Archive providing UKIDSS, the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration, and of data products from WISE, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration, from 2MASS, and from SDSS DR7 and DR8. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy. The SDSS-III web site is http://www.sdss3.org/. This research has benefitted from the M, L, T, and Y dwarf compendium housed at DwarfArchives.org. We have also used SIMBAD and VizieR at the CDS/Strasbourg. Abazajian, K. N., Adelman-McCarthy, J. K., Agüeros, M. A., et al. 
2009, ApJS, 182, 54 Ageorges, N., Seifert, W., Jütte, M., et al. 2010, SPIE, 7735, 53 Aihara, H., Allende Prieto, C., An, D., et al. 2011, ApJS, 193, 29 Bowler, B. P., Liu, M. C., & Dupuy, T. J. 2010, ApJ, 710, 45 Burrows, A., Hubbard, W. B., Lunine, J. I., & Liebert, J. 2001, Reviews of Modern Physics, 73, 719 Burgasser, A. J., Geballe, T. R., Leggett, S. K., Kirkpatrick, J. D., & Golimowski, D. A. 2006, ApJ, 637, 1067 Burgasser, A. J., Looper, D., & Rayner, J. T. 2010a, AJ, 139, 2448 Burgasser, A. J., Simcoe, R. A., Bochanski, J. J., et al. 2010b, ApJ, 725, 1405 Burgasser, A. J., McElwain, M. W., Kirkpatrick, J. D., et al. 2004, AJ, 127, 2856 Burningham, B., Leggett, S. K., Homeier, D., et al. 2011, MNRAS, 414, 3590 Casali, M., Adamson, A., Alves de Oliveira, C., et al. 2007, A&A, 467, 777 Cohen, M., Wheaton, W. A., & Megeath, S. T. 2003, AJ, 126, 1090 Cushing, M. C., Kirkpatrick, J. D., Gelino, C. R., et al. 2011, ApJ, 743, 50 Dupuy, T. J., & Liu, M. C. 2012, ApJ Suppl. Ser., 201, 19 Epchtein, N., de Batz, B., Capoani, L., et al. 1997, The Messenger, 87, 27 Gelino, C. R., Kirkpatrick, J. D., & Burgasser, A. J. 2012, online database for 804 L and T dwarfs at DwarfArchives.org (status: 6 November 2012) Goldman, B., Marsat, S., Henning, T., Clemens, C., & Greiner, J. 2010, MNRAS, 405, 1140 Hambly, N. C., MacGillivray, H. T., Read, M. A., et al. 2001, MNRAS, 326, 1279 Hambly, N. C., Collins, R. S., Cross, N. J. G., et al. 2008, MNRAS, 384, 637 Hawley, S. L., Covey, K. R., Knapp, G. R., et al. 2002, AJ, 123, 3409 Hewett, P. C., Warren, S. J., Leggett, S. K., & Hodgkin, S. T. 2006, MNRAS, 367, 454 Kirkpatrick, J. D., Cushing, M. C., Gelino, C. R., et al. 2011, ApJ Suppl. Ser., 197, 19 Kirkpatrick, J. D., Gelino, C. R., Cushing, M. C., et al. 2012, ApJ, 753, 156 Lawrence, A., Warren, S. J., Almaini, O., et al. 2007, MNRAS, 379, 1599 Luhman, K. L. 2013, ApJ, 767, L1 Mace, G. N., Kirkpatrick, J. D., Cushing, M. C., et al. 2013, ApJ Suppl. Ser., 205, 6 Mamajek, E. E.
2013, arXiv:1303.5345 Mandel, H., Seifert, W., Hofmann, R., et al. 2008, SPIE, 7014, 124 McCaughrean, M. J., Close, L. M., Scholz, R.-D., et al. 2004, A&A, 413, 1029 Röser, S., Demleitner, M., & Schilbach, E. 2010, AJ, 139, 2440 Scholz, R.-D., McCaughrean, M. J., Lodieu, N., & Kuhlbrodt, B. 2003, A&A, 398, L29 Scholz, R.-D. 2010a, A&A, 510, L8 Scholz, R.-D. 2010b, A&A, 515, A92 Scholz, R.-D., Bihain, G., Schnurr, O., & Storm, J. 2011, A&A, 532, L5 Scholz, R.-D., Bihain, G., Schnurr, O., & Storm, J. 2012a, A&A, 541, A163 Scholz, R.-D., Granzer, T., Schwarz, R., et al. 2012b, The Astronomer’s Telegram, 4268, 1 Seifert, W., Ageorges, N., Lehmitz, M., et al. 2010, SPIE, 7735, 256 Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, AJ, 131, 1163 Tokunaga, A. T., Simons, D. A., & Vacca, W. D. 2002, PASP, 114, 180 Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868 [^1]: based on observations with the Large Binocular Telescope (LBT) [^2]: http://irsa.ipac.caltech.edu/applications/finderchart/ providing DSS, 2MASS, and WISE images for a given object at a glance (see e.g. Figs. \[fig1\_fc\]-\[fig3\_fc\]) [^3]: http://simbad.u-strasbg.fr/ [^4]: The UKIDSS project is defined in Lawrence et al. ([@lawrence07]). UKIDSS uses the UKIRT Wide Field Camera (WFCAM; Casali et al. [@casali07]) and the photometric system described in Hewett et al. ([@hewett06]), which is based on the Mauna Kea Observatories (MKO) system (Tokunaga et al. [@tokunaga02]). The pipeline processing and science archive are described in Hambly et al. ([@hambly08]) and Irwin et al. (in prep.).
---
abstract: 'Most models of Bose-Einstein correlations in multiple particle production processes can be ascribed to one of the following three broad classes: models based on the original idea of the Goldhabers, Lee and Pais, hydrodynamic models and string models. We present for discussion some basic questions concerning each of these classes of models.'
author:
- |
    K.Zalewski\
    M.Smoluchowski Institute of Physics\
    Jagellonian University, Cracow\
    and\
    Institute of Nuclear Physics, Cracow
title: 'Some questions concerning Bose-Einstein correlations in multiple particle production processes.'
---

Introduction
============

Since the pioneering work of G. Goldhaber, S. Goldhaber, W. Lee and A. Pais, published over forty years ago [@GGL] and known as the GGLP model, Bose-Einstein correlations in multiple particle production processes have been studied in hundreds of papers. Many references can be found in the recent review articles [@WIH] and [@WEI]. These correlations have been popular for two main reasons. They give impressive bumps in two-particle and many-particle distributions and, if the regime of Einstein’s condensation can be reached, there are even more spectacular phenomena waiting to be discovered [@PR1], [@PR2], [@BZ1], [@BZ2]. What is more, Bose-Einstein correlations are believed to yield important information, which seems hard, if not impossible, to obtain by other means. They have been used to find the sizes and shapes of the regions where hadrons are produced, as well as to obtain detailed information about the evolution in time of the hadron production processes (see the reviews [@WIH], [@WEI]). There is little doubt, however, that the problem is hard. As an example of a sceptical opinion let me quote one of the creators of this field of research, G. Goldhaber, who said at the Marburg conference in 1990: “What is clear is that we have been working on this effect for thirty years.
What is not as clear is that we have come much closer to a precise understanding of the effect”. Everybody agrees that the GGLP paper was very important and various extensions of the model proposed there are still being used. There are, however, other approaches. The GGLP model contains static sources of particles. In the more recent “hydrodynamic” models the flow of the sources is of great importance. Another very promising approach is that of string models, where the random phase assumption used in previous models is not necessary and the description looks closer to QCD. In the present paper we will characterize the GGLP models, the hydrodynamic models and the string models, stressing in each case open and potentially important problems. GGLP models =========== Let us consider two identical bosons, e.g. two $\pi^+$ mesons, created: one at point $\vec{r}_1$ and the other at point $\vec{r}_2$. If the two bosons were distinguishable, a crude approximation for the probability amplitude of observing both of them at point $\vec{r}$ could be $$\label{} A_D = e^{i\phi_1 + i\vec{p}_1\cdot(\vec{r} - \vec{r}_1)}e^{i\phi_2 + i\vec{p}_2\cdot(\vec{r} - \vec{r}_2)}.$$ Here the interaction between the two bosons is neglected, so that the two-particle amplitude is a product of single particle amplitudes. Only the phase factors are kept and each phase is the sum of the phase obtained by the boson at birth and of the phase acquired while propagating with given momentum from the birth point to point $\vec{r}$. For identical bosons, however, this amplitude does not have the right symmetry with respect to the exchange of the two bosons, and the least one must do is to symmetrize it.
Thus for identical bosons the corresponding approximation is $$\label{} A = \frac{1}{\sqrt{2}}e^{i(\phi_1 + \phi_2) +i(\vec{p}_1+\vec{p}_2)\cdot\vec{r}} \left(e^{-i(\vec{p}_1\cdot\vec{r}_1+ \vec{p}_2\cdot\vec{r}_2)} + e^{-i(\vec{p}_2\cdot\vec{r}_1+ \vec{p}_1\cdot\vec{r}_2)}\right).$$ The probability distribution for momenta is proportional to $$\label{} |A|^2 = 1 + \cos\left((\vec{p}_1 - \vec{p}_2)\cdot(\vec{r}_1 - \vec{r}_2)\right).$$ In order to make use of this expression it is necessary to average it over the non measured production points $\vec{r}_1,\vec{r}_2$. In the GGLP paper the averaging was over the space distribution of sources $\rho(\vec{r}_1;R)\rho(\vec{r}_2;R)$, where $R$ is a parameter with dimension length, which was interpreted as the radius of the production region. Various generalizations, modifications and extensions followed, but let us use this simple variant to make some general remarks. In the GGLP model the distribution for pairs of particle momenta depends only on the momentum difference $\vec{q} = \vec{p}_1 - \vec{p}_2$. This is in violent contradiction with the data, but GGLP found a clever way out. The distribution for the unsymmetrized amplitude is flat. Therefore, the result can be just as well interpreted as a prediction for the ratio of the actual momentum distribution to the distribution for distinguishable bosons. Further we denote this ratio by $R(\vec{p}_1,\vec{p}_2)$: $$\label{} R(\vec{p}_1,\vec{p}_2) = \frac{|A(\vec{p}_1,\vec{p}_2)|^2} {|A_D(\vec{p}_1,\vec{p}_2)|^2}$$ Now, the momentum distribution is not assumed to be independent of the sum of the momenta. It is enough to make the much weaker and more reasonable assumption that the dependence on this sum can be factored out and cancels in the ratio. A well known difficulty with this approach is that the distribution for distinguishable $\pi^+$-s, say, cannot be obtained from experimental data without further assumptions. 
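For a concrete Gaussian choice of $\rho(\vec{r};R)$ (each coordinate of $\vec{r}_1,\vec{r}_2$ drawn independently with width $R$) the average of the cosine can be done in closed form, $\langle\cos(\vec{q}\cdot(\vec{r}_1-\vec{r}_2))\rangle = e^{-\vec{q}^{\,2}R^2}$, so the ratio of the symmetrized to the unsymmetrized distribution becomes $1+e^{-\vec{q}^{\,2}R^2}$: a bump of height two at $\vec{q}=0$ whose width in $\vec{q}^{\,2}$ scales as $R^{-2}$. A quick Monte Carlo check (our sketch, not part of the original analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
R = 1.0          # per-axis width of the Gaussian source, arbitrary units
n = 200_000      # number of sampled production-point pairs

# production points r1, r2 drawn from the isotropic Gaussian rho(r; R)
r1 = rng.normal(0.0, R, size=(n, 3))
r2 = rng.normal(0.0, R, size=(n, 3))

for q in (0.5, 1.0, 2.0):
    qvec = np.array([0.0, 0.0, q])                   # momentum difference
    ratio = 1.0 + np.mean(np.cos((r1 - r2) @ qvec))  # averaged |A|^2
    analytic = 1.0 + np.exp(-(q * R) ** 2)
    print(q, round(float(ratio), 3), round(float(analytic), 3))
```

The sampled ratio agrees with the closed form to Monte Carlo accuracy and shows the characteristic decrease from two towards one as $|\vec{q}|$ grows.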
GGLP assumed that the distribution for $\pi^+\pi^-$ pairs can be used instead. There have been many other proposals (cf. e.g. [@HAY] and references therein), but none is fully satisfactory. For any nonsingular averaging process the average cosine must be close to one for $|\vec{q}| \approx 0$ and very small for large values of $|\vec{q}|$. Therefore, the ratio $R(\vec{p}_1,\vec{p}_2)$ decreases, though not necessarily monotonically, from values close to two for small values of $|\vec{q}|$ to values close to one for large values of $|\vec{q}|$. This gives the characteristic bump in $R(\vec{p}_1,\vec{p}_2)$ for small values of $\vec{q}^2$. If $R$ is the only dimensional parameter available, the width of this bump must, for simple dimensional reasons, be proportional to $R^{-2}$. Thus, the main qualitative results of GGLP are much more general than their specific choices of the weight functions $\rho(\vec{r};R)$. Nevertheless, they are not quite general. It is well known from optics that whether photons bunch or not depends on the type of source and not only on the fact that they are bosons. Photons can antibunch just as well. In order to illustrate this point within the GGLP-type models, let us assume that the amplitude $A$ has an additional factor, which equals one if the product $(\vec{r}_1 - \vec{r}_2)\cdot(\vec{p}_1 - \vec{p}_2) > 0$ and minus one otherwise. This factor changes sign when the momenta of the two bosons are exchanged. Therefore, the squared modulus of the properly symmetrized production amplitude is $$\label{} |A|^2 = 1 - \cos\left((\vec{p}_1 - \vec{p}_2)\cdot(\vec{r}_1 - \vec{r}_2)\right)$$ and we get a hole instead of the bump in the small $\vec{q}^2$ region. Admittedly this model is not realistic. Its purpose is only to indicate a possibility. This may be interesting in view of the LEP results concerning Bose-Einstein correlations in $e^+e^-$ annihilations, where two $W$ bosons are simultaneously produced.
It seems that identical pions originating from the decay of a single $W$ exhibit the usual bump attributed to Bose-Einstein correlations, while these correlations are absent, or very weak, for pairs of identical pions when each pion originates from a different $W$ [@ABB], [@BAR], [@ACC]. In the GGLP model the bump results from the assumption that pion pairs produced in different pairs of points add incoherently. Mild modifications of this assumption [@WIH], [@WEI] can affect the size of the bump, but do not eliminate it. It would be interesting to check whether the GGLP assumptions could be modified so as to predict the bump for some, but not all, pairs of identical mesons produced in a multiple particle production event. Hydrodynamic models =================== It is not possible to express the ratio $R(\vec{p},\vec{p}')$ in terms of the single particle momentum distributions. In the hydrodynamic models, as well as in GGLP models, one makes, however, an assumption that makes it possible to express this ratio in terms of the diagonal and off-diagonal terms of the single particle density matrix in the momentum representation. It is convenient to formulate the hydrodynamic models in terms of source functions $S(X,K)$. The source function [@GKW], [@PRA3] is related to the single particle density matrix in the momentum representation by the formula $$\label{} \rho(\vec{p},\vec{p}') = \int e^{iqX}S(X,K)d^4X.$$ In this formula $$\label{} K = \frac{1}{2}(p + p');\qquad q = p - p'.$$ Here $K,q,p,p'$ are four-vectors, but in order to calculate the density matrix, we need only their values corresponding to the momenta $p,p'$ being on their mass shells. $X$ is an integration variable associated with the position of the sources in space-time. The physical interpretation of $X$ may be helpful when trying to find the source function. It is irrelevant for the calculation of the density matrix once the source function is known.
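The Fourier relation between $S$ and $\rho$ can be checked directly in a deliberately oversimplified setting: one spatial dimension and a static Gaussian source of width $\sigma$, for which the off-diagonal elements fall off as $e^{-q^2\sigma^2/2}$. This toy sketch is ours, not one of the hydrodynamic parametrizations:

```python
import numpy as np

sigma = 1.0                                   # spatial width of the toy source
x = np.linspace(-10 * sigma, 10 * sigma, 4001)
dx = x[1] - x[0]
S = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def rho(q):
    """rho(p, p') = integral of e^{iqx} S(x) dx for q = p - p' (1D, static)."""
    return np.sum(np.exp(1j * q * x) * S) * dx

for q in (0.0, 1.0, 2.0):
    print(q, round(float(rho(q).real), 4),
          round(float(np.exp(-q**2 * sigma**2 / 2)), 4))
```

The numerically integrated off-diagonal elements match the Gaussian fall-off, and the diagonal element $\rho(p,p)$ is the (unit) normalization.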
There is an infinity of different source functions, which all give the same density matrix and consequently the same predictions for the ratio $R(\vec{p},\vec{p}')$. For instance, one could put $$\label{} S(X,K) = W(\vec{X},\vec{K})\delta(X_0),$$ where $W(\vec{X},\vec{K})$ is the well-known Wigner function satisfying the relation $$\label{} \rho(\vec{p},\vec{p}') = \int e^{-i\vec{q}\cdot\vec{X}}W(\vec{X},\vec{K}) d^3X$$ and $$\label{} \vec{X} = \frac{1}{2}(\vec{x} + \vec{x}').$$ This source function gives the correct density matrix by construction, but it corresponds to a most unlikely scenario, where all the particles are created simultaneously at $X_0 = 0$. Since our aim is to find the correct density matrix, this source function would be fine, in spite of the unlikely physical picture attached to it. The problem is, however, that finding the Wigner function is not any easier than finding the density matrix in the momentum representation. The hope is that using a source function, which corresponds to a plausible scenario for the production process, we will be able to use more efficiently what we know about particle production in order to determine it. As an example of this approach let us consider the model reviewed in [@WIH]. The source function is postulated in the form $$\begin{aligned} \label{} S(X,K) = C m_T \cosh(y - \eta)\exp\left[-\frac{m_T\cosh(y - \eta) \cosh \eta_t - r_T^{-1}\vec{x}_T\cdot\vec{K}_T\sinh \eta_t}{T}\right]\times\nonumber\\ \exp\left[-\frac{r_T^2}{2R^2} - \frac{\eta^2}{2(\Delta\eta)^2} - \frac{(\tau - \tau_0)^2}{2(\Delta\tau)^2}\right].\end{aligned}$$ In this formula $$\begin{aligned} \label{} \eta = \frac{1}{2}\log\frac{t+z}{t-z};\qquad \eta_t = \eta_f\frac{r_T}{R};\qquad \tau = \sqrt{t^2 - z^2};\nonumber\\ m_T^2 = m^2 + \vec{K}_T^2;\qquad r_T^2 = \vec{x}^2_T;\end{aligned}$$ $z$ is parallel to $x_\|$ and $\vec{x}_T$ is parallel to $\vec{K}_T$.
The source function depends on six free parameters $(R, T, \eta_f, \Delta\eta, \tau_0, \Delta\tau)$ and, moreover, contains the normalization constant $C$, but each piece of the source function has a clear physical interpretation. Therefore, fixing these parameters from the data directly yields interesting physical information. We will illustrate this important point using a fit to the NA49 data on Pb-Pb scattering at $158$ GeV/c per nucleon [@WIH]. The parameter $R$ is the transverse radius of the tube from which the final hadrons are emitted. The result $R \approx 7$fm is about twice the radius resulting from the known radii of the lead nuclei. This is evidence for a significant transverse expansion before most of the hadrons are produced. The parameter $\eta_f$ governs the transverse rapidity of the sources. The value obtained $\eta_f \approx 0.35$ corresponds to transverse velocities reaching the velocity of sound in the plasma (1/3), which looks very reasonable. The parameter $T$ occurs as the temperature in a Boltzmann-type factor. Its fitted value $T \approx 130$ MeV is significantly lower than the temperatures obtained when fitting the chemical composition of the final state hadrons, which indicates that during the expansion the matter cools down. Another interesting comparison is that of the fitted values $\tau_0 \approx 9$fm and $\Delta\tau \approx 1.5$fm. The parameter $\tau_0$ is the typical time between the moment of collision and the moment when a hadron is produced. The parameter $\Delta\tau$ is the duration of the period during which hadrons are produced. The fact that $\tau_0 \gg \Delta\tau$ means that all the hadrons are produced in a short time interval after a relatively long incubation time. Unfortunately, as stated by the authors [@WIH], the parameter $\Delta\tau$ is poorly constrained by the data, so that this conclusion is not as solid as the others.
As seen from this example, given a model one can obtain much important information from the data. An open problem is, however, how stable these conclusions are when models change. As seen from the expression of the source function in terms of the Wigner function, one can fit the data perfectly assuming that all the particles are produced exactly simultaneously. It is just as easy to get a perfect fit assuming that the particles are produced only on the surface of a sphere, or only on the surface of a cube. These alternative models are so implausible physically that there is little doubt they should be discarded. The question is, however, how many physically plausible models can fit the data while giving completely different descriptions of the hadronization process? String picture ============== The string model for Bose-Einstein correlations [@ANH],[@AND],[@ANR] has not yet been developed to the point where it could be compared quantitatively with the data. It is, however, much more ambitious than the models described above. Instead of phenomenological assumptions about sources and incoherence, it gives a well-defined amplitude for the production of particles with momenta $\vec{p}_1,\ldots,\vec{p}_n$. This amplitude is a plausible approximation to QCD. We will consider only the 1+1 dimensional version of the string model, which seems to contain all the main ingredients of this approach, while it is much simpler than the 3+1 dimensional version. In fact, the 1+1 dimensional model has recently been analytically diagonalized [@ANS], though in the version without Bose-Einstein correlations. Let us consider a final state consisting of hadrons with momenta $p_1,\ldots,p_n$. To this final state the model ascribes a polygon in the $(z,t)$ plane. The sides of this polygon are the trajectories of the various partons existing between the moment of $e^+e^-$ annihilation and the moments when the hadrons are formed. 
The partons are considered massless and moving with the velocity of light; therefore the sides of the polygon form angles $\pm 45^\circ$ with the $t$ and $z$ axes. Let us put the $e^+e^-$ annihilation point at the origin of the coordinate system. At this point two partons, a quark and an antiquark, are formed, flying along the $z$ axis away from each other. Their trajectories form the first two sides of the polygon, both starting at the origin, one going to the right and upwards, the other going to the left and upwards. The partons are the end points of a colour string. Thus the string sweeps the surface of the polygon. The energy of the string $E$ is connected to its length $L$ by the formula $E = \kappa L$, where $\kappa$ is a constant known as the string tension. Thus, while the quark and the antiquark fly away from each other and the string expands, there is a force reducing the energy of the two partons; finally the directions of their motions get reversed, starting another pair of sides of the polygon. In the meantime the string can break at any point between the endpoints, producing a quark and an antiquark, which form another pair of sides of the polygon. Since any segment of the string has a quark at one end and an antiquark at the other, it is easily checked that, except for the original two partons, all the quarks fly in one direction and all the antiquarks in the other. From time to time a quark meets an antiquark. The two then form a hadron with two-momentum equal to the sum of the two-momenta of the two meeting partons. Thus the polygon contains the following elements: the vertex at the origin, the two turning points of the original partons, $n$ vertices where the $n$ hadrons were formed, and $n-1$ vertices where the string broke. One finds that the two-momentum of a hadron is determined by, and determines, the lengths of the two sides of the polygon adjacent to the vertex where the hadron was formed. 
The probability amplitude for producing the state $(p_1,\ldots,p_n)$ depends on the area $A$ of the polygon $$\label{amplstr} M(p_1,\ldots,p_n) \sim e^{i\xi A}.$$ The imaginary part of $\xi$, written as $b/2$, gives the probability distribution well known from the Lund model $$\label{} |M(p_1,\ldots,p_n)|^2 \sim e^{-bA}.$$ The real part, believed to be close to the string tension $\kappa$, is important for the description of the Bose-Einstein correlations. In order to describe these Bose-Einstein correlations, the amplitude (\[amplstr\]) is symmetrized very much as in the GGLP approach, and a qualitatively satisfactory description of the correlations is obtained. Since, however, the amplitude being symmetrized is not the GGLP one, there are some significant new points. Symmetrization means summing over all the permutations of identical hadrons. Let us concentrate on an exchange of two $\pi^+$ mesons. Each of them is produced at some hadronization vertex of the polygon. In order to perform the exchange, one has to cut off the two pionic vertices together with their adjacent sides of the polygon and to glue them back, pion one in the position of pion two and pion two in the position of pion one, so as to obtain again a closed polygon. The new polygon has in general a different area $A' \neq A$. If the new area is much larger than the area before the exchange, the contribution from the interference with the permuted amplitude is negligible. One reason, familiar from the GGLP model, is that the relative phase is a rapidly varying function of momentum. The new fact is, however, that the modulus of the permuted amplitude is additionally suppressed by the factor $e^{-\frac{b}{2}(A'-A)}$. In order to obtain small changes of the area, it is advantageous to exchange pions which are close to each other counting along the perimeter of the polygon, or equivalently, which have similar momenta. 
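The geometric statements above are easy to make concrete. The sketch below computes polygon areas with the shoelace formula and the relative modulus $e^{-b(A'-A)/2}$ of a permuted amplitude; the polygon coordinates and the value of $b$ are hypothetical, chosen only to illustrate that moving a single hadronization vertex a little changes the area little and leaves the interference weight of order one.

```python
import math

def polygon_area(vertices):
    """Shoelace formula for the area of a closed polygon in the (z, t) plane."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        z1, t1 = vertices[i]
        z2, t2 = vertices[(i + 1) % n]
        area += z1 * t2 - z2 * t1
    return abs(area) / 2.0

def permuted_weight(area_orig, area_perm, b):
    """Extra suppression of the modulus of the permuted amplitude,
    |M'|/|M| ~ exp(-b (A' - A) / 2), beyond the rapidly varying phase."""
    return math.exp(-0.5 * b * (area_perm - area_orig))

# hypothetical polygons before and after exchanging two pions;
# only one hadronization vertex moves, so the area changes little
original = [(0.0, 0.0), (2.0, 2.0), (1.0, 3.0), (-1.0, 3.0), (-2.0, 2.0)]
permuted = [(0.0, 0.0), (2.0, 2.0), (1.5, 3.5), (-1.0, 3.0), (-2.0, 2.0)]
weight = permuted_weight(polygon_area(original), polygon_area(permuted), b=1.0)
```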
This is the reason for the familiar bump at $p_1 \approx p_2$. Let us quote two interesting qualitative predictions of this model. There should be a difference between the Bose-Einstein correlations for pairs of charged pions, say $\pi^+\pi^+$, and the corresponding correlations for pairs of neutral pions. The reason is that two $\pi^0$-s can be formed at two adjacent hadronization vertices of the polygon, while two $\pi^+$-s cannot. There should also be a difference between the Bose-Einstein correlations for pairs of mesons originating from the same string and pairs of mesons originating from different strings. For the latter situation the present model is clearly not applicable. It has been suggested ([@AND2] and references quoted there) that perhaps there are no correlations between mesons from different strings, which would explain the observations of Bose-Einstein correlations for pions from decays of pairs of $W$ bosons. An interesting question raised by Bowler is the relation of the string model to the GGLP model. Bowler proposed [@BOW] a model closely related to the GGLP model, which looks very similar to the string model and, according to Bowler, also gives very similar predictions. The model used by Bowler contains a distribution of sources which depends not only on space-time points, as in the GGLP paper, but also on the momenta of the produced particles. Such models have become popular after the work of Yano and Koonin [@YAK] and are sometimes called Yano-Koonin models. It is not clear, however, what constraints must be imposed on the distributions in order to make them consistent with quantum mechanics. Therefore, this approach occasionally gives inconsistent results [@MKFW]. Bowler’s model deviates in two ways from the string model. It yields the probability amplitude as an exponential in the area, but this is a modified area, where certain regions are counted more than once. 
Moreover, in both models, in order to get the inclusive $k$-particle distribution it is necessary to integrate out the momenta of the remaining particles, but in Bowler’s model the integration region is different from that used in the string model. According to some unpublished numerical calculations by Bowler, the two deviations nearly cancel. [99]{} G. Goldhaber, S. Goldhaber, W. Lee and A. Pais, [*Phys. Rev.*]{} [**120**]{}(1960)300. U.A. Wiedemann and U. Heinz, [*Phys. Rep.*]{} [**319**]{}(1999)145. R.M. Weiner, [*Phys. Rep.*]{} [**327**]{}(2000)250. S. Pratt, [*Phys. Letters*]{} [**B301**]{}(1993)159. S. Pratt, [*Phys. Rev.*]{} [**C50**]{}(1994)469. A. Bialas and K. Zalewski, [*Eur. Phys. J.*]{} [**C6**]{}(1999)349. A. Bialas and K. Zalewski, [*Phys. Rev.*]{} [**D59**]{}(1999)097502. S. Haywood, [*Where are we going with Bose-Einstein – a mini review*]{}, RAL report, January 6, 1995. G. Abbiendi et al., [*Eur. Phys. J.*]{} [**C8**]{}(1999)559. R. Barate et al., [*Phys. Letters*]{} [**B478**]{}(2000)50. M. Acciarri et al., [*Phys. Letters*]{} [**B493**]{}(2000)233. M. Gyulassy, S.K. Kauffmann and L.W. Wilson, [*Phys. Rev.*]{} [**C20**]{}(1979)2267. S. Pratt, [*Phys. Rev. Letters*]{} [**53**]{}(1984)1219. B. Andersson and W. Hofmann, [*Phys. Letters*]{} [**169B**]{}(1986)364. B. Andersson, [*Acta Phys. Pol.*]{} [**B29**]{}(1998)1885. B. Andersson and M. Ringnér, [*Nucl. Phys.*]{} [**B513**]{}(1998)627. B. Andersson and F. Söderberg, [*Eur. Phys. J.*]{} [**C16**]{}(2000). B. Andersson, Moriond 2000. M.G. Bowler, [*Phys. Letters*]{} [**B185**]{}(1987)205. F. Yano and S. Koonin, [*Phys. Letters*]{} [**B78**]{}(1978)556. M. Martin, H. Kalechofsky, P. Foka and U.A. Wiedemann, [*Eur. Phys. J.*]{} [**C2**]{}(1998)359.
--- abstract: 'In this paper, we explore the nature of three-dimensional Bose gases at large positive scattering lengths via resummation of dominating processes involving a minimum number of virtual atoms. We focus on the energetics of the nearly fermionized Bose gases beyond the usual dilute limit. We also find that an onset instability sets in at a critical scattering length, beyond which the near-resonance Bose gases become strongly coupled to molecules and lose the metastability. Near the point of instability, the chemical potential reaches a maximum, and the effect of the three-body forces can be estimated to be around a few percent.' author: - 'Dmitry Borzov$^{1}$, Mohammad S. Mashayekhi$^{1}$, Shizhong Zhang$^{2}$, Jun-Liang Song$^{3}$ and Fei Zhou$^{1,4}$' title: Nature of 3D Bose Gases near Resonance --- Introduction ============ Recently, impressive experimental attempts have been made to explore the properties of Bose gases near Feshbach resonance [@Navon11; @Papp08; @Pollack09]. In these experiments, it has been suggested that when approaching resonance from the side of small positive scattering lengths in the upper branch, Bose atoms appear to be thermalized within a reasonably short time, well before the recombination processes set in, and so form a quasistatic condensate. Furthermore, the lifetime due to the recombination processes is much longer than the many-body time scale set by the degeneracy temperature. This property of Bose gases near resonance and the recent measurement of the chemical potentials for a long-lived condensate by Navon [*et al*]{}. [@Navon11] motivate us to investigate further the fundamental properties of Bose gases at large scattering lengths. The theory of dilute Bose gases has a long history, starting with the Bogoliubov theory of weakly interacting Bose gases [@Bogoliubov47]. 
A properly regularized theory of dilute gases of bosons with contact interactions was first put forward by Lee, Huang, and Yang [@Lee57] and later by Beliaev [@Beliaev58; @Nozieres90], who developed a field-theoretical approach. Higher-order corrections were further examined in later years [@Wu59; @BHM02]. Since these results were obtained by applying an expansion in terms of the small parameter $\sqrt{na^3}$ (here $n$ is the density and $a$ is the scattering length), it is not surprising that, formally speaking, each of the terms appearing in the dilute-gas theory diverges when the scattering lengths are extrapolated to infinity. As far as we know, resummation of these contributions, even in an approximate way, has been lacking [@twobody]. This aspect, to a large extent, is the main reason why a qualitative understanding of Bose gases near resonance has been missing for so long. There have been a few theoretical efforts to understand the Bose gases at large positive scattering lengths. The numerical efforts have been focused on the energy minimum in truncated Hilbert spaces, which have been argued to be relevant to Bose gases studied in experiments [@Cowell02; @Song09; @Diederix11]. These efforts are consistent in pointing out that the Bose gases are nearly fermionized near resonance. However, there are two important unanswered questions in the previous studies. One is whether the energy minimum found in a restricted subspace is indeed metastable in the whole Hilbert space. The other equally important issue is what the role of three-body Efimov physics in the Bose gases near resonance is. Below we outline a nonperturbative approach to the long-lived condensates near resonance. We have applied this approach to explore the nature of Bose gases near resonance and to address the above issues. 
One concept emerging from this study is that a quantum gas (either fermionic or bosonic) at a positive scattering length does not always appear to be equivalent to a gas of effectively repulsive atoms; this idea, which we believe has been overlooked in many recent studies, plays a critical role in our analysis of Bose gases near resonance. Our main conclusions are fourfold: (a) energetically, the Bose gases close to unitarity are nearly [*fermionized*]{}, i.e., the chemical potentials of the Bose gases approach the Fermi energy of a Fermi gas with equal mass and density; (b) an onset instability sets in at a positive critical scattering length, beyond which the Bose gases appear to lose the metastability as a consequence of the sign change of effective interactions at large scattering lengths; (c) because of a strong coupling with molecules near resonance, the chemical potential reaches a maximum in the vicinity of the instability point; (d) at the point of instability, we estimate, via summation of loop diagrams, the effect of three-body forces to be around a few percent. Feature (a) is consistent with previous numerical calculations [@Cowell02; @Song09; @Diederix11]; both (b) and (c) are surprising features, not anticipated in the previous numerical calculations or in the standard dilute-gas theory [@Lee57; @Nozieres90]. Our aim here is mainly to reach an in-depth understanding of the energetics and metastability of Bose gases beyond the usual dilute limit, as well as of the contributions of three-body effects. The approach also reproduces the quantitative features of the dilute-gas theory. In Sec. II and Appendixes A-C, we outline our main calculations and arguments. In Sec. III, we present the conclusions of our studies. 
Chemical potential, Metastability and Efimov effects ==================================================== The Hamiltonian we apply to study this problem is $$\begin{aligned} H&=&\sum_{\bf k} (\epsilon_{\bf k} -\mu) b_{\bf k}^\dagger b_{\bf k} + 2 U_0 n_0 \sum_{\bf k} b^\dagger_{\bf k} b_{\bf k} \nonumber \\ &+&\frac{1}{2} U_0 n_0\sum_{\bf k} b^\dagger_{\bf k} b^\dagger_{-\bf k}+ \frac{1}{2}U_0 n_0 \sum_{\bf k} b_{\bf k}b_{-\bf k} \nonumber \\ &+&\frac{U_0}{\sqrt{\Omega}}\sqrt{n_0} \sum_{{\bf k'},{\bf q}} b^\dagger_{\bf q} b_{\bf k'+\frac{\bf q}{2}} b_{-\bf k'+\frac{\bf q}{2}}+{\rm H.c.} \nonumber \\ &+&\frac{U_0}{2\Omega} \sum_{{\bf k}, {\bf k'},{\bf q}} b^\dagger_{\bf k+\frac{\bf q}{2}} b^\dagger_{-\bf k+\frac{\bf q}{2}} b_{\bf k'+\frac{\bf q}{2}} b_{-\bf k'+\frac{\bf q}{2}}+{\rm H.c.}\end{aligned}$$ Here $\epsilon_{\bf k}={|\bf k|}^2/2m$, and the sum is over nonzero momentum states. $U_0$ is the strength of the contact interaction related to the scattering length $a$ via $U_0^{-1}=m (4\pi a)^{-1} -\Omega^{-1} \sum_{\bf k} (2\epsilon_{\bf k})^{-1}$, and $\Omega$ is the volume. $n_0$ is the number density of the condensed atoms and $\mu$ is the chemical potential, both of which are functions of $a$ and are to be determined self-consistently. The chemical potential $\mu$ can be expressed in terms of $E(n_0,\mu)$, the energy density for the Hamiltonian in Eq. (1), with $n_0$ fixed [@Pines59; @Beliaev58]; $$\begin{aligned} \mu=\frac{\partial E(n_0,\mu)}{\partial n_0}, E(n_0,\mu)=\sum_{M=2}^{\infty} g_M(n_0,\mu) \frac{n_0^M}{M!}, \label{mu}\end{aligned}$$ where $g_M(M=2,3,...)$ are the irreducible $M$-body potentials that we will focus on below. The density of condensed atoms $n_0$ is further constrained by the total number density $n$ as $$\begin{aligned} n= n_0- \frac{\partial E(n_0,\mu)}{\partial \mu}, \label{QD}\end{aligned}$$ In the dilute limit, the Hartree-Fock energy density is given by Eq. 
(\[mu\]), with $g_2=4\pi a/m$ and the rest of the potentials $g_{M}, M=3,4...$ set to zero. The one-loop contributions to $g_M$ for $M=3,4,...$ in Figs. 1(c) and 1(d) all scale like $g_2 \sqrt{na^3}$, and their sum yields the well-known Lee-Huang-Yang (LHY) correction to the energy density [@Lee57]. When evaluated in the usual dilute-gas expansion, $g_{2}$ as well as one-loop contributions formally diverge as $a$ becomes infinite. Below we regroup these contributions into effective potentials $g_{2,3...}$ at a finite density $n_0$ via resummation of a set of diagrams in the perturbation theory. The approximation produces a convergent result for $\mu$. Before proceeding further, we make the following general remark. In the standard diagrammatic approach [@Beliaev58; @Pines59], the chemical potentials can have contributions from diagrams with $L$ internal lines, $S$ interaction vertices, and $X$ incoming or outgoing zero momentum lines, and $X=2S-L$. For the normal self-energy ($\Sigma_{11}$) and the anomalous counterpart ($\Sigma_{02}$) introduced by Beliaev, by classifying the diagrams Hugenholtz and Pines had shown that, in general, the following identity holds [@Pines59] in the limit of zero energy and momentum: $\mu=\Sigma_{11}-\Sigma_{02}$. Following a very similar calculation, we further find that $$\begin{aligned} \Sigma_{11}(n_0,\mu) =\mu + n_0 \frac{\partial \mu}{\partial n_0}, \label{SE}\end{aligned}$$ where $\mu={\partial E(n_0,\mu)}/{\partial n_0}$. The equality in Eq. (\[SE\]) is effectively of a hydrodynamic origin. Following Eq. (\[SE\]), the speed of Bogoliubov phonons [@Bogoliubov47] $v_s$ can be directly related to an [*effective compressibility*]{} $\partial n_0/\partial \mu$ via $m v^2_s={\Sigma_{11}-\mu}={n_0}\frac{\partial \mu}{\partial n_0}$, where the first equality is due to the Hugenholtz-Pines theorem on the phonon spectrum [@HDC]. 
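In the dilute limit, where (as worked out in Appendix A, with $\hbar=1$) $\Sigma_{11}=8\pi a n_0/m$ and $\mu=4\pi a n_0/m$, both Eq. (\[SE\]) and its corollary $\Sigma_{11}=2\mu$ can be verified directly. A minimal numerical sketch, with arbitrary sample values of $n_0$ and $a$:

```python
import math

def mu_hf(n0, a, m=1.0):
    """Hartree-Fock chemical potential in the dilute limit."""
    return 4 * math.pi * a * n0 / m

def sigma11_hf(n0, a, m=1.0):
    """Lowest-order normal self-energy."""
    return 8 * math.pi * a * n0 / m

n0, a = 0.7, 0.05                       # arbitrary sample values
h = 1e-6
dmu_dn0 = (mu_hf(n0 + h, a) - mu_hf(n0 - h, a)) / (2 * h)

lhs = sigma11_hf(n0, a)                 # Sigma_11
rhs = mu_hf(n0, a) + n0 * dmu_dn0       # mu + n0 dmu/dn0, Eq. (SE)
```

The same relation fixes the phonon velocity through $m v_s^2 = n_0\,\partial\mu/\partial n_0$, which here reduces to Bogoliubov's $v_s=\sqrt{4\pi a n_0}/m$.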
Note that hydrodynamic considerations had also been employed previously by Haldane to construct the Luttinger-liquid formulation for one-dimensional (1D) Bose fluids [@Haldane81]. When $na^3$ is small, Eq. (\[SE\]) leads to the well-known result, $\Sigma_{11}=2\mu$. The self-consistent approach outlined below is mainly suggested by an observation that a subclass of one-loop diagrams \[shown in Fig. 1(c)\] yields almost all contributions in the LHY correction (see below and Appendixes A and B). Resummation of these and their $N$-loop counterparts can be conveniently carried out by introducing the [*renormalized*]{} or effective potentials $g_{2,3}$ as shown in Figs. 1(a) and 1(b), where all internal lines represent, instead of the noninteracting Green’s function $G_0^{-1}(\epsilon,{\bf k})=\epsilon-\epsilon_{\bf k}+\mu+i\delta$, the interacting Hartree-Fock Green’s function, $G^{-1}(\epsilon,{\bf k})=\epsilon -\epsilon_{\bf k} -\Sigma_{11}+\mu+i\delta$. This approximation captures the main contributions to the chemical potential in the dilute limit because the renormalization of two-body interactions is mainly due to virtual states with energies higher than $\mu$ where the Hartree-Fock treatment turns out to be a good approximation. The self-consistent equation for $\mu$ can be derived by estimating $g_{2,3,...}(n_0,\mu)$ diagrammatically (see examples in Fig. 1). When neglecting $g_{3,4,...}$ potentials in Eq. (\[mu\]), one obtains $$\begin{aligned} \mu &=& n_0 g_2(n_0,\mu)+\frac{n_0^2}{4}g_2^2(n_0,\mu) \int\frac{d^3{\bf k}}{(2\pi)^3} \frac{\partial \Sigma_{11}/\partial n_0}{(\epsilon_{\bf k}+\Sigma_{11} -\mu)^2}, \nonumber\\ n &=& n_0+ \frac{n_0^2}{4}g_2^2(n_0,\mu) \int\frac{d^3{\bf k}}{(2\pi)^3} \frac{1-\partial \Sigma_{11}/\partial \mu}{(\epsilon_{\bf k}+\Sigma_{11} -\mu)^2}, \nonumber\\ \frac{1}{g_2}&=&\frac{m}{4\pi a}+ \frac{1}{2} \int \frac{d{\bf k}}{(2\pi)^3} (\frac{1}{\epsilon_k+\Sigma_{11} -\mu}-\frac{1}{\epsilon_k}). 
\label{SC}\end{aligned}$$ Equations (\[SE\]) and (\[SC\]) can be solved self-consistently. ![ (Color online) Diagrams showing contributions to the total energy $E(n_0,\mu)$. The dashed lines are for $k=0$ condensed atoms, thick solid internal lines in (a) and (b) are for interacting Green’s functions $G^{-1}(\epsilon,{\bf k})=\epsilon-\epsilon_{\bf k}-\Sigma_{11}+\mu+i\delta$, and thin solid lines in (c) and (d) are for noninteracting Green’s function $G_0^{-1}(\epsilon,{\bf k})=\epsilon-\epsilon_{\bf k}+\mu+i\delta$. (a) The blue circle is for $g_2(n_0,\mu)$; vertices here represent the bare interaction $U_0$ in Eq. (1). (b) ($N=1,2,...$)-loop diagrams that lead to the integral equation for $G_3(-3\eta, p)$ in Eq. (7). Note that the usual tree-level diagram violates the momentum conservation and does not exist; the one-loop diagram has already been included in $g_2(n_0,\mu)$ and therefore needs to be subtracted when calculating $g_{3}(n_0,\mu)$. Arrowed dashed lines here as well as in (c) and (d) stand for outgoing condensed atoms, and the remaining dashed lines stand for incoming ones. (c) and (d) The tree level and examples of one-loop diagrams that yield the usual Lee-Huang-Yang corrections in the limit of small $na^3$. The self-consistent approach contains contributions from (c)-type diagrams but not (d)-type ones (see further discussion in the text). Patterned green circles also represent the sum of diagrams in (a), but with thin internal lines, or the noninteracting Green’s function $G_0$ lines. All vertices are time ordered from left to right. []{data-label="fig0"}](bosegasfig0b.eps){width="\columnwidth"} ![(Color online) (a) Chemical potential $\mu$ in units of the Fermi energy $\epsilon_F$ and (b) condensation fraction as a function of $n^{1/3}a$. 
Beyond a critical value of $0.18$ (shown as circles), the solutions become complex, and only the real part of $\mu$ is plotted; the imaginary part of $\mu$ scales like $\epsilon_F(a/a_{cr}-1)^{1/2}$ near $a_{cr}$. (However, the sharp transition would be smeared out if the small imaginary part of $G_3$ is included.) Dashed lines are the result of the Lee-Huang-Yang theory, thin solid blue lines are the solution without three-body effects (i.e. $g_3=0$). Thick solid red lines are the solution with $g_3$ included; the momentum cutoff is $\Lambda=100 n^{1/3}$. The inset is the relative weight of three-body effects in the chemical potential as a function of $\Lambda n^{-1/3}$ at the critical point. []{data-label="fig1"}](bosegasfig1.eps){width="\columnwidth"} We first benchmark our results with the LHY correction or Beliaev’s results for $\mu$ by solving the equations in the limit of small $na^3$. We find $\mu=\frac{4\pi}{m} n_0 a (1+ 3\sqrt{2\pi}\sqrt{n_0 a^3}+...)$, and the number equation yields an estimate $n_0/n=(1-\frac{\sqrt{2\pi}}{2}\sqrt{na^3}+...)$. The second terms in the parentheses are of the same nature as the LHY correction. Comparing to Beliaev’s perturbative result for chemical potential, $\mu=\frac{4 \pi}{m}n_0 a (1+ \frac{40}{3\sqrt{\pi}} \sqrt{n_0 a^3}+...)$ [@Beliaev58], and for the condensation fraction $n_0/n=1- \frac{8}{3\sqrt{\pi}}\sqrt{na^3}+...$, one finds that the self-consistent solution reproduces $99.96\%(=9\pi\sqrt{2}/40)$ of the Beliaev’s correction for the chemical potential, and $83.30\% (=3\pi\sqrt{2}/16)$ of the depletion fraction in the dilute limit. Technically, one can further examine $g_2(n_0,\mu)$ by expanding it in terms of $a$ and $\Sigma_{11}$ and then compare with the usual diagrams in the dilute gas theory [@Beliaev58]. One indeed finds that $g_2(n_0,\mu)$ in Eq. 
(\[SC\]) effectively includes [*all*]{} one-loop diagrams with $X=3,4,5,...$ incoming or outgoing zero-momentum lines that involve a [*single pair*]{} of virtually excited atoms \[between any two consecutive scattering vertices; Fig. 1(c)\]. The one-loop diagrams with $X=4,5,...$ incoming or outgoing zero-momentum lines that involve multiple pairs of virtual atoms \[Fig. 1(d)\] have been left out, but they account for less than $0.04\%$ of Beliaev’s result [@G4]. Following the same line of thought, one can also verify that $g_{2}(n_0,\mu)$ further contains ($N=2,3,4,..$)-loop contributions that involve only [*one pair*]{} of virtual atoms; any two adjacent loops share only one interaction vertex and are reducible. $g_3(n_0,\mu)$, included below, on the other hand, contains ($N=2,3,4,..$)-loop contributions with $S=4,5...$ interaction vertices that involve only three virtual atoms; two adjacent loops share one internal line instead of a single vertex \[see Fig. 1(b)\] and are irreducible, [*i.e.*]{}, they cannot be expressed as a simple product of individual loops. Effectively, we take into account all the virtual processes involving either two or three dressed excited atoms in the calculation of the chemical potential $\mu$ by including the effective $g_{2,3}$ (defined in Fig. 1) in Eq. (\[mu\]). The processes involving four or more excited atoms appear only in $g_{4,5...}$ and are not included here; at the one-loop level, following the above calculations, the corresponding contributions from the processes involving multiple pairs of virtual atoms are indeed negligible. A solution to Eq. (\[SC\]) is shown in Fig. 2. An interesting feature of Eq. (\[SC\]) is that it no longer has a real solution once $n^{1/3}a$ exceeds the critical value of $0.18$, implying an onset instability; this is not anticipated in the dilute-gas theory [@Lee57]. 
This can also be illustrated by considering the two-body effective coupling constant $G_2(\Lambda_0)$ as a function of $\Lambda_0$ [@Cui10], a characteristic momentum that defines a low-energy subspace, $$\begin{aligned} G_2(\Lambda_0)=\frac{4\pi}{m}\frac{1}{\frac{1}{a}-\frac{2}{\pi}\Lambda_0}.\end{aligned}$$ For Bose gases, it is appropriate to identify the relevant $\Lambda_0$ as $\sqrt{2m\mu}=\Lambda_{\mu}$. For positive scattering lengths, $a$ not only defines the strength of interaction in the small $\Lambda_\mu$ or dilute limit but also sets a scale for $\Lambda_\mu$, above which the effective interaction becomes negative, [*i.e.*]{}, $G_2 (\Lambda_\mu)< 0$ if $\Lambda_\mu > \pi/(2a)$. So as $a$ approaches infinity, condensed atoms with a chemical potential $\mu$ typically see each other as attractive rather than repulsive, resulting in molecules [@Upper]. Thus, beyond the critical point the upper branch atomic gases become strongly coupled to the molecules with a strength proportional to the imaginary part of $\mu$. Consequently, we anticipate that $\mu$ decreases quickly beyond the critical scattering length due to the formation of molecules, leading to a maximum in $\mu$ in the vicinity of the critical point [@data]. A renormalization group approach based on atom-molecule fields was also applied in a previous study to understand Bose gases near resonance [@Lee10; @FTA]. Our results differ from theirs in two aspects. First, in our approach, an onset instability sets in near resonance even when the scattering length is positive, a key feature that is absent in that previous study. Second, when extrapolated to the limit of small $na^3$, the results in Ref. [@Lee10] imply a correction of the order of $\sqrt{na^3}$ to the usual Hartree-Fock chemical potential but with a negative sign, opposite to the sign of LHY corrections. 
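The sign change of the effective coupling is immediate to exhibit numerically; a small sketch of the expression for $G_2(\Lambda_0)$ above (the scattering length value is arbitrary, $m=1$):

```python
import math

def G2(Lambda0, a, m=1.0):
    """Effective low-energy coupling G_2(Lambda_0) from the expression above."""
    return (4 * math.pi / m) / (1.0 / a - (2.0 / math.pi) * Lambda0)

a = 1.0                                  # arbitrary positive scattering length
Lambda_star = math.pi / (2 * a)          # cutoff at which G_2 changes sign

repulsive = G2(0.9 * Lambda_star, a)     # > 0: effectively repulsive
attractive = G2(1.1 * Lambda_star, a)    # < 0: effectively attractive
```

In the dilute limit $\Lambda_\mu \to 0$ the coupling reduces to $4\pi a/m$, while for $\Lambda_\mu>\pi/(2a)$ the condensed atoms see an attraction, which is the mechanism behind the onset instability discussed here.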
In a recent study [@Diederix11], a self-consistent mean-field equation was employed, leading to a conclusion similar to that of the approach in Ref. [@Lee10]; the approach does not yield the correct sign of the LHY corrections. Consequently, the onset instability pointed out in this paper, which is surprising from the point of view of dilute-gas theory, is also absent there. The chemical potential near the critical point can be estimated using Eq. (\[SC\]) and is close to $0.9\epsilon_F$, where $\epsilon_F=(6\pi^2)^{2/3}n^{2/3}/2m$ is the Fermi energy defined for a gas of density $n$. This is consistent with the picture of nearly fermionized Bose gases suggested by previous calculations and experiments [@Cowell02; @Song09; @Lee10; @Diederix11; @Navon11]. ![ (Color online) ${\rm Re} G_3(-3\eta,0)\eta$ as a function of $\eta=\Sigma_{11}(n_0)-\mu$. $E_\Lambda=\Lambda^2/2m$ and $\Lambda$ is the momentum cutoff. The imaginary part of $G_3$ (not shown) is zero once $3\eta > 1/(ma^2)$. []{data-label="fig3"}](bosegasfig2.eps){width="\columnwidth"} We now turn to the effect of $g_3(n_0,\mu)$ on the chemical potential by including it in Eq. (\[mu\]). We estimate $g_3$ by summing up all $N$-loop diagrams with $X=3$ incoming or outgoing zero-momentum lines, which are represented in Fig. 1. All diagrams have three incoming or outgoing zero-momentum lines but with $N=2,3,..$ loops. The effect of three-body forces due to Efimov states [@Efimov70] was previously studied in the dilute limit [@BHM02]. The deviation of the energy density from the usual universal structure (i.e., depending only on $na^3$) was obtained by studying the Efimov forces in the zero-density limit. The contribution obtained there scales like $a^4$, apart from a log-periodic modulation [@BHK99], and again formally diverges, as the other terms do, when approaching a resonance. There was also an interesting proposal of a liquid-droplet phase at negative scattering lengths but in the vicinity of a trimer-atom threshold [@Bulgac02]. 
It is necessary to regularize the usual $a^4$ behavior of the three-body forces at resonance by further taking into account the interacting Green’s function when calculating the $N$-loop six-point correlators. Including the self-energy in the calculation, we remove the $a^4$ dependence that usually appears in the Bedaque-Hammer-van Kolck theory for the three-body forces [@BHK99]; when setting $\mu,\Sigma_{11}$ to zero, the equation collapses into the corresponding equation for three Bose atoms in vacuum, which was previously employed to obtain the $\beta$ function for the renormalization flow in an atom-dimer field-theory model. The sum of loop diagrams in Fig. 1(b), $G_3(-3\eta,p)$, satisfies a simple integral equation ($m$ set to unity; see Appendix C): $$\begin{aligned} & &G_3(-3\eta,p)= \frac{2}{\pi} \int dq K(-3\eta;p,q) \nonumber \\ & & \times \frac{q^2}{\sqrt{\frac{3q^2}{4}+3\eta}-\frac{1}{a}} [ G_3(-3\eta,q)-\frac{1}{q^2+3\eta} ], \nonumber\\ & & K(-3\eta;p,q)=\frac{1}{pq}\ln \frac{p^2+q^2+pq+3\eta}{p^2+q^2-pq+3\eta}, \label{3body}\end{aligned}$$ where we have introduced $\eta=\Sigma_{11}(n_0)-\mu$. $G_3(-3\eta,0)$ is plotted numerically in Fig. 3. The three-body potential $g_3(n_0,\mu)$ is related to $G_3(-3\eta,0)$ via $g_3(n_0,\mu)= 6 g^2_2 {\rm Re} \tilde{G}_3(-3\eta,0)$, where $\tilde{G}_3$ is obtained by further subtracting from $G_3$ the one-loop diagram in Fig. 1(b), because its contribution has already been included in $g_2(n_0,\mu)$. The structure of $G_3(-3\eta,0)$ is particularly simple at $a=+\infty$, as shown in Fig. 3: it has the desired log-periodic behavior reflecting the underlying Efimov states [@Efimov70]. When $3\eta$ is close to an Efimov eigenvalue $B_n=B_0 \exp(-2\pi n/s_0)$ \[$n=1,2,3....$, $\exp(2\pi/s_0)=515$\] that corresponds to a divergence point in Fig. 3, the three-body forces are the most significant. When $3\eta$ is in the close vicinity of zeros in Fig. 
3, the three-body forces are negligible and the Bose gases near resonance are dictated by the $g_2$ potential. When including the real part of $g_3(n_0,\mu)$ in the calculation of $E(n_0,\mu)$, we further obtain an estimate of the three-body contributions to the energy density and the chemical potential $\mu$. The contribution is nonuniversal and depends on the momentum cutoff in the problem. For typical cold Bose gases, it is reasonable to assume the momentum cutoff $\Lambda$ in the integral equation Eq. (\[3body\]) to be $100 n^{1/3}$ or even larger. The quantitative effects on the chemical potential are presented in Fig. 2. Note that $G_3(-3\eta,0)$ also has an imaginary part even at small scattering lengths; this corresponds to the well-known contribution of three-body recombination. The onset instability discussed here will be further rounded off if the imaginary part of $G_3$ is included. However, for the range of parameters we studied, both the real and imaginary parts of $G_3$ appear to be numerically small (see also Fig. 2); the energetics and instabilities near $a_{cr}$ are found to be mainly determined by the renormalized two-body interaction $g_2(n_0,\mu)$. Conclusions =========== In conclusion, we have investigated the energetics of Bose gases near resonance beyond the Lee-Huang-Yang dilute limit via a simple resummation scheme. We have also pointed out an onset instability and estimated the three-body Efimov effects that had been left out in recent theoretical studies of Bose gases near resonance [@Cowell02; @Song09; @Lee10; @Diederix11]. Within our approach, we find that the three-body forces contribute around a few percent to the chemical potential and that the Bose gases are nearly fermionized before an onset instability sets in near resonance. 
acknowledgement {#acknowledgement .unnumbered} =============== This work is in part supported by the Canadian Institute for Advanced Research, the Izaak Walton Killam Foundation, NSERC (Canada), and the Austrian Science Fund FWF FOCUS. One of the authors (F.Z.) would also like to thank the Institute for Nuclear Physics, University of Washington, for its hospitality during a cold-atom workshop in Spring 2011. This work was prepared at the Aspen Center for Physics during the 2011 cold-atom workshop. We would like to thank Aurel Bulgac, Eric Braaten, Randy Hulet, Gordon Semenoff, Dam T. Son, Shina Tan, Lan Yin and Wilhelm Zwerger for helpful discussions. Solving Self-consistent Equation (5) in the Dilute Limit ======================================================== We apply Eq. (5) to calculate the leading-order correction beyond the mean-field theory. We notice that the equations for $g_{2}$ and $\mu$ are arranged in a way that the next-order correction can be obtained by applying the results from the lowest-order approximation to the right-hand side. 
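The bootstrap just described can be verified numerically. The sketch below is our own illustrative code, not the authors' program: it assumes $\epsilon_k=k^2/2$ (i.e. $m=1$) and the lowest-order value $\mu=4\pi n_0 a$, evaluates the subtracted momentum integral of Eq. (\[eq:g2-exp-sc\]) by Simpson quadrature with an analytic tail, and reproduces the closed form $4\pi a(1+\sqrt{8\pi n_0 a^3})$; the cutoff factor and step count are arbitrary choices.

```python
import math

def g2_first_iteration(a, n0, cutoff_factor=2000.0, steps=200001):
    """First self-consistent iteration for g2 in the dilute limit.

    Assumes eps_k = k^2/2 (m = 1) and the lowest-order result
    mu = 4*pi*n0*a.  After the angular integration, the measure
    d^3k/(2*pi)^3 turns the subtracted integrand
    (1/eps_k - 1/(eps_k + mu)) into 4*mu/(k^2 + 2*mu) / (2*pi^2),
    which is evaluated by Simpson's rule plus an analytic 1/k^2 tail
    beyond the quadrature cutoff.
    """
    mu = 4.0 * math.pi * n0 * a
    kc = cutoff_factor * math.sqrt(2.0 * mu)   # quadrature cutoff
    h = kc / (steps - 1)

    def f(k):
        return 4.0 * mu / (k * k + 2.0 * mu)   # finite at k = 0

    s = f(0.0) + f(kc)
    for i in range(1, steps - 1):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    integral = ((h / 3.0) * s + 4.0 * mu / kc) / (2.0 * math.pi ** 2)
    return 4.0 * math.pi * a + 0.5 * (4.0 * math.pi * a) ** 2 * integral
```

For $a=0.01$ and $n_0=1$ the quadrature agrees with the closed-form correction to better than one part in $10^5$.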
In the lowest-order approximation, we find $\Sigma_{11}=8\pi n_{0}a$ and $\mu=4\pi n_{0}a$; this leads to a correction to $g_{2}$ as $$\begin{aligned} g_{2} & = & 4\pi a+\frac{\left(4\pi a\right)^{2}}{2}\int\frac{d^{3}k}{\left(2\pi\right)^{3}}\left(\frac{1}{\epsilon_{k}}-\frac{1}{\epsilon_{k}+\mu}\right)=4\pi a\left(1+\sqrt{8\pi n_{0}a^{3}}\right).\label{eq:g2-exp-sc}\end{aligned}$$ Similarly, from the relations $\frac{\partial\Sigma_{11}}{\partial n_{0}}=8\pi a$ and $\frac{\partial\Sigma_{11}}{\partial\mu}=0$, we can get the correction for the chemical potential $\mu$ as $$\begin{aligned} \mu & = & 4\pi an_{0}+\frac{\left(4\pi a\right)^{3}n_{0}^{2}}{2}\int\frac{d^{3}k}{\left(2\pi\right)^{3}}\frac{1}{\left(\epsilon_{k}+\mu\right)^{2}}=4\pi an_{0}\left(1+3\sqrt{2\pi n_{0}a^{3}}\right)\label{eq:mu-leading-sc}\end{aligned}$$ and the depletion fraction $$\frac{n_{p}}{n}=\frac{n_{0}}{4}g_{2}^{2}\int\frac{d^{3}k}{\left(2\pi\right)^{3}}\frac{1}{\left(\epsilon_{k}+\mu\right)^{2}}=\sqrt{\frac{\pi}{2}n_{0}a^{3}}.$$ For a comparison we list the results from the dilute-gas theory, $$\begin{aligned} \mu_{Beliaev} & = & 4\pi n_{0}a\left[1+\frac{40}{3}\sqrt{\frac{1}{\pi}n_{0}a^{3}}\right],\\ \left(\frac{n_{p}}{n}\right)_{Beliaev} & = & \frac{8}{3}\sqrt{\frac{1}{\pi}n_{0}a^{3}}.\end{aligned}$$ Our self-consistent approach produces $\frac{9\sqrt{2}\pi}{40}(=99.96\%)$ of Beliaev’s result for the chemical potential, and $\frac{3\sqrt{2}\pi}{16}(=83\%)$ for the depletion fraction. A Comparison Between the Self-Consistent Approach and the Dilute-Gas Theory ============================================================================ In the following, we show explicitly that our self-consistent equation corresponds to a subset of the diagrams \[in Fig. 1(c)\] in the usual dilute-gas theory. The two-body $T$-matrix used in the dilute-gas theory \[represented by the green circles in Figs. 
1(c) and 1(d)\] is obtained using the non-interacting Green’s function $G^{{-1}}(\epsilon,k)=\epsilon-\epsilon_{k}+\mu+i0^{+}$; in the dilute limit, we can expand the $T$-matrix as $$t(\omega,Q)=4\pi a\left[1+4\pi a\int\frac{d^{3}k}{\left(2\pi\right)^{3}}\left(\frac{1}{\omega-\frac{Q^{2}}{4}-k^{2}+2\mu+i0^{+}}+\frac{1}{k^{2}}\right)+\cdots\right],$$ where $\omega$ and $Q$ are the total energy and momentum of the incoming atoms. The contributions from the first two diagrams in Fig. 1(c) are $$\begin{aligned} E_{(c1)} & \simeq & \frac{t(0,0)n_{0}^{2}}{2}\simeq2\pi an_{0}^{2}\left[1+4\pi a\int\frac{d^{3}k}{\left(2\pi\right)^{3}}\left(\frac{1}{-k^{2}+2\mu+i0^{+}}+\frac{1}{k^{2}}\right)\right]\nonumber \\ E_{(c2)} & \simeq & 2\frac{t^{2}(0,0)n_{0}^{2}}{2}\int\frac{d^{3}k}{\left(2\pi\right)^{3}}\left(\frac{1}{-k^{2}+2\mu+i0^{+}}\right)^{2}2n_{0}t(\mu-\epsilon_{k},0)\\ & \simeq & 2\pi an_{0}^{2}\left[(4\pi a)\left(16\pi n_{0}a\right)\int\frac{d^{3}k}{\left(2\pi\right)^{3}}\left(\frac{1}{-k^{2}+2\mu+i0^{+}}\right)^{2}\right].\label{eq:c2}\end{aligned}$$ For the leading-order correction beyond the mean-field theory, it suffices to set $t(\mu-\epsilon_{k},0)\simeq4\pi a$ in Eq. (\[eq:c2\]) and in higher-order diagrams. Similarly, we can get the contributions from the higher-order diagrams in this series, and the sum is $$\begin{aligned} E_{(c)} & \simeq & 2\pi an_{0}^{2}\left[1+4\pi a\int\frac{d^{3}k}{\left(2\pi\right)^{3}}\left(\frac{1}{-k^{2}+2\mu+i0^{+}}+\frac{1}{k^{2}}\right)\right]\nonumber \\ & + & 2\pi an_{0}^{2}(4\pi a)\sum_{m=1}^{\infty}\left(16\pi an_{0}\right)^{m}\int\frac{d^{3}k}{\left(2\pi\right)^{3}}\left(\frac{1}{-k^{2}+2\mu+i0^{+}}\right)^{m+1}\label{eq:Ec-series}\\ & \simeq & 2\pi an_{0}^{2}\left[1+4\pi a\int\frac{d^{3}k}{\left(2\pi\right)^{3}}\left(\frac{1}{-k^{2}+2\mu-16\pi n_{0}a}+\frac{1}{k^{2}}\right)\right].\label{eq:Ec-all}\end{aligned}$$ We see that the energy given by the diagrams in Fig. 
1(c) is *exactly* the same as the one used in our self-consistent equation, i.e., $g_{2}n_{0}^{2}/2$, where $g_{2}$ should be expanded as Eq. (\[eq:g2-exp-sc\]) in the dilute limit. Next, we can sum up the rest of the one-loop diagrams that are not included in the self-consistent equations; they represent the lowest-order contributions to four- and six-body forces and so on. In the dilute limit, these diagrams \[as shown in Fig. 1(d)\] can be summed as $$\begin{aligned} E_{(d)} & = & -\left(4\pi an_{0}\right)\int\frac{d^{3}k}{\left(2\pi\right)^{3}}\sum_{m=2}^{\infty}\frac{1}{2}\frac{\left(2m-2\right)!}{m!\left(m-1\right)!}\left(\frac{4\pi an_{0}}{2\epsilon_{k}-2\mu+16n_{0}\pi a}\right)^{2m-1}.\end{aligned}$$ Indeed, we can recover Beliaev’s result by summing up the one-loop diagrams in Figs. 1(c) and 1(d) as $$\begin{aligned} \frac{\partial}{\partial n_{0}}\left(E_{(c)}+E_{(d)}\right) & = & 4\pi n_{0}a+4\pi a\int\frac{d^{3}k}{\left(2\pi\right)^{3}}\left[\frac{\left(\epsilon_{k}-\mu+6\pi n_{0}a\right)}{\sqrt{\left(\epsilon_{k}-\mu+8\pi an_{0}\right)^{2}-\left(4\pi an_{0}\right)^{2}}}-1+\frac{4\pi n_{0}a}{k^{2}}\right]\\ & = & 4\pi n_{0}a\left[1+\frac{40}{3}\sqrt{\frac{1}{\pi}n_{0}a^{3}}\right]=\mu_{Beliaev}.\end{aligned}$$ Including Three-body Forces In the Self-Consistent Equations ============================================================ We now calculate the amplitude of three-body scatterings corresponding to the processes described in Fig. 1(b). First, we consider a general case where the three incoming momenta, instead of being zero, are ${\bf k}_1={\bf p}/2-{\bf q}$, ${\bf k}_2={\bf p}/2+{\bf q}$, and ${\bf k}_3=-{\bf p}$, and the outgoing ones are ${\bf k}'_1={\bf p}'/2-{\bf q}'$, ${\bf k}'_2={\bf p}'/2+{\bf q}'$, and ${\bf k}'_3=-{\bf p}'$. The scattering amplitude between these states is then given by $A(E-3\eta;{\bf p}, {\bf p}')$, which represents the sum of diagrams identical to Fig. 1(b) except that the external lines carry finite momenta. 
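It may help at this point to indicate how integral equations of the type Eq. (\[3body\]) are treated numerically. The following sketch is our own illustrative code: it discretizes the momentum integral on a logarithmic grid up to a cutoff and solves the resulting dense linear system. It is written at resonance ($1/a=0$) for a value of $\eta$ away from the Efimov eigenvalues, and the grid size and cutoff are assumptions, not the values behind Figs. 2 and 3.

```python
import math

def solve_g3(eta, cutoff, n=120, inv_a=0.0):
    """Discretized solution of the integral equation for G_3(-3*eta, p).

    Sketch at resonance (inv_a = 1/a = 0): logarithmic momentum grid up
    to `cutoff`, trapezoidal weights in ln q, and a dense solve of
    (I - M) g = b.  Returns the grid, the solution g, and the maximum
    residual of the discrete equation as a self-consistency check.
    """
    e3 = 3.0 * eta

    def kernel(p, q):                 # K(-3*eta; p, q) from the text
        return math.log((p * p + q * q + p * q + e3) /
                        (p * p + q * q - p * q + e3)) / (p * q)

    def dimer(q):                     # dimer-propagator denominator
        return math.sqrt(0.75 * q * q + e3) - inv_a

    qmin = 1e-4 * math.sqrt(e3)
    dl = math.log(cutoff / qmin) / (n - 1)
    grid = [qmin * math.exp(i * dl) for i in range(n)]
    w = [q * dl for q in grid]        # dq = q d(ln q)
    w[0] *= 0.5
    w[-1] *= 0.5

    m = [[(2.0 / math.pi) * w[j] * kernel(grid[i], grid[j])
          * grid[j] ** 2 / dimer(grid[j]) for j in range(n)]
         for i in range(n)]
    amat = [[(1.0 if i == j else 0.0) - m[i][j] for j in range(n)]
            for i in range(n)]
    b = [-sum(m[i][j] / (grid[j] ** 2 + e3) for j in range(n))
         for i in range(n)]

    # Gaussian elimination with partial pivoting
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(amat[r][c]))
        amat[c], amat[piv] = amat[piv], amat[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, n):
            fac = amat[r][c] / amat[c][c]
            b[r] -= fac * b[c]
            for k in range(c, n):
                amat[r][k] -= fac * amat[c][k]
    g = [0.0] * n
    for r in range(n - 1, -1, -1):
        g[r] = (b[r] - sum(amat[r][k] * g[k]
                           for k in range(r + 1, n))) / amat[r][r]

    resid = max(abs(g[i] - sum(m[i][j] * (g[j] - 1.0 / (grid[j] ** 2 + e3))
                               for j in range(n))) for i in range(n))
    return grid, g, resid
```

The entry of `g` at the smallest grid momentum approximates $G_3(-3\eta,0)$; the returned residual verifies that the discrete equation is satisfied.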
For the estimate of the three-body contribution $g_3$, we first treat the sum of diagrams in Fig. 1(b) as the limit of $A(E-3\eta;{\bf p},{\bf p'})$ when ${\bf p}$ and ${\bf p}'$ approach zero and the total frequency $E$ is set to zero. It is therefore more convenient to work with the reduced amplitude $G_3(E-3\eta;{\bf p})=A(E-3\eta;{\bf p},0)$, where ${\bf p}'$ is already taken to be zero. $G_3(E-3\eta;{\bf p})$ itself obeys a simple integral equation, as can be seen by listing the terms in the summation explicitly. Indeed, when $E$ is further set to zero, we find that the diagrams in Fig. 1(b) yield $$\begin{aligned} G_{3}(-3\eta,p) & = & \frac{2}{\pi}\int dq\frac{K(-3\eta;p,q)q^{2}}{\sqrt{\frac{3}{4}q^{2}+3\eta}-\frac{1}{a}}\frac{-1}{q^{2}+3\eta}\\ & + & \left(\frac{2}{\pi}\right)^{2}\int dqdq'\frac{K(-3\eta;p,q)q^{2}}{\sqrt{\frac{3}{4}q^{2}+3\eta}-\frac{1}{a}}\frac{K(-3\eta;q,q')q'^{2}}{\sqrt{\frac{3}{4}q'^{2}+3\eta}-\frac{1}{a}}\frac{-1}{q'^{2}+3\eta}+\cdots, \label{sum1}\end{aligned}$$ where $K(-3\eta;p,q)$ is the kernel defined in the main text. The sum of the above infinite series leads to the following integral equation for $G_{3}$: $$G_{3}(-3\eta,p)=\frac{2}{\pi}\int dq\frac{K(-3\eta;p,q)q^{2}}{\sqrt{\frac{3}{4}q^{2}+3\eta}-\frac{1}{a}}\frac{-1}{q^{2}+3\eta}+\frac{2}{\pi}\int dq\frac{K(-3\eta;p,q)q^{2}}{\sqrt{\frac{3}{4}q^{2}+3\eta}-\frac{1}{a}}G_{3}(-3\eta,q).$$ Note that $G_{3}(-3\eta,0)$ defined above includes a diagram \[the leftmost one in Fig. 1(b)\] that has already been included in $g_{2}$. To avoid overcounting, we subtract the first diagram in Fig. 1(b) from $G_{3}$ as $$\begin{aligned} g_{3} & = & 6 g_{2}^{2}{\rm Re}\left[G_{3}(-3\eta,0)-\frac{2}{\pi}\int dq\frac{K(-3\eta;0,q)q^{2}}{\sqrt{\frac{3}{4}q^{2}+3\eta}-\frac{1}{a}}\frac{-1}{q^{2}+3\eta}\right].\end{aligned}$$ Alternatively, one can also carry out a direct summation of the diagrams in Fig. 1(b). 
It leads to a result that numerically differs very little from the estimate obtained above via an asymptotic extrapolation. For instance, a direct evaluation of those diagrams yields $$\begin{aligned} G_{3}(-3\eta,0) & = & \frac{2}{\pi}\int dq\frac{K(-2\eta;0,q)q^{2}}{\sqrt{\frac{3}{4}q^{2}+3\eta}-\frac{1}{a}}\frac{-1}{q^{2}+2\eta}\\ & + & \left(\frac{2}{\pi}\right)^{2}\int dqdq'\frac{K(-2\eta;0,q)q^{2}}{\sqrt{\frac{3}{4}q^{2}+3\eta}-\frac{1}{a}}\frac{K(-3\eta;q,q')q'^{2}}{\sqrt{\frac{3}{4}q'^{2}+3\eta}-\frac{1}{a}}\frac{-1}{q'^{2}+2\eta}+\cdots. \label{sum2}\end{aligned}$$ The only difference between Eqs. (\[sum2\]) and (\[sum1\]) is that the frequencies appearing in the first kernel $K(E;0,q)$ in the integrands and in the last denominators are now $-2\eta$ instead of $-3\eta$. One can easily verify that the sum can be written in the following compact form: $${G}_{3}(-3\eta,0)=\frac{2}{\pi}\int dq\frac{K(-2\eta;0,q)q^{2}}{\sqrt{\frac{3}{4}q^{2}+3\eta}-\frac{1}{a}} [\frac{-1}{q^{2}+2\eta}+ G^{'}_{3}(-3\eta,q)],$$ where $G^{'}_{3}(-3\eta, p)$ is a solution of the following integral equation: $$G^{'}_{3}(-3\eta,p)=\frac{2}{\pi}\int dq\frac{K(-3\eta;p,q)q^{2}}{\sqrt{\frac{3}{4}q^{2}+3\eta}-\frac{1}{a}} [\frac{-1}{q^{2}+2\eta}+ G^{'}_{3}(-3\eta,q)].$$ Note that $G^{'}_{3}(-3\eta,p)$ defined here describes an off-shell scattering between three incoming atoms with momenta ${\bf p}/2-{\bf q}$, ${\bf p}/2+{\bf q}$, and $-{\bf p}$ and three condensed atoms. Moreover, $G_3(-3\eta,0)$ is not equal to $G^{'}_3(-3\eta,0)$; this is a consequence of the Hartree-Fock approximation we have employed here. $G_3(-3\eta, 0)$ and $G^{'}_3(-3\eta,0)$ can be obtained numerically. Finally, after subtracting the leftmost one-loop diagram in Fig. 
1(b), we again find that the three-body contribution is $$\begin{aligned} g_{3} & = & 6 g_{2}^{2}{\rm Re} \left[ {G}_{3}(-3\eta,0) -\frac{2}{\pi}\int dq\frac{K(-2\eta;0,q)q^{2}}{\sqrt{\frac{3}{4}q^{2}+3\eta}-\frac{1}{a}}\frac{-1}{q^{2}+2\eta}\right].\end{aligned}$$ Now we can include the three-body forces $\frac{g_{3}n_{0}^{3}}{6}$ in a set of differential self-consistent equations similar to Eq. (5). We solve the equation numerically, and the results are shown in Fig. 2, where in the inset we show the momentum-cutoff ($\Lambda$) dependence of the chemical potential. In our numerical program, we further use the approximations $\frac{\partial\Sigma}{\partial n_{0}}=2g_{2}$, $\frac{\partial\Sigma}{\partial\mu}=0$, and $\Sigma_{11}=\beta\mu$ ($\beta=2$) to simplify the numerical calculations. We have tested other approximation schemes for the self-energy, such as $\Sigma_{11}=8\pi an_{0}$ or $\Sigma_{11}=2g_{2}n_{0}$. We find that the chemical potential and the value of the critical point $na_{cr}^{3}$ are insensitive to the approximation scheme. [10]{} S. B. Papp, J. M. Pino, R. J. Wild, S. Ronen, C. E. Wieman, D. S. Jin, and E. A. Cornell, Phys. Rev. Lett. [**101**]{}, 135301 (2008). S. E. Pollack, D. Dries, M. Junker, Y. P. Chen, T. A. Corcovilos, and R. G. Hulet, Phys. Rev. Lett. [**102**]{}, 090402 (2009). N. Navon, S. Piatecki, K. J. Günter, B. Rem, T. C. Nguyen, F. Chevy, W. Krauth, and C. Salomon, Phys. Rev. Lett. [**107**]{}, 135301 (2011). N. N. Bogoliubov, J. Phys. USSR [**11**]{}, 23 (1947). T. D. Lee and C. N. Yang, Phys. Rev. [**105**]{}, 1119 (1957); T. D. Lee, K. Huang, and C. N. Yang, Phys. Rev. [**106**]{}, 1135 (1957). S. T. Beliaev, Sov. Phys. JETP [**7**]{}, 289 (1958); Sov. Phys. JETP [**7**]{}, 299 (1958). See also general discussions in P. Nozieres and D. Pines, [*The Theory of Quantum Liquids, Vol. 2, Superfluid Bose Liquids*]{} (Addison-Wesley, Redwood City, CA, 1990). T. T. Wu, Phys. Rev. [**115**]{}, 1390 (1959); K. Sawada, Phys. Rev. 
[**116**]{}, 1344 (1959). E. Braaten, H. W. Hammer, and T. Mehen, Phys. Rev. Lett. [**88**]{}, 040401 (2002). Resummation is possible for two scattering atoms in a box of size $L$. The interaction energy should scale like $\frac{4\pi a}{m L^3}(1+ A \frac{a}{L}+...)$ when $a$ is much less than $L$ ($A$ is a constant) but generically saturate at a value of the order of $1/(2 m L^2)$ when $a$ becomes infinite. S. Cowell, H. Heiselberg, I. E. Mazets, J. Morales, V. R. Pandharipande, and C. J. Pethick, Phys. Rev. Lett. [**88**]{}, 210403 (2002). J.-L. Song and F. Zhou, Phys. Rev. Lett. [**103**]{}, 025302 (2009). J. M. Diederix, T. C. F. van Heijst, and H. T. C. Stoof, Phys. Rev. A [**84**]{}, 033618 (2011). N. M. Hugenholtz and D. Pines, Phys. Rev. [**116**]{}, 489 (1959). If we attribute the energy density to the zero-point energy of Bogoliubov phonons, for an arbitrary scattering length, one can, using Eqs. (\[QD\]) and (\[SE\]), express the condensation density as $n_0=n - \frac{1}{3\pi^2} (\partial \mu/\partial \ln n_0)^{3/2} m^{3/2}$. The long-wavelength dynamics thus sets an upper bound on the value of $\partial\mu/\partial \ln n_0$. The upper bound can be estimated to be $2^{1/3} \epsilon_F$, where $\epsilon_F=(6\pi^2)^{2/3}n^{2/3}/2m$ is the Fermi energy defined for a gas of density $n$. F. D. M. Haldane, Phys. Rev. Lett. [**47**]{}, 1840 (1981). Two diagrams in Fig. 1(d) correspond to the lowest-order contribution to the irreducible renormalized $g_4$. X. Cui, Y. Wang, and F. Zhou, Phys. Rev. Lett. [**104**]{}, 153201 (2010). One should thus expect an instability in a near-resonance Fermi gas as well. The pairing dynamics previously emphasized in D. Pekker, M. Babadi, R. Sensarma, N. Zinner, L. Pollet, M. W. Zwierlein, and E. Demler, Phys. Rev. Lett. [**106**]{}, 050402 (2011), is consistent with this conclusion. Molecule dynamics in a Bose-Einstein condensate was also studied in L. Yin, Phys. Rev. A [**77**]{}, 043630 (2008). 
Although LHY corrections and three-body forces were not taken into account in that random-phase approximation, the molecule formation discussed there appears to be consistent with our conclusion on the loss of metastability. Interestingly, a maximum in the Bragg line shift at a finite wave vector was found when the LHY correction is $0.22$ in Ref. [@Papp08]. The maximum in $\mu$ in this paper occurs when the LHY correction is $0.45$. The implication of the maximum found in this paper for the line-shift data will be further studied in the future. Y. L. Lee and Y. W. Lee, Phys. Rev. A [**81**]{}, 063613 (2010). For field-theory-based approaches to the lower-branch unitary gases, see Y. Nishida and D. T. Son, Phys. Rev. Lett. [**97**]{}, 050403 (2006); P. Nikolic and S. Sachdev, Phys. Rev. A [**75**]{}, 033608 (2007); and M. Y. Veillette, D. E. Sheehy, and L. Radzihovsky, Phys. Rev. A [**75**]{}, 043614 (2007). V. Efimov, Phys. Lett. B [**33**]{}, 563 (1970); Sov. J. Nucl. Phys. [**12**]{}, 589 (1971). P. F. Bedaque, H. W. Hammer, and U. van Kolck, Phys. Rev. Lett. [**82**]{}, 463 (1999); Nucl. Phys. A [**646**]{}, 444 (1999). A. Bulgac, Phys. Rev. Lett. [**89**]{}, 050402 (2002).
--- abstract: 'Mining large graphs for information is becoming an increasingly important workload due to the plethora of graph structured data becoming available. An aspect of graph algorithms that has hitherto not received much interest is the effect of the memory hierarchy on accesses. A typical system today has multiple levels in the memory hierarchy with differing units of locality, ranging across cache lines, TLB entries and DRAM pages. We postulate that it is possible to allocate graph structured data in main memory in such a way as to improve the spatial locality of the data. Previous approaches to improving cache locality have focused only on a single unit of locality, either the cache line or the virtual memory page. On the other hand, cache-oblivious algorithms can optimise layout for all levels of the memory hierarchy but unfortunately need to be specially designed for individual data structures. In this paper we explore hierarchical blocking as a technique for closing this gap. We require as input a specification of the units of locality in the memory hierarchy and lay out the input graph accordingly by copying its nodes using a hierarchy of breadth-first searches. We start with a basic algorithm that is limited to trees and then extend it to arbitrary graphs. Our most efficient version requires only a constant amount of additional space. We have implemented versions of the algorithm in various environments: for C programs interfaced with macros, as an extension to the Boost object-oriented graph library and finally as a modification to the traversal phase of the semispace garbage collector in the Jikes Java virtual machine. Our results show significant improvements in the access time to graphs of various structures.' bibliography: - 'paper.bib' title: Memory Hierarchy Sensitive Graph Layout --- Introduction {#sec:intro} ============ Modern computer systems usually consist of a complex path to memory. 
This is necessitated by the difference between the speed of computation and that of accessing memory, often referred to as the memory wall. While microprocessor performance has increased by 60% every year, memory systems have increased in performance by only 10% every year. The typical solution employed by memory designers is to use faster, smaller caches to cache data from larger but slower levels of memory. For example, a typical CPU cache would cache 64 byte lines from main memory while a Translation Lookaside Buffer (TLB) would cache mappings for 4KB chunks of virtual address space. A typical access to memory therefore has to negotiate many levels of hierarchy. Locality therefore has an important role to play for the in-memory processing of large datasets. If accesses are clustered (blocked) on the same 64 byte or 4KB chunk of memory (which we call units of spatial locality), fewer transfers are needed between levels in the memory hierarchy, and consequently performance is better. Graphs form an important and frequently used abstraction for the processing of large data. This is more so today, with increasing interest in mining graph structured data: common examples being page-ranking, which examines the hyperlinking between web-pages; community detection in social networks; navigational queries on road-network data; and simulation of the spread of epidemics (viruses) over human (computer) networks. Thus far, little attention has been paid to mitigating the impact of the memory hierarchy on processing large graphs. This paper makes the case that sensitivity to the memory hierarchy can make a big difference to the costs of processing large graphs. Existing research along the same lines can be divided into two categories. The first category optimises object layout and connectivity taking into account only *one* level of the memory hierarchy. These algorithms however are suitable for use at runtime on arbitrary graphs. 
The second category consists of cache-oblivious algorithms that can optimise data structure layouts without knowing the precise hierarchy in use on the machine. Unfortunately, cache-oblivious algorithms have been designed only for specific data structures and the techniques cannot be applied to graphs in general. This paper proposes a Hierarchical Blocking Algorithm (HBA) as a solution. The HBA proposed in this paper takes arbitrary graphs as input and produces a layout that is sensitive to all levels of the memory hierarchy, information about which is supplied to the algorithm. We show that not only does this make a large difference to the processing of graphs, it also performs comparably to a cache-oblivious layout. The HBA therefore closes the gap between cache-oblivious and cache-sensitive (but limited to a single level) algorithms, an important contribution of this paper. The rest of this paper is organised as follows. We begin with some intuition about HBA and describe how it is motivated by cache-oblivious algorithms in Section \[sec:motivation\]. We then describe a basic version of HBA applicable only to trees in Sections \[sec:theory\], \[sec:analysis\] and \[sec:pract\]. We then provide extensions that make it applicable to arbitrary graphs in Section \[sec:simple\] and extensions for space efficiency in Section \[sec:super\]. We then describe implementation in three different environments (custom graph processing in C, the Boost C++ graph libraries and the semispace garbage collector in the Jikes Java Virtual Machine) in Section \[sec:implementations\]. We then evaluate HBA and show that it delivers significant speedups in all these environments (from 10% to as much as 21X). We then discuss related work and possible future extensions to HBA, before concluding. Motivation and Intuition {#sec:motivation} ======================== The hierarchical blocking algorithm we have developed draws strong inspiration from the van Emde Boas (VEB) layout. 
The VEB tree, originally proposed in a paper outlining a design for a priority queue [@veb], is an arrangement that makes a tree data structure cache-oblivious, i.e., likely to provide good performance regardless of the cache hierarchy or units of spatial locality in operation. Figure \[fig:veb\] details the intuition behind the VEB layout. The VEB layout of a tree is produced by repeatedly splitting the tree at the middle of its depth and *recursively* laying out all the component subtrees in contiguous units of memory. In the figure, the tree of depth $D$ is split into a subtree (rooted at the root of the original tree) of depth $\frac{D}{2}$ and this is recursively laid out first. Next, the remaining subtrees, $O(2^\frac{D}{2})$ in number, are laid out recursively. The VEB layout is complex to set up and maintain for trees and difficult to apply to graphs in general. The first step in applying it to a graph is to traverse the graph and prepare a sub-graph in the form of a tree that covers it. This spanning tree could then be laid out in a VEB layout. The key difficulty however is determining where to cut the tree, since knowing a priori the diameter of a graph and its splits at runtime is a difficult business. Further, the VEB layout does not consider heterogeneous graphs where the objects representing graph vertices may have different sizes, rendering it impractical to apply. Our approach instead is to make the problem somewhat easier by assuming that the memory hierarchy (of caches) is known at runtime as an input to the algorithm. This information, used during the traversal in conjunction with the sizes of the in-memory representations of graph vertices, determines the spanning subtree and the right split-points. Figure \[fig:layout\] shows graphically how this might be done. 
Assume an algorithm $P_i$ that aims to copy a tree, while traversing it, into blocks that fit into the cache at level $i$. Using breadth-first search, it can discover the entire subtree that fits into a block at level $i$. It can then call breadth-first searches for individual subtrees that are rooted at the leaves of this subtree (not shown in the Figure). For the subtree it has identified, it can *recursively* call $P_{i -1}$: an algorithm that can lay out a given tree into blocks that fit into the cache at level $i-1$. This is shown in the Figure and corresponds (roughly) to the recursive layout achieved by VEB. The key difference is that we *know* where to cut the spanning tree based on runtime information about the memory hierarchy rather than simply using half the diameter of the graph. Having provided intuition behind our Hierarchical Blocking Algorithm (HBA), we now proceed to discuss it in more detail below. Hierarchical Blocking for Trees {#sec:theory} =============================== In this section we develop a hierarchical blocking algorithm applicable to trees. A tree is a graph (either directed or undirected) where every vertex other than the root has a *unique parent* and is itself connected to a number of *children*. We express the algorithm in terms of repeated breadth-first searches [@cormen], each of which is bounded so as to produce roots for new searches. We begin by introducing some basic notation that we use in this section. We denote the application of an algorithm $A$ to a graph vertex $x$ as $A(x)$, which produces as output a list of vertices. We denote application of an algorithm $A$ to a list of vertices $L$ by $A(L)$, which is done by applying it to *each individual vertex* and *concatenating the outputs in order*. We denote the repeated application of an algorithm (using the output of one as the input of the next) as $A^k$, which means that we apply algorithm $A$ $k$ times. 
An important concept for hierarchical blocking is the space occupied by representations of a vertex. For a vertex $v$ we assume a way to measure the space occupied by the vertex, which we represent as $|v|$. This naturally extends to applying an algorithm $A$ on input $I$ as $|A(I)|$, which is just the sum of the spaces occupied by every vertex that is processed to produce the output. The core algorithm is bounded breadth-first search, which we abbreviate as $\text{BFS}_d$, that takes as input one vertex and produces a list of vertices that are at distance $d$ from the start vertex. We measure distance as the number of edges traversed. *We also delegate to $\text{BFS}_d$ the job of copying traversed vertices into a spatially contiguous unit of memory.* We consider here a memory hierarchy of $n$ levels with monotonically increasing units of spatial locality: $s_i$ for level $i$, with $s_i < s_{i+1}$. We now define a blocking algorithm that takes as input a list of vertices and memory hierarchy levels, and recursively calls itself with decreasing levels. We denote the algorithm for level $i$ as $P_i$ taking as input a single vertex $o$. As indicated above, using $P_i$ on a list of vertices naturally follows from the definitions given below. - $P_1(o)$ = $\text{BFS}_d(o)$ where we choose depth $d$ such that: - If $|\text{BFS}_0(o)| > s_1$ then $d = 0$ - Else choose $d$ such that $|\text{BFS}_{d-1}(o)| < s_1$ and $|\text{BFS}_{d}(o)| \geq s_1$ - $P_i(o) := P_{i-1}^k(o)$ where we choose $k$ such that: - If $|P_{i-1}(o)| > s_i$ then set $k=1$ - Else choose $k$ such that $|P_{i-1}^{k-1}(o)|< s_i$ and $|P_{i-1}^{k}(o)|\geq s_i$ At the start we are given the root of the tree: $r$. For hierarchical blocking of the tree, we repeatedly apply $P_n$ starting from $r$ until we have copied all the vertices (the output list is empty). The formalism given above produces exactly the layouts that we have provided an intuition for in the previous section. 
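The recursive definitions above can be rendered directly in code. In the sketch below (Python; the names and the representation of vertices, sizes and block sizes are ours), $\text{BFS}_d$ becomes a bounded frontier expansion, $P_i$ repeatedly applies $P_{i-1}$ until the level-$i$ budget is met or exceeded, and "copying" is modelled by appending vertices to an output list whose order is the intended memory layout.

```python
def hierarchical_block(root, children, size, s):
    """Recursive hierarchical blocking of a tree (a sketch).

    children(v) lists the children of v, size(v) models |v|, and
    s[0..n-1] are the units of spatial locality, smallest first.
    Returns the order in which vertices are copied; consecutive runs
    of total size about s[i] form the blocks for the next level up.
    """
    order = []                        # models the copied-to memory

    def bfs_bounded(v, budget):
        # BFS_d: copy whole BFS levels until `budget` is met or
        # exceeded, then return the uncovered frontier as new roots.
        frontier, used = [v], 0
        while frontier and used < budget:
            nxt = []
            for u in frontier:
                order.append(u)       # "copy" u into the block
                used += size(u)
                nxt.extend(children(u))
            frontier = nxt
        return frontier

    def p(level, v):
        # P_level(v): returns the list of vertices left uncovered.
        if level == 0:
            return bfs_bounded(v, s[0])
        frontier, used = [v], 0
        while frontier and used < s[level]:
            before = len(order)
            # apply P_{level-1} to the whole list, concatenating
            frontier = [x for u in frontier for x in p(level - 1, u)]
            used += sum(size(u) for u in order[before:])
        return frontier

    frontier = [root]
    while frontier:                   # repeatedly apply P_n from r
        frontier = [x for v in frontier for x in p(len(s) - 1, v)]
    return order
```

For instance, on a seven-node tree with unit-size vertices and block sizes (3, 12), the root and its two children share the first block, as in the intuition of the previous section.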
A crucial point to note here is that we allow the copying to overshoot the set limit by an amount bounded by one application of the algorithm at the next underlying level. Analysis {#sec:analysis} ======== A traversal of the tree needs to transfer blocks of size $s_i$ from the $i^{\text{th}}$ level in the memory hierarchy. We now provide an upper bound on the number of such blocks transferred. We make the observation that any application of $P_n$ can be ultimately expressed as repeated applications of $P_i$ for any $i < n$. Consider a traversal of the copied tree generated by $P_i(x)$ for an arbitrary vertex $x$ in the input graph. We consider a traversal that starts from the copy of $x$ produced by $P_i(x)$ and terminates at some leaf in the copied subtree produced by $P_i(x)$. For any memory hierarchy level $j$ this traversal leads to the transfer of some number of blocks of size $s_j$. Let an upper bound on the number of memory blocks at level $j$ accessed due to this traversal be $B_i^j$, *regardless of the start vertex*. $B_{i+1}^{i+1} = 2 + B_{i}^{i+1}$ For any $x$: $P_{i+1}(x)$ is defined as $P_{i}^d(x)$ with $|P_{i}^{d-1}(x)| \leq s_{i+1}$. Now traversing a block of memory of size $s_{i+1}$ can incur accesses to at worst 2 blocks at level $i + 1$ (if the block start is not aligned). The remaining part of the traversal is to a subtree produced by $P_{i}(y)$ for some leaf $y$ of the previous traversal, incurring at most $B_i^{i+1}$ block transfers. Hence we have: $B_{i+1}^{i+1} = 2 + B_{i}^{i+1}$. Under the common conditions where each level in the memory hierarchy is sufficiently smaller than the next level and $B_1^1$ fits within $s_2$, we can deduce a simple constant upper bound on $B_i^i$ for any $i$. If $B_1^1 < s_2$ and $4s_j \leq s_{j+1}$ for all $j$, then $B_i^i \leq 4$. The theorem is true at $i = 1$ due to the conditions of the theorem, where $B_1^1 < s_2$ and $s_2$ is at most 4 blocks of $s_1$. We now give an inductive proof. 
Let the theorem be true up to $i = k$. We have $B_{k+1}^{k+1} = 2 + B_k^{k+1}$. From the induction hypothesis $B_k^k \leq 4$. Four blocks at level $k$ is at most one misaligned block at level $k+1$ (due to the bounds on sizes at each level), which is at most two aligned blocks at level $k+1$. Hence $B_{k+1}^{k+1} \leq (2+2) = 4$. In the context of the whole tree, a traversal from root to leaf in the copied tree incurs repeated costs of $B_n^i$ at memory hierarchy level $i$. If we assume that $P_i(x)$ covers subtrees of depth at least $d_i$ then the number of block accesses at memory hierarchy level $i$ for a traversal of depth $D$ is bounded by $4\frac{D + 1}{d_i + 1}$. For a pseudorandom allocation of vertices to memory, one would normally expect every access to cause a transfer, leading to $D+1$ transfers. The hierarchical blocking algorithm is therefore able to guarantee reduced transfers when $d_i > 3$. Note that this is a pessimistic upper bound. For example, at the lowest level (usually cache lines) any organisation that packs subtrees of depth one into a cacheline leads to better performance, with one cacheline serving two access requests instead of one. Iterative Version {#sec:pract} ================= We now present an iterative version of the hierarchical blocking algorithm (HBA). In addition to being easier to understand, implement and analyse, it forms the basis for extension to handle arbitrary graphs. The [HBTreeIterative]{} algorithm listed in Procedure \[alg:reorg\] is a direct translation of the recursive algorithm described in the previous section. It takes as input a root vertex and a description of a memory hierarchy and performs runtime hierarchical blocking of the tree rooted at the supplied vertex. Starting from this section, we introduce the term 'node', which we use as an abstraction for the memory occupied by a graph vertex (and any associated edge data structure, such as an edge list). 
    root : root to block from
    n : levels of hierarchy
    s[1..n] : block sizes (monotonically increasing)
    s[n+1] = INFINITY

    Initialise to empty: Dequeue roots[1..n+1]
    Initialise to empty: Dequeue leaves[1..n+1]
    Initialise to zero:  space[1..n+1]
    roots[n+1].push_back(root)
    level := n + 1

    // Refill
    roots[level] := leaves[level]
    leaves[level] := empty

    // Promote
    leaves[level + 1].append(roots[level])
    roots[level] := empty
    space[level + 1] := space[level + 1] + space[level]
    level := level + 1
    continue loop

    // Promote / TERMINATE
    space[level + 1] := space[level + 1] + space[level]
    level := level + 1

    node := roots[level].pop_front()

    // Push work down
    roots[level - 1].push_back(node)
    space[level - 1] := 0
    level := level - 1

    // Do some copying work
    space[1] := space[1] + sizeof(node)
    leaves[1].append(children(node))

    UnconditionalCopyNode(node : node to copy):
        Copy node to tospace
        tospace := tospace + sizeof(node)
        Update parent of node in tospace to point to copy
        return true

The core data structures used in the algorithm are lists of [roots]{} and [leaves]{} for each level of the hierarchy. In addition, the [space]{} array maintains the amount of space used at each level. The copying steps of the algorithm (under "Do some copying work" above) implement $\text{BFS}_d$. This is done by taking the root node for the BFS, copying it (through the call to [UnconditionalCopyNode]{}) and updating the space used. The children produced during this BFS step are added to [leaves\[1\]]{}. If the amount of space used is less than the unit of spatial locality for hierarchy level 1, all the leaves of the BFS are moved to [roots\[1\]]{} and a BFS step is subsequently performed for each of them to uncover their children. Thus, only when the total space consumed at this lowest level is equal to or exceeds the unit of spatial locality at the lowest level is the [level]{} variable bumped and all the produced leaves of the BFS moved to level 2. 
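For readers who prefer runnable code, the listing above can be transcribed into Python along the following lines. This is a sketch of ours, not the authors' implementation: deques stand in for the Dequeues, a Python list models [tospace]{}, the node representation is supplied by the caller, and the guard conditions follow the prose description of the refill, promote and push-down steps.

```python
from collections import deque

def hb_tree_iterative(root, children, sizeof, s):
    """Iterative hierarchical blocking of a tree (sketch of the
    HBTreeIterative procedure; guards follow the prose description).

    s[0..n-1] are the block sizes, smallest first; a virtual top
    level with unbounded size plays the role of s[n+1] = INFINITY.
    Returns the copy order, standing in for the tospace contents.
    """
    n = len(s)
    limit = list(s) + [float('inf')]      # virtual top level
    roots = [deque() for _ in range(n + 1)]
    leaves = [deque() for _ in range(n + 1)]
    space = [0] * (n + 1)
    tospace = []                          # models the copy area

    roots[n].append(root)
    level = n
    while True:
        if roots[level]:
            node = roots[level].popleft()
            if level > 0:                 # push work down one level
                roots[level - 1].append(node)
                space[level - 1] = 0
                level -= 1
            else:                         # do some copying work
                tospace.append(node)      # UnconditionalCopyNode
                space[0] += sizeof(node)
                leaves[0].extend(children(node))
        elif leaves[level] and space[level] < limit[level]:
            # refill: this level's budget is not yet exhausted
            roots[level], leaves[level] = leaves[level], deque()
        elif level < n:
            # promote: hand this level's leaves (and space) upward
            leaves[level + 1].extend(leaves[level])
            leaves[level] = deque()
            space[level + 1] += space[level]
            level += 1
        else:
            return tospace                # TERMINATE
```

On a complete binary tree with unit-size nodes and block sizes (3, 12), the output begins with the root and its two children, followed by depth-one subtrees packed three nodes at a time, matching the recursive formulation.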
It is easy to see that this replicates the operation of $\text{BFS}_d$ described in the previous section and discovers $d$ *dynamically*. For any $\text{level} = i > 1$, the remaining steps of [HBTreeIterative]{} implement $P_i$. Recall that the input to $P_i$ is a list of nodes; this is held in [roots\[i\]]{}. Lines 11–17 check whether repeated applications of $P_{i-1}$ have exhausted the unit of spatial locality given by $s[i]$. If so, the output leaves are passed on to level $i+1$. Otherwise, we repeatedly call $P_{i-1}$ on the head of the $roots[i]$ list. Finally, the level [n + 1]{} is simply a placeholder for the output of $P_n$. For convenience, it is assumed to have an infinite amount of spatial locality, i.e. it covers the whole memory. Copying services are provided by [UnconditionalCopyNode]{}, which copies nodes into a region of memory whose top is held in the [tospace]{} variable. We assume that this region of memory is infinite. A practical implementation of [UnconditionalCopyNode]{} could simply call a standard heap allocation function (such as [malloc]{}) to allocate memory to copy into, although this inserts metadata before the copied object that can reduce locality (as we discuss later). In order to better understand the operation of [HBTreeIterative]{}, consider the graph of Figure \[fig:example\]. If the input to [HBTreeIterative]{} is the node [a]{}, then the first thing the algorithm does is to add [a]{} to roots\[n+1\]. The node [a]{} then bubbles down to roots\[1\] through repeated iterations of the loop. It is then passed to [UnconditionalCopyNode]{} at line 34. Next, line 36 adds its children, nodes [b]{} and [c]{}, to [leaves\[1\]]{}. Since [roots\[1\]]{} is now empty, both [b]{} and [c]{} are moved into [roots\[1\]]{} at line 9. Assume for this example that at least one node fits into [s\[1\]]{}; [b]{} and [c]{} are then processed in turn, resulting in [roots\[1\]]{} containing [d]{}, [e]{}, [f]{} and [g]{}.
If the algorithm now finds [space\[1\] > s\[1\]]{}, it promotes these four nodes to level 2 and then calls $P_2$ on them (unless [s\[2\]]{} is also exhausted). This finally results in a consecutive layout in memory of nodes [a]{}, [b]{} and [c]{}, followed by (partial) subtrees rooted at [d]{}, [e]{}, [f]{} and [g]{} produced by $P_2$.

Complexity
----------

Given a tree with $N$ nodes and $E$ edges, the [HBTreeIterative]{} algorithm performs a graph traversal on it. It visits every edge exactly once (in line 36). For any node, after discovery, the node is added to every one of the root and leaf lists *at most once*. Hence, given $L$ levels in the memory hierarchy, the algorithm has a worst-case complexity of $O(LN + E)$. In a tree the number of edges is one less than the number of nodes, and hence the complexity is $O(LN)$. Note that in practice, with settings such as $s[1]=64$ and $s[2]=1024$ (used in this paper), most nodes are only added to lists at levels 1 and 2 before being processed and never make it to higher levels. Practically, this keeps the overhead of the algorithm (and its variants) close to $O(N + E)$, or $O(N)$ for trees.

Limitations
-----------

A key limitation of [HBTreeIterative]{} is that it applies only to trees. There are two reasons why it cannot be used on arbitrary graphs. The first is that [UnconditionalCopyNode]{} expects to be passed any node exactly *once*. This is easily violated in the case of multiple parents (as in directed acyclic graphs) or graphs with cycles. A related problem arises because more than one pointer may exist to a node, and hence [UnconditionalCopyNode]{} should be able to update parent pointers even if the node has already been copied. In the next section we describe extensions to [HBTreeIterative]{} that allow it to be used on arbitrary graphs.
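The first of these failure modes is easy to demonstrate. The C sketch below (illustrative only; the node representation and counters are ours, not the paper's implementation) "copies" a four-node diamond-shaped DAG and counts how many times each node is copied, with and without a forward check:

```c
#include <assert.h>
#include <string.h>

#define NNODES 4

/* copies_of[v] counts how many times node v was "copied";
 * forwarded[v] records whether v has already been processed. */
static int copies_of[NNODES];
static int forwarded[NNODES];

/* Recursively "copy" the subgraph reachable from v. children[v]
 * holds up to two child indices (-1 = none). When `conditional`
 * is zero this mimics UnconditionalCopyNode: a node reachable
 * through two parents is visited (and copied) once per parent. */
static void copy_subgraph(int v, const int children[][2], int conditional)
{
    if (conditional) {
        if (forwarded[v])
            return;        /* already copied: skip, as a forward check would */
        forwarded[v] = 1;
    }
    copies_of[v]++;
    for (int i = 0; i < 2; i++)
        if (children[v][i] >= 0)
            copy_subgraph(children[v][i], children, conditional);
}
```

On the diamond 0→{1,2}, 1→3, 2→3 the unconditional variant copies node 3 twice, while the conditional variant copies it exactly once (and a cyclic graph would not terminate at all without the check).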
Extension to Arbitrary Graphs {#sec:simple}
=============================

Extending [HBTreeIterative]{} to arbitrary graphs first requires lowering our level of abstraction somewhat. Along these lines, we introduce the notion of a slot. A slot is simply a pointer to a node. Any given arbitrary graph is therefore ’rooted’ at multiple slots. Readers familiar with garbage collectors in Java will notice that we have borrowed these two terms from there [@gc_book]. Processing an arbitrary graph requires processing each root slot in turn. This is done by calling [InitiateCopy]{} in Procedure \[alg:initcopy\]. For every given root slot, it calls [HBTreeIterative]{}, thereby processing individual components of the graph. Note that we *do not require graphs with different roots to be unreachable from each other*.

Procedure [InitiateCopy]{}:

    roots: queue of slots to start copying
    while roots is not empty
        slot := roots.pop_front()
        HBTreeIterative(*slot, levels, Sizes[1..levels])

The only change we make to [HBTreeIterative]{} is to call [ConditionalCopyNode]{} at line 34 instead of [UnconditionalCopyNode]{}. Procedure \[alg:ccopy\] describes the former. The major change is the introduction of the Forward table, which has an entry for *each* possible node, indicating whether that node has been forwarded. If not already forwarded, it forwards the node (determines its position in tospace) and returns an appropriate indication. [HBTreeIterative]{} then uses the returned indication to ensure that every node is considered *at most once*, thereby solving the problem of multiple parents and cycles encountered in arbitrary graphs.

Procedure [ConditionalCopyNode]{}:

    node: node to copy
    if Forward[node] is not set then
        Forward[node] := tospace
        tospace := tospace + sizeof(node)
        return true
    return false

Procedure [CompleteCopy]{}:

    roots: list of pointers to start copying
    while roots is not empty
        slot := roots.pop_front()
        if not Copied[*slot] then
            Copied[*slot] := true
            Copy *slot to Forward[*slot]
            *slot := Forward[*slot]
            for each child_slot of the copy
                roots.push_back(child_slot)
        else
            *slot := Forward[*slot]

Since [ConditionalCopyNode]{} no longer updates pointers or copies nodes, we introduce a post-processing phase called [CompleteCopy]{}.
This is shown in Procedure \[alg:compcopy\] and is called after [InitiateCopy]{} has completed. It traverses the graph starting at the roots again and maintains a [Copied]{} map to avoid copying a node more than once. It also *updates* all the slots to point to the copies in [tospace]{}.

Complexity
----------

We now consider the complexity of [InitiateCopy]{}, [HBTreeIterative]{} and [CompleteCopy]{} taken together. Slots now explicitly represent edges in the graph. Every slot (edge) is still considered at most once (or twice if it is a root slot); this includes lookups in the extra maps. Any given node is also processed at most once during copying, and enters (and leaves) every one of the $L$ lists at most once. Hence the *asymptotic* complexity of the algorithm remains at $O(LN + E)$.

Limitations
-----------

The extensions to deal with arbitrary graphs suffer from two key problems. The first, naturally, is the need to make two passes through the graph. The second is the space cost of maintaining the extra maps. A related problem that we have not thus far considered is the cost of maintaining the root and leaf dequeues. Even maintained as linked lists (as we do), they require one [next]{} pointer per node. Note that the space overheads are bounded by $O(N)$ and do not depend on the number of edges. Nevertheless, it is desirable to try to eliminate them. In spite of these limitations, we use an actual implementation of the generalised HBA described in this section in the evaluation. For applications where offline reorganisation of large graphs is acceptable, it is simple to implement and effective.

Single-Pass and Possibly Metadata-Less Blocking {#sec:super}
===============================================

We now introduce the final and most sophisticated HBA. Before introducing the algorithm, we make some observations about the operation of [HBTreeIterative]{} in the case of general graphs.
For any node that is copied, all its unvisited children are added as a group to [leaves\[1\]]{}. This group of nodes continues *unbroken* through the various lists until it enters a [roots]{} list. After that, its members are dequeued in order to be bubbled down and copied. Note that once a node is picked off a [roots]{} list at line 26 of [HBTreeIterative]{}, it is copied immediately. The key idea we take away from this observation is that it is possible to *represent this group* of nodes by its parent. Once the group (parent) enters a [roots]{} list, instead of popping the parent, we pop *slots in the parent* one by one and bubble them down in turn to be processed. Processing a slot involves both conditionally copying the target and updating the slot to point to the new version of the node. We now introduce [HBGraphOnePass]{}, which incorporates these ideas. Unlike the version for trees, it takes as input the root slot to start processing from (and not the root node pointed to by that slot). It still depends on a (slightly modified for interface reasons) [InitiateCopy]{} to iterate through roots, but eliminates [CompleteCopy]{}.
Procedure [HBGraphOnePass]{}:

    root_slot: root slot to block from
    n: levels of hierarchy
    s[1..n]: block sizes (monotonically increasing)
    s[n+1] = INFINITY

    Initialise to empty: Dequeue roots[1..n+1]
    Initialise to empty: Dequeue leaves[1..n+1]
    Initialise to zero: space[1..n+1]
    level := 1
    // Conditionally copy root slot
    old_node := *root_slot
    if CopySlot(root_slot) then
        space[1] := space[1] + sizeof(*root_slot)
        leaves[1].append(old_node)
    loop
        if roots[level] is empty then
            // Refill
            roots[level] := leaves[level]
            leaves[level] := empty
        if space[level] >= s[level] then
            // Promote
            leaves[level + 1].append(roots[level])
            roots[level] := empty
            space[level + 1] := space[level + 1] + space[level]
            level := level + 1
            continue loop
        if roots[level] is empty then
            // Promote
            if level = n + 1 then TERMINATE
            space[level + 1] := space[level + 1] + space[level]
            level := level + 1
            continue loop
        slot := roots[level].pop_front_slot()
        // Init lower levels
        for i := 1 to level - 1: space[i] := 0
        level := 1
        // Do some copying work
        old_node := *slot
        if CopySlot(slot) then
            space[1] := space[1] + sizeof(*slot)
            leaves[1].append(old_node)

Procedure [CopySlot]{}:

    slot: slot to copy
    copied := false
    if *slot is not already forwarded then
        Forward[*slot] := tospace
        tospace := tospace + sizeof(*slot)
        copy *slot to Forward[*slot]
        copied := true
    *slot := Forward[*slot]
    return copied

[HBGraphOnePass]{} also uses a slightly different helper routine, [CopySlot]{}, to complete the copying of nodes. It directly updates the slot with the copy of the node. Assuming that copying was required, the node is then explored for children. Note that in line 39 of [HBGraphOnePass]{} we append the *old* node to the [leaves\[1\]]{} list. This node is bubbled up and ultimately moves to a [roots]{} list. In line 30, instead of popping the node, we pop its children one by one. This can be implemented by maintaining constant-sized state about which child has been popped. Note that we hoist a copy of part of the processing for the root\_slot to lines 6–9. Other than these changes, [HBGraphOnePass]{} operates similarly to [HBTreeIterative]{}. To illustrate this, consider again the example graph in Figure \[fig:example\].
The algorithm is passed the slot pointing to node [a]{}. It then copies node [a]{} and adds [a]{} *itself* to [leaves\[1\]]{}. Assuming [s\[1\]]{} is not exhausted, [a]{} then moves to [roots\[1\]]{}. Slots containing children of [a]{} are then popped off by the call to [pop\_front\_slot]{}, and thus [b]{} and [c]{} are copied next. Tracing the operation further, it should be evident that [HBGraphOnePass]{} produces the same layout in memory as [HBTreeIterative]{} for the example.

Complexity
----------

The complexity analysis for [HBGraphOnePass]{} is substantially the same as in the previous section. Every slot is considered at most once in lines 30–39 of [HBGraphOnePass]{} (other than being passed in as a root slot). Nodes traverse every list at most once. Thus [HBGraphOnePass]{} also has an asymptotic complexity of $O(LN + E)$.

Eliminating Metadata {#sec:nomdata}
--------------------

A simple observation also serves to eliminate the need for $O(N)$ extra metadata. All nodes in a graph have at least one pointer's worth of space (unless the graph is using a particularly compressed format). Further, if the graph is to represent any form of branching, it has space for at least two pointers in its node representation. We use the first available pointer to store a pointer to the forwarded copy. This eliminates the need for the [Forward]{} table, since slots that need copying point to old objects that can be looked up to determine the forward pointer. In our implementations, we set the lowest bit to distinguish the forward pointer from the same field in objects that have not yet been copied (since those point to objects aligned at 4 byte boundaries in our implementation). Further, we can use the other available pointer for manipulation of the lists representing the dequeues. This eliminates the need for any external metadata, removing the need for $O(N)$ extra space.
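The bit-tagging trick can be sketched in C as follows. This is an illustrative sketch under the 4 byte alignment assumption stated above; the helper names are ours, not those of the implementations described later:

```c
#include <assert.h>
#include <stdint.h>

typedef struct node {
    struct node *l, *r;   /* first field doubles as the forward pointer */
} node_t;

/* Nodes are assumed to be aligned to at least 4 bytes, so the low
 * bit of any genuine child pointer is 0 and can serve as a mark. */
static int is_forwarded(const node_t *n)
{
    return (int)((uintptr_t)n->l & 1u);
}

static void install_forward(node_t *old, node_t *copy)
{
    old->l = (node_t *)((uintptr_t)copy | 1u);   /* tag the low bit */
}

static node_t *read_forward(const node_t *old)
{
    return (node_t *)((uintptr_t)old->l & ~(uintptr_t)1);
}
```

A copied node thus carries its own forwarding information in place of its (no longer needed) first child pointer, so no external table is required.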
Note that this elimination is made possible by the organisation of [HBGraphOnePass]{}, which uses parent nodes to represent groups of children. In the absence of this observation, we would have been forced to use dequeues of slots in order to eliminate the extra pass, making it impossible to eliminate the extra metadata for the dequeues. Further, this metadata-less, one-pass [HBGraphOnePass]{} algorithm is a significant advance over previous work. Cheney [@cheney] had shown that it was possible to use a breadth-first traversal over objects in the heap without the need for any extra metadata. Although Wilson [@wilson] had developed hierarchical BFS for a single level, it required one pointer per page of memory. [HBGraphOnePass]{} subsumes Wilson's algorithm as a special case of a single-level memory hierarchy and also admits implementation without the need for extra metadata, similar to Cheney's copying algorithm. Of course, the technique described in this subsection is optional. For example, one could also allocate the extra $O(N)$ metadata directly in objects, as we have done for integration with the Jikes Java Virtual Machine [@jikes] garbage collector in one of our implementations. There, the forwarding pointer already existed in the object header and we found it simplest to just add another field for manipulation in the dequeue lists.

Implementations {#sec:implementations}
===============

In this section we describe three different environments into which we have integrated versions of our HBA. This section focuses on graph representation and concrete interfaces for HBA. Another important focus area for this section is memory allocation. Allocating target memory for copied nodes can broadly follow one of two strategies. One is to use the system-provided memory manager that is already in use.
This has the advantage of integrating cleanly with existing code that uses graphs, since there is no need to write an additional memory manager and allocated nodes can be freed by the rest of the application. A disadvantage of this approach is that system memory managers (such as [malloc]{}) introduce additional metadata at the head of each object. This must be taken into account when calculating object sizes in the HBA, and it additionally reduces the effectiveness of blocking in improving spatial locality. Also, memory managers such as [malloc]{} often use discontiguous pools for objects of different sizes. This introduces further fragmentation if graph nodes are differently sized, which is often the case for variable-sized edge lists attached to the nodes. The other option is to use a memory manager with external metadata to manage the space being copied to, which introduces the complexity of using multiple memory managers. We have implemented HBA in three environments for evaluation: custom graph implementations written in C, an add-on to the Boost Graph Library in C++, and finally a modification to the traversal phase of the semispace copying collector in the Jikes Java Virtual Machine. We now discuss these implementations individually.

Boost C++ Graph Library {#sec:boost}
-----------------------

The Boost C++ Graph Library (BGL) [@boost] is a library for in-memory manipulation of graphs. It makes extensive use of C++ generics (templates) to provide a customisable interface for storing graph-structured data. We wrote an extension to the library that takes as input a graph stored in the *adjacency list* representation and produces a new graph after hierarchical blocking. The adjacency list representation stores the vertices in an iterable container, which allows one to iterate over every vertex in a graph and apply a suitable function.
For each vertex, the list of edges originating at it (for directed graphs) or connected to it (for undirected graphs) is maintained in another iterable container attached to the vertex. Although our implementation is generic, we experimented with C++ STL vectors as the container for both types of component. Our HBA addition to Boost uses the simpler version of the algorithm described in Section \[sec:simple\]. It makes use of two external $O(N)$-sized vectors and assumes a canonical mapping from the set of vertices to non-negative integers: $f:V\rightarrow \{1\;\;..\;\;|V|\}$. This is already provided by Boost, and we use this integer to index into the $O(N)$-sized vectors. The first vector is used to maintain a “next” pointer ($f(\text{next})$ rather than $\text{next}$). The second vector (we call it the [RemapVector]{}) assigns to each node in the input graph a number indicating its position in the copied graph, i.e. if [RemapVector(j) = RemapVector(i) + 1]{} then node [j]{} should be copied right after node [i]{} in the output graph. It is easy to see how the remap vector is set up by calls to [ConditionalCopyNode]{}. We use the produced remap vector in [CompleteCopy]{} to actually produce the output graph. We use the memory allocator provided by Boost, thereby incurring the overheads described above. We have assumed for object size calculation that each edge occupies an area equal to three pointers (for source, destination and edge-weight information) plus two pointers' worth of memory management metadata. Since the vertices are already laid out in a vector, we multiply the number of outgoing edges by the space occupied by five contiguous pointers to determine the object size for the HBA algorithm. Our decision to use the simpler algorithm for integration into Boost was guided in part by the observation that others have deemed the overheads of storing all the vertices of a graph in memory acceptable [@semiem].
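As a sketch of how such a remap vector drives the final copy (illustrative C, not the Boost extension itself; the function names are ours): positions are handed out in the order vertices are first reached by the blocking pass, and inverting the vector yields the output order of the vertices.

```c
#include <assert.h>

/* Assign output positions in the order vertices are first visited
 * (visit[] is the order produced by the blocking pass), then invert
 * the mapping to recover the output vertex order. */
static void build_remap(const int *visit, int n, int *remap)
{
    for (int pos = 0; pos < n; pos++)
        remap[visit[pos]] = pos;   /* vertex visit[pos] lands at slot pos */
}

static void apply_remap(const int *remap, int n, int *out_order)
{
    for (int v = 0; v < n; v++)
        out_order[remap[v]] = v;   /* slot remap[v] holds vertex v */
}
```

The second pass then copies vertex `out_order[0]` first, `out_order[1]` second, and so on, so that consecutive remap positions become adjacent in memory.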
Custom Graph Implementations in C {#sec:custom-c}
---------------------------------

The Boost graph library introduces a number of overheads internal to the objects used to represent graph vertices and edges, in part due to the need to be generic and object-oriented. In order to explore the benefits that HBA can bring to graphs constructed out of carefully designed, minimal objects, we also wrote custom implementations of binary search trees and undirected graphs for evaluation. For the C implementations, we allocated a large chunk of memory to copy the nodes into; this is done by maintaining the size of the graph as it is loaded and calculating the total space required for the copy in advance. This eliminates all overhead due to memory manager metadata. It is easy to write a memory manager that makes use of external metadata [@vee] to manage this space, although we have not done so for this implementation.

### Binary Search Trees

We use the fairly minimalist representation of binary search tree nodes shown below:

    /* Basic bst node */
    typedef struct node_st {
        unsigned long k;
        struct node_st *l, *r;
    #ifndef NO_REORG_PTR
        struct node_st **reorg_next;
    #endif
    } node_t;

We have explored both [HBTreeIterative]{} (which makes use of the [reorg\_next]{} pointer) as well as [HBGraphOnePass]{}, which eliminates that pointer.

### Undirected Graphs

We have also written representations for undirected graphs in C. These make use of the data structures described below:

    typedef struct node_st {
        int id;
        struct node_st* neighbours[0];
    } node_t;

    node_t **node_vector;
    int *neighbour_cnts;
    int node_cnt;

The graph node structure contains an integer identifier and an array of pointers to its neighbours. Since this is an undirected graph, two neighbouring nodes point to each other through their [neighbours]{} arrays.
In addition, we have a list of vertices ([node\_vector]{}), a list of neighbour counts ([neighbour\_cnts]{}) and finally the count of the total number of nodes in the graph ([node\_cnt]{}). Maintaining the size of the neighbours array outside the data structure improves cache-line utilisation. Note that, as with the Boost implementation, we have an enumeration of vertices as integers that allows indexing the appropriate arrays. A point that might not be evident is that HBA also results in better utilisation of linear arrays such as [neighbour\_cnts]{}. This is because adjacent nodes are more likely to be placed close to each other in those arrays. This was key to our decision to move the neighbour count out of the containing [node\_t]{} object. Finally, we also wrote an extension to our implementation of undirected graphs to illustrate a *beneficial and powerful application of the HBA algorithm*. We allow the entire graph to be written out to disk after HBA. The nodes are written out in the order in which they are produced by HBA. Therefore, reloading the nodes results in an in-memory representation of the graph that is *already blocked*. Although we show in our evaluation that the overheads of HBA are tolerable enough to apply at runtime, this feature serves to illustrate that offline HBA of in-memory graphs, with the results stored persistently, is very much possible and eliminates the overheads of HBA when processing static graphs.

Jikes RVM
---------

We have also implemented HBA as an extension to the traversal phase of a semispace copying collector in the Jikes Java Virtual Machine. We have done this to illustrate the ability of HBA to operate in dynamic environments with varying graphs. The fundamental idea of the semispace copying collector is to divide the heap into two ’spaces’. At any instant only one space is ’active’ and is used to allocate objects.
When a garbage collection is triggered, all mutator threads (which can change the connectivity or contents of objects on the heap) are stopped. A collector thread then runs a traversal phase that finds all reachable objects on the heap (using a depth-first search in the baseline implementation) and copies them, on discovery, to the other ’inactive’ space. After completion of this traversal, the ’inactive’ and ’active’ spaces switch roles until the next garbage collection cycle. We have modified the traversal phase to use HBA in order to copy objects with appropriate clustering. We use the single-pass [HBGraphOnePass]{} algorithm. Since the object header used in the Jikes RVM had already allocated a pointer to hold miscellaneous information, we used this pointer to hold forwarding information. We added another pointer-sized field to hold the next pointer for maintaining the dequeues. We use this slightly bloated object representation as the baseline (without HBA), as we felt that this fairly reflected the fact that we could have eliminated this overhead with extra work. Our implementation in the Jikes RVM is at a prototype level only, in part to avoid the complexity of refactoring the garbage collection classes to implement an optimised version. For example, it is difficult to interrupt the scanning of objects on the heap to determine slots. Hence we have a suboptimal implementation of [pop\_front\_slot]{} that simply uses an array of 32 slots to hold the results of a complete scan. Any overflows from this array are treated as new roots by the HBA implementation. Nevertheless, we found the implementation adequate to demonstrate the feasibility of integrating HBA into the garbage collector of a managed environment, thereby showing that it can be used in such environments and on changing graphs.

Performance Evaluation
======================

We evaluate HBA on a system equipped with an Intel i5-2400S CPU and 16 GB of RAM.
For uniformity (the JikesRVM is 32-bit only) we use 32-bit code and are thus limited to using under 4GB of main memory. We know a priori that the system has the following caches in its memory hierarchy (with corresponding settings for the HBA):

1.  Various caches (L1 data, L2 and L3) with a 64 byte line size. We set [s\[1\] = 64]{}.

2.  Open-page mode DRAM that provides lower latencies for consecutive accesses to the same 1024 byte page. We set [s\[2\] = 1024]{}.

3.  TLB caching of page table translations for 4KB pages. A TLB miss incurs a significant penalty for page table walks. We set [s\[3\] = 4096]{}.

4.  TLB caching of superpage translations. The OS (Linux) clusters groups of pages into 2MB superpages to reduce consumption of page table and TLB entries. We set [s\[4\] = 2097152]{}.

It is unclear to us (as it would be to users of HBA in the field) which level is most likely to impact performance for a particular graph and a particular traversal algorithm. Hence we usually use full HBA with the settings above, indicated as [HBA(all)]{} in the evaluation. Occasionally we use HBA for only a subset of levels, such as the page level [HBA(4k)]{}; this essentially produces the layouts of Wilson's hierarchical BFS [@wilson].

C, Binary Search Tree
---------------------

We first consider the performance of a binary search tree written in C. The tree is set up to hold a contiguous integer keyspace and is then queried with random keys. We are interested in the average query time (measured over a minute of continuous queries), as the traversal is affected by the locality of the nodes. We investigate the following layouts of tree nodes in memory:

1.  Pseudorandom layout;

2.  BFS layout (such as would be produced by Cheney's algorithm [@cheney]);

3.  DFS layout (some researchers have suggested that this might be a better way to lay out nodes for locality than BFS [@stamos]);

4.  the VEB layout; and finally

5.  the HBA layout, using the algorithm in this paper.
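For reference, the two traversal-order baselines can be sketched for an index-addressed complete binary tree (an illustrative C sketch; the measured implementations lay out real pointer-linked nodes):

```c
#include <assert.h>

/* Children of node i in a complete binary tree stored by index are
 * 2i+1 and 2i+2; nnodes limits the tree size. */

/* BFS layout: level order. For a complete tree stored by index this
 * is simply 0, 1, ..., nnodes-1. */
static void bfs_layout(int nnodes, int *out)
{
    for (int i = 0; i < nnodes; i++)
        out[i] = i;
}

/* DFS (preorder) layout: each left subtree is laid out entirely
 * before the corresponding right subtree. */
static void dfs_layout(int node, int nnodes, int *out, int *pos)
{
    if (node >= nnodes)
        return;
    out[(*pos)++] = node;
    dfs_layout(2 * node + 1, nnodes, out, pos);
    dfs_layout(2 * node + 2, nnodes, out, pos);
}
```

For a seven-node tree, BFS yields 0,1,2,3,4,5,6 while DFS yields 0,1,3,4,2,5,6; neither keeps small subtrees of the lower tree levels contiguous the way the blocked layouts do.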
The results are shown in Figure \[fig:bst\]. As expected, the pseudorandom layout performs the worst. BFS performs better than DFS. The best performance is provided by HBA, *which performs almost comparably to VEB*. At a tree depth of 25 (about 64 million nodes), using HBA reduces query time by approximately 54% while using BFS reduces query time by approximately 31%. A notable feature of the graph is the knee around a tree depth of 18. This is because beyond that depth the tree no longer fits in the 6MB last-level cache, leading to a sudden increase in query time. Not every level of cache has an equal impact on performance. To illustrate this, we ran the same experiment restricting HBA to various subsets of the memory hierarchy. The results, shown in Figure \[fig:bst\_hba\_vary\], illustrate that the cache (64 byte unit of spatial locality) and the VM page (4K unit of spatial locality) have the maximum impact on tree access. Finally, we verify that HBA is indeed improving cache behaviour. We used cachegrind [@cachegrind] to instrument the queries and simulate various levels of the memory hierarchy of the actual system. Table \[table:mr\] shows the miss rates for various levels of the hierarchy. Although DFS and BFS both improve miss rates, HBA is most effective at reducing miss rates, explaining its better performance. It is interesting to note that HBA, when run over all levels, reduces the miss rate at any given level to that produced by running HBA to block only for that level. This is an important result, since it illustrates that HBA provides additive benefits for all the memory hierarchy levels it is aware of. Finally, we note that HBA is almost as effective at tackling miss rates as the cache-oblivious VEB layout.
                 L1d line   DRAM page   VM page    VM superpage
  -------------- ---------- ----------- ---------- --------------
  Pseudorandom   0.42       0.53        0.44       0.42
  BFS            0.36       0.43        0.26       0.07
  DFS            0.33       0.23        0.19       0.11
  VEB            **0.24**   **0.15**    **0.05**   **0.02**
  64             **0.25**   0.19        0.10       0.03
  1K             0.31       **0.19**    0.07       0.02
  4K             0.32       0.25        **0.07**   0.02
  2M             0.33       0.34        0.14       **0.02**
  64+1K+4K+2M    **0.25**   **0.17**    **0.06**   **0.02**
  -------------- ---------- ----------- ---------- --------------

  : Cachegrind miss rates[]{data-label="table:mr"}

Finally, we illustrate the effect of removing HBA-related metadata from the tree node, using the [HBGraphOnePass]{} algorithm and reusing pointer fields from the old version of the object (as discussed in Section \[sec:nomdata\]). The results shown in Figure \[fig:bstnomdata\] illustrate the improvements obtained due to the lower memory footprint of the tree nodes: approximately 14% lower than with an extra pointer per tree node.

Arbitrary Graphs
----------------

Trees represent an ideal workload for HBA, since they correspond exactly to the spanning tree built during traversal. In this section we consider more complex graphs with a large number of connections. We use a synthetic graph generator that is part of the SNAP suite [@snap]. We consider various kinds of graphs that are of current interest to the research community involved in mining information from graph-structured data:

1.  The Watts-Strogatz small world model [@watts] (10 million nodes, 29 million edges): these graphs have logarithmically growing diameter and model small world networks, such as social networks with the informally well known "six degrees of separation".

2.  The Barabasi-Albert model [@albert] (10 million nodes, 39 million edges) also models real world phenomena, but provides graphs where the out-degree of nodes follows a power-law distribution. This is often the case, for example, with web pages that link to each other.

3.  A 2d mesh (9 million nodes, 17 million edges) models real world road networks.
Answering real-time navigational queries on such networks is often a component of many online services.

4.  A 4-ary tree (10 million nodes, one fewer edge). To provide some perspective on the binary trees considered thus far, we also measure performance on trees where each node has 4 children.

We use two different algorithms (in two different implementations). The first is a single-source shortest path algorithm (Dijkstra [@cormen]) that finds shortest paths from a given source to all nodes in the graph. We use a random assignment of weights to edges (a uniform random choice over a range whose size equals the number of vertices). We use an implementation of this algorithm in Boost (Section \[sec:boost\]). We then turn the input graph into an undirected and unweighted version by adding a reverse edge for every given edge, and perform a breadth-first search in our custom C environment (Section \[sec:custom-c\]). The choice of algorithms and datasets is fairly similar to other approaches that evaluate the performance of graph processing solutions [@semiem]. For each algorithm, data set and environment, we measure the speedup of the algorithm after HBA on the graph, using both HBA(4k) and HBA(all), i.e. blocking for VM pages and for all levels of the hierarchy respectively. The results shown in Table \[table:graph\] *underscore the efficacy of HBA for arbitrary graphs*. Large speedups (as high as 21X!) are obtained with HBA. Speedups are generally higher for our custom C environment due to the optimised (reduced) object footprints. Optimising for all levels of the memory hierarchy often provides better performance than optimising for just one level, underscoring the importance of a multi-level blocking algorithm. The results also illustrate an interesting example of destructive interference between levels: HBA(4k) performs slightly better than HBA(all) in the case of 2d meshes.
Other than this example, we have found that in all cases HBA(all) performs at least as well as HBA(4k).

  ------------------ -------------------- ----------------------
                     Dijkstra (Boost)     BFS (C)
                     HBA(4K)   HBA(all)   HBA(4k)    HBA(all)
  Watts\_Strogatz    1.41      1.44       1.38       1.40
  Albert\_Barabasi   1.01      1.02       1.09       1.11
  2d\_mesh           2.38      2.35       3.85       3.80
  4ary\_tree         1.00      1.00       **20.70**  **21.31**
  ------------------ -------------------- ----------------------

  : Graph Processing Speedups[]{data-label="table:graph"}

JikesRVM
--------

Our final set of results is from the Jikes Java Virtual Machine. We measured the performance of the same binary search tree considered in the C environment, implemented this time in Java. We configured the JVM to use a 1GB heap for the experiments. We also configured it to perform a system-wide GC before starting the query phase of the test. The results, shown in Figure \[fig:bst\_jikes\], indicate that the benefits seen with C are replicated in the Java environment. HBA for all levels with a tree depth of 22 provides a 29% speedup over the baseline version, while HBA for only the VM page provides a 19% speedup over the baseline version. Note that JVM memory limitations meant we were unable to build trees of larger depth. In a runtime environment the overhead of collection is also an important factor. With this in mind, we measured the time for a semispace copy of the entire heap after the tree has been completely built. The results are shown in Figure \[fig:bst\_jikes\_overhead\]. In the worst case, HBA adds an overhead of only 18%. Crucially, the overhead of optimising for *all* levels is in the worst case only 10% more than optimising for only the page level. The average overheads are much lower, well under 10%. Finally, we also measure overheads and performance with the more general DaCapo benchmark suite [@dacapo]. The results shown in Figure \[fig:dacapo\_overhead\] indicate that the overhead of adding HBA to the garbage collector is under 15% in all cases and usually under 10%.
In addition, for two of the benchmarks (antlr and fop) we see improved performance due to more locality on the heap.

Related Work {#sec:related}
============

There is a large body of existing research into improving the cache performance of in-memory data. Broadly, the approaches can be divided into three classes. The first class of techniques deals with prefetching objects ahead of use. An example of this is the approach of Luk et al. [@rds_prefetch], who place a prefetch pointer in linked list nodes to prefetch later nodes early enough to avoid cache miss penalties during traversals. Dynamic approaches are also possible, such as that of Chilimbi et al., who profile a program to detect frequently occurring streams of accesses [@chilimbi_hot]. A second class of techniques statically modifies the data structures themselves to make them more cache friendly. One way is to use knowledge of the cache hierarchy and transfer units to size data structure nodes, as in B-trees [@cormen]. This can be extended to make the B-tree nodes cache friendly at various levels (similar to the objective of this work). Kim et al. [@fastbtree] extend the basic idea of B-trees to be architecture sensitive at various levels using hierarchical blocking. Although their hierarchical blocking produces layouts similar to our reorganisation algorithm, theirs is a static data structure redesign for B-trees, unlike our dynamic, general-purpose algorithm. Another approach to data structure design is cache-oblivious data structures. These are designed so as to improve spatial locality regardless of the level of memory hierarchy and block size being considered. The “van Emde Boas” layout [@veb] forms the basis for many cache-oblivious designs, including those for cache-oblivious B-trees [@streamingbtree]. A third class of techniques (including the one in this paper) operates at runtime. One approach is to control memory allocation.
Chilimbi et al. [@chilimbi_cc_layout] investigated the use of a specialised memory allocator that could be given a hint about where to place the allocated node. Another approach is to use the data structure traversal done by garbage collectors to copy objects into new cache friendly locations [@copying]. Mark Adcock in his PhD thesis [@adcock_phd] considered a range of runtime data movement techniques, including those triggered by pointer updates. However, none of these techniques considers the effect of multiple units of locality in the memory hierarchy, and in that sense this work is orthogonal to all of them. It is possible to take the algorithm in this paper and use it to improve on all of these locality maximisation techniques, which are usually restricted to plain breadth-first search to discover nodes.

Future Work
===========

The current implementation of HBA ignores the last level in a usual memory hierarchy: persistent storage. It is straightforward to add a 512-byte sector size to HBA to also optimise layouts for transfer from disk. Although we have not investigated this aspect yet, we believe HBA can also significantly improve access to the last (persistent) level in typical memory hierarchies.

Conclusion
==========

We have presented a hierarchical blocking algorithm (HBA) that takes as input an arbitrary graph and a description of a memory hierarchy, and lays out graph nodes to be sensitive to, and provide better performance for, that hierarchy. We have investigated implementations of HBA in various settings and shown that it provides non-trivial benefits in all of them, making the case that graph layout and memory-hierarchy sensitivity are important factors in the performance of graph algorithms.
--- abstract: 'We investigate magnetic ordering in metallic Ba(Fe$_{1-x}$Mn$_x$)$_2$As$_2$ and discuss the unusual magnetic phase, which was recently discovered for Mn concentrations $x>10$%. We argue that it can be understood as a Griffiths-type phase that forms above the quantum critical point associated with the suppression of the stripe-antiferromagnetic spin-density-wave (SDW) order in BaFe$_2$As$_2$ by the randomly introduced localized Mn moments acting as strong magnetic impurities. While the SDW transition at $x=0$, 2.5% and 5% remains equally sharp, in the $x=12$% sample we observe an abrupt smearing of the antiferromagnetic transition in temperature and a considerable suppression of the spin gap in the magnetic excitation spectrum. According to our muon-spin-relaxation, nuclear magnetic resonance and neutron-scattering data, antiferromagnetically ordered rare regions start forming in the $x=12$% sample significantly above the Néel temperature of the parent compound. Upon cooling, their volume grows continuously, leading to an increase in the magnetic Bragg intensity and to the gradual opening of a partial spin gap in the magnetic excitation spectrum. Using neutron Larmor diffraction, we also demonstrate that the magnetically ordered volume is characterized by a finite orthorhombic distortion, which could not be resolved in previous diffraction studies most probably due to its coexistence with the tetragonal phase and a microstrain-induced broadening of the Bragg reflections. We argue that Ba(Fe$_{1-x}$Mn$_x$)$_2$As$_2$ could represent an interesting model spin-glass system, in which localized magnetic moments are randomly embedded into a SDW metal with Fermi surface nesting.' author: - 'D.S.Inosov' - 'G.Friemel' - 'J.T. Park' - 'A.C.Walters' - 'Y. Texier' - 'Y. Laplace' - 'J.Bobroff' - 'V. Hinkov' - 'D.L.Sun' - 'Y. Liu' - 'R.Khasanov' - 'K.Sedlak' - 'Ph.Bourges' - 'Y.Sidis' - 'A.Ivanov' - 'C.T. Lin' - 'T. 
Keller' - 'B.Keimer' title: |  \ Possible realization of an antiferromagnetic Griffiths phase in Ba(Fe$_{1-x}$Mn$_x$)$_2$As$_2$ --- Introduction ============ Magnetic phase transitions in disordered systems ------------------------------------------------ It is well established that intrinsic randomness, often present in real condensed-matter systems in the form of quenched substitutional disorder, can exert a crucial influence on the behavior of the system’s thermodynamic parameters close to a phase transition [@Vojta06; @LohneysenRosch07]. Such effects have been studied in detail in several model systems, most notably in disordered Ising or Heisenberg ferro- and antiferromagnets [@Griffiths69; @BallesterosFernandez98; @BhattLee82; @Sandvik02; @VajkGreven02; @SknepnekVojta04; @VojtaSchmalian05]. It has been demonstrated that sufficiently strong disorder can alter the critical scaling behavior of a phase transition, or even lead to the appearance of qualitatively new electronic or magnetic states. In particular, quantum phase transitions can be smeared because of the coexistence of disordered (paramagnetic) regions and locally ordered clusters within the so-called Griffiths region of a phase diagram [@Griffiths69; @TanaskovicMiranda04; @LohneysenRosch07], which has been observed experimentally in various real materials [@BinekKleemann98; @SalamonLin02; @SalamonChun03; @WangSun07; @GuoYoung08; @GuoYoung10; @KrivoruchkoMarchenko10; @EreminaFazlizhanov11]. 
The specifics of itinerant magnetic systems [@Vojta10; @NozadzeVojta11; @NozadzeVojta12], which are of most relevance to our present study, are determined by the presence of long-range Ruderman-Kittel-Kasuya-Yosida (RKKY) interactions [@RudermanKittel54; @Kasuya56; @Yosida57; @FischerKlein75] between local magnetic moments that induce correlations between the magnetically ordered rare regions, leading to the formation of so-called cluster glass (CG) phases preceding uniform ordering [@CastroNetoJones00; @DobrosavljevicMiranda05; @WesterkampDeppe09; @UbaidKassisVojta10]. At present, theoretical understanding of rare-region effects in itinerant systems remains a topic of active research and is still far from complete [@Vojta10; @NozadzeVojta11]. It has also been noted [@NozadzeVojta11] that most of the experimental reports of Griffiths-type phases in itinerant systems are concerned with ferromagnetic metals, while there are barely any clear-cut experimental observations of such phases in itinerant antiferromagnets. Metallic compounds with pronounced Fermi-surface nesting, which are close to a spin-density-wave (SDW) instability, are especially promising as model systems for demonstrating the above-mentioned effects, because the RKKY interaction is known to be enhanced at the nesting vector [@InosovEvtushinsky09]. Hence, if localized magnetic moments are randomly embedded into such a metal to form a so-called RKKY spin glass (SG) [@ShellCowen82; @BinderYoung86; @FischerHertz99], the long-range superexchange between them [@BulaevskiiPanyukov86] is expected to support magnetic correlations between antiferromagnetic (AFM) rare regions with the same SDW wave vector. The RKKY interaction in layered metals with Fermi surface nesting has been considered theoretically, for example, in Refs. . However, thermodynamic properties of such strongly nested systems with randomly embedded local magnetic moments have not been investigated, to the best of our knowledge.
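For reference, the standard asymptotic form of the RKKY coupling in a free-electron metal, and its momentum-space counterpart, can be sketched as follows (textbook expressions, not specific to the present material):

```latex
% Effective exchange between local moments S_i and S_j mediated by
% conduction electrons (large-distance asymptotics in three dimensions):
H_\text{RKKY} = \sum_{i\neq j} J(r_{ij})\,\mathbf{S}_i\cdot\mathbf{S}_j,
\qquad
J(r) \propto J_{sd}^{2}\,\frac{\cos(2k_\text{F}\,r)}{r^{3}}.
% In momentum space the coupling follows the static electronic
% susceptibility, which is enhanced at the nesting vector:
J(\mathbf{q}) \propto J_{sd}^{2}\,\chi_{0}(\mathbf{q}),
\qquad
\chi_{0}(\mathbf{q})\ \text{peaked at}\ \mathbf{q}=\mathbf{Q}_\text{nest}.
```

The enhancement of $\chi_0(\mathbf{q})$ at the nesting vector is what favors correlations between rare regions sharing the SDW wave vector.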
Phase diagram of Ba(Fe$_{1-x}$Mn$_x$)$_2$As$_2$
-----------------------------------------------

Layered iron pnictides [@RenZhao09] are among the most actively studied metallic materials, in which Fermi surface nesting is generally considered to be responsible for the formation of an AFM spin-density-wave state at low temperatures [@LumsdenChristianson10review]. They have attracted enormous attention in recent years mainly because of the high superconducting transition temperatures that can be induced in these systems by chemical substitution or pressure [@Chu09; @PaglioneGreene10; @Johnston10; @Stewart11]. In particular, the so-called ‘122’ compounds with the body-centered-tetragonal ThCr$_2$Si$_2$-type structure, such as $A$Fe$_2$As$_2$ ($A$=Ba, Sr or Ca), usually exhibit superconductivity upon transition-metal doping on the Fe site [@CanfieldBudko10]. Prominent exceptions are Mn- and Cr-substituted systems [@SefatSingh09; @MartyChristianson11; @PandeyAnand11; @KimKhim10; @ThalerHodovanets11; @KimKreyssig10; @KimPratt11], which exhibit no superconductivity, but instead show unusual magnetic behavior that is not typical for their stoichiometric parent compounds. Moreover, it has been demonstrated that substituting Mn for Fe in hole-doped Ba$_{1-x}$K$_x$Fe$_2$As$_2$ leads to a much more rapid suppression of the superconducting transition temperature, $T_\text{c}$, as compared to other transition-metal elements [@ChengShen10; @LiGuo12]. Our recent nuclear magnetic resonance (NMR) measurements [@TexierLaplace12] indicate that this distinct behavior results from the localization of additional Mn holes, which prevents the change in the electron count within the conduction band, in contrast to Co or Ni dopants, and instead stabilizes local magnetic moments on the Mn sites.
Their absolute value was initially assessed at 2.58$\mu_\text{B}$ from dc magnetization measurements [@ChengShen10], yet this quantity is likely overestimated according to a more recent analysis [@Bobroff_private]. Such localized magnetic behavior extends to the pure and doped BaMn$_2$As$_2$ compounds, in which large spin-5/2 local moments have also been reported [@SinghEllern09; @SinghGreen09; @JohnstonMcQueeney11; @PandeyDhaka12; @BaoJiang12; @LamsalTucker13]. The Ba(Fe$_{1-x}$Mn$_x$)$_2$As$_2$ (BFMA) system reportedly changes its ground-state structure from orthorhombic to tetragonal at a critical Mn concentration of $x_\text{c}\approx10$%, while its $(\piup,0)$ magnetic ordering wave vector remains unchanged [@KimKreyssig10; @KimPratt11]. This observation is surprising, because the anisotropic arrangement of magnetic moments in the stripe-AFM state, characterized by this propagation vector, is expected to break the tetragonal symmetry of the crystal and naturally lead to an orthorhombic distortion, as it does in BaFe$_2$As$_2$ and in many other iron pnictides. The SDW ordering temperature, $T_\text{N}$, is initially reduced upon Mn substitution for $x<x_\text{c}$, as in Sr(Fe$_{1-x}$Mn$_x$)$_2$As$_2$ [@KasinathanOrmeci09], but starts to increase again above this critical concentration. This is accompanied by a drastic broadening of the phase transition in temperature [@ThalerHodovanets11; @KimKreyssig10]. So far, both the unusual suppression of the structural distortion and this nonmonotonic behavior of the ordering temperature remain unexplained. They appear to be unique to BFMA, as they are not observed in the very similar Ba(Fe$_{1-x}$Cr$_x$)$_2$As$_2$ system, which changes its ground state abruptly from the stripe-AFM SDW to a checkerboard (G-type) AFM order, typical for pure BaCr$_2$As$_2$ [@SinghSefat09], at $\sim$30% Cr concentration [@MartyChristianson11]. ![Characterization of the magnetic transitions in Ba(Fe$_{1-x}$Mn$_x$)$_2$As$_2$.
(a) Temperature dependence of the normalized in-plane resistivity, $\rho(T)/\rho(\text{300\,K})$, for all samples used in the present study. (b) In-plane resistivity (smooth curve) and its temperature derivative (noisy curve) for the $x=12$% sample, exhibiting an inflection-point anomaly at $T^\ast\approx105$K. (c) Temperature dependence of the elastic neutron scattering intensity at the $(\smash{{\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}}0\smash{{\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}})_{\text{Fe}_1}$ magnetic Bragg peak position (monotonic curve) and its temperature derivative []{data-label="Fig:Transition"}](Fig01.pdf){width="\columnwidth"} Finally, in a recent inelastic neutron scattering (INS) experiment on a BFMA sample with $x=7.5$% ($x<x_\text{c}$, $T_\text{N}=80$K), the presence of an additional branch of short-range quasielastic spin fluctuations was demonstrated at the $(\piup,\piup)$ wave vector, corresponding to the checkerboard-type AFM order that is not observed in the parent compound [@TuckerPratt12]. This result indicates a tendency to the formation of antiferromagnetically polarized Néel regions around Mn local moments, which compete with the stripe SDW order of the parent compound and are likely responsible for the initial reduction of $T_\text{N}$ at low Mn concentrations ($x<x_\text{c}$). Sample preparation and characterization {#Sec:Characterization} ======================================= Single crystals of Ba(Fe$_{1-x}$Mn$_x$)$_2$As$_2$ ------------------------------------------------- For the present study, we used three single-crystalline BFMA samples with Mn concentrations of 2.5%, 5.0%, and 12% and a reference sample of the pure parent BaFe$_2$As$_2$ compound. These samples are identical to those studied in Refs. and , respectively. All single crystals were grown from self-flux in zirconia crucibles sealed in quartz ampoules under argon atmosphere, as described elsewhere [@LiuSun10]. 
All four compositions have been characterized using dc resistivity, NMR, and muon spin relaxation ($\mu$SR) spectroscopy. INS experiments were performed only on the $x=0$ and $x=12$% samples, which represented arrays of multiple single crystals with a total mass of the order of 1g, coaligned to a mosaicity of $\sim$2$^\circ$ using a real-time digital x-ray Laue backscattering camera. In addition, the $x=12$% sample was investigated by neutron Larmor diffraction. The lattice parameters corresponding to this composition, as measured on a triple-axis neutron spectrometer during sample alignment at room temperature, were $a=b=3.97(4)$Å (which is nearly the same as in BaFe$_2$As$_2$) and $c=13.44(5)$Å (about 1% larger than in BaFe$_2$As$_2$ [@RotterTegel08PRB]). These relative changes in the unit cell dimensions are similar to those reported for Sr(Fe$_{1-x}$Mn$_x$)$_2$As$_2$ in an earlier study [@KimKhim10].

Resistivity and elastic neutron scattering
------------------------------------------

The temperature dependence of the in-plane resistivity, $\rho(T)$, for all four BFMA samples, normalized to its room-temperature values, is shown in Fig.\[Fig:Transition\](a). In agreement with Ref., we observe sharp anomalies in $\rho(T)$ at the SDW transition for all samples with $x<x_\text{c}$, whereas for the $x=12$% sample the resistivity curve is smooth. This observation is consistent with the absence of anomalies in the temperature dependence of the specific heat [@PopovichBoris10]. Only after differentiation \[Fig.\[Fig:Transition\](b)\] is an inflection point revealed near $T^\ast\approx105$K, somewhat above the SDW transition temperature of the $x=5.0$% sample, in agreement with the increasing tendency for $T^\ast$ in this composition range that was reported in Ref. ![Temperature dependence of the main $^{75}$As NMR line intensity (filled symbols) for samples with different Mn concentrations, normalized to the respective high-temperature saturation values.
Empty circles show the volume fraction of the tetragonal phase in the $x=12$% sample (right vertical axis), as measured by neutron Larmor diffraction (see text).[]{data-label="Fig:NMR\_wipeout"}](Fig02.pdf){width="\columnwidth"} To establish the origin of this $T^\ast$-anomaly in the resistivity, in Fig.\[Fig:Transition\](c) we compare it with the temperature dependence of the magnetic Bragg intensity (without background subtraction), measured on the same sample at the $({\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}0 {\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}})_{\text{Fe}_1}$ magnetic Bragg peak by means of elastic neutron scattering. Here and henceforth, the subscript “Fe$_1$” indicates that the reciprocal-lattice vector, $(H\,K\,L)$, is given in the unfolded notation corresponding to the Fe-sublattice (one Fe atom per unit cell) [@ParkInosov10], and its coordinates are presented in reciprocal lattice units (r.l.u.), defined as 1$\text{r.l.u.} = 2\sqrt{2}\piup/a$ for the $H$ and $K$ directions and as 1$\text{r.l.u.} = 4\piup/c$ along the $L$ direction, where $a$ and $c$ are the lattice constants of the crystal in the tetragonal ($I4/mmm$) symmetry. First, we note that in contrast to the sharp order-parameter-like onset of the magnetic Bragg scattering at $T_\text{N}$ that is typical for most iron-arsenide parent compounds [@CruzHuang08; @HuangQiu08; @ZhaoRatcliff08; @KanekoHoser08; @GoldmanArgyriou08; @ParkFriemel12], here we see a smeared transition with a gradual onset around $\sim$240K, which lies approximately 100K above the ordering temperature of BaFe$_2$As$_2$. One possible explanation for this smearing, which we will later substantiate by direct measurements, is a disorder-induced separation of the sample into spatial regions with different local values of $T_\text{N}$ that leads to a gradual change of the magnetically ordered volume with temperature.
However, the conventional random-$T_\text{N}$ type of disorder [@Vojta06] alone, which one would expect from a locally inhomogeneous distribution of the Mn atoms, cannot explain the dramatic enhancement of the onset temperature. Indeed, at small Mn concentrations, $T_\text{N}$ is suppressed as a function of $x$, and therefore an inhomogeneous Mn distribution should result in a spread of local $T_\text{N}$ values between zero and at most 140K, i.e., we would normally expect it to be limited from above by the transition temperature of the parent compound. This conventional type of behavior is found, for instance, in Ba(Fe$_{0.99}$Ni$_{0.01}$)$_2$As$_2$, where despite the strong disorder the transition is merely suppressed by Ni substitution with no significant broadening, according to a recent $^{57}$Fe Mössbauer spectroscopy study [@OlariuBonville12]. In contrast, the behavior of the magnetic Bragg intensity in Ba(Fe$_{0.88}$Mn$_{0.12}$)$_2$As$_2$ is qualitatively different, because at 140K it already reaches 27% of its saturation value, suggesting that the local $T_\text{N}$ exceeds that of pure BaFe$_2$As$_2$ in approximately 1/4 of the sample volume. Hence, we must conclude that although individual Mn impurities tend to suppress the ordering temperature, at sufficiently large concentrations (perhaps at $x\gtrsim x_\text{c}$) there exists an increasing probability of finding certain local configurations of Mn moments (rare regions) that stabilize the $(\piup,0)$ type of order sufficiently to reverse the downward trend in the onset temperature, as can be seen in the published phase diagram [@KimKreyssig10]. For this to happen, collective effects of several Mn moments (deviation from the dilute limit) must be at play.
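The argument above can be made semi-quantitative with the standard rare-region estimate (a generic Griffiths-type sketch under a random-dilution assumption, not a fit to our data). The probability that a compact region of $N$ lattice sites is fully occupied by Mn is

```latex
w(N) \simeq x^{N} = \mathrm{e}^{-N\ln(1/x)} ,
```

which is exponentially small but nonzero for any $x>0$. If such Mn-rich clusters order locally above the bulk $T_\text{N}$, the ordered volume fraction develops an exponentially weak tail above the transition instead of vanishing abruptly, which is the hallmark of a smeared, Griffiths-type transition.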
In Fig.\[Fig:Transition\](c), we also show the temperature derivative of the magnetic Bragg intensity, whose striking similarity with the $\mathrm{d}\rho(T)/\mathrm{d}T$ curve in Fig.\[Fig:Transition\](b) leaves no doubt about the magnetic origin of the $T^\ast$-anomaly. Nuclear magnetic resonance -------------------------- In Ref., we already reported a detailed NMR study performed on the same set of BFMA samples. Without reiterating the results of that work, here we will only be interested in the $T$-dependence of the paramagnetic (PM) volume fraction, which can be directly measured by following the main $^{75}$As NMR line wipeout as a function of temperature. The NMR line intensity, multiplied by temperature, is plotted in Fig.\[Fig:NMR\_wipeout\] for samples with different Mn content. For the convenience of comparison, the high-temperature saturation values for every dataset were normalized to unity. The plotted quantity therefore serves as a direct gauge of the nonmagnetic fraction of the sample volume. For both $x=2.5$% and $x=5.0$%, the NMR line intensity sharply drops to zero at the SDW ordering temperature, indicating a transition to the magnetically ordered state in the whole volume of the sample: The freezing of the Fe moments results in a strong shift of the NMR line out of our limited observation window. In the $x=12$% sample, however, a gradual intensity drop starts already near $\sim$240K, well above the ordering temperature of the parent compound, and progresses down to $\widetilde{T}_\text{N}\approx50$K, where the entire signal is lost. 
The shape of the transition curve is strikingly similar to that of the magnetic Bragg intensity in Fig.\[Fig:Transition\](c), which unequivocally confirms that the smearing of the magnetic transition occurs due to the gradual expansion of the regions with static magnetic moments and to the corresponding reduction in the PM volume upon cooling, most naturally explained by the broad distribution of the local ordering temperatures. We note that even in the $x=5.0$% sample, a small, but similarly gradual wipeout of the NMR line can be seen below 200K, which leads to only a 10% reduction of the PM volume upon reaching $T_\text{N}$. Neutron Larmor diffraction and orthorhombicity {#Sec:Larmor1} ---------------------------------------------- Perhaps the most surprising property of the BFMA system, according to previous neutron and x-ray diffraction studies [@KimKreyssig10], is the complete suppression of the tetragonal-to-orthorhombic structural phase transition for $x>x_\text{c}$, which reportedly holds down to the lowest temperatures despite the presence of the well established $(\pi,0,\pi)_{\text{Fe}_1}$ stripe-AFM order that appears to be identical to that in the parent compound. This observation is very difficult to explain, because the stripe-AFM order obviously breaks the $C_4$ rotational symmetry, and the corresponding orthorhombic distortion is anticipated due to the non-vanishing magnetoelastic coupling. In Ref., the authors speculate that a new double-$\mathbf{Q}$ magnetic structure with an order parameter of the form $\Delta_1\mathrm{e}^{\mathrm{i}\,(\pi,0)\cdot\mathbf{R}}+\Delta_2\mathrm{e}^{\mathrm{i}\,(0,\pi)\cdot\mathbf{R}}$ (with both $\Delta_1\neq0$ and $\Delta_2\neq0$), theoretically suggested by Eremin and Chubukov [@EreminChubukov10], could be reconciled with their experimental observations. 
We find this explanation theoretically elegant, yet unpersuasive, as it is hard to imagine that in the presence of very strong magnetic disorder and the dramatically broadened distribution of the local transition temperatures, the system could keep the delicate balance between the $\Delta_1$ and $\Delta_2$ order parameters over macroscopic volumes. Apparently, such an exotic order, which has never been observed in any clean iron-pnictide compound, requires precisely tuned conditions to be stabilized, which are unlikely in a magnetically inhomogeneous system with randomly embedded local moments. ![Neutron Larmor-diffraction measurements of the orthorhombic splitting in Ba(Fe$_{0.88}$Mn$_{0.12}$)$_2$As$_2$. (a) Experimental data for different temperatures (indicated above each curve), fitted to a model containing a mixture of the orthorhombic (O) and tetragonal (T) phases (solid lines). The dashed line shows a failed fit of the $T=4$K data assuming a single tetragonal phase [@KimKreyssig10]. For clarity, each dataset is offset vertically by an increment of 0.2 units from the one below it. (b) Modeled diffraction profiles, corresponding to every temperature in panel (a), as they would look in an x-ray diffraction experiment with infinitesimally small resolution. These models account for the experimentally determined orthorhombic splitting, the ratio of the tetragonal and orthorhombic phase volumes, and the peak broadening due to the finite width of the microstrain distribution, as extracted from the fits in panel (a). (c) Temperature dependence of the orthorhombicity parameter, $\varepsilon=(a-b)/(a+b)$, extracted from the same fits. The dashed line is a temperature-independent fit. The grey dotted line is the corresponding dependence for the parent BaFe$_2$As$_2$, reproduced from Ref.
for comparison.[]{data-label="Fig:Larmor"}](Fig03.pdf){width="\columnwidth"} In search of an alternative explanation for the missing orthorhombicity, we have performed neutron Larmor diffraction measurements on our $x=12$% sample, which is very similar to the $x=11.8$% sample from Ref., if judged by the shape of the resistive transition, the temperature dependence of the magnetic Bragg intensity, and the value of $T^\ast$. Neutron Larmor diffraction [@Rekveldt00; @RekveldtKeller01; @RekveldtKraan02] is a polarized-neutron technique known to be extremely sensitive to minor structural distortions and the lattice-spacing spread, ${\scriptstyle\Delta}d/d$, with a resolution better than $10^{-5}$ that does not depend on beam collimation and monochromaticity and is independent of the mosaic spread. The detailed principle of this technique is explained, for instance, in Ref. Our measurements were done at the neutron resonant spin-echo triple-axis spectrometer TRISP installed at the FRM-II research reactor in Garching, Germany. The neutron polarization was measured as a function of the Larmor precession phase, controlled by the magnitude of the magnetic field that was applied in the same direction before and after the sample. To be sensitive to variations in the $d$-spacing of the $(200)_{\text{Fe}_1}$ Bragg reflection, the magnetic field boundaries were made parallel to the $(200)_{\text{Fe}_1}$ Bragg planes. The results are shown in Fig.\[Fig:Larmor\](a). In Larmor diffraction, the measured polarized-neutron intensity is proportional to the Fourier transform of the $d$-spacing distribution [@RekveldtKraan02; @KellerRekveldt02]. This means that for a single mean value of $d$, distributed with a certain full width at half maximum (FWHM), the measured signal would monotonically decrease with increasing magnetic field (increasing Larmor phase).
However, for two closely spaced characteristic values of $d$, one will observe destructive and constructive interference in the measured neutron polarization. Larmor diffraction is therefore highly sensitive to orthorhombic distortions, as it can distinguish very clearly between a single Bragg peak in the case of a tetragonal crystal and a pair of peaks that are split due to an orthorhombic distortion, even if this splitting is too small to be resolved by conventional neutron or x-ray diffraction. The appearance of a pronounced minimum in the low-temperature data measured on the $x=12$% sample \[Fig.\[Fig:Larmor\](a), bottom curve\] is therefore definitive evidence that the majority of the sample is orthorhombic. At higher temperatures, it proved impossible to fit the data under the assumption that the whole sample was either orthorhombic or tetragonal. However, by assuming a *coexistence* of orthorhombic and tetragonal phases, the data could be fitted consistently at all temperatures, with all parameters nearly independent of temperature apart from the orthorhombic and tetragonal fractions of the sample volume. The latter fraction is plotted in Fig.\[Fig:NMR\_wipeout\] as a function of temperature (empty symbols), showing an increase upon warming that is consistent with that of the PM volume fraction measured on the same sample by NMR and exhibiting a similarly broadened transition with a comparable width, centered at approximately the same temperature. Note that at high temperatures, the fitting of the Larmor diffraction data systematically underestimates the tetragonal volume fraction by $\sim$20%, which is most likely due to a deviation of the ${\scriptstyle\Delta}d/d$ distribution from a perfect Gaussian shape that cannot be trivially accounted for.
In the experimental data, such a deviation is difficult to distinguish from a small admixture of the orthorhombic phase, which explains the 20% reduction of the high-temperature saturation value in Fig.\[Fig:NMR\_wipeout\] from the expected 100%. Otherwise, the similar shapes of the curves describing the evolution of the PM and tetragonal volume fractions lead us to conclude that only the PM part of the sample remains tetragonal, whereas the remaining magnetically ordered fraction is orthorhombic. The corresponding orthorhombicity parameter, obtained from the same fits and plotted in Fig.\[Fig:Larmor\](c), turns out to be nearly independent of temperature, with a mean magnitude of $(a-b)/(a+b) = 3.5(1) \times 10^{-3}$ that is almost identical to that found in the orthorhombic phase of the undoped BaFe$_2$As$_2$ (Ref.). ![(a) Temperature dependence of the momentum width, ${\scriptstyle\Delta}Q/Q$, measured on the $({\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}0\frac{7}{\protect\raisebox{0.8pt}{\scriptsize 2}})_{\text{Fe}_1}$ magnetic Bragg reflection. (b) The energy width measured on the $({\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}0{\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}})_{\text{Fe}_1}$ magnetic Bragg reflection vs. temperature. Measurements above 160K were unfeasible due to the dramatically reduced intensity of the signal.[]{data-label="Fig:Larmor\_widths"}](Fig04.pdf){width="0.85\columnwidth"} As another parameter of the fits in Fig.\[Fig:Larmor\](a), we have also obtained the FWHM of the microstrain distribution, which describes the lattice-spacing spread, ${\scriptstyle\Delta}d/d$, and the intrinsic width of the Bragg reflection that would be measured in a conventional diffraction experiment if both the diffractometer resolution and the sample mosaic were infinitesimally small.
For the $(200)_{\text{Fe}_1}$ Bragg peak, this width is nearly independent of temperature and amounts to ${\scriptstyle\Delta}d/d = 4.6(1) \times 10^{-3}$, which is comparable to the orthorhombic distortion. For the out-of-plane $(002)_{\text{Fe}_1}$ reflection, ${\scriptstyle\Delta}d/d$ marginally increases from $1.37(1)\times 10^{-3}$ at room temperature to $1.44(1)\times 10^{-3}$ at $T=6$K. In Fig.\[Fig:Larmor\](b), we reconstruct the scattering function, $S(Q)$, from the parameters of the fits in Fig.\[Fig:Larmor\](a). These model curves correspond to the longitudinal Bragg-peak profiles that would be measured in a conventional x-ray or neutron diffraction experiment under the assumption of an infinitesimally small diffractometer resolution. Even at the lowest temperature of 4K, we observe some intrinsic overlap of the two orthorhombic peaks due to the broad microstrain distribution, so there is no doubt that the sizeable intrinsic variation of the $d$-spacing would make it exceedingly difficult to observe the orthorhombic distortion directly using traditional diffraction methods. At higher temperatures, the splitting would be additionally masked by the coexisting tetragonal phase. This appears to be the most likely reason for the reported absence of orthorhombicity in a similar sample [@KimKreyssig10].

Intrinsic width of the magnetic Bragg peaks
-------------------------------------------

We now turn our attention to the evolution of the momentum and energy widths of the magnetic Bragg peaks with temperature in the $x=12$% sample. The momentum width of the $({\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}0\frac{7}{\protect\raisebox{0.8pt}{\scriptsize 2}})_{\text{Fe}_1}$ magnetic Bragg peak was measured using Larmor diffraction in the same experimental setup as described in section \[Sec:Larmor1\]. We find no temperature dependence of this width up to 150K \[Fig.\[Fig:Larmor\_widths\](a)\], with a mean value of the normalized FWHM ${\scriptstyle\Delta}Q/Q = 2.3(1) \times 10^{-3}$.
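As a rough consistency check (our own order-of-magnitude estimate, not one of the fit parameters), even if the entire measured width were attributed to finite-size broadening from ordered domains of linear size $\xi$, one would have

```latex
\frac{{\scriptstyle\Delta}Q}{Q} \approx \frac{2\piup/\xi}{2\piup/d} = \frac{d}{\xi}
\quad\Longrightarrow\quad
\xi \gtrsim \frac{d}{2.3\times10^{-3}} \approx 4\times10^{2}\,d ,
```

i.e. the ordered domains must extend over at least several hundred lattice spacings at all measured temperatures.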
In general, the momentum width of a commensurate magnetic Bragg peak is determined by both the structural microstrain ${\scriptstyle\Delta}d/d$ and the size of the ordered magnetic domains that could lead to an additional finite-size broadening. One might expect that since the magnetically ordered fraction of the sample becomes smaller with increasing temperature, the ordered magnetic domains would shrink upon warming, thereby increasing the momentum width. However, in our case we find the momentum width to be independent of temperature, which suggests that the magnetic ordering remains long range at least up to 150K. Under this assumption, the sole source of the broadening is the structural microstrain, which in the case of the $({\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}0\frac{7}{\protect\raisebox{0.8pt}{\scriptsize 2}})_{\text{Fe}_1}$ magnetic Bragg peak lies between the values of ${\scriptstyle\Delta}d/d$ that were found in section \[Sec:Larmor1\] for the $(200)_{\text{Fe}_1}$ and $(002)_{\text{Fe}_1}$ structural peaks. Such an anisotropy in the width of the microstrain distribution is typical for the iron pnictides and has been reported previously [@InosovLeineweber09]. The magnetic Bragg peak energy width was measured using the neutron resonance spin-echo (NRSE) technique at the TRISP spectrometer. In NRSE, the dependence of neutron polarization on the magnitude of the magnetic fields before and after the sample is proportional to the Fourier transform of the lineshape of magnetic fluctuations [@Keller03]. NRSE spectroscopy routinely provides accurate measurements of energy widths down to the $\mu$eV range at TRISP. In Fig.\[Fig:Larmor\_widths\](b) we show the energy width of the $({\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}0 {\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}})_{\text{Fe}_1}$ magnetic Bragg peak in the $x=12$% BFMA sample as a function of temperature.
We find that the width is vanishingly small at all temperatures, meaning that the observed peak remains static and shows no quasielastic behavior up to at least 160K within our instrumental resolution. In other words, its characteristic lifetime $\tau$ is longer than $\sim$1ns, which is the typical timescale to which the NRSE measurement is sensitive. We therefore conclude that the magnetic order in BFMA remains truly static and long range above the critical Mn concentration even at temperatures that are comparable to the $T_\text{N}$ of the parent compound. Thermal expansion coefficient ----------------------------- The magnetic and structural phase transitions in iron pnictides typically have a pronounced signature in the temperature dependence of the thermal expansion coefficients [@BudkoNi09; @MeingastHardy12; @BohmerBurger12]. Linear thermal expansion can be directly measured using neutron Larmor diffraction by following the shift of the total Larmor precession phase vs. temperature, even though the precision of this type of measurement is typically inferior to that of state-of-the-art capacitive dilatometry. To avoid the complications related to the coexistence of the tetragonal and orthorhombic phases and the resulting nontrivial structure of the in-plane Bragg reflections, here we concentrate only on the $c$-axis isobaric linear thermal expansion coefficient, $$\alpha_c = \frac{1}{c}\frac{\partial\![c(T)-c(0)]}{\partial\!T},$$ measured on the $(004)_{\text{Fe}_1}$ structural Bragg reflection of the $x=12$% BFMA sample. It is presented in Fig.\[Fig:ThermalExpansion\] as the $\alpha_c/T$ ratio in order to emphasize the asymptotic behavior at $T\rightarrow0$. We compare it with the equivalent result of the BaFe$_2$As$_2$ dilatometry measurements from the literature [@MeingastHardy12].
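The extraction of $\alpha_c$ from a lattice-parameter-vs-temperature curve amounts to a normalized numerical derivative. The sketch below uses a synthetic, purely illustrative $c(T)$ (not the measured Larmor phase data); for the quadratic toy model, $\alpha_c/T$ is constant, which is the sense in which plotting $\alpha_c/T$ exposes the low-$T$ asymptotics.

```python
import numpy as np

# Minimal sketch: alpha_c = (1/c) dc/dT from a c(T) curve. The lattice
# parameter model below is invented for illustration only.
T = np.linspace(5.0, 300.0, 60)          # temperature grid (K)
c = 13.0 * (1.0 + 5e-9 * T**2)           # toy c(T) in Angstrom, quadratic in T
alpha_c = np.gradient(c, T) / c          # numerical (1/c) dc/dT, in 1/K
alpha_over_T = alpha_c / T               # the quantity plotted vs T

# For c = c0 (1 + a T^2), alpha_c/T ~ 2a = 1e-8 K^-2, independent of T:
print(alpha_over_T[5], alpha_over_T[-5])
```

`np.gradient(c, T)` uses central differences, which are exact for the quadratic toy curve; with real data one would smooth or fit locally before differentiating.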
No significant changes in the absolute values of the $\alpha_c/T$ coefficient upon Mn substitution can be observed in either the low- or high-temperature regions, whereas in the immediate vicinity of the SDW transition the sharp anomaly at $T_\text{N}$ is replaced by a broad and shallow minimum near $T^\ast$, reminiscent of the one seen in the $T$-derivative of the resistivity \[Fig.1(b)\]. ![Temperature dependence of the $c$-axis linear thermal expansion coefficient, $\alpha_c/T$, for Ba(Fe$_{0.88}$Mn$_{0.12}$)$_2$As$_2$ as measured by polarized-neutron Larmor diffraction (circles). The grey line shows the analogous dependence for the pure BaFe$_2$As$_2$, reproduced from Ref. for comparison.[]{data-label="Fig:ThermalExpansion"}](Fig05.pdf){width="\columnwidth"} $\mu$SR spectroscopy {#Sec:MuSR} ==================== ![image](Fig06.pdf){width="\textwidth"} Experimental details -------------------- Muon-spin-rotation spectroscopy [@Amato97; @Blundell99] is a powerful tool for studying magnetism in samples with several coexisting phases. As spin-polarized muons are implanted in the sample, the precession of their magnetic moment is determined by the value of the local magnetic field at the muon site. This method is therefore sensitive to the statistical distribution of the local magnetic environments in the sample, in a way very similar to NMR. For a system that exhibits static magnetism, $\mu$SR can offer valuable information about the degree of magnetic ordering (long- vs. short-range, commensurate vs. incommensurate, etc.), the value of the static magnetic moment, its homogeneity in the sample, and the magnetic volume fraction. By performing measurements in a weak transverse field, one can also accurately estimate the fraction of the sample volume with no static magnetism, i.e. PM or nonmagnetic.
This is achieved by counting the fraction of muons that feel no internal magnetic field, so that their precession frequency is set by the applied field alone. In particular, $\mu$SR spectroscopy has already accumulated a long track record of studying phase-separation phenomena in both iron-pnictide and iron-chalcogenide superconductors [@DrewNiedermayer09; @ParkInosov09; @GokoAczel09; @TakeshitaKadono09; @WiesenmayerLuetkens11; @KhasanovSanna11; @ShermadiniKrztonMaziopa11; @CharnukhaCvitkovic12; @ShermadiniLuetkens12; @BernhardWang12]. We performed our $\mu$SR measurements on BFMA single crystals with all four available compositions ($x = 0$, 2.5%, 5.0%, and 12%) using the DOLLY instrument at the muon source of the Paul Scherrer Institute in Villigen, Switzerland. The incident muons were polarized parallel to the beam direction, and the samples were mounted with their $c$-axes turned by 45$^\circ$ in the horizontal plane with respect to the muon beam. Because the internal magnetic field at the muon site in the AFM phase is directed parallel to the crystallographic $c$-axis [@AczelBaggio08], in this experimental geometry the signal could be counted on both the left-right and forward-backward pairs of positron detectors. Zero-field $\mu$SR (AFM phase) ------------------------------ Figure \[Fig:muSR-ZF\] shows $\mu$SR data measured in zero magnetic field on samples with different Mn concentrations as a function of temperature. The parent compound (leftmost column), which we used here as a reference sample, showed pronounced oscillations in the time dependence of the muon asymmetry below $T_\text{N}$ with two characteristic frequencies, in agreement with Ref. Upon increasing Mn concentration, we observed an increase in the depolarization rate of the oscillating signal, as can be seen from the comparison of the lowest-temperature ($T=5$K) datasets in Fig.\[Fig:muSR-ZF\].
This trend is indicative of the increasing inhomogeneity in the system that leads to a broadening of the local-field distribution at the muon site. As a result, the $T=5$K dataset for the $x=5.0$% sample looks qualitatively similar to the one measured on the parent compound at $T=133$K, immediately below the SDW transition. ![Fitting parameters for the zero-field $\mu$SR data. (a) Temperature dependencies of the muon oscillation frequencies. (b) The same for the muon depolarization rate. Arrows indicate transition temperatures. For the $x=12$% sample, depolarization rates for the exponentially decaying component of the $\mu$SR signal are additionally plotted with empty symbols. The lines are guides to the eyes. The inset shows the $T\rightarrow0$ limit of the depolarization rate, which is a measure of the degree of magnetic disorder in the ground state of the system, as a function of Mn concentration. The line is a linear fit.](Fig07.pdf){width="1.02\columnwidth"} \[Fig:muSR-Freq\] At a temperature of 200K, which lies significantly above $T_\text{N}$, we observed no loss of the muon asymmetry either in the $x=2.5$% or in the $x=5.0$% sample. This proves that samples with $x<x_\text{c}$ remain fully PM at this temperature. However, the $x=12$% sample shows a noticeable SG-like exponential depolarization of the $\mu$SR signal even at $T=200$K, which points to the nucleation of static magnetic islands in a small fraction of the sample volume. This signal persists down to $\sim$75K, where it coexists with the rapidly depolarizing oscillatory component. Knowing that the onset of the static $({\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}0 {\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}})_{\text{Fe}_1}$ magnetic Bragg peak can be observed in the same temperature range \[Fig.\[Fig:Transition\](c)\], we can associate these islands with AFM rare regions.
The size of such static magnetic domains must be sufficiently small to explain the absence of clear oscillations in the muon asymmetry at temperatures down to 130K. Therefore, to support the long-range AFM order that is evidenced by the sharp magnetic Bragg peaks (Fig.\[Fig:Larmor\_widths\]), long-range AFM correlations between these domains must be present, possibly mediated by the nesting-assisted RKKY exchange interaction [@AkbariEremin11; @AkbariThalmeier13]. It is natural to associate this type of order with an RKKY SG or a CG phase [@ShellCowen82; @BinderYoung86; @FischerHertz99; @Vojta10]. ![image](Fig08.pdf){width="87.00000%"} In order to extract quantitative information from the zero-field $\mu$SR data, we have fitted the time dependence of the muon asymmetry with the following model: $$A(t)=A_0\bigl[P_\text{osc}(t)+P_\text{SG}(t)+P_\text{PM}(t)\bigr],$$ where $A_0$ is the initial asymmetry, while the $P_\text{osc}(t)$, $P_\text{SG}(t)$, and $P_\text{PM}(t)$ terms represent the oscillating, exponentially depolarizing, and PM components of the $\mu$SR signal, respectively.
These, in turn, can be described by $$\begin{aligned} &\!\!\!P_\text{osc}(t)=\frac{\upsilon_\text{osc}}{2}\Biggl[\sum_{i=1}^2 p_i\cos(2\piup\nu_it+\varphi)\,\mathrm{e}^{-\lambda^\text{ZF}_it}+\mathrm{e}^{-\lambda^\text{LO}t}\Biggr]\text{;}\\ &\!\!\!P_\text{SG}(t)\,=\frac{\upsilon_\text{SG}}{2}\,\bigl[\mathrm{e}^{-\lambda^\text{SG}t}+\mathrm{e}^{-\lambda^\text{LO}t}\bigr]\text{;}~~ P_\text{PM}(t) =\upsilon_\text{PM}\,\mathrm{e}^{-\lambda^\text{PM}t}\text{.}\end{aligned}$$ Here $\upsilon_\text{osc}$, $\upsilon_\text{SG}$, and $\upsilon_\text{PM}$ stand for the volume fractions of the corresponding phases; $\nu_i$ are the two muon precession frequencies; $p_i$ are the fractions of the muons at the two muon stopping sites corresponding to these frequencies (such that $p_1+p_2=1$); $\varphi$ is the initial phase of the muon spin; $\lambda^\text{ZF}$ and $\lambda^\text{SG}$ are the depolarization rates for the oscillating and for the rapidly decaying SG-like parts of the zero-field $\mu$SR signal, respectively; $\lambda^\text{LO}$ describes the slow relaxation of the muon polarization component longitudinal to the local magnetic field, originating from the 45$^\circ$ rotation of the sample’s $c$-axis with respect to the muon beam in our experimental geometry; $\lambda^\text{PM}$ represents the slow depolarization rate of the PM response. When fitting the experimental data, we fixed $\upsilon_\text{PM}$ to the PM volume fraction determined from the transverse-field $\mu$SR, as described below. The $\upsilon_\text{SG}$ volume fraction was considered zero for all samples except for $x=12$%, where it was treated as a free fitting parameter within the full width of the smeared phase transition. Further insight is gained by directly plotting the temperature-dependent fitting parameters of the zero-field $\mu$SR data, such as the oscillation frequencies and the depolarization rates (Fig.\[Fig:muSR-Freq\]).
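For concreteness, the asymmetry model defined above can be written out as a short function. All parameter values in the example are invented placeholders for illustration, not the actual fit results; a real analysis would pass this function to a least-squares fitter.

```python
import numpy as np

def asymmetry(t, A0, v_osc, v_sg, v_pm, nu, p, phi,
              lam_zf, lam_lo, lam_sg, lam_pm):
    """A(t) = A0 [P_osc(t) + P_SG(t) + P_PM(t)] as defined in the text.
    nu, p, lam_zf are length-2 sequences for the two muon stopping sites
    (with p summing to 1); rates are in inverse microseconds."""
    p_osc = 0.5 * v_osc * (
        sum(p[i] * np.cos(2 * np.pi * nu[i] * t + phi) * np.exp(-lam_zf[i] * t)
            for i in range(2))
        + np.exp(-lam_lo * t))
    p_sg = 0.5 * v_sg * (np.exp(-lam_sg * t) + np.exp(-lam_lo * t))
    p_pm = v_pm * np.exp(-lam_pm * t)
    return A0 * (p_osc + p_sg + p_pm)

t = np.linspace(0.0, 8.0, 801)  # time in microseconds
A = asymmetry(t, A0=0.25, v_osc=0.6, v_sg=0.2, v_pm=0.2,
              nu=(28.0, 7.0), p=(0.8, 0.2), phi=0.0,
              lam_zf=(5.0, 5.0), lam_lo=0.05, lam_sg=10.0, lam_pm=0.1)
# With the volume fractions summing to one, A(0) equals the full asymmetry A0:
print(round(A[0], 3))  # 0.25
```

Note that the constraint $\upsilon_\text{osc}+\upsilon_\text{SG}+\upsilon_\text{PM}=1$ (with $\upsilon_\text{PM}$ fixed from the transverse-field data) is what ties the components to physical volume fractions.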
A nonmonotonic dependence of the oscillation frequencies on Mn concentration is revealed by Fig.\[Fig:muSR-Freq\](a). Initially, for the $x=0$, $x=2.5$%, and $x=5.0$% samples, the oscillation frequency decreases with Mn substitution, whereas for the $x=12$% sample it is remarkably restored to roughly the same value as in the parent compound. Moreover, the oscillation frequencies in the $x=12$% sample no longer exhibit the order-parameter-like suppression as a function of temperature, which is typical for samples with sharp AFM transitions. Instead, they remain approximately constant in the whole range of temperatures where the frequency can be properly defined ($T \lesssim 130$K), possibly with a weak local minimum at $T^\ast$. In Fig.\[Fig:muSR-Freq\](b), we also show the depolarization rate of the zero-field $\mu$SR signal, $\lambda^\text{ZF}(T)$. For the $x=0$, $x=2.5$%, and $x=5.0$% samples, the depolarization rate is only defined for the oscillatory response below $T_\text{N}$, as shown by solid lines. For the $x=12$% sample, we additionally plot the depolarization rate for the SG-like phase that exhibits a rapid exponential depolarization without oscillations in a $T$-dependent fraction of the muons stopping in the sample, $\lambda^\text{SG}(T)$. This parameter, which turns out to be nearly constant within the accuracy of our fits, can only be measured at elevated temperatures ($T\gtrsim75$K) and is plotted in Fig.\[Fig:muSR-Freq\](b) with empty symbols (dashed line). To demonstrate that the actual amount of magnetic disorder introduced in the system with Mn substitution is indeed proportional to $x$, in the inset to Fig.\[Fig:muSR-Freq\](b) we plot the $x$-dependence of the depolarization rate in the zero-temperature limit, $\lambda^\text{ZF}(T\rightarrow 0)$, resulting from the empirical fits of $\lambda^\text{ZF}(T)$. This quantity is a good measure of the degree of magnetic disorder in the ground state of the system.
As expected, it shows a nearly perfect linear increase with Mn concentration, which confirms that the nominal Mn content is statistically distributed within the crystals, and that the exceptional behavior of the $x=12$% sample is not a consequence of macroscale Mn inhomogeneities at this particular composition. A qualitatively similar enhancement of the depolarization rate with increasing Mn concentration has also been reported recently in the LaFe$_{1-x}$Mn$_x$AsO series of samples [@FrankovskyLuetkens13]. Transverse-field $\mu$SR (paramagnetic phase) --------------------------------------------- \[Sec:TF-MuSR\] To measure the temperature dependence of the PM volume fraction in our samples, we have applied a weak transverse field of 30G and measured the fraction of the muons that experienced slow precession in the external field, as shown in Fig.\[Fig:muSR-TF\](a). A constant part of the observed oscillation amplitude, which persists down to the base temperature (5K curve) and originates from muons stopping outside the sample, has been subtracted during the fitting process. The remaining ($T$-dependent) amplitude of the oscillations, normalized to the maximum muon asymmetry, is plotted in Fig.\[Fig:muSR-TF\](b) vs. temperature for all four sample compositions. In agreement with the corresponding NMR result (Fig.\[Fig:NMR\_wipeout\]), the $x=0$, $x=2.5$%, and $x=5.0$% samples exhibit sharp magnetic transitions in their full volume, whereas in the $x=12$% sample the volume fraction of the PM phase changes gradually from 0 at low temperatures to $\sim$80% at 300K. The remaining 20% of the volume fraction at 300K can be naturally ascribed to the magnetic clusters that are responsible for the SG-like exponential depolarization of the muon asymmetry in zero field, which is observed in a comparable volume fraction of the sample. The width of the smeared transition is perfectly consistent with the results of NMR measurements discussed earlier.
However, both in NMR and in $\mu$SR, the transition happens over a narrower range of temperatures than in elastic neutron scattering or in resistivity (Fig.\[Fig:Transition\]). As a consequence, the midpoint of both NMR and $\mu$SR transitions is shifted to $\sim$150K, which is significantly higher than $T^\ast$. Phase diagram for $x=12$% ------------------------- In Fig.\[Fig:muSR-TF\](c), we present a phase diagram that summarizes the results of both zero-field and transverse-field $\mu$SR measurements and elastic neutron scattering for the $x=12$% composition. It shows the temperature evolution of the volume fractions corresponding to the bulk ordered AFM phase (oscillating $\mu$SR signal in zero field), the CG phase (rapid exponential muon depolarization in zero field accompanied by a magnetic Bragg peak in neutron diffraction evidencing long-range magnetic correlations), the SG phase (muon depolarization in zero field without any long-range magnetic order), and the PM phase ($\mu$SR oscillations in the transverse field). This allows us to define several characteristic temperature scales for this particular sample composition. Below $\widetilde{T}\approx50$K, the sample exhibits bulk AFM order in its whole volume. This is consistent with the monotonic trend of Néel temperature suppression with Mn substitution, already established at lower concentrations. At higher temperatures, the system enters the Griffiths regime of multiple coexisting phases. Above $\sim$150K, oscillations in the zero-field $\mu$SR signal can no longer be observed, which indicates the disappearance of the bulk AFM ordered phase. The CG phase, characterized by long-range AFM correlations between static magnetic clusters that are too small or too inhomogeneous to produce muon oscillations, persists to somewhat higher temperatures.
We define the characteristic offset temperature of the CG phase, $T_\text{CG}\approx210$K, by the 95% suppression of the magnetic Bragg intensity with respect to its low-$T$ value. A weak exponential depolarization of the muon asymmetry in $\sim$20% of the sample volume persists up to room temperature, but with no traces of long-range AFM correlations in the elastic neutron scattering, which is suggestive of fully magnetically disordered static clusters similar to a dilute SG [@Yamazaki82; @UemuraYamazaki85; @Fischer85]. ![INS data on the Ba(Fe$_{0.88}$Mn$_{0.12}$)$_2$As$_2$ sample at the magnetic ordering wave vector, $\mathbf{Q}_\text{AFM}$. (a) Several representative unprocessed momentum scans, measured at $T=1.5$K along the $(\smash{{\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}}\kern.5pt K \smash{{\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}})_{{\rm Fe}_1}$ reciprocal-space direction with $k_\text{f}=2.662$Å$\smash{^{-1}}$, centered at $\mathbf{Q}_\text{AFM}$. (b) Color map of the low-energy INS intensity in the spin-gap region, compiled out of multiple low-temperature momentum scans such as those shown in panel (a). (c) The background-subtracted scattering intensity, $S(\mathbf{Q},\omega)$, at $\mathbf{Q}_\text{AFM}=({\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}0\, L)_{{\rm Fe}_1}$, with $L={\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}$ (grey points) or $L={\frac{3}{\protect\raisebox{0.8pt}{\scriptsize 2}}}$ (all other points). The filled symbols were obtained from fits of the full momentum scans, such as those shown in panel (a), whereas empty symbols result from 3-point scans. The data taken with $k_\text{f}=2.662$Å$^{-1}$ and 3.837Å$^{-1}$ are shown with circles and squares, respectively. Datasets measured with different experimental conditions have been rescaled to match each other in the overlapping energy window. The solid curve is a guide to the eyes.
The corresponding energy dependence for the BaFe$_2$As$_2$ parent compound from Ref. is shown for comparison as a dashed curve to emphasize the suppression of the spin gap by Mn substitution. (d) Evolution of the low-energy part of $S(\mathbf{Q},\omega)$ with temperature, demonstrating a partial spin gap at intermediate temperatures with a magnitude that decreases upon heating. (e) Temperature dependence of $S(\mathbf{Q},\omega)$ at various energies within the spin-gap region. (f) The same for the dynamic spin susceptibility, $\chi''(\mathbf{Q},\omega)$, obtained from the data in panel (e) after Bose-factor normalization. The lines are guides to the eyes.[]{data-label="Fig:INS"}](Fig09.pdf){width="\columnwidth"} As one can see from Fig.\[Fig:muSR-TF\](c), the characteristic temperature $T^\ast$, defined in Ref. and in Fig.\[Fig:Transition\] by the inflection point in the $T$-dependence of the resistivity, corresponds to the midpoint of the transition associated with the suppression of the bulk ordered AFM phase. This observation is not surprising, as one would expect the transport properties to be much more strongly affected by the long-range static AFM order, leading to a Fermi surface reconstruction, than by dilute random inclusions of static magnetic clusters into the otherwise PM material. For a two-dimensional square lattice, the site percolation threshold amounts to 59.3% [@VanDerMarck97]. Therefore, at 50% filling of the sample volume by AFM ordered regions, the system is close to a percolative transition. In other words, at $T<T^\ast$ the AFM phase volume is mostly connected, whereas at $T>T^\ast$ it consists of disconnected clusters embedded in the magnetically disordered or PM matrix. Such a percolative crossover is the most likely reason for the broad anomaly in the resistivity near $T^\ast$.
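The percolation picture invoked here is easy to check with a small Monte Carlo sketch (illustrative only: the sample is three-dimensional, whereas the quoted 59.3% threshold refers to site percolation on the two-dimensional square lattice). A spanning cluster appears abruptly as the filling crosses the threshold, so a 50% AFM filling sits on the non-percolating side.

```python
import random

def spans(L, p, rng):
    """True if randomly occupied sites (probability p) on an L x L square
    lattice form a top-to-bottom spanning cluster (flood fill from top row)."""
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    stack = [(0, j) for j in range(L) if occ[0][j]]
    seen = set(stack)
    while stack:
        i, j = stack.pop()
        if i == L - 1:
            return True
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < L and 0 <= nj < L and occ[ni][nj] and (ni, nj) not in seen:
                seen.add((ni, nj))
                stack.append((ni, nj))
    return False

def spanning_prob(L, p, trials=200, seed=0):
    rng = random.Random(seed)
    return sum(spans(L, p, rng) for _ in range(trials)) / trials

# Well below and well above the 2D site-percolation threshold p_c ~ 0.593:
print(spanning_prob(50, 0.50))  # rarely spans
print(spanning_prob(50, 0.70))  # almost always spans
```

On a finite lattice the transition is rounded over a window set by the correlation length, which is the same finite-size reasoning that makes the crossover in the real sample gradual rather than sharp.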
Inelastic neutron scattering ============================ Experimental details -------------------- We have performed a series of INS measurements on the $x=12$% BFMA compound using the thermal-neutron triple-axis spectrometers IN8 (ILL, Grenoble, France), PUMA (FRM-II, Garching, Germany), and 1T (LLB, Saclay, France). All measurements were performed with a fixed final neutron wave vector, $k_\text{f}=2.662$Å$^{-1}$ or 3.837Å$^{-1}$. A pyrolytic graphite filter was installed between the sample and the analyzer to eliminate the contamination from higher-order neutrons. The sample was mounted in one of the $(H\,K\,0)_{\text{Fe}_1}$, $(H\,0\,L)_{\text{Fe}_1}$, or $(H\,K\,H)_{\text{Fe}_1}$ scattering planes, depending on the particular goal of the experiment. Low-temperature spin gap ------------------------ ![INS data acquired on the $x=12$% sample at the magnetic zone boundary ($L=2$). (a) Three unprocessed momentum scans, measured at $T=1.5$K along the rocking trajectory in the $(H\,0\,L)_{{\rm Fe}_1}$ plane with $k_\text{f}=2.662$Å$\smash{^{-1}}$. The inset shows the scan trajectory in the $(H,L)$ plane. (b) The background-subtracted scattering intensity, $S(\mathbf{Q},\omega)$, taken at $\mathbf{Q}=({\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}0\, 2)_{{\rm Fe}_1}$. The filled symbols were obtained from fits of the full momentum scans, such as those shown in panel (a), whereas empty symbols result from 3-point scans. The data taken with $k_\text{f}=2.662$Å$^{-1}$ and 3.837Å$^{-1}$ are shown with circles and squares, respectively. The solid curve is a guide to the eyes.
The corresponding energy dependence for the BaFe$_2$As$_2$ parent compound [@ParkFriemel12] is shown for comparison with the dashed curve.[]{data-label="Fig:INS_L2"}](Fig10.pdf){width="\columnwidth"} We start our discussion of the INS data by presenting the low-energy spectrum of spin excitations in Ba(Fe$_{0.88}$Mn$_{0.12}$)$_2$As$_2$ at the magnetic ordering wave vector, $\mathbf{Q}_\text{AFM}$. In Fig.\[Fig:INS\](a), we show several representative low-temperature momentum scans along the Brillouin zone boundary, centered at $({\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}0{\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}})_{\text{Fe}_1}$. A number of such scans are also summarized in Fig.\[Fig:INS\](b) as a color map. We observe a notable depletion of the scattering intensity at low energies, reminiscent of the spin anisotropy gap in the parent compound [@ParkFriemel12]. However, in contrast to BaFe$_2$As$_2$, where the intensity completely vanishes below $\sim$10meV in the AFM state, here the onset energy of magnetic fluctuations is strongly reduced, so that weak remnant spectral weight persists at least down to 2–3meV. This is best seen in Fig.\[Fig:INS\](c), where we plot the scattering function, $S(\mathbf{Q},\omega)$, obtained by measuring the background-subtracted amplitude of the peak at various energies and by combining data from $L={\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}$ and $L={\frac{3}{\protect\raisebox{0.8pt}{\scriptsize 2}}}$ acquired with different $k_\text{f}$. Indeed, a comparison of our data with an equivalent result for BaFe$_2$As$_2$ from Ref. (dashed curve) shows a reduction of the spin-gap energy from $\sim$10meV in BaFe$_2$As$_2$ to $\sim$3meV in Ba(Fe$_{0.88}$Mn$_{0.12}$)$_2$As$_2$, with a weak intensity tail extending to even lower energies. Note that despite this dramatic spin-gap reduction, the characteristic ordering temperature ($T^\ast$) in BFMA is only moderately reduced with respect to the $T_\text{N}$ of the parent compound.
With increasing temperature, the spin gap in the $x=12$% BFMA sample is suppressed, as shown in Fig.\[Fig:INS\](d). Instead of a gradual order-parameter-like reduction of the gap energy, which one would expect for an SDW transition, here the gap energy remains nearly constant with temperature, whereas the magnetic intensity inside the gap increases continuously, so that the spin gap is completely filled in upon reaching $T\approx140$K, which coincides with the ordering temperature of the parent compound. This unusual behavior can be naturally explained in the framework of the phase-separation scenario, which we have already established in sections \[Sec:Characterization\] and \[Sec:MuSR\]. The spin-excitation spectrum should be considered as a sum of two components: gapless excitations originating from the PM phase and gapped spin-wave-like excitations from the magnetically ordered regions. As the PM volume of the sample increases upon warming at the expense of the AFM phase, the anisotropy gap appears to be filled in. At the same time, the characteristic energy scale of the residual partial gap in the low-energy magnetic spectrum is nearly unaffected, because it is mainly determined by the rare regions with relatively high local values of $T_\text{N}$. Further insight is obtained by following the temperature dependence of the INS intensity at several fixed energies, shown in Fig.\[Fig:INS\](e). To account for the thermal population factor, in Fig.\[Fig:INS\](f) we have also plotted the imaginary part of the dynamical spin susceptibility, obtained from the same data after Bose-factor correction: $\chi''(\mathbf{Q},\omega)=(1-\mathrm{e}^{-\hslash\omega/k_\textup{B}T})\,S(\mathbf{Q},\omega)$. Remarkably, the anomalies related to the magnetic transition appear to be much sharper for the inelastic signal than for the magnetic Bragg peak in Fig.\[Fig:Transition\](c).
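The Bose-factor correction quoted above is a one-line transformation of the measured intensity; a sketch in arbitrary intensity units, with the Boltzmann constant expressed in meV per kelvin:

```python
import numpy as np

K_B_MEV = 0.08617  # Boltzmann constant, meV/K

def chi_imag(S, energy_meV, T_kelvin):
    """chi''(Q, w) = (1 - exp(-hbar*w / kB*T)) * S(Q, w), i.e. the
    detailed-balance (Bose) factor removing thermal population from S."""
    return (1.0 - np.exp(-energy_meV / (K_B_MEV * T_kelvin))) * S

# For a 3 meV signal of unit intensity, the factor is close to 1 at low T
# and strongly suppresses the thermally enhanced intensity at high T:
print(chi_imag(1.0, 3.0, 10.0))   # ~ 0.97
print(chi_imag(1.0, 3.0, 300.0))  # ~ 0.11
```

This is why a flat or rising raw intensity in Fig.\[Fig:INS\](e) can still correspond to a nonmonotonic susceptibility after the correction.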
This could be due to the fact that for a given energy transfer, $E$, only those magnetic regions whose spin gap is larger than this energy (i.e. those that are characterized by a sufficiently high local value of $T_\text{N}$) would yield an anomaly in the temperature dependence of the INS intensity. Therefore, this measurement effectively selects only a part of the magnetically ordered regions with $T_\text{N} \gtrsim E/k_\text{B}$, whereas the Bragg peak intensity in Fig.\[Fig:Transition\](c) originates from the whole magnetic volume of the sample independently of the local ordering temperature. We have also studied the dispersion of the spin gap along the out-of-plane direction by measuring the spin-excitation spectrum at integer $L$, i.e. at the magnetic zone boundary. In Fig.\[Fig:INS\_L2\](a), we show representative momentum scans through $({\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}\,0\,2)$ measured at several energies along the rocking trajectory in the $(H\,0\,L)_{\text{Fe}_1}$ plane (see inset), whereas in Fig.\[Fig:INS\_L2\](b) we plot the corresponding background-subtracted scattering function at the same wave vector, obtained in the same way as the similar spectrum in Fig.\[Fig:INS\](c). Again, the reference spectrum for the parent compound from Ref. is shown with the dashed curve for comparison. Here, the 20meV zone-boundary gap observed in BaFe$_2$As$_2$ is also strongly suppressed and smeared out upon Mn substitution, so that the gradual onset of magnetic fluctuations is found near $\sim$5meV, whereas the high-energy offset of the spin gap stays unchanged at $\sim$25meV. In the framework of a localized Heisenberg-type description of spin-wave excitations in iron pnictides [@HarrigerLuo11], the spin-gap magnitude at integer $L$ is directly related to the value of the effective out-of-plane exchange constant, $J_\perp$. 
The observed smearing of this gap in BFMA is therefore indicative of a broad distribution of $J_\perp$ within the sample, whose maximal value coincides with that in the parent compound, whereas at the opposite extreme of this distribution a small fraction of the sample exhibits quasi-two-dimensional behavior with the much smaller zone-boundary gap of only 5meV. In-plane ellipticity and the absence of charge doping ----------------------------------------------------- ![Comparison of the elliptical cross-sections of the low-temperature INS intensity in the $Q_xQ_y$-plane projection for different compounds, measured at a constant energy, $\hslash\omega$, which is indicated above each panel. (a) Ba(Fe$_{0.88}$Mn$_{0.12}$)$_2$As$_2$ sample from the present work. (b) Parent BaFe$_2$As$_2$ compound from Ref. (c) Electron-doped Ba(Fe$_{0.85}$Co$_{0.15}$)$_2$As$_2$ sample from Ref. (d) Hole-doped Ba$_{0.67}$K$_{0.33}$Fe$_2$As$_2$ sample from Ref. The white dotted lines in all panels mark Brillouin-zone boundaries corresponding to the conventional body-centered tetragonal unit cell. Note that both the orientation and the momentum scales in all panels are identical.[]{data-label="Fig:Ellipticity"}](Fig11.pdf){width="0.85\columnwidth"} In iron arsenides, the ordering wave vector, $\mathbf{Q}_\text{AFM}=({\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}}0 {\frac{1}{\protect\raisebox{0.8pt}{\scriptsize 2}}})_{\text{Fe}_1}$, lies on the axis of twofold rotational symmetry in the unfolded Brillouin zone, which determines its elliptical in-plane cross section. We demonstrated previously [@ParkInosov10] that the orientation of this ellipse and its aspect ratio can serve as an indirect measure of the doping level and can be well described by band-structure theory.
Indeed, in electron-doped Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ (BFCA) the ellipse is strongly elongated along the transverse direction [@ParkInosov10; @WangZhang13], whereas in hole-doped Ba$_{0.67}$K$_{0.33}$Fe$_2$As$_2$ (BKFA) its longer axis flips to the longitudinal direction [@WangZhang13; @ZhangWang11]. In comparison to the doped compounds, the cross section of magnetic excitations in BaFe$_2$As$_2$ is nearly isotropic, with only a weak transverse elongation [@HarrigerLuo11]. In Fig.\[Fig:Ellipticity\](a) we present a similar measurement of the in-plane cross section in Ba(Fe$_{0.88}$Mn$_{0.12}$)$_2$As$_2$, measured at $T=4$K at an energy transfer of 10meV. The color map represents an interpolation of several $K$-scans, measured with a regular step along the $H$ direction in the $(H\,K\,H)_{\text{Fe}_1}$ scattering plane. For comparison, we reproduce the corresponding maps for the pure BaFe$_2$As$_2$, electron-doped BFCA and hole-doped BKFA in Figs.\[Fig:Ellipticity\](b), (c) and (d), respectively. One can see that the $x=12$% BFMA sample shows a nearly isotropic in-plane cross section of the INS intensity, which is characterized by the same aspect ratio and orientation as in the parent compound and is clearly different from the much more anisotropic response in the two superconducting samples. This indicates that the nesting properties and consequently the size of the Fermi surface sheets are not affected by the Mn substitution, in accordance with the absence of charge doping [@TexierLaplace12]. Spin anisotropy of magnetic excitations --------------------------------------- Recent polarized-neutron scattering measurements [@QureshiSteffens12] revealed two components in the spin-wave spectrum of BaFe$_2$As$_2$, characterized by the out-of-plane and in-plane polarizations, with distinct zone-center spin gaps of 10meV and 16meV, respectively. This observation implies that the gradual onset of magnetic fluctuations, as measured by conventional unpolarized INS \[e.g.
Fig.\[Fig:INS\](c) or Ref.\], in fact represents a sum of two steplike functions with different onset energies, similar to those observed in copper oxides [@TranquadaShirane89; @ShamotoSato93; @BourgesSidis94; @PetitgrandMaleyev99]. Usually, the onset of the in-plane scattering in iron pnictides cannot be resolved as a separate step in the unpolarized data. As a result, one expects that the low-energy part of the spectrum between the spin-gap energy and the midpoint of the spin-gap edge has an out-of-plane polarization, in contrast to the higher-energy part of the spectrum that should be more isotropic. This gives us an opportunity to investigate the spin anisotropy of magnetic excitations in BFMA and to verify whether they adhere to the same kind of behavior as in BaFe$_2$As$_2$ even without employing polarized neutrons. ![(a,b) $L$-dependence of the background-subtracted intensity in Ba(Fe$_{0.88}$Mn$_{0.12}$)$_2$As$_2$ in the low-temperature AFM state ($T=4$K) and in the PM state ($T=130$K), measured at an energy transfer of 2meV and 8meV, respectively. (c) Comparison of the low-temperature datasets ($T=4$K) at several energies. (d) Schematic representation of the scattering function in the normal and ordered states, the latter consisting of two broadened steplike functions corresponding to the magnetic scattering intensity with out-of-plane ($S_{zz}$) and in-plane ($S_{x{\kern-0.7pt}y}$) polarizations. (e) The $S_{x{\kern-0.7pt}y}/S_{zz}$ ratio extracted from the fits in panel (c). The expected energy dependence of this ratio is .[]{data-label="Fig:INS_Ldep"}](Fig12.pdf){width="1.03\columnwidth"} For this purpose, we have investigated the $L$-dependence of the scattering amplitude at the ordering wave vector in the $x=12$% sample, as shown in Fig.\[Fig:INS\_Ldep\]. At the lowest energy of $\hslash\omega=2$meV, which lies well below the onset energy of the spin gap, no measurable signal was found in the magnetically ordered state at $T=4$K.
At an elevated temperature of 130K, however, a periodic modulation of intensity with several maxima at half-integer $L$ values could be observed \[Fig.\[Fig:INS\_Ldep\](a)\]. This behavior is typical for the PM state of the pure and lightly doped iron pnictides [@DialloPratt10; @ParkInosov10], indicating the three-dimensional nature of the isotropic paramagnon excitations above $T_\text{N}$ that ultimately gives rise to the $Q_z$-component of the magnetic propagation vector as the system enters the AFM state. Above the spin-gap onset, a similar periodic modulation was observed both above and below $T_\text{N}$ \[Fig.\[Fig:INS\_Ldep\](b,c)\]. At intermediate energies of $\hslash\omega=6$ and 8meV, the reduction of the scattering amplitude with increasing $L$ in the AFM state appears to be more rapid than expected for isotropic spin fluctuations following the Fe$^{2+}$ spin-only magnetic form factor [@ParkInosov10]. This behavior results from the out-of-plane polarization of the fluctuating moment, as the angle between the momentum transfer, $\mathbf{Q}$, and the $\mathbf{c}$-axis falls off with increasing $L$. By fitting the corresponding $L$ dependencies for various energy transfers, as shown in Fig.\[Fig:INS\_Ldep\](c), we could extract the corresponding ratios of the magnetic scattering intensities with in-plane and out-of-plane polarizations, $S_{x{\kern-0.7pt}y}/S_{zz}$, which are presented in Fig.\[Fig:INS\_Ldep\](e). These results are consistent with the presence of two spin gaps for different polarizations, as in the parent compound, though with reduced energy scales \[see Fig.\[Fig:INS\_Ldep\](d)\]. We can therefore confirm that the low-energy onset of the magnetic signal, seen in Fig.\[Fig:INS\](c), originates predominantly from the out-of-plane polarized moments, whereas the spin-gap onset corresponding to the in-plane polarization is located above 8meV, according to Fig.\[Fig:INS\_Ldep\](e). 
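The extraction just described rests on the standard rule that unpolarized INS probes only the spin components perpendicular to the momentum transfer $\mathbf{Q}$. A minimal numerical sketch of such a fit (not the authors' analysis code; the lattice parameters and amplitudes below are assumed, illustrative values) is:

```python
import numpy as np

# Unpolarized INS weighs each polarization channel by its component
# perpendicular to Q.  For a tetragonal cell at Q = (H K L) this gives
#   w_zz(L) = 1 - (Qc/|Q|)^2   (out-of-plane channel, S_zz)
#   w_xy(L) = 1 + (Qc/|Q|)^2   (sum of the two in-plane channels, S_xy)
# so that w_zz + w_xy = 2 at every L.  Lattice parameters are assumed
# illustrative values for a 122 arsenide, not numbers from the paper.
a, c = 3.96, 13.0  # Angstrom (assumed)

def weights(H, K, L):
    q_ab = 2 * np.pi * np.hypot(H / a, K / a)  # in-plane part of Q
    q_c = 2 * np.pi * L / c                    # out-of-plane part of Q
    qc_hat2 = q_c**2 / (q_ab**2 + q_c**2)
    return 1.0 - qc_hat2, 1.0 + qc_hat2

# Simulate a noiseless L-scan with a known ratio, then recover S_xy/S_zz
# by a linear least-squares fit, mimicking the fits in panel (c).
L_vals = np.linspace(0.5, 5.5, 11)
w_zz, w_xy = weights(0.5, 0.0, L_vals)
S_zz_true, S_xy_true = 1.0, 0.4                # invented amplitudes
intensity = w_zz * S_zz_true + w_xy * S_xy_true
design = np.column_stack([w_zz, w_xy])
(S_zz_fit, S_xy_fit), *_ = np.linalg.lstsq(design, intensity, rcond=None)
ratio = S_xy_fit / S_zz_fit
```

In a real analysis the Fe$^{2+}$ form factor and a background would enter the model as well, but the geometric weights alone already show why a dominant $S_{zz}$ makes the intensity fall off faster with $L$ than an isotropic response would.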
![Schematic phase diagram of BFMA after Refs., and the present work. Composition of the samples used in this study is indicated by arrows. The SDW transition temperatures ($T_\text{N}$ or $\widetilde{T}_\text{N}$), below which the whole volume of the samples remains fully magnetic, as determined by transverse-field $\mu$SR spectroscopy in section \[Sec:TF-MuSR\], are marked by circles. The diamond symbol marks the onset of the elastic neutron-scattering intensity at the AFM wave vector, $T_\text{CG}$, which we define at 5% of the magnetic Bragg peak’s maximal intensity. It is associated with the formation of long-range magnetic correlations in the CG phase. The star symbol stands for $T^{\ast}$, defined by the position of the inflection point in the temperature dependence of the resistivity (Fig.\[Fig:Transition\]) or by the 50% reduction in the oscillating component of the muon asymmetry in zero field \[Fig.\[Fig:muSR-TF\](c)\].[]{data-label="Fig:PhaseDiagram"}](Fig13.pdf){width="0.9\columnwidth"} Summary and discussion ====================== The $x$-$T$ phase diagram of Ba(Fe$_{1-x}$Mn$_x$)$_2$As$_2$ ----------------------------------------------------------- We summarize our results in a schematic phase diagram presented in Fig.\[Fig:PhaseDiagram\], where we plot various temperature scales characterizing magnetic order in BFMA vs. Mn concentration. Above the critical concentration of $x_\text{c}\approx10$%, we distinguish three distinct crossover temperatures. Below $\widetilde{T}_\text{N}$ (circle), the sample orders antiferromagnetically in its whole volume, as determined by transverse-field $\mu$SR spectroscopy in section \[Sec:TF-MuSR\]. As the temperature is increased, the magnetically ordered volume fraction decreases, whereas the AFM order remains long-range, as evidenced by the persistence of the magnetic Bragg peak intensity and by its temperature-independent resolution-limited width. 
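Since $T^{\ast}$ is defined operationally as the inflection point of the resistivity, it can be located numerically from a measured curve. The sketch below uses a synthetic broadened step with an assumed midpoint, not the measured data:

```python
import numpy as np

# Synthetic resistivity with a smeared transition: a tanh step with an
# assumed midpoint of 90 K and width 15 K on top of a weak metallic
# background (all values invented for illustration).
T = np.linspace(10.0, 200.0, 1901)
T_star_true, width = 90.0, 15.0
rho = 1.0 + 0.5 * np.tanh((T - T_star_true) / width) + 0.002 * T

# T* = position of the inflection point, i.e. the maximum of d(rho)/dT;
# for a smeared transition this is the midpoint of the step.
drho_dT = np.gradient(rho, T)
T_star = T[np.argmax(drho_dT)]
```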
The inflection point observed in the resistivity at $T^\ast$ (star-shaped symbol in Fig.\[Fig:PhaseDiagram\]) corresponds to the 50% reduction in the magnetically ordered volume fraction (oscillating part of the muon asymmetry), i.e. to the midpoint of the smeared AFM transition. We also associate it with the percolation threshold of the magnetically ordered clusters, reminiscent of that found in Mn-substituted Sr$_3$Ru$_2$O$_7$ upon varying Mn content [@HossainBohnenbuck09; @MesaYe12; @HossainBohnenbuck12]. At $T>T^\ast$, the volume fraction of the AFM ordered clusters corresponding to the oscillatory component of the zero-field $\mu$SR signal rapidly vanishes. However, static magnetic moments still persist in most of the sample volume in the form of two distinct phases: (i) the CG phase, characterized by long-range AFM correlations responsible for the remnant magnetic Bragg peak intensity persisting up to 240K, and (ii) the SG phase that leads to a rapid exponential depolarization of the muon asymmetry without long-range AFM correlations. We define the CG onset temperature, $T_{\rm CG}$ (diamond symbol in Fig.\[Fig:PhaseDiagram\]), at a point where the magnetic Bragg peak reaches 5% of its maximal intensity. Above $T_{\rm CG}$, the PM volume fraction reaches its saturation value and becomes nearly temperature-independent, marking the upper boundary of the smeared AFM transition. The region of phase coexistence, where magnetically ordered (CG-type) or spin-frozen (SG-type) clusters coexist with paramagnetic regions on the nanoscale within the sample, is in line with the Griffiths-phase concept [@Vojta06; @Vojta10; @NozadzeVojta11; @NozadzeVojta12]. It is natural to associate the observed magnetic clusters with the AFM rare regions, which are pinned at the local statistical fluctuations of the Mn-ion distribution. 
As a result, the AFM quantum critical point that is typical for most families of iron-pnictide compounds is destroyed in the case of Mn substitution by the phase-transition smearing, giving way to a Griffiths-type behavior with the nanoscopic phase coexistence. Local moments in a metal with Fermi-surface nesting --------------------------------------------------- In the present study, we have uncovered the microscopic mechanisms that underlie the previously reported [@KimKreyssig10] smearing of the AFM phase transition in BFMA at high Mn concentrations. Most remarkably, we have demonstrated that long-range AFM correlations between the static magnetic clusters persist up to temperatures that are much higher than the $T_{\rm N}$ of the parent compound and exist well above the percolation threshold. Indeed, although nearly 80% of the sample volume is paramagnetic at $T>T_{\rm CG}$, a clearly detectable magnetic Bragg peak persists in the $x=12$% sample even above this temperature, at least up to 240K. Moreover, the absence of oscillations in the zero-field $\mu$SR response of the CG phase implies a nanoscopic size of the magnetic clusters, such that the muons locally implanted inside such clusters do not see them as a bulk ordered phase. They possibly represent individual Mn moments or small random configurations of such moments (rare regions) surrounded by the spin-polarization clouds of the neighboring Fe electrons. These observations necessarily require the presence of some long-range magnetic interaction, acting between the small separated clusters through the PM volume in order to establish the long-range coherence of their magnetic moments. The most natural candidate for such an interaction is the RKKY exchange, which in the case of iron pnictides is known to be strongly affected by the nearly perfect nesting property of the Fermi surface [@AkbariEremin11; @AkbariThalmeier13]. 
The BFMA compound therefore represents a model system, in which localized magnetic moments are randomly embedded into a SDW metal, providing an interesting playground for theorists to study the spin-glass behavior of magnetic impurities in metals with Fermi surface nesting. So far, the influence of disorder on the magnetic properties of iron pnictides has mostly been investigated only for the case of nonmagnetic impurities. For instance, in a recent theoretical study [@WeberMila12] it has been shown using Monte Carlo simulations that the introduction of non-magnetic impurity sites into the Fe sublattice can lead to the formation of anticollinear magnetic order, i.e. qualitatively alter the magnetic ground state of the material. There is also a persistent interest in understanding the influence of disorder on the superconducting properties of doped iron pnictides [@FernandesVavilov12; @LiShen13]. Future theories extending these results to magnetic impurities, which have not been addressed in detail until now, should be informed by our present work. In particular, it would be desirable to explain theoretically the existence of a well-defined critical concentration of Mn ions, $x_\text{c}$, below which no smearing of the AFM transition is observed. Understanding the thermodynamic properties of a nesting-driven SDW metal with embedded local moments also represents a challenge that should be addressed in future studies.   Acknowledgments {#acknowledgments .unnumbered} =============== This work has been supported, in part, by the DFG within the priority program SPP1458, under Grants No.  and , by the MPI–UBC Center for Quantum Materials, and by the ANR Pnictides. The authors are grateful to D. Efremov, I. Eremin, C. Weber and A. Yaresko for stimulating discussions and encouragement.
--- abstract: 'We report far-infrared optical properties of YbRh$_{2}$Si$_{2}$ for photon energies down to 2 meV and temperatures 0.4 – 300 K. In the coherent heavy quasiparticle state, a [*linear*]{} dependence of the low-energy scattering rate on both temperature and photon energy was found. We relate this distinct dynamical behavior different from that of Fermi liquid materials to the non-Fermi liquid nature of YbRh$_{2}$Si$_{2}$ which is due to its close vicinity to an antiferromagnetic quantum critical point.' author: - 'S. Kimura' - 'J. Sichelschmidt' - 'J. Ferstl' - 'C. Krellner' - 'C. Geibel' - 'F. Steglich' title: ' Observation of an optical non-Fermi-liquid behavior in the heavy fermion state of YbRh$_{2}$Si$_{2}$ ' --- The investigation of $4f$-containing metals by far-infrared optical spectroscopy provides valuable insight into the nature of strong electronic correlations. This in particular holds true for heavy fermion (HF) compounds where at low temperatures a weak $4f$-conduction electron ($cf$-)hybridization generates mass-renormalized quasiparticles with a coherent ground state which is in many HF systems of the Landau Fermi liquid (LFL) type. [@stewart01] The quasiparticles influence thermodynamic quantities which are described in terms of a large effective mass $m^{*}$ exceeding the free electron mass $m_{0}$ by three orders of magnitude. Furthermore, in typical HF materials, below a single-ion Kondo temperature ($T_{\rm K}$), the coherent state is characterized by a dynamical screening of the $4f$ magnetic moments through the conduction electrons. Several highly correlated metals exhibit so-called non-Fermi liquid (NFL), [*i.e.*]{}, strong deviations from a renormalized LFL behavior when $T\rightarrow0$ K. 
[@stewart01] The system YbRh$_{2}$Si$_{2}$ studied in this paper is one of a few clean stoichiometric HF metals with pronounced NFL behavior at ambient pressure, which is related to both antiferromagnetic (AF) and ferromagnetic quantum critical spin fluctuations in close proximity to an AF quantum critical point (QCP). [@geg02; @cus03; @geg05] Those NFL effects manifest themselves as a divergence of the $4f$-derived increment to the specific heat, $\Delta C/T \propto -\ln T$, and as a power-law exponent close to 1 in the electrical resistivity $\rho(T)$ over a temperature range substantially larger than one decade, extending up to $T\simeq10$ K. [@tro00] Transport and thermodynamic properties are consistent with a single-ion Kondo temperature $T_{\rm K}=25$ K (associated with the crystalline-electric-field-derived doublet ground state [@sto05]). The electrodynamical response of HF systems is characterized by an optical conductivity $\sigma(\omega)$ which follows at room temperature the [*classical*]{} Drude model \[$\sigma(\omega)~=~N e^{2} \tau/{m^{*} (1 + \omega^{2} \tau^{2})}$; $N$: charge carrier density\] with frequency-independent $m^{*}$ and scattering rate $1/\tau$. [@DG02] At low temperatures, upon entering the coherent state, large deviations are observed which are caused by many-body effects. Then a narrow, renormalized peak at zero photon energy $\hbar\omega$ = 0 eV is formed and a so-called hybridization gap appears which is related to the transition between the bonding and antibonding states resulting from the $cf$-hybridization.
[@deg01; @mil87a; @mil87b] The coherent part of the underlying strong electron-electron correlations is treated in an [*extended*]{} Drude model by renormalized, frequency-dependent $m^{*}(\omega)/m_{0}$ and $1/\tau(\omega)$; [@web86; @awa93; @kim94; @deg99] $$\frac{m^{*}(\omega)}{m_{0}} = \frac{N e^2}{m_0 \omega} \cdot \mathrm{Im}\left(\frac{1}{\tilde{\sigma}(\omega)}\right), \quad \frac{1}{\tau(\omega)} = \frac{N e^2}{m_0} \cdot \mathrm{Re}\left(\frac{1}{\tilde{\sigma}(\omega)}\right).$$ Here, $\tilde{\sigma}(\omega)$ is the complex optical conductivity derived from the Kramers-Kronig analysis (KKA) of the reflectivity spectrum $R(\omega)$. The LFL theory predicts a dynamical scattering rate $1/\tau(\omega)~\propto~(2\pi k_{\rm B}T)^2+(\hbar\omega)^2$ which also accounts for the electrical resistivity, $\rho(T)$, growing quadratically with temperature [@deg99]. The $(\hbar\omega)^2$ behavior is indeed observed in $1/\tau(\omega)$ of many renormalized LFL metals, e.g. YbAl$_3$ [@oka04], CePd$_3$ [@web86], and CeAl$_3$ [@awa93]. At the same time, $m^*(\omega)$ increases with decreasing $T$ and $\omega$, indicating the formation of heavy quasiparticles at low temperatures. NFL behavior in optical properties is typically indicated by a linear frequency dependence of $1/\tau(\omega)$. [@deg99] Up to now, optical NFL effects have been explicitly investigated only for correlated materials whose NFL state is believed to be related to disorder (several U-based Kondo alloys) or to two-channel Kondo physics (UBe$_{13}$). [@deg99] Yet, to our knowledge, the optical properties of a heavy-fermion NFL state due to spin fluctuations in close proximity to a QCP, as is the case for YbRh$_{2}$Si$_{2}$, have not been investigated so far.
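The extended-Drude inversion above is a two-line computation once $\tilde{\sigma}(\omega)$ is known. The following sketch (SI units; sign convention chosen so that the inversion returns a positive mass; the numerical inputs are the Hall carrier density and low-temperature values quoted elsewhere in this paper, used here purely as illustrative test values) checks the self-consistency of the two relations:

```python
import numpy as np

# Extended Drude analysis: m*/m0 and 1/tau from the complex conductivity,
# following the displayed equation.  SI units; convention such that
# sigma = (N e^2 / m0) / (1/tau + i*omega*m*/m0), which makes both
# extracted quantities positive.
e, m0 = 1.602176634e-19, 9.1093837015e-31
N = 2.7e22 * 1e6  # Hall carrier density from the text, in m^-3

def extended_drude(omega, sigma):
    """Return (m*/m0, 1/tau) at angular frequency omega (rad/s)."""
    inv = 1.0 / sigma
    inv_tau = (N * e**2 / m0) * inv.real
    mass_ratio = (N * e**2 / (m0 * omega)) * inv.imag
    return mass_ratio, inv_tau

# Self-consistency check: build sigma from a known renormalized pair
# (of the order of the 5.5-K values quoted in the text) and invert back.
omega = 1.0e13                                  # rad/s
m_true, inv_tau_true = 600.0, 1.6e11
sigma = (N * e**2 / m0) / (inv_tau_true + 1j * omega * m_true)
m_rec, inv_tau_rec = extended_drude(omega, sigma)
```

Applied frequency by frequency to the measured $\tilde{\sigma}(\omega)$, this inversion yields the $m^{*}(\omega)/m_{0}$ and $1/\tau(\omega)$ curves discussed below.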
As shown by our preliminary optical experiments on YbRh$_{2}$Si$_{2}$, the $T$-linear NFL behavior of the zero-frequency resistivity, $\rho_{\rm DC}(T)$, is also reflected in $\sigma(\omega,T)$ for $T<20$ K, $\hbar\omega<20$ meV and $\omega\tau\gg1$, assuming a frequency-independent $\tau$ consistent with a [*classical*]{} Drude approximation of the data. [@kim04] This behavior was interpreted as the temperature dependence of a renormalized scattering rate of a Drude peak whose tail at $T=2.7$ K was observable just above the lowest measured energy of 10 meV. Moreover, a peak at around 0.2 eV, visible already at $T=300$ K and gradually developing with decreasing temperature, appears beyond a pseudogap-like structure similar to that reported for several other Kondo-lattice systems. [@deg01; @oka04; @web86; @men05] Here we report the extension of our optical investigations down to energies of 2 meV and temperatures down to 0.4 K. This allowed us to obtain hitherto inaccessible information on the low-energy HF optical response of YbRh$_{2}$Si$_{2}$ and provided a detailed characterization of the electrodynamic NFL properties. In particular, the low-energy heavy quasiparticle excitations could be analyzed within the [*extended*]{} Drude model which yields $m^*(\omega,T)$ and $1/\tau(\omega,T)$. Near-normal-incidence $R(\omega)$ spectra were acquired in a very wide photon-energy region of 2 meV – 30 eV to ensure an accurate KKA. We investigated the tetragonal $ab$-plane of two single-crystalline samples with as-grown sample surfaces and sizes of $2.2 \times 1.5 \times 0.1$ mm$^3$ and $3.5 \times 4.2 \times 0.5$ mm$^3$, respectively. The preparation as well as the magnetic and transport properties have been described elsewhere. [@tro00; @geg02; @cus03] The high quality of the single crystals is evidenced by a residual resistivity ratio of $\rho_{\rm 300K}/\rho_0\simeq 65$ $(\rho_0\simeq1\mu\Omega {\rm cm})$ and a very sharp anomaly in the specific heat at $T = T_{\rm N}$.
[@cus03] Rapid-scan Fourier spectrometers of Martin-Puplett and Michelson type were used at photon energies of 2–30 meV and 0.01–1.5 eV, respectively, at sample temperatures between 0.4 and 300 K using a $^4$He ($T\rightarrow 5.5$ K) and a $^3$He ($T\rightarrow 0.4$ K) cryostat. To obtain $R(\omega)$, a reference spectrum was measured using the sample surface coated [*in-situ*]{} with gold. At $T=300$ K, $R(\omega)$ was measured for energies 1.2–30 eV using synchrotron radiation. [@fuk01] In order to obtain $\sigma(\omega)$ via a KKA of $R(\omega)$, the spectra were extrapolated below 2 meV with $R(\omega)=1-(2\omega/\pi \sigma_{DC})^{1/2}$ and above 30 eV with a free-electron approximation $R(\omega) \propto \omega^{-4}$. [@DG02] ![ (Color online) Temperature dependence of the reflectivity spectrum $R(\omega)$ in the photon energy range of 2 – 500 meV. Inset: $R(\omega)$ at 5.5 and 300 K in the complete accessible range of photon energies up to 30 eV. []{data-label="fig1"}](fig1){width="35.00000%"} The temperature dependence of the $R(\omega)$ spectra of YbRh$_{2}$Si$_{2}$ is shown in Fig. \[fig1\]. The inset shows an extended energy region where above 500 meV $R(\omega)$ is dominated by interband transitions. In this study, we focus only on the intraband transition region below 500 meV where the spectra display a strong temperature dependence. With decreasing temperature, $R(\omega)$ gets strongly suppressed, creating a dip structure at around 100 meV. Simultaneously, below 12 meV, $R(\omega)$ approaches unity with decreasing temperature. These pronounced temperature dependences are typical for HF compounds. [@deg99] The clearest correspondence is found when comparing the optical properties of YbRh$_{2}$Si$_{2}$ with those of the intermediate-valent compound YbAl$_{3}$.
[@oka04] Their low-temperature, low-energy shapes of $R(\omega)$ are very similar, albeit with a weaker temperature dependence for YbAl$_{3}$, reflecting its much stronger $cf$-hybridization which underlies its intermediate-valence behavior. However, very similar to $R(\omega)$ of YbRh$_{2}$Si$_{2}$ at $T=300$ K, the $R(\omega)$ of the non-magnetic reference compound LaAl$_{3}$ does not show any dip-structure. Therefore, as already identified for YbAl$_{3}$, the pronounced low-temperature dip in $R(\omega)$ of YbRh$_{2}$Si$_{2}$ can be related to Yb-$4f$ electronic states near the Fermi energy. Upon decreasing the temperature, the character of the $4f$ states changes from localized to itinerant owing to the $cf$-hybridization, and optical transitions between the $cf$-hybridized states are expected. [@deg01] This is consistent with the observed $R(\omega)$ dip structure and its temperature evolution in YbRh$_{2}$Si$_{2}$. ![ (Color online) Temperature dependence of the optical conductivity $\sigma(\omega)$ (solid lines) with corresponding direct current conductivity ($\sigma_{DC}$, symbols). Dashed lines: [*Classical*]{} Drude model with implicit Drude masses $m^*$ as indicated. Corresponding $\sigma_{DC}$ and carrier densities (derived from the Hall coefficient) were used. []{data-label="fig2"}](fig2){width="35.00000%"} The KKA of $R(\omega)$ yields optical quantities as shown in Fig. \[fig2\]. At $T=300$ K, $\sigma(\omega)$ shows normal metallic behavior, [*i.e.*]{}, a monotonic decrease with increasing photon energy, and a zero-energy extrapolation consistent with $\sigma_{DC}$ (symbols at left axis of Fig. \[fig2\]). However, as shown by the dashed line in Fig. \[fig2\], the experimental $\sigma(\omega)$ is poorly represented by a [*classical*]{} Drude fit \[with parameters $m^{*} = 15 m_{0} $, $1/\tau = 4.0 \cdot 10^{13}$ sec$^{-1}$, $N = 2.7 \cdot 10^{22}$ cm$^{-3}$ (Hall effect result [@pas04])\].
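For orientation, the classical Drude curve with exactly these quoted fit parameters can be evaluated directly (a sketch in SI units, not the authors' fitting code):

```python
import numpy as np

# Classical Drude conductivity sigma(w) = N e^2 tau / (m* (1 + w^2 tau^2))
# evaluated with the 300-K fit parameters quoted in the text:
# m* = 15 m0, 1/tau = 4.0e13 s^-1, N = 2.7e22 cm^-3.
e, m0 = 1.602176634e-19, 9.1093837015e-31
N = 2.7e22 * 1e6          # carriers per m^3
m_star = 15.0 * m0
tau = 1.0 / 4.0e13        # s

def sigma_drude(omega):
    """Real part of the classical Drude conductivity in S/m."""
    return N * e**2 * tau / (m_star * (1.0 + (omega * tau)**2))

sigma_dc = sigma_drude(0.0)      # zero-frequency limit, ~1.3e6 S/m
rho_uOhm_cm = 1e8 / sigma_dc     # ~79 micro-Ohm cm, of the order of the
                                 # measured room-temperature resistivity
```

A single frequency-independent $1/\tau$ forces a Lorentzian roll-off of $\sigma(\omega)$; the mismatch between this curve and the measured spectrum is what motivates the extended Drude treatment.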
This discrepancy indicates that the scattering rate depends on photon energy, as discussed below and as shown in Fig. \[fig3\]b. With decreasing temperature, a pseudogap-like suppression of $\sigma(\omega)$ appears below 100 meV with a simultaneous increase in $\sigma_{DC}$. A minimum of $\sigma(\omega)$ develops whose position shifts continuously to lower energies as the temperature decreases. The onset temperature of pseudogap formation between 80 and 160 K corresponds to the maximum of $\rho_{\rm DC}(T)$ at $T_{\rm coh}=120$ K, which marks the onset of coherence effects upon $cf$-hybridization. This suggests that the temperature dependence of $\sigma(\omega)$ is indeed related to the formation of heavy quasiparticles and the formation of a minimum in $\sigma(\omega)$ may be associated with a heavy plasma mode. As already indicated by the above discussion, the energy and temperature behavior of the optical conductivity implies that highly energy-dependent $m^{*}$ and $1/\tau(\omega)$ are involved. For example, $\sigma(\omega)$ at $T=5.5$ K cannot be represented by energy-independent values of both $m^{*}$ and $1/\tau$ within a [*classical*]{} Drude curve as shown in Fig. \[fig2\]. Moreover, due to the different temperature dependences in $\sigma(\omega)$ and $\sigma_{DC}$, the [*classical*]{} Drude analysis emphasizes the need for strongly temperature-dependent and, at low temperatures, very heavy effective masses ($m_{\rm Drude}^{*} = 600~m_{0}$, $1/\tau = 1.6 \cdot 10^{11}$ sec$^{-1}$ at 5.5 K). In general, such behavior of the optical mass and scattering rate reflects electron-electron scattering or electron scattering off spin fluctuations. In the case of HF compounds, a many-body effect due to the $cf$-hybridization is effective at low energies and temperatures where the conduction electrons are scattered resonantly off the hybridized charge carriers.
[@deg01] ![ (Color online) Temperature dependence of (a) the effective mass relative to the free electron mass, $m^{*}(\omega)/m_{0}$, and (b) the scattering rate $1/\tau(\omega)$ as a function of photon energy $\hbar\omega$. Inset of (b) is the low energy part of $1/\tau(\omega)$. Dashed line emphasizes a $1/\tau\propto\hbar\omega$ behavior. []{data-label="fig3"}](fig3){width="30.00000%"} Such a scattering process is reflected in the temperature- and photon-energy dependences of $m^{*}$ and $1/\tau$ which we obtained from an [*extended*]{} Drude analysis and which are shown in Fig. \[fig3\] for energies lower than the interband transition spectrum. At $T=300$ K, both $m^{*}(\omega)/m_{0}$ and $1/\tau(\omega)$ are almost constant, with values of about $15m_{0}$ and $1\cdot10^{14}$ sec$^{-1}$, respectively. Therefore, it is not surprising that $\sigma(\omega)$ at 300 K clearly contains the features of a [*classical*]{} Drude model as shown in Fig. \[fig2\]. With decreasing temperature from 300 K to 0.4 K, $m^{*}(\omega)/m_{0}$ below $\simeq20$ meV monotonically increases and exceeds values of 130. Clearly, this enhancement can be related to the HF state formation in YbRh$_{2}$Si$_{2}$ as the enhancement of $m^*(\omega)$ occurs at energies comparable to $k_{\rm B}T_{\rm coh}$. Interestingly, below 10 meV, $m^{*}(\omega)/m_{0}$ does not seem to saturate with decreasing temperature and energy but rather increases continuously. We speculate that this behavior is the energy-domain counterpart of the divergence of the effective mass with decreasing temperature observed in the electronic specific heat. [@tro00; @geg02; @cus03] The appearance of a negative mass at energies above $\simeq30$ meV and at low $T$ is caused by a positive $\varepsilon_{1}(\omega)$ indicating a heavy plasma mode (not shown). Equivalently, one may relate transitions across the hybridization gap to the observed negative optical mass. Such behavior is observed in many other heavy-fermion materials.
[@bon88; @men05; @dre02; @dor01] The $m^{*}(\omega)/m_{0}$ enhancement with decreasing temperature is accompanied by the formation of a broad peak in $1/\tau(\omega)$ in the energy region where the pseudogap-like suppression of $\sigma(\omega)$ appears, as shown in Fig. \[fig3\]b. It is related to the process of mass renormalization: at 0.4 K, $1/\tau(\omega)$ reaches its maximum at $\simeq22$ meV, which corresponds to the onset of the $m^{*}(\omega)/m_{0}$ enhancement. Again, transitions across the hybridization gap lead to such enhanced dynamical scattering rates reflecting the particular quasiparticle excitation in accord with hybridization-gap scenarios for HF-derived optical properties. [@mil87a; @deg01; @awa93] As shown in the inset of Fig. \[fig3\]b, the HF state is characterized by $1/\tau~\propto~\hbar\omega$ for energies up to $\simeq7$ meV, which is a pronounced NFL behavior; see the dashed line for the data at 5.5 K. It is worth recalling that in stoichiometric YbRh$_{2}$Si$_{2}$ NFL effects due to disorder can be excluded. [@tro00] Therefore, we attribute the low-energy linear-in-$\omega$ behavior of $1/\tau(\omega)$ to spin fluctuations due to the close vicinity to the QCP. ![ (Color online) Temperature dependence of (a) the dynamical mass $m^{*}(T)/m_{0}$ and (b) the scattering rate $1/\tau(\omega,T)$ at specific photon energies as indicated. Dashed line in (b) shows a non-Fermi liquid $1/\tau\propto T$ behavior. []{data-label="fig4"}](fig4){width="30.00000%"} The extended Drude description of the optical properties of correlated electron systems yields the energy dependence of the renormalization effects. In the low-energy limit the frequency dependence of both $m^*$ and $1/\tau$ should resemble their temperature dependence. [@dre02] This expectation is satisfied when comparing the data of Fig. \[fig3\] with Fig. \[fig4\].
The latter shows the temperature dependence of $m^{*}(T)/m_{0}$ at 5 meV and that of $1/\tau(T)$ at 5 meV and at 18 meV obtained from Fig. \[fig3\]. Note that the $m^{*}(T)/m_{0}$ enhancement occurs below 160 K, which roughly corresponds to the onset energy of the mass enhancement. Similar to $m^{*}(\omega)/m_{0}$, $m^{*}(T)/m_{0}$ does not saturate even at the lowest accessible temperature below $T_{\rm K}=25$ K. However, in contrast to the divergence of the electronic specific heat coefficient $\Delta C/T \propto -\ln T$, $m^{*}(T,\omega)/m_{0}$ shows an almost linear increase towards low temperatures, at least for photon energies down to 5 meV. From this discrepancy, we anticipate that a divergence of the optical mass renormalization may occur below the single-ion Kondo energy scale of $k_{\rm B}T_{\rm K}=2$ meV. The mass enhancement with decreasing temperature corresponds to a continuous increase of $1/\tau(T)$ at 18 meV as shown in Fig. \[fig4\]b. However, below $T_{\rm K}$ the increase of $1/\tau(T)$ at 18 meV becomes stronger. At the same time, $1/\tau(T)$ at 5 meV assumes a NFL temperature dependence which is approximately linear, as the dashed line emphasizes in Fig. \[fig4\]b. Therefore, at the single-ion Kondo temperature $T_{\rm K}$, the charge dynamics changes, while a single-ion Kondo scenario fails to explain the magnetic properties of YbRh$_{2}$Si$_{2}$ below $T_{\rm K}$ (large fluctuating $4f$-magnetic moments [@cus03] and a sharp electron spin resonance line [@sic03]). At temperatures near 80 K, $m^{*}(T)/m_{0}$ starts to get enhanced and $1/\tau(T)$ shows a kink or a small peak. This temperature range corresponds to that at which both the $^{29}$Si-NMR Knight shift and relaxation rate show an anomaly, [@ish02] indicating a change in the magnetic characteristics at 80 K. For a proper interpretation of the peak in $1/\tau(T)$, carrier scattering by phonons should also be taken into account.
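For reference, the extended Drude analysis used throughout extracts the optical mass and scattering rate from the measured complex conductivity via $1/\tau(\omega)=(\omega_p^2/4\pi)\,{\rm Re}[1/\sigma(\omega)]$ and $m^{*}(\omega)/m_{0}=-(\omega_p^2/4\pi\omega)\,{\rm Im}[1/\sigma(\omega)]$. The following minimal sketch (with illustrative parameters, not the material values for YbRh$_{2}$Si$_{2}$) shows that for a plain Drude conductivity the analysis returns a frequency-independent mass and scattering rate, as observed here at 300 K:

```python
import numpy as np

# Illustrative parameters in arbitrary units -- NOT the material
# values for YbRh2Si2; this only demonstrates the analysis itself.
omega_p = 1.0e4   # plasma frequency
tau0 = 1.0e-2     # Drude relaxation time
omega = np.linspace(1.0, 100.0, 200)  # photon energies

# Plain Drude complex conductivity: sigma = (omega_p^2/4pi)/(1/tau0 - i*omega)
sigma = (omega_p**2 / (4 * np.pi)) / (1.0 / tau0 - 1j * omega)

# Extended Drude analysis: invert sigma(omega)
inv_tau = (omega_p**2 / (4 * np.pi)) * np.real(1.0 / sigma)
m_ratio = -(omega_p**2 / (4 * np.pi)) * np.imag(1.0 / sigma) / omega

# A plain Drude response gives a constant mass and scattering rate
assert np.allclose(inv_tau, 1.0 / tau0)
assert np.allclose(m_ratio, 1.0)
```

Any frequency dependence of the extracted $m^{*}(\omega)$ and $1/\tau(\omega)$, as in Fig. \[fig3\], therefore signals renormalization beyond the classical Drude picture.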
In conclusion, we found distinct electrodynamical non-Fermi liquid behavior of the low-energy charge dynamics of clean ([*i.e.*]{}, atomically ordered, stoichiometric) YbRh$_{2}$Si$_{2}$. We relate our results to the close proximity of YbRh$_{2}$Si$_{2}$ to an antiferromagnetic quantum critical point as the latter is the origin of the pronounced NFL effects of thermodynamic and transport properties. [@tro00; @geg02; @cus03] Our findings were accomplished by measuring the temperature dependence of the optical conductivity of YbRh$_{2}$Si$_{2}$ down to $T=0.4$ K in the photon energy range 2 meV – 30 eV. From an extended Drude analysis, the scattering rate below $\hbar\omega\simeq7$ meV and below $T\simeq20$ K is consistent with a NFL linear proportionality both to the photon energy and temperature. Moreover, towards low temperatures, clear signatures of heavy fermion behavior are found: formation of an interband peak at 0.2 eV and a heavy plasmon mode below 30 meV which both can be related to $cf$-hybridization. The low-temperature optical effective mass is strongly enhanced below 20 meV and continues to increase down to the lowest accessible energies (2 meV) and temperatures (0.4 K). We would like to thank Q. Si and O. Sakai for fruitful discussions. This work was a joint studies program of the Institute for Molecular Science and was partially supported by Grants-in-Aid for Scientific Research (Grant No. 18340110) from MEXT of Japan and by DFG under the auspices of SFB 463 of Germany.
--- abstract: 'In this note we recast the Geronimus transformation in the framework of polynomials orthogonal with respect to symmetric bilinear forms. We also show that the double Geronimus transformations lead to non-diagonal Sobolev type inner products.' address: - | Maxim Derevyagin\ Department of Mathematics MA 4-2\ Technische Universität Berlin\ Strasse des 17. Juni 136\ D-10623 Berlin\ Germany - | Francisco Marcellán\ Departamento de Matemáticas\ Universidad Carlos III de Madrid\ Avenida de la Universidad 30\ 28911 Leganés\ Spain author: - Maxim Derevyagin - Francisco Marcellán title: A note on the Geronimus transformation and Sobolev orthogonal polynomials --- Introduction ============ Let us consider the following problem. Let $ \{P_{n} \}_{n=0}^{\infty}$ be a sequence of monic polynomials orthogonal with respect to a nontrivial probability measure supported on an infinite subset of the real line. The problem consists in finding necessary and sufficient conditions for the real numbers $A_n$, $n=1,2,\dots$, to make the sequence of monic polynomials $$P_n(t)+A_nP_{n-1}(t), A_n \neq 0, \quad n=1, 2, \dots,$$ orthogonal with respect to some measure supported on the real line. The idea of studying this problem goes back to Shohat’s paper [@Sh37] concerning quadrature formulas associated with $n$ nodes with a degree of exactness less than $2n-1$. A few years after Shohat’s publication, a complete and final answer to that problem was given by Geronimus [@Ger40]. Thus [@Ger40] provided us with a procedure for constructing new families of orthogonal polynomials from the given ones. One can also reduce some families of orthogonal polynomials to known ones with the help of such a procedure.
Recall that if we have a sequence of monic orthogonal polynomials $\{P_n\}_{n=0}^{\infty}$ then the polynomial transformation $$P_n(t)\to P_n(t)+A_nP_{n-1}(t), A_n\neq 0, \quad n=1,2, \dots,$$ that gives a new family of orthogonal polynomials, is said to be the Geronimus transformation [@BM04; @SpZh95; @Zh]. In fact, the Geronimus transformation divides the measure of orthogonality by the spectral parameter minus the point of transformation and adds a mass to it at the point of transformation. See also [@Maro], where the sequence of polynomials associated with such a perturbation in a more general algebraic framework (orthogonality with respect to a linear functional defined in the linear space of polynomials with complex coefficients) is studied. Besides the measure interpretation, the Geronimus transformation can be also interpreted in terms of Jacobi matrices in the framework of the so called discrete Darboux transformations and it is related to $LU$ and $UL$ factorizations of shifted Jacobi matrices [@BM04]. Although the Geronimus transformation has its origin in mechanical quadrature [@Sh37], it has also found many applications in classical analysis, numerical analysis, and physics [@BM04; @SpZh95; @SpZh97]. In particular, it should be stressed that the Geronimus transformation together with the Christoffel transformation (both called discrete Darboux transformations) give a bridge between orthogonal polynomials and discrete integrable systems [@SpZh95; @SpZh97]. To go deeper in understanding the Geronimus transformation it is somehow natural to consider its iterations. Say, two iterations of the Geronimus transformation lead to the families of orthogonal polynomials defined by $$P_n(t)\to P_n(t)+B_nP_{n-1}(t)+C_nP_{n-2}(t), n\geq 1, C_n\neq 0, n\geq 2.$$ Such families have been extensively studied in the literature (see [@APRM; @BM; @HHR], among others). 
A particular case of the corresponding inverse problem in terms of perturbations of linear functionals has been analyzed in [@BegMaro]. For more iterations of the Geronimus transformation, see the results contained in [@AMPR; @KLMP; @MaroSfax]. Some particular cases of inverse problems for the cubic case have been analyzed in [@MaroNic]. On the other hand, in [@Il] the higher order ordinary linear differential equations associated with polynomials orthogonal with respect to iterated Geronimus transformations of Laguerre orthogonal polynomials, the so called Krall-Laguerre orthogonal polynomials, are studied in the framework of commutative algebras with orthogonal polynomials as eigenfunctions. An interesting point in the analysis of the iterations of the Geronimus transformation is the following. It is well known that the sequence of monic polynomials $\{\widetilde{Q}^{(\alpha)}_n \}_{n=0}^{\infty}$, which are orthogonal with respect to the Laguerre-Sobolev type inner product $$[f,g]=\int_{0}^{\infty}f(t)g(t)t^{\alpha}e^{-t}dt+Mf(0)g(0)+Nf'(0)g'(0),\quad f,g\in\cP$$ defined on the linear space $\cP$ of polynomials with real coefficients, can be represented in terms of the sequence of classical monic Laguerre polynomials $\{L_n^{(\alpha)}\}_{n=0}^{\infty}$ as follows $$\widetilde{Q}^{(\alpha)}_{n}(t)=L_n^{(\alpha+2)}(t)+B_nL_{n-1}^{(\alpha+2)}(t)+C_nL_{n-2}^{(\alpha+2)}(t).$$ Obviously, one cannot get the Laguerre-Sobolev inner product by dividing the measure $t^{\alpha+2}e^{-t}dt$ by $t^{2}$ and adding masses to it, even though the above formula suggests that the Laguerre-Sobolev type orthogonal polynomials are two consecutive Geronimus transformations of the classical Laguerre polynomials. This problem brings us to one of the aims of this note. One of the main ideas of the present paper is to include the Laguerre-Sobolev type orthogonal polynomials and similar Sobolev orthogonal polynomials into the scheme of Darboux transformations.
To this end we propose to reconsider the Geronimus transformation in a more general framework related to symmetric bilinear forms. Recall that a symmetric bilinear form $B(\cdot,\cdot)$ in the linear space $\cP$ is a mapping $$B(\cdot,\cdot): \cP\times\cP\to\dR$$ that is linear with respect to each of its arguments and has the symmetry property $$B(f,g)=B(g,f),\quad f,g\in\cP.$$ For instance, the form $$(f,g)_0=\int_{\dR} f(t)g(t)d\mu(t)$$ is symmetric and bilinear. It is not so hard to see that the Gram matrix $\left((t^i,t^j)_0\right)_{i,j=0}^{\infty}$ is a Hankel matrix and is positive definite. A bilinear form is said to be regular (resp. positive definite) if all leading principal submatrices of its Gram matrix are nonsingular (resp. positive definite). In such cases, the bilinear form generates a sequence of monic orthogonal polynomials in a simple way by using the Gram-Schmidt process. Nonetheless, the main advantage of considering bilinear forms in the context of orthogonality is the ability to include many types of orthogonality for which the Gram matrix of the moments is not a Hankel matrix, e.g. the Sobolev orthogonality (see [@BulBar; @BM06]) and other types of orthogonality related to matrix measures (see [@Dur93]), based on the symmetry of a polynomial operator with respect to a bilinear form. The paper is organized as follows. In Section 2 the classical Geronimus transformation is considered. The structure of the symmetric Jacobi matrix corresponding to the transformed polynomials is discussed in the next section. The double Geronimus transformation in the framework of bilinear forms is presented in Section 4. The last section gives details of the structure of the symmetric pentadiagonal matrix associated with the recurrence coefficients for the transformed polynomials.
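As a small numerical illustration of the Gram-matrix point of view (an assumed toy example with Lebesgue measure on $[0,1]$, whose moment matrix is the Hankel Hilbert matrix), monic orthogonal polynomials can be generated directly from the Gram matrix by the Gram-Schmidt process:

```python
import numpy as np

# Moments of Lebesgue measure on [0, 1]: (t^i, t^j)_0 = 1/(i+j+1),
# so the Gram matrix is the (Hankel) Hilbert matrix.
N = 5
G = np.array([[1.0 / (i + j + 1) for j in range(N)] for i in range(N)])

# Gram-Schmidt on the monomial basis; coeffs[n][k] is the coefficient
# of t^k in the monic orthogonal polynomial P_n.
coeffs = []
for n in range(N):
    c = np.zeros(N)
    c[n] = 1.0                                   # start from t^n
    for m in range(n):
        p = coeffs[m]
        c = c - (c @ G @ p) / (p @ G @ p) * p    # remove the projection on P_m
    coeffs.append(c)

# P_1(t) = t - 1/2 and P_2(t) = t^2 - t + 1/6 (monic shifted Legendre)
assert abs(coeffs[1][0] + 0.5) < 1e-12
assert abs(coeffs[2][0] - 1.0 / 6.0) < 1e-10
for n in range(N):
    for m in range(n):
        assert abs(coeffs[n] @ G @ coeffs[m]) < 1e-10
```

The same code works verbatim for a non-Hankel Gram matrix, which is precisely the flexibility exploited below for the Sobolev-type forms.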
The classical Geronimus transformations ======================================= In this section we review some of the results of [@Ger40] from the point of view of symmetric bilinear forms. We start with the precise definition of the Geronimus transformation in the framework under consideration. \[SGtrans\]Let us consider a symmetric bilinear form $$(f,g)_0=\int_{\dR} f(t)g(t)d\mu(t).$$ The Geronimus transformation of $(\cdot,\cdot)_0$ is a symmetric bilinear form $[\cdot,\cdot]_1$ defined on the set $\cP$ of real polynomials as follows $$\label{defSGT} [tf(t),g(t)]_1=[f(t),tg(t)]_1=(f,g)_{0}=\int_{\dR} f(t)g(t)d\mu(t), \quad f,g\in{\mathcal P}.$$ Evidently, this definition does not determine $[\cdot,\cdot]_1$ uniquely. However, we can see what the Geronimus transformation looks like. \[SG\_h2\] Suppose that $d\mu$ has the following representation $$\label{SG_h1} d\mu(t)=td\mu_1(t),$$ where $d\mu_1$ is a positive measure with finite moments. Then the bilinear form $[\cdot,\cdot]_1$ admits the representation $$\label{SG_FforBF} [f,g]_1=\int_{0}^{\infty}f(t)g(t)d\mu_1(t)+\left(s_0^*-\int_{0}^{\infty}d\mu_1(t)\right)f(0)g(0),\quad f,g\in{\mathcal P},$$ where $s_0^*$ is an arbitrary real number. It is clear that the value $[1,1]_1$ can be arbitrary. So, let us denote it by $s_0^*$, i.e. $s_0^*=[1,1]_1$. Further, let us compute $[f,g]_1$ for any $f,g\in{\cP}$: $$\begin{split} [f,g]_1=&[f(t)-f(0)+f(0),g(t)]_1=[f(t)-f(0),g(t)]_1+[f(0),g(t)]_1\\ =&[f(t)-f(0),g(t)]_1+[f(0),g(t)-g(0)]_1+[f(0),g(0)]_1\\ =&\left(\frac{f(t)-f(0)}{t},g(t)\right)+\left(f(0),\frac{g(t)-g(0)}{t}\right)+f(0)g(0)s_0^*\\ =&\int_{0}^{\infty}\frac{f(t)-f(0)}{t}g(t)d\mu(t)+\int_{0}^{\infty}f(0)\frac{g(t)-g(0)}{t}d\mu(t)+f(0)g(0)s_0^*. \end{split}$$ Next, using  we arrive at $$[f,g]_1= \int_{0}^{\infty}\left(f(t)-f(0)\right)g(t)d\mu_1(t)+\int_{0}^{\infty}f(0)\left(g(t)-g(0)\right)d\mu_1(t)+f(0)g(0)s_0^*,$$ which can be easily simplified to .
To get an idea about the Geronimus transformation, let us consider one particular example of the initial inner product: $$(f,g)_0=\int_0^{+\infty}f(t)g(t)t^\alpha e^{-t}dt,\quad \alpha>0.$$ Clearly, one of the possible choices for the Geronimus transformation is the following bilinear form $$[f,g]_1=\int_0^{+\infty}f(t)g(t)t^{\alpha-1} e^{-t}dt,\quad \alpha>0,$$ that is the case where $s_0^*=\int_{0}^{\infty}t^{\alpha-1} e^{-t}dt$. In this case the forms $(\cdot,\cdot)_0$ and $[\cdot,\cdot]_1$ generate the sequences of monic Laguerre polynomials $\{L_n ^{(\alpha)}\}_{n=0}^{\infty}$ and $\{L_n ^{(\alpha-1)}\}_{n=0}^{\infty}$, respectively. These polynomials are related as follows $$L_{n} ^{(\alpha)} (t)+ n L_{n-1}^{(\alpha)}(t)=L_{n} ^{(\alpha-1)}(t), \quad n=0,1,\dots.$$ It turns out that a similar relation is also valid for the Geronimus transformation in general. Let us assume that $(\cdot,\cdot)_0$ and $[\cdot,\cdot]_1$ are positive definite and regular bilinear forms, respectively. Let $\{P_n\}_{n=0}^{\infty}$ be a sequence of monic polynomials orthogonal with respect to $(\cdot,\cdot)_0$. Then a monic polynomial $P_n^*$ of degree $n$ is orthogonal with respect to $[\cdot,\cdot]_1$ if and only if it can be represented as follows $$\label{SG_hh1} P_n^*(t)=\frac{1}{d_n^*} \begin{vmatrix} P_n(t)&s_0^*P_n(0)+Q_n(0)\\ P_{n-1}(t) &s_0^*P_{n-1}(0)+Q_{n-1}(0) \end{vmatrix},$$ where $d_n^*=s_0^*P_{n-1}(0)+Q_{n-1}(0)\ne 0$. Here, $\{Q_n\}_{n=0}^{\infty}$ denotes the sequence of monic orthogonal polynomials of the second kind with $\deg Q_{n} = n-1$ and defined by $ Q_{n} (x)=\int_{0}^{\infty} \frac{P_n(t)-P_n(x)}{t-x}d\mu(t)$. Since $[\cdot,\cdot]_1$ is regular there exists the corresponding sequence of monic orthogonal polynomials.
Suppose that $P_n^*$ is orthogonal, that is, $$[P_n^*(t),t^k]_1=[t^k,P_n^*(t)]_1=0,\quad k=0,1,2,\dots, n-1.$$ In turn, for the original bilinear form we have $$(P_n^*(t),t^{k-1})_0=[P_n^*(t),t^k]_1=0,\quad k=1,2,\dots, n-1,$$ which obviously implies that $$\label{SD_OP_repr} P_n^*(t)=P_n(t)+A_n P_{n-1}(t),$$ where $A_n$ is a real number. Next, let us calculate $A_n$, $n\geq1$. To this end we are going to use the following equation $$\begin{split} 0=&[P_n^*(t),1]_1=s_0^*P_n^*(0)+\left(\frac{P_n^*(t)-P_n^*(0)}{t},1\right)\\ =&s_0^*P_n^*(0)+\left(\frac{P_n(t)-P_n(0)}{t},1\right)+A_n\left(\frac{P_{n-1}(t)-P_{n-1}(0)}{t},1\right)\\ =&s_0^*P_n^*(0)+\int_0^{\infty}\frac{P_n(t)-P_n(0)}{t}d\mu(t)+ A_n\int_0^{\infty}\frac{P_{n-1}(t)-P_{n-1}(0)}{t}d\mu(t)\\ =&s_0^*(P_n(0)+A_nP_{n-1}(0))+Q_n(0)+A_nQ_{n-1}(0)\\ =&s_0^*P_n(0)+Q_n(0)+A_n(s_0^*P_{n-1}(0)+Q_{n-1}(0)), n\geq1. \end{split}$$ We see that the equation is equivalent to the orthogonality of the polynomial $P_n+A_n P_{n-1}$ with respect to $[\cdot,\cdot]_1$. Hence, the equation has a unique solution and, so, one has $$s_0^*P_{n-1}(0)+Q_{n-1}(0)\ne 0.$$ Furthermore, the unique solution of the above equation is $$A_n=-\frac{s_0^*P_{n}(0)+Q_{n}(0)}{s_0^*P_{n-1}(0)+Q_{n-1}(0)},$$ which leads us to formula . Finally, it is worth mentioning that the Geronimus transformation can also be considered in the case when $(\cdot,\cdot)_0$ is regular and is not necessarily positive definite [@BM04]. Moreover, necessary and sufficient conditions for the regularity of $[\cdot, \cdot]_{1}$ are analyzed in [@BM04; @D13; @DD11].
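Both the monic Laguerre relation of the example above and the general formula for $A_n$ can be checked numerically. In the sketch below (an illustration, not part of the original argument) we take $d\mu=t^{\alpha}e^{-t}dt$ with the integer value $\alpha=3$ and the choice $s_0^*=\int_0^{\infty}t^{\alpha-1}e^{-t}dt=\Gamma(\alpha)=2$, for which the example predicts $A_n=n$; Gauss-Laguerre quadrature is exact here because all integrands are polynomials times $e^{-t}$:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

alpha = 3            # integer, so the quadrature below is exact
s0 = 2.0             # s_0^* = Gamma(alpha) = 2! = 2
x, w = laggauss(30)  # Gauss-Laguerre rule for the weight e^{-t} on [0, inf)

def P(n, a, t):
    """Monic Laguerre polynomial L_n^{(a)} evaluated at the array t,
    via P_{k+1} = (t - (2k + a + 1)) P_k - k(k + a) P_{k-1}."""
    p_prev, p_cur = np.ones_like(t), t - (a + 1.0)
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_cur = p_cur, (t - (2 * k + a + 1.0)) * p_cur - k * (k + a) * p_prev
    return p_cur

# Check L_n^{(alpha)} + n L_{n-1}^{(alpha)} = L_n^{(alpha-1)} at a sample point
t = np.array([0.7])
for n in range(1, 6):
    lhs = P(n, alpha, t)[0] + n * P(n - 1, alpha, t)[0]
    rhs = P(n, alpha - 1, t)[0]
    assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(rhs))

def Q0(n):
    """Q_n(0) = int (P_n(t) - P_n(0))/t dmu(t) with dmu = t^alpha e^{-t} dt."""
    Pn0 = P(n, alpha, np.array([0.0]))[0]
    return np.sum(w * (P(n, alpha, x) - Pn0) / x * x**alpha)

# A_n = -(s0 P_n(0) + Q_n(0))/(s0 P_{n-1}(0) + Q_{n-1}(0)) should equal n
A_vals = []
for n in range(1, 6):
    num = s0 * P(n, alpha, np.array([0.0]))[0] + Q0(n)
    den = s0 * P(n - 1, alpha, np.array([0.0]))[0] + Q0(n - 1)
    A_vals.append(-num / den)
assert all(abs(a - n) < 1e-8 for n, a in zip(range(1, 6), A_vals))
```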
The structure of the transformed Jacobi matrix ============================================== It is very well known that, assuming $[\cdot, \cdot]_{1}$ is positive definite, we can associate with the sequence of monic orthogonal polynomials $\{P_n^{*}\}_{n=0}^{\infty}$ a monic tridiagonal Jacobi matrix $$J_{mon}^*=\begin{pmatrix} {b}_{0}^* & 1 & &\\ ({c}_{0}^*)^2 &{b}_1^* &{1}&\\ &({c}_1^*)^2 &{b}_{2}^* &\ddots\\ & &\ddots &\ddots\\ \end{pmatrix}.$$ Recall that the entries of $J_{mon}^*$ are defined by the corresponding three-term recurrence relation $$\label{recrelmonp} t P_j^*(t) = P_{j+1}^*(t) + b_j^*P_j^*(t) + (c_{j-1}^*)^2P_{j-1}^*(t),\quad j\in\dZ_+,$$ with the initial conditions $$P_{-1}^*(t)=0,\quad P_{0}^*(t) =1,$$ where $b_j^*\in\dR$ and $c_j^*>0$, $j\in\dZ_+$. Depending on circumstances it can also be convenient to consider a symmetric tridiagonal Jacobi matrix $$J^*=\begin{pmatrix} {b}_{0}^* & {c}_{0}^* & &\\ {c}_{0}^* &{b}_1^* &{c}_{1}^*&\\ &{c}_1^* &{b}_{2}^* &\ddots\\ & &\ddots &\ddots\\ \end{pmatrix}=\Psi^{-1}J_{mon}^*\Psi,$$ where $\Psi=\diag(1,c_0^*,c_0^*c_1^*,c_0^*c_1^*c_2^*,\dots)$. Indeed, $J^*$ is the matrix of the multiplication operator with respect to the basis of orthonormal polynomials $$\widehat{P}_n^*(t)=\frac{1}{h_n^*}P_n^*(t),\quad (h_n^*)^2=[P_n^*,P_n^*]_1,\quad h_n^*>0.$$ In other words, we have the following representation $$J^*=\left([t\widehat{P}_n^*(t),\widehat{P}_m^*(t)]_1\right)_{n,m=0}^{\infty}.$$ Since $J^*$ corresponds to the Geronimus transformation, it has a special structure, which can be expressed in terms of the coefficients $A_{n}$, $n= 1,2,\dots$, and the free parameter. Let us assume that $(\cdot,\cdot)_0$, $[\cdot,\cdot]_1$ are positive definite and $\{P_n\}_{n=0}^{\infty}$ and $\{P_n^{*}\}_{n=0}^{\infty}$ are, respectively, the corresponding sequences of monic orthogonal polynomials.
Then the matrix $J^*$ admits the following Cholesky decomposition $$\label{SD_Cholesky} J^*=LL^{\top},$$ where the bidiagonal lower triangular matrix $L$ has the form $$\label{SD_bidiag} L=\begin{pmatrix} \frac{h_0}{\sqrt{s_0^*}}& 0& &&\\ \sqrt{A_1} & \frac{h_1}{\sqrt{A_1}h_0} & 0 &&\\ &\sqrt{A_2} & \frac{h_2}{\sqrt{A_2}h_1} &0&\\ & &\sqrt{A_3} & \frac{h_3}{\sqrt{A_3}h_2} &\ddots\\ && &\ddots &\ddots\\ \end{pmatrix}.$$ We begin by noticing that $$\label{SD_mrepr_h1} J^*=\begin{pmatrix} \frac{1}{h_0^*} & 0 & \\ 0 &\frac{1}{h_1^*} &\ddots\\ &\ddots &\ddots\\ \end{pmatrix} \begin{pmatrix} [tP_0^*(t),P_0^*(t)]_1 & [tP_0^*(t),P_1^*(t)]_1 & \\ [tP_1^*(t),P_0^*(t)]_1 &[tP_1^*(t),P_1^*(t)]_1 &\ddots\\ &\ddots &\ddots\\ \end{pmatrix} \begin{pmatrix} \frac{1}{h_0^*} & 0 & \\ 0 &\frac{1}{h_1^*} &\ddots\\ &\ddots &\ddots\\ \end{pmatrix}.$$ Since $[tP_n^*(t),P_m^*(t)]_1=(P_n^*(t),P_m^*(t))_0$, the symmetric tridiagonal matrix in the middle of the right hand side of  reduces to $$\begin{split} \left((P_n^*,P_m^*)_0\right)_{n,m=0}^{\infty}=&\begin{pmatrix} (P_0^*,P_0^*)_0 & (P_0^*,P_1^*)_0 & 0&\\ (P_1^*,P_0^*)_0 &(P_1^*,P_1^*)_0 &(P_1^*,P_2^*)_0&\\ 0 &(P_2^*,P_1^*)_0 &(P_2^*,P_2^*)_0 &\ddots\\ & &\ddots &\ddots\\ \end{pmatrix}\\ =&\begin{pmatrix} h_0^2 & A_1h_0^2 & 0&\\ A_1h_0^2 &h_1^2+A_1^2h_0^2 &A_2h_1^2&\\ 0 &A_2h_1^2 &h_2^2+A_2^2h_1^2&\ddots\\ & &\ddots &\ddots\\ \end{pmatrix}\\ =&\begin{pmatrix} h_0 & 0 & &\\ {A}_{1}h_0 & h_1 &0 &\\ &{A}_2h_1 &h_2 &\ddots\\ & &\ddots &\ddots\\ \end{pmatrix}\begin{pmatrix} {h}_{0} & A_1h_0 & &\\ {0} &{h}_1 &{A}_2h_1&\\ &0 &{h}_{2} &\ddots\\ & &\ddots &\ddots\\ \end{pmatrix} \end{split}$$ in view of formula .
Hence it is clear that  holds with $$\label{SD_mrepr_h2} L=\begin{pmatrix} \frac{1}{h_0^*} & 0 & \\ 0 &\frac{1}{h_1^*} &\ddots\\ &\ddots &\ddots\\ \end{pmatrix}\begin{pmatrix} h_0 & 0 & &\\ {A}_{1}h_0 & h_1 &0 &\\ &{A}_2h_1 &h_2 &\ddots\\ & &\ddots &\ddots\\ \end{pmatrix}.$$ Now, observe that $$\begin{split} (h_{n+1}^*)^2=&[P_{n+1}^*,P_{n+1}^*]_1=[tP_{n}^*(t),P_{n+1}^*(t)]_1=(P_{n}^*,P_{n+1}^*)_0\\ =&(P_n+A_nP_{n-1},P_{n+1}+A_{n+1}P_n)_0\\ =&A_{n+1}h_n^2, \end{split}$$ which gives $$h_{n+1}^*=\sqrt{A_{n+1}}h_n.$$ Combining this and $(h_{0}^*)^2=s_0^*$ we get that  can be easily simplified to . As a matter of fact, this statement reflects the fact that the Geronimus transformation can be interpreted in the matrix language (for details, see [@BM04], as well as [@DD11] for the non-regular case). In order for the paper to be self-contained, a direct connection between the matrices $J_{mon}$ and $J_{mon}^{*}$ associated with the monic orthogonal polynomial sequences $\{P_n\}_{n=0}^{\infty}$, $\{P_n^{*}\}_{n=0}^{\infty}$, respectively, will be stated.
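Before stating this connection, here is a numerical sketch of the two factorizations $J_{mon}=U_{mon}L_{mon}$ and $J_{mon}^{*}=L_{mon}U_{mon}$ discussed below, again in the Laguerre case $\alpha\to\alpha-1$, where $A_n=n$; the value $F_{n+1}=n+\alpha$ used for the upper factor is specific to this example (it follows from the Christoffel-type relation, but is not given in the text), and the monic Jacobi parameters are $b_k=2k+\alpha+1$, $(c_{k-1}^{\,})^2=k(k+\alpha)$:

```python
import numpy as np

alpha, N = 2.5, 7

def jacobi_monic(a, n):
    """Monic Jacobi matrix of the Laguerre-(a) family:
    diagonal 2k + a + 1, sub-diagonal k(k + a), super-diagonal 1."""
    J = np.zeros((n, n))
    for k in range(n):
        J[k, k] = 2 * k + a + 1
        if k > 0:
            J[k, k - 1] = k * (k + a)
            J[k - 1, k] = 1.0
    return J

# Bidiagonal factors: A_k = k (Geronimus coefficients) and, in this
# example, F_{k+1} = k + alpha (Christoffel-type coefficients)
L = np.eye(N)
U = np.zeros((N, N))
for k in range(N):
    U[k, k] = k + alpha
    if k > 0:
        L[k, k - 1] = k
        U[k - 1, k] = 1.0

J = jacobi_monic(alpha, N)            # original family L^{(alpha)}
Jstar = jacobi_monic(alpha - 1.0, N)  # transformed family L^{(alpha-1)}

assert np.allclose(L @ U, Jstar)  # J*_mon = L_mon U_mon (exact on the section)
# For U L the last diagonal entry of the finite section misses one term,
# so compare the top-left (N-1) x (N-1) block only.
assert np.allclose((U @ L)[:N-1, :N-1], J[:N-1, :N-1])
```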
Let $$L_{mon}= \begin{pmatrix} 1 & 0& &&\\ A_{1} & 1 & 0 &&\\ 0 &A_2 & 1 &0&\\ 0 & 0 &A_3 &1 &\ddots\\ && &\ddots &\ddots\\ \end{pmatrix}$$ be an infinite matrix such that $ P^{*} = L_{mon} P$, where $P^{*}= (P_{0}^{*}, P_{1}^{*}, \dots )^{\top}$ and $P= (P_{0}, P_{1}, \dots )^{\top}.$ On the other hand, according to the Christoffel formula (see [@Chi]) or, equivalently, to the fact that $$[tP_n(t),P_m^*(t)]_1=(P_n(t),P_m^*(t))_0,\quad m=0,\dots, n-1,$$ we get the relation $$tP_{n}(t)= P_{n+1}^*(t) + F_{n+1} P_{n}^*(t),\quad F_{n+1}\neq0,\quad n\geq 0.$$ In matrix terms, we have that $t P = U_{mon}P^{*} $, where $$U_{mon} ^{\top} = \begin{pmatrix} F_1 & 0& &&\\ 1 & F_2 & 0 &&\\ 0 &1 & F_3 &0&\\ 0 & 0 &1 &F_4 &\ddots\\ && &\ddots &\ddots\\ \end{pmatrix}.$$ Thus, we can state the following result. We have that $$J_{mon} = U_{mon} L_{mon},$$ $$J_{mon}^{*}= L_{mon }U_{mon}.$$ Notice that $$t P= U_{mon} P^{*}= U_{mon} L_{mon} P.$$ Thus, one gets $$J_{mon} = U_{mon} L_{mon}.$$ On the other hand, we see that $$t P^{*} = L_{mon} t P = L_{mon} U_{mon} P^{*}.$$ As a consequence, we arrive at $$J_{mon}^{*}= L_{mon } U_{mon}.$$ Thus, we have a simple proof of a very well known result (see [@BM04]) in terms of a Darboux transformation with parameter (see also [@D13; @DD11] for similar results in the non-regular case). The double Geronimus transformation and the Sobolev orthogonality ================================================================= In this section we present the double Geronimus transformation in the framework of symmetric bilinear forms. Also, it is shown that this transformation leads to Sobolev inner products and, therefore, to Sobolev orthogonal polynomials. First, let us clarify what we mean by the double Geronimus transformation. \[DGtrans\]Let us consider a symmetric bilinear form $(\cdot,\cdot)_0$.
The double Geronimus transformation of $(\cdot,\cdot)_0$ is a symmetric bilinear form defined on the linear space $\mathcal P$ of polynomials with real coefficients as follows $$\label{defGT} [t^2f(t),g(t)]_2=[f(t),t^2g(t)]_2=(f,g)_{0}=\int_{\dR} f(t)g(t)d\mu(t), \quad f,g\in{\mathcal P}.$$ From  one can see that the form $[\cdot,\cdot]_2$ is not uniquely defined. In particular, the symmetric matrix (since the form is symmetric) $$\begin{pmatrix} [1,1] &[1,t] \\ [t,1]& [t,t]\\ \end{pmatrix}= \begin{pmatrix} s_0^{**} &s_1^{**} \\ s_1^{**}& s_2^{**}\\ \end{pmatrix}$$ can be chosen arbitrarily. Despite this, one can see the structure of the double Geronimus transformation. Suppose that $d\mu$ has the following representation $$\label{DG_h1} d\mu(t)=t^2d\mu_2(t),$$ where $d\mu_2$ is a positive measure with finite moments. Then the double Geronimus transformation of $(\cdot,\cdot)_0$ admits the representation $$\label{DG_FforBF} [f,g]_2=\int_{\dR}f(t)g(t)d\mu_2(t)+ \begin{pmatrix} f(0)& f'(0)\\ \end{pmatrix} M\begin{pmatrix} g(0)\\ g'(0)\\ \end{pmatrix},$$ where the symmetric matrix $M$ has the following form $$M=\begin{pmatrix} s_0^{**} &s_1^{**} \\ s_1^{**}& s_2^{**}\\ \end{pmatrix}- \begin{pmatrix} \int_{\dR}d\mu_2(t) &\int_{\dR}td\mu_2(t) \\ \int_{\dR}td\mu_2(t)& \int_{\dR}t^2d\mu_2(t)\\ \end{pmatrix} .$$ The proof is similar to that of Proposition \[SG\_h2\]. First, we see that $$\begin{split} [f,g]_2=&[f(t)-f(0)-tf'(0),g(t)]_2+[f(0)+tf'(0),g(t)]_2\\ =&\left(\frac{f(t)-f(0)-tf'(0)}{t^2},g(t)\right)+[f(0)+tf'(0),g(t)]_2. \end{split}$$ Then, making use of the representation $g(t)=g(t)-g(0)-tg'(0)+g(0)+tg'(0)$ and taking into account  we get the desired result . Since the symmetric matrix $M$ can be arbitrary, from formula  one can see that, in general, the double Geronimus transformation $[\cdot,\cdot]_2$ generates Sobolev type inner products.
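The representation above can be made concrete numerically. In the sketch below (an assumed illustrative choice: $d\mu_2=e^{-t}dt$ on $[0,\infty)$, so $d\mu=t^2e^{-t}dt$, with a diagonal matrix mass $M$), the Gram matrix of $[\cdot,\cdot]_2$ satisfies the defining property $[t^2f,g]_2=(f,g)_0$, while the matrix mass destroys its Hankel structure:

```python
import numpy as np
from math import gamma

# Illustrative choice: d(mu_2) = e^{-t} dt on [0, inf), hence
# d(mu) = t^2 e^{-t} dt, with a diagonal matrix mass M = diag(lam1, lam2).
lam1, lam2, N = 0.3, 0.8, 6

# Moments of the Sobolev-type form:
# [t^i, t^j]_2 = Gamma(i+j+1) + lam1*delta_{i0}delta_{j0} + lam2*delta_{i1}delta_{j1}
G2 = np.array([[gamma(i + j + 1.0) for j in range(N)] for i in range(N)])
G2[0, 0] += lam1
G2[1, 1] += lam2

# Moments of the original form: (t^i, t^j)_0 = Gamma(i+j+3)
G0 = np.array([[gamma(i + j + 3.0) for j in range(N - 2)] for i in range(N - 2)])

# Defining property [t^2 f, g]_2 = (f, g)_0: the point masses at t = 0
# do not contribute once a factor t^2 is present
assert np.allclose(G2[2:, :N - 2], G0)

# The matrix mass destroys the Hankel structure: [t, t]_2 != [1, t^2]_2
assert G2[1, 1] != G2[0, 2]
```

Feeding `G2` into a Gram-Schmidt routine (as in Section 1) produces exactly the Sobolev type orthogonal polynomials discussed next.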
In particular, one recovers the positive diagonal Sobolev type inner products when $$M=\begin{pmatrix} \lambda_1 &0 \\ 0& \lambda_2\\ \end{pmatrix}, \quad \lambda_1\ge 0\text{ and } \lambda_2> 0.$$ At the same time, we also get that the double Geronimus transformation adds a matrix mass at the point in some sense. Thus, the corresponding sequence of orthogonal polynomials, which are called in the literature Sobolev type orthogonal polynomials, are no longer standard scalar orthogonal polynomials (the scalar Hankel structure of the Gram matrix is destroyed by the perturbation), but they are not yet genuinely matrix orthogonal polynomials, although it is convenient to treat them as matrix orthogonal since the Gram matrix is in fact a $2\times 2$ block Hankel matrix (see [@AMRR] for some basic properties of Sobolev type orthogonal polynomials). Now we are in a position to give an explicit formula for the transformed polynomials $\{P_n^{**}\}_{n=0}^{\infty}$ orthogonal with respect to $[\cdot,\cdot]_2$ in terms of the original polynomials $\{P_n\}_{n=0}^{\infty}$. Let us assume that $(\cdot,\cdot)_0$ and $[\cdot,\cdot]_2$ are positive definite and regular bilinear forms, respectively. Let $\{P_n\}_{n=0}^{\infty}$ be a sequence of monic polynomials orthogonal with respect to $(\cdot,\cdot)_0$.
Then a monic polynomial $P_n^{**}$ of degree $n$ is orthogonal with respect to $[\cdot,\cdot]_2$ if and only if it can be represented as follows $$\label{DG_PolRepr} P_n^{**}(t)=\frac{1}{d_n^{**}} \begin{vmatrix} P_n(t)&R'_n(0;s_1^{**})+s_0^{**}P_n(0)&R_{n}(0;s_1^{**})+(s_2^{**}-s_0^{**})P'_{n}(0)\\ P_{n-1}(t) &R'_{n-1}(0;s_1^{**})+s_0^{**}P_{n-1}(0) & R_{n-1}(0;s_1^{**})+(s_2^{**}-s_0^{**})P'_{n-1}(0)\\ P_{n-2}(t)& R'_{n-2}(0;s_1^{**})+s_0^{**}P_{n-2}(0)& R_{n-2}(0;s_1^{**})+(s_2^{**}-s_0^{**})P'_{n-2}(0) \end{vmatrix},$$ where $R_n(t;s)=sP_{n}(t)+Q_{n}(t)$, $R'_n(t;s)=sP'_{n}(t)+Q'_{n}(t)$, and $$d_n^{**}= \begin{vmatrix} R'_{n-1}(0;s_1^{**})+s_0^{**}P_{n-1}(0) & R_{n-1}(0;s_1^{**})+(s_2^{**}-s_0^{**})P'_{n-1}(0)\\ R'_{n-2}(0;s_1^{**})+s_0^{**}P_{n-2}(0)& R_{n-2}(0;s_1^{**})+(s_2^{**}-s_0^{**})P'_{n-2}(0) \end{vmatrix}$$ is nonzero. The orthogonality of $P_n^{**}$ is equivalent to the following condition $$[P_n^{**}(t),t^k]_2=0, \quad k=0,\dots,n-1,$$ which for $n\ge 3$ further reduces to $$(P_n^{**}(t),t^{k-2})_{0}=0, \quad k=2,\dots, n-1.$$ The latter relation is obviously equivalent to the representation $$\label{DD_OP_repr} P_n^{**}(t)=P_n(t)+B_nP_{n-1}(t)+C_nP_{n-2}(t).$$ Therefore one can see that the coefficients $B_n$ and $C_n$ are uniquely determined by the relations $$\begin{split} [P_n^{**}(t),1]_2=&0,\\ [P_n^{**}(t),t]_2=&0, \end{split}$$ which can be rewritten as follows $$\label{DG_h2} \begin{split} [P_n(t),1]_2+B_n[P_{n-1}(t),1]_2+C_n[P_{n-2}(t),1]_2=& 0,\\ [P_n(t),t]_2+B_n[P_{n-1}(t),t]_2+C_n[P_{n-2}(t),t]_2=& 0. \end{split}$$ Since the monic orthogonal polynomial $P_n^{**}$ of degree $n$ is uniquely defined, the system  has a unique solution. Indeed, on the one hand, it is clear that the system has at least one solution because there exists a monic orthogonal polynomial of degree $n$. On the other hand, if it has two different solutions then these solutions would give two different monic orthogonal polynomials of degree $n$. 
The latter fact is not possible according to the uniqueness of such a sequence. So, we conclude that the determinant $$d_n^{**}=\begin{vmatrix} [P_{n-1}(t),1]_2 &[P_{n-1}(t),t]_2\\ [P_{n-2}(t),1]_2 & [P_{n-2}(t),t]_2 \end{vmatrix}$$ is nonzero and the orthogonality of $P_n^{**}$ is equivalent to the representation $$P_n^{**}(t)=\frac{1}{d_n^{**}} \begin{vmatrix} P_n(t)&[P_n(t),1]_2&[P_n(t),t]_2\\ P_{n-1}(t) &[P_{n-1}(t),1]_2 &[P_{n-1}(t),t]_2\\ P_{n-2}(t)&[P_{n-2}(t),1]_2 & [P_{n-2}(t),t]_2 \end{vmatrix}.$$ Now, to get formula  it remains to re-express the entries of the corresponding determinants. To this end, for the last column one can get $$\begin{split} [P_n(t),t]_2=&[P_n(t)-P_n(0)-tP'_n(0),t]_2+P_n(0)[1,t]_2+P'_n(0)[t,t]_2\\ =&\left(\frac{P_n(t)-P_n(0)-tP'_n(0)}{t^2},t\right)_0+s_1^{**}P_n(0)+s_2^{**}P'_n(0)\\ =&\left(\frac{P_n(t)-P_n(0)-tP'_n(0)}{t},1\right)_0+s_1^{**}P_n(0)+s_2^{**}P'_n(0)\\ =&Q_n(0)+s_1^{**}P_n(0)+(s_2^{**}-s_0^{**})P'_n(0)\\ =& R_n(0;s_1^{**})+(s_2^{**}-s_0^{**})P'_n(0). \end{split}$$ Next, we also have that $$\begin{split} [P_n(t),1]_2=&[P_n(t)-P_n(0)-tP'_n(0),1]_2+P_n(0)[1,1]_2+P'_n(0)[t,1]_2\\ =&\left(\frac{P_n(t)-P_n(0)-tP'_n(0)}{t^2},1\right)_0+s_0^{**}P_n(0)+s_1^{**}P'_n(0).\\ \end{split}$$ On the other hand, notice that $$\begin{split} Q'_n(0)=&\lim_{\epsilon\to 0}\frac{Q_n(\epsilon)-Q_n(0)}{\epsilon}\\ =&\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left(\left(\frac{P_n(t)-P_n(\epsilon)}{t-\epsilon},1\right)_0 -\left(\frac{P_n(t)-P_n(0)}{t},1\right)_0\right)\\ =&\lim_{\epsilon\to 0}\frac{1}{\epsilon}\int_{\dR}\frac{\epsilon(P_n(t)-P_n(0))-t(P_n(\epsilon)-P_n(0))}{t(t-\epsilon)}d\mu(t)\\ =&\int_{\dR}\frac{P_n(t)-P_n(0)-tP'_n(0)}{t^2}d\mu(t)\\ =&\left(\frac{P_n(t)-P_n(0)-tP'_n(0)}{t^2},1\right)_0. \end{split}$$ Note that we can interchange the limit and integral due to Lebesgue’s dominated convergence theorem.
Finally, we get $$\begin{split} [P_n(t),1]_2=& Q'_n(0)+s_0^{**}P_n(0)+s_1^{**}P'_n(0)\\ =& R'_n(0;s_1^{**})+s_0^{**}P_n(0), \end{split}$$ which completes the proof. The structure of the transformed pentadiagonal matrix ===================================================== In the case when the sequence of polynomials $\{P_n^{**}\}_{n=0}^{\infty}$ is orthogonal with respect to an inner product of the form , it is quite natural to consider the matrix representation of the square of the multiplication operator [@Dur93; @ELMMR]. Indeed, according to formula  the multiplication operator is not necessarily symmetric with respect to $[\cdot,\cdot]_2$ and as a consequence one cannot apply some classical tricks in this case. Nevertheless, the square of the multiplication operator is symmetric by the definition of the double Geronimus transformation. Thus, the classical machinery works for this symmetric operator. So, assuming $[\cdot, \cdot]_{2}$ is positive definite, let us introduce the following symmetric matrix $$J^{**}=\left([t^2\widehat{P}_n^{**}(t),\widehat{P}_m^{**}(t)]_2\right)_{n,m=0}^{\infty},$$ where the corresponding orthonormal polynomials $\{\widehat{P}_n^{**}\}_{n=0}^{\infty}$ are given by $$\widehat{P}_n^{**}(t)=\frac{1}{h_n^{**}}P_n^{**}(t),\quad (h_n^{**})^2=[P_n^{**},P_n^{**}]_2,\quad h_n^{**}>0.$$ In fact, $J^{**}$ is pentadiagonal and the following statement holds true. Let us assume that $(\cdot,\cdot)_0$, $[\cdot,\cdot]_2$ are positive definite and $\{P_n\}_{n=0}^{\infty}$, $\{P_n^{**}\}_{n=0}^{\infty}$, respectively, are the corresponding sequences of monic orthogonal polynomials. 
Then the matrix $J^{**}$ admits the following Cholesky decomposition $$\label{DD_Cholesky} J^{**}=LL^{\top},$$ where the lower triangular matrix $L$ has only three nonvanishing diagonals and is of the form $$\label{DD_bidiag} L=\begin{pmatrix} \frac{h_0}{h_0^{**}}& 0& &&\\ B_1\frac{h_0}{h_0^{**}} & \frac{h_1}{h_1^{**}} & 0 &&\\ C_2\frac{h_0}{h_0^{**}} &B_2\frac{h_1}{h_1^{**}} &\frac{h_2}{h_2^{**}} &0&\\ 0 & C_3\frac{h_1}{h_1^{**}} &B_3\frac{h_2}{h_2^{**}} &\frac{h_3}{h_3^{**}} &\ddots\\ && &\ddots &\ddots\\ \end{pmatrix},$$ where the ratio $h_{n+2}/h_{n+2}^{**}$ can be expressed in terms of the coefficients $B_{n}$ and $C_{n}$, $n=1, 2,...$ of the linear combination as follows $$\label{DG_mrepr_h1} \frac{h_{n+2}}{h_{n+2}^{**}}=\frac{h_{n+2}}{\sqrt{C_{n+2}}h_n},\quad n=0,1,\dots.$$ It should be stressed that $h_0^{**}$ and $h_1^{**}$ can be parametrized by the free parameters: $$(h_0^{**})^2=s_0^{**},\quad (h_1^{**})^2=s_2^{**}+s_1^{**}\left(B_1-\frac{s_1}{s_0}\right).$$ Obviously, we have that $$\label{DD_mrepr_h1} J^{**}=\begin{pmatrix} \frac{1}{h_0^{**}} & 0 & \\ 0 &\frac{1}{h_1^{**}} &\ddots\\ &\ddots &\ddots\\ \end{pmatrix} \begin{pmatrix} [t^2P_0^{**}(t),P_0^{**}(t)]_2 & [t^2P_0^{**}(t),P_1^{**}(t)]_2 & \\ [t^2P_1^{**}(t),P_0^{**}(t)]_2 &[t^2P_1^{**}(t),P_1^{**}(t)]_2 &\ddots\\ &\ddots &\ddots\\ \end{pmatrix} \begin{pmatrix} \frac{1}{h_0^{**}} & 0 & \\ 0 &\frac{1}{h_1^{**}} &\ddots\\ &\ddots &\ddots\\ \end{pmatrix}.$$ Since $[t^2P_n^{**}(t),P_m^{**}(t)]_2=(P_n^{**}(t),P_m^{**}(t))_0$, the pentadiagonal matrix in the middle of the right hand side of  reduces to $$\begin{split} \left((P_n^{**},P_m^{**})_0\right)_{n,m=0}^{\infty}=&\begin{pmatrix} (P_0^{**},P_0^{**})_0 & (P_0^{**},P_1^{**})_0 & (P_0^{**},P_2^{**})_0&0&\\ (P_1^{**},P_0^{**})_0 &(P_1^{**},P_1^{**})_0 &(P_1^{**},P_2^{**})_0&(P_1^{**},P_3^{**})_0&\\ (P_2^{**},P_0^{**})_0 &(P_2^{**},P_1^{**})_0 &(P_2^{**},P_2^{**})_0 &(P_2^{**},P_3^{**})_0&\ddots\\ 0&(P_3^{**},P_1^{**})_0 &(P_3^{**},P_2^{**})_0 
&(P_3^{**},P_3^{**})_0 &\ddots\\ & & \ddots &\ddots &\ddots\\ \end{pmatrix}\\ =&\begin{pmatrix} h_0^2 & B_1h_0^2 & C_2h_0^2&0&\\ B_1h_0^2 &h_1^2+B_1^2h_0^2&B_2h_1^2+B_1C_2h_0^2&C_3h_1^2&\\ C_2h_0^2 &B_2h_1^2+B_1C_2h_0^2 &h_2^2+B_2^2h_1^2+C_2^2h_0^2&B_3h_2^2+B_2C_3h_1^2&\ddots\\ 0&C_3h_1^2 &B_3h_2^2+B_2C_3h_1^2 &h_3^2+B_3^2h_2^2+C_3^2h_1^2 &\ddots\\ & & \ddots &\ddots &\ddots\\ \end{pmatrix}\\ =&\begin{pmatrix} h_0 & 0 & &&\\ {B}_{1}h_0 & h_1 &0 &&\\ C_2h_0 &{B}_2h_1 &h_2 &0&\\ 0&C_3h_1 &{B}_3h_2 &h_3 &\ddots\\ & & \ddots&\ddots &\ddots\\ \end{pmatrix}\begin{pmatrix} {h}_{0} & B_1h_0 & C_2h_0 &&\\ {0} &{h}_1 &{B}_2h_1&C_3h_1&\\ &0 &{h}_{2} &B_3h_2&\ddots\\ &&0&h_3&\ddots\\ && &\ddots &\ddots\\ \end{pmatrix} \end{split}$$ in view of formula . Hence it is clear that  holds with $$\label{DD_mrepr_h2} L=\begin{pmatrix} \frac{1}{h_0^{**}} & 0 & \\ 0 &\frac{1}{h_1^{**}} &\ddots\\ &\ddots &\ddots\\ \end{pmatrix}\begin{pmatrix} h_0 & 0 & &&\\ {B}_{1}h_0 & h_1 &0 &&\\ C_2h_0 &{B}_2h_1 &h_2 &0&\\ 0&C_3h_1 &{B}_3h_2 &h_3 &\ddots\\ & & \ddots&\ddots &\ddots\\ \end{pmatrix}.$$ Thus we arrive at  after simple computations. Finally, it remains to see that $$\begin{split} (h_{n+2}^{**})^2=&[P_{n+2}^{**},P_{n+2}^{**}]_2=[t^2P_{n}^{**}(t),P_{n+2}^{**}(t)]_2=(P_{n}^{**},P_{n+2}^{**})_0\\ =&(P_n+B_nP_{n-1}+C_nP_{n-2},P_{n+2}+B_{n+2}P_{n+1}+C_{n+2}P_n)_0\\ =&C_{n+2}h_n^2, \end{split}$$ which gives . 
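The $2\times 2$ linear system above uniquely determines the coefficients $B_n$ and $C_n$ once the bracket values $[P_j,1]_2$ and $[P_j,t]_2$ are known. A minimal numerical sketch (the inner-product values below are hypothetical placeholders, since the true values depend on the measure $\mu$ and the free parameters $s_0^{**}$, $s_1^{**}$, $s_2^{**}$):

```python
import numpy as np

# hypothetical values of [P_j(t), 1]_2 and [P_j(t), t]_2 for j = n, n-1, n-2
bracket_1 = {"n": 0.7, "n-1": 1.3, "n-2": 0.5}   # [P_j, 1]_2
bracket_t = {"n": -0.2, "n-1": 0.4, "n-2": 1.1}  # [P_j, t]_2

# the system: [P_n, .]_2 + B_n [P_{n-1}, .]_2 + C_n [P_{n-2}, .]_2 = 0
M = np.array([[bracket_1["n-1"], bracket_1["n-2"]],
              [bracket_t["n-1"], bracket_t["n-2"]]])
rhs = -np.array([bracket_1["n"], bracket_t["n"]])

# the determinant of M is exactly d_n**; it must be nonzero for solvability
assert abs(np.linalg.det(M)) > 1e-12
B_n, C_n = np.linalg.solve(M, rhs)
```

With genuine bracket values, the resulting $B_n$, $C_n$ plug directly into $P_n^{**}=P_n+B_nP_{n-1}+C_nP_{n-2}$; the nonvanishing of the determinant is the same condition $d_n^{**}\neq 0$ used in the proof.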
The next step will be to establish a direct connection between the matrices $J_{mon}$ and $J_{mon}^{**}$ associated with the monic orthogonal polynomial sequences $\{P_n\}_{n=0}^{\infty}$, $\{P_n^{**}\}_{n=0}^{\infty}$, respectively.\ Let $$L_{mon}= \begin{pmatrix} 1 & 0& &&\\ B_1 & 1 & 0 &&\\ C_2 &B_2 & 1 &0&\\ 0 & C_3 &B_3 &1 &\ddots\\ && &\ddots &\ddots\\ \end{pmatrix}$$ be an infinite matrix such that $ P^{**} = L_{mon} P$, where $P^{**}= (P_{0}^{**}, P_{1}^{**}, \dots )^{\top}$ and $P= (P_{0}, P_{1}, \dots )^{\top}.$ At the same time, from the equality $$[ t^{2}P_{n}(t),P_{m}^{**}(t)]_{2}= [P_{n}(t),P_{m}^{**}(t)]_{0}= 0, \quad m= 0, 1, \dots , n-1,$$ one concludes that $$t^{2}P_{n}(t) = P_{n+2}^{**}(t)+ D_{n+1}P_{n+1}^{**}(t) + E_{n+1}P_{n}^{**}(t),\quad E_{n+1}\neq 0,\quad n= 0, 1, \dots.$$ The above connection formula reads in a matrix form as $t^{2} P = U_{mon}P^{**} $, where $$U_{mon} ^{\top} = \begin{pmatrix} E_1 & 0 &&\\ D_1 & E_2 & 0 &&\\ 1 & D_2 & E_3 & 0 &&\\ 0 & 1 & D_3 &E_4&0 &&\\ 0 & 0 & 1& D_4 & E_5&\ddots &&\\ && &\ddots &\ddots&\ddots\\ \end{pmatrix}$$ Thus, we can state the following. We have that $$J_{mon}^{2} = U_{mon} L_{mon},$$ $$J_{mon}^{**}= L_{mon } U_{mon}.$$ Notice that $$t^{2} P= U_{mon} P^{**}= U_{mon} L_{mon} P.$$ Thus, one sees that $$J_{mon}^{2} = U_{mon} L_{mon}.$$ On the other hand, we have $$t^{2} P^{**} = L_{mon} t^{2} P = L_{mon} U_{mon} P^{**}.$$ As a consequence, one gets $$J_{mon}^{**}= L_{mon } U_{mon}.$$ Notice that this is the analogue for pentadiagonal matrices of the Darboux transformation with parameter considered in Section 3. 
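The band structure behind these factorizations can be checked numerically on truncated matrices: with $L_{mon}$ unit lower triangular carrying the bands $B_n$ and $C_n$, and $U_{mon}$ upper triangular carrying $E_n$, $D_n$ and $1$, both products $U_{mon}L_{mon}$ and $L_{mon}U_{mon}$ are pentadiagonal. A sketch with arbitrary placeholder coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
B, C, D, E = (rng.uniform(0.5, 1.5, size=n) for _ in range(4))

# truncated L_mon: unit lower triangular with bands B_n (first sub-diagonal)
# and C_n (second sub-diagonal); the values here are arbitrary placeholders
L = np.eye(n) + np.diag(B[:n - 1], -1) + np.diag(C[:n - 2], -2)
# truncated U_mon: upper triangular with diagonal E_n and bands D_n and 1
U = np.diag(E) + np.diag(D[:n - 1], 1) + np.diag(np.ones(n - 2), 2)

UL, LU = U @ L, L @ U
off = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) > 2
assert not UL[off].any() and not LU[off].any()  # both products are pentadiagonal
```

This is the pentadiagonal analogue of the tridiagonal Darboux step: swapping the order of the lower and upper factors transforms $J_{mon}^{2}$ into $J_{mon}^{**}$ without leaving the pentadiagonal class.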
Moreover, the structure of the matrix representing the multiplication operator by $t^{2}$ with respect to the orthonormal polynomial basis associated with the inner product $[\cdot , \cdot]_{2}$ is stated in terms of the corresponding UL factorization of the pentadiagonal matrix $J^{2}$.\ [**Acknowledgments**]{} The research of MD is supported by the European Research Council under the European Union Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no. 259173. The research of FM has been supported by Dirección General de Investigación, Ministerio de Economía y Competitividad of Spain, grant MTM2012-36732-C03-01. [99]{} M. Alfaro, F. Marcellán, M. L. Rezola, A. Ronveaux, *On orthogonal polynomials of Sobolev type: Algebraic properties and zeros*, SIAM J. Math. Anal. 23 (3) (1992), 737–757. M. Alfaro, F. Marcellán, A. Peña, M. L. Rezola, *When do linear combinations of orthogonal polynomials yield new sequences of orthogonal polynomials?*, J. Comput. Appl. Math. 233 (2010), no. 6, 1446–1452. M. Alfaro, A. Peña, M. L. Rezola, F. Marcellán , *Orthogonal polynomials associated with an inverse quadratic spectral transform*, Comput. Math. Appl. 61 (2011), no. 4, 888–900. D. Beghdadi, P. Maroni, *On the inverse problem of the product of a semi-classical form by a polynomial*, J. Comput. Appl. Math. 88 (1998), no. 2, 377–399. A. Branquinho, F. Marcellán, *Generating new classes of orthogonal polynomials*, Internat. J. Math. Math. Sci. 19 (1996), no. 4, 643–656. A. Bultheel and M. Van Barel, *Formal orthogonal polynomials and rational approximation for arbitrary bilinear forms*, Tech. Report TW 163, Department of Computer Science, KU Leuven (Belgium), 1991. M. I. Bueno, F. Marcellán, *Darboux transformation and perturbation of linear functionals*, [Linear Algebra Appl.]{}, 384 (2004), 215–242. M. I. Bueno, F. Marcellán, *Polynomial perturbations of bilinear functionals and Hessenberg matrices,* Linear Algebra Appl. 414 (2006), no. 1, 64–83. T. S. 
Chihara, *An Introduction to Orthogonal Polynomials*. Mathematics and its Applications, Vol. 13. Gordon and Breach Science Publishers, New York–London–Paris, 1978. M. Derevyagin, *On the relation between Darboux transformations and polynomial mappings*, J. Approx. Theory 172 (2013), 4–22. M. Derevyagin, V. Derkach, *Darboux transformations of Jacobi matrices and Padé approximation,* Linear Algebra Appl. 435 (2011), no. 12, 3056–3084. A. J. Durán, *A generalization of Favard’s theorem for polynomials satisfying a recurrence relation,* J. Approx. Theory 74 (1993), no. 1, 83–109. D. Evans, L. L. Littlejohn, F. Marcellán, C. Markett, A. Ronveaux, *On recurrence relations for Sobolev polynomials*, SIAM J. Math. Anal. 26 (2) (1995), 446–467. J. Geronimus, *On polynomials orthogonal with regard to a given sequence of numbers*, Comm. Inst. Sci. Math. Mec. Univ. Kharkoff \[Zapiski Inst. Mat. Mech.\] (4) 17 (1940), 3–18 (Russian). C. Hounga, M. N. Hounkonnou, A. Ronveaux, *New families of orthogonal polynomials*, J. Comput. Appl. Math. 193 (2006), no. 2, 474–483. P. Iliev, *Krall-Laguerre commutative algebras of ordinary differential operators*, Ann. Mat. Pura Appl. (4) 192 (2013), no. 2, 203–224. K. H. Kwon, D. W. Lee, F. Marcellán, S. B. Park, *On kernel polynomials and self-perturbation of orthogonal polynomials*, Ann. Mat. Pura Appl. (4) 180 (2001), no. 2, 127–146. P. Maroni, *Sur la suite de polynômes orthogonaux associée à la forme $u=\delta_{c}+\lambda (x-c)^{-1}$,* Period. Math. Hungar. 21 (1990), no. 3, 223–248 (French). P. Maroni, I. Nicolau, *On the inverse problem of the product of a form by a polynomial: the cubic case*, Appl. Numer. Math. 45 (2003), no. 4, 419–451. P. Maroni, R. Sfaxi, *Diagonal orthogonal polynomial sequences*, Methods Appl. Anal. 7 (2000), no. 4, 769–791. J. Shohat, *On mechanical quadratures, in particular, with positive coefficients*, Trans. Amer. Math. Soc. 42 (1937), no. 3, 461–496. V. Spiridonov, A. 
Zhedanov, *Discrete Darboux transformations, the discrete-time Toda lattice, and the Askey-Wilson polynomials*, Methods Appl. Anal. 2 (1995), no. 4, 369–398. V. Spiridonov, A. Zhedanov, *Discrete-time Volterra chain and classical orthogonal polynomials*, J. Phys. A: Math. Gen. 30 (1997), 8727–8737. A. Zhedanov, *Rational spectral transformations and orthogonal polynomials*, J. Comput. Appl. Math. 85 (1997), no. 1, 67–86.
--- author: - 'Simon K. Schnyder' - Yuki Tanaka - 'John J. Molina' - Ryoichi Yamamoto bibliography: - 'supplementalMaterials.bib' title: 'Collective motion of cells crawling on a substrate: roles of cell shape and contact inhibition' --- **Supplemental Materials:\ Collective motion of cells crawling on a substrate: roles of cell shape and contact inhibition** Derivation of the model from explicit crawling motion ===================================================== The cell migration mechanism presented in this letter can be derived from a coarse-graining of a two-stage crawling cycle that is repeated periodically with period $\Delta T$, see \[fig:CellMigrationCycle\_smaller\]. In the first stage of the crawling motion $0 < t < \Delta T/2$, the pseudopod is pushed forward against friction with the substrate by an extensional force, $\vec F_\text{fene}+\vec F_\text{mig}$, which acts on the two disks with opposite signs, while the cell body adheres to the substrate with $\zeta_b=\infty$, $$\begin{aligned} \vec v_b(t) =0,\ \ \ \vec v_f(t)=\frac{\vec F_\text{fene}+\vec F_\text{mig}}{\zeta_f}.\end{aligned}$$ For times $\Delta T/2 < t < \Delta T$, the cell is in the second stage, in which the pseudopod adheres to the substrate ($\zeta_f=\infty$) and the cell body is drawn in with a contractile force $\vec F_\text{fene}$. 
$$\begin{aligned} \vec v_b(t) =\frac{-\vec F_\text{fene}}{\zeta_b},\ \ \ \vec v_f(t)=0.\end{aligned}$$ The average velocities of back and front disks over the cycle $\Delta T$ are then given by $$\begin{aligned} \bar{\vec v}_f &= \frac{1}{\zeta_f\Delta T}\int_{0}^{\Delta T/2}[\vec F_\text{fene}(\vec r_{bf}(t))+\vec F_\text{mig}(\vec r_{bf}(t))] dt,\\ \bar{\vec v}_b &= -\frac{1}{\zeta_b\Delta T}\int_{\Delta T/2}^{\Delta T}\vec F_\text{fene}(\vec r_{bf}(t))dt.\end{aligned}$$ In the limit $\Delta T \to 0$, $\vec r_{bf}(t)$ does not change during the cycle; thus, $$\begin{aligned} \vec v_f(t) &= \frac{1}{2\zeta_f}[\vec F_\text{fene}(\vec r_{bf}(t))+\vec F_\text{mig}(\vec r_{bf}(t))],\\ \vec v_b(t) &= -\frac{1}{2\zeta_b}\vec F_\text{fene}(\vec r_{bf}(t)).\end{aligned}$$ Eq. (2) is finally obtained by using $\zeta=2\zeta_b=2\zeta_f$ and adding the cell-cell interaction force $F_\text{WCA}$. The originally alternating application of the two forces is thus replaced in Eq. (2) by applying both forces at the same time. This can be seen as the assumption that in a real cell, contraction, expansion, as well as fixing and unfixing of body and pseudopod happen at the same time or in close succession. 
To allow for different cell shapes, $\sigma_b$ and $\sigma_f$ can be different. For the interaction of a pair of disks $\alpha$ and $\beta$ ($\alpha, \beta \in \{b,f\}$) of two different cells at distance $\vec r$, the interaction diameter is given by $\sigma_{\alpha\beta} = (\sigma_\alpha + \sigma_\beta)/2$, the energy scale is given by ${\ensuremath{\varepsilon}}$, and the force by $$\begin{aligned} \vec F_\text{WCA}(r) = \begin{cases} -24{\ensuremath{\varepsilon}}\left[2\left(\frac{\sigma_{\alpha\beta}}{r}\right)^{12} - \left(\frac{\sigma_{\alpha\beta}}{r}\right)^6\right]\vec r/r^2, & r < r{\ensuremath{_\mathrm{cut}}}, \\ 0, & r \geq r{\ensuremath{_\mathrm{cut}}}. \end{cases}\end{aligned}$$ The cutoff at $r{\ensuremath{_\mathrm{cut}}}= 2^{1/6} \sigma_{\alpha\beta}$ makes the present inter-cellular forces purely repulsive, but inclusion of attractive terms modelling inter-cellular adhesion would be straightforward. Cell area ========= The area $A$ of a cell with a back disk of diameter $\sigma_b$, a front disk of diameter $\sigma_f$ and a distance $r_{bf}$ between the particles is given by $$\begin{aligned} A &= A_1 + A_2 - \text{overlap} \nonumber\\ & = \frac{\pi}{4} (\sigma_b^2 + \sigma_f^2) - \frac{\sigma_b^2}{4} \cdot \cos^{-1} \left(\frac{4r_{bf}^2 + \sigma_b^2 - \sigma_f^2}{4 r_{bf}\sigma_b}\right) - \frac{\sigma_f^2}{4} \cdot \cos^{-1} \left(\frac{4r_{bf}^2 - \sigma_b^2 + \sigma_f^2}{4 r_{bf}\sigma_f}\right) \nonumber\\ &+ \frac{1}{8} \sqrt{(-2r_{bf} + \sigma_b + \sigma_f)(2r_{bf} + \sigma_b + \sigma_f)} \cdot \sqrt{(2r_{bf} + \sigma_b - \sigma_f)(2r_{bf} - \sigma_b + \sigma_f)}\end{aligned}$$ We fix the diameters of the disks such that at the steady-state distance ${\ensuremath{r^\text{ss}_{bf}}}$, the area of the cells is constant, $A = 0.29 R^2_\text{max}$, regardless of the shape, i.e., the shape anisotropy $\sigma_b/\sigma_f$. For the three shapes, the cell sizes are given in Table SI. The length of the cells is then always of order $R_\text{max}$. 
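The cell area is the standard two-circle (lens) overlap formula written in disk diameters. A direct implementation (a sketch; the function name and structure are ours), including the disjoint and fully-contained limiting cases:

```python
import math

def cell_area(sigma_b, sigma_f, r_bf):
    """Area of the union of two disks with diameters sigma_b and sigma_f
    whose centres are a distance r_bf apart."""
    R1, R2, d = sigma_b / 2.0, sigma_f / 2.0, r_bf
    A1, A2 = math.pi * R1 ** 2, math.pi * R2 ** 2
    if d >= R1 + R2:              # disjoint disks: no overlap
        return A1 + A2
    if d <= abs(R1 - R2):         # one disk contained in the other
        return max(A1, A2)
    # standard circle-circle intersection (lens) area
    lens = (R1 ** 2 * math.acos((d * d + R1 * R1 - R2 * R2) / (2 * d * R1))
            + R2 ** 2 * math.acos((d * d + R2 * R2 - R1 * R1) / (2 * d * R2))
            - 0.5 * math.sqrt((-d + R1 + R2) * (d + R1 + R2)
                              * (d - R1 + R2) * (d + R1 - R2)))
    return A1 + A2 - lens
```

At tangency ($r_{bf} = (\sigma_b + \sigma_f)/2$) the overlap vanishes and the area reduces to $\frac{\pi}{4}(\sigma_b^2 + \sigma_f^2)$, which is a convenient sanity check when fixing the diameters to hold $A$ constant.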
  $\sigma_b/\sigma_f$   $\sigma_b$           $\sigma_f$           Cell length
  --------------------- -------------------- -------------------- --------------------
  1.25                  0.55$R_\text{max}$   0.44$R_\text{max}$   0.68$R_\text{max}$
  0.80                  0.44$R_\text{max}$   0.55$R_\text{max}$   0.68$R_\text{max}$
  0.44                  0.27$R_\text{max}$   0.60$R_\text{max}$   0.62$R_\text{max}$

  : Size parameters for the cells. Cell length is given as the length of the cell in its steady state $(\sigma_b + \sigma_f)/2 + {\ensuremath{r^\text{ss}_{bf}}}$. \[table:sizes\]

Cell noise ========== The dynamics of individual cells are often observed to fluctuate in time and space. These apparently random differences and fluctuations can have important biological and medical consequences. We implemented cell noise to allow for comparison to Keratocytes. The noise, which is applied to all cell disks, is given by the force $$\begin{aligned} \vec F_\text{noise} = \sqrt{2d}\, \vec\xi(t)\end{aligned}$$ with both components of $\vec \xi(t) = (\xi_x(t), \xi_y(t))$ being normally distributed random variables obeying ${{\ensuremath{\left\langle \xi_i(t) \right\rangle}}} = 0$ and ${{\ensuremath{\left\langle \xi_i(t)\xi_j(t') \right\rangle}}} = \delta_{ij} \delta(t - t')$ with Kronecker-$\delta$ $\delta_{ij}$ and $\delta$-function $\delta(t - t')$. Since the simulation is performed with a finite time step $\Delta t$, the force per timestep is $$\begin{aligned} \vec F_\text{noise} = \sqrt{2d/\Delta t}\, \vec\xi(t).\end{aligned}$$ The timestep is $1.9\cdot 10^{-4}\tau_\text{mig}$. The black line in Fig. 3d) was calculated for $d = 8.0\cdot 10^{-4} R_\text{max}{\ensuremath{v^\text{ss}}}$. When noise is added, which is the more realistic situation, the order parameter becomes strongly dependent on the area fraction. At small densities, the dynamics are dominated by noise, whereas at large densities collisions between cells play a strong role. 
This leads to an increase of the order parameter with area fraction in the case of $\sigma_b/\sigma_f = 0.44$ and the order parameter saturates near its noise-free value only at very high densities, see Fig. 3d) of the letter and \[fig:with\_noise\]. This behavior is qualitatively the same as observed in migrating sheets of epithelial cells such as goldfish keratocytes [@Szabo2006]. ![ Simulation snapshots of CIL cells with cell noise in the steady states at different area fractions. The cell velocities are shown as arrows, the color hue indicates deviation from the average directions, and slower cells are lighter in color. Cells are randomly migrating at a low density (a), some coherency appears at an intermediate density (b), and highly coherent migration is observed at a high density (c). Animation videos for these systems are also available, which compare well with similar experimental videos for Keratocytes [@Szabo2006]. []{data-label="fig:with_noise"}](snapshotPaperPlot_Keratocytes.png){width="0.8\columnwidth"} Genetically identical cells can still have different sizes and structures. This type of cellular noise arising from individual differences is neglected in the present simulations. It could however be easily implemented by introducing some polydispersity in model parameters such as $\sigma_b$, $\sigma_f$, $R_\text{max}$, $m$, and $\kappa$.
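As a sanity check of the coarse-grained mechanics above, the relaxation of the disk separation to the steady-state extension ${\ensuremath{r^\text{ss}_{bf}}}$ can be reproduced in a one-dimensional toy model. The FENE-type force below and all parameter values are illustrative assumptions (the letter defines $\vec F_\text{fene}$ and $\vec F_\text{mig}$ precisely); the sketch only shows that the overdamped dynamics relax to the separation at which the net force vanishes:

```python
# Toy 1D relaxation of the disk separation r_bf: an assumed FENE-type
# contraction force against a constant migration force. All parameter
# values and force forms are illustrative placeholders.
k, R_max, f0, zeta = 1.0, 1.0, 0.8, 1.0

def net_force(r):
    """Net extensional force on the separation r (assumed form)."""
    f_fene = k * r / (1.0 - (r / R_max) ** 2)   # contraction, diverges at R_max
    return f0 - f_fene                          # migration minus contraction

# steady-state separation: the root of net_force, found by bisection
lo, hi = 1e-9, R_max - 1e-9
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if net_force(mid) > 0 else (lo, mid)
r_ss = 0.5 * (lo + hi)

# explicit Euler integration of zeta * dr/dt = net_force(r)
r, dt = 0.1, 1e-3
for _ in range(50000):
    r += dt * net_force(r) / zeta
```

Because the FENE force diverges as $r \to R_\text{max}$, the separation can never exceed the maximum extension, and the integration converges monotonically to the balance point marked by the grey line in the figure above.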
--- abstract: | With an increasing number of users sharing information online, the privacy implications of such actions are a major concern. For explicit content, such as user profile or GPS data, devices (mobile phones) as well as web services (facebook) offer to set privacy settings in order to enforce the users’ privacy preferences. We propose the first approach that extends this concept to image content in the spirit of a [*Visual Privacy Advisor*]{}. First, we categorize personal information in images into 68 image attributes and collect a dataset, which allows us to train models that predict such information directly from images. Second, we run a user study to understand the privacy preferences of different users w.r.t. such attributes. Third, we propose models that predict a user-specific privacy score from images in order to enforce the users’ privacy preferences. Our model is trained to predict the user-specific privacy risk and even outperforms the judgment of the users, who often fail to follow their own privacy preferences on image data. author: - | Tribhuvanesh Orekondy Bernt Schiele Mario Fritz\ Max Planck Institute for Informatics\ Saarland Informatics Campus\ Saarbrücken, Germany\ [{orekondy,schiele,mfritz}@mpi-inf.mpg.de]{} bibliography: - 'paper.bib' title: | Towards a Visual Privacy Advisor:\ Understanding and Predicting Privacy Risks in Images --- Introduction ============ As more people obtain access to the internet, a large amount of personal information becomes accessible to other users, web service providers and advertisers. To counter these problems, more and more devices (mobile phone) and web services (facebook) are equipped with mechanisms where the user can specify privacy settings to comply with his/her personal privacy preference. ![Users often fail to enforce their privacy preferences when sharing images online. 
We propose a first *Visual Privacy Advisor* to provide user-specific privacy feedback.[]{data-label="fig:intro_ampel"}](fig/teaser2.pdf){width="\linewidth"} While this has proven useful for explicit and textual information, we ask how this concept can generalize to visual content. While users can be asked (as we also do in our study) to specify how comfortable they are with releasing a certain type of image content, the actual presence of such content is implicit in the image and not readily available either to a privacy preference enforcing mechanism or to the user. In fact, as our study shows, people frequently misjudge the privacy-relevant information content in an image, which leads to a failure to enforce their own privacy preferences. Hence, we work towards a [*Visual Privacy Advisor*]{} () that helps users enforce their privacy preferences and prevents leakage of private information. We approach this complex problem by first making personal information explicit, categorizing it into 68 image attributes. Based on such attribute predictions and user privacy preferences, we infer a privacy score that can be used to prevent unintentional sharing of information. Our model is trained to predict the user-specific privacy risk and, interestingly, it outperforms human judgment on the same images. Our main contributions in this paper are as follows:

- To the best of our knowledge, we are the first to formulate the problem of identifying a diverse set of personal information in images and personalizing predictions to users based on their privacy preferences.
- We provide a sizable dataset[^1] of 22k images annotated with 68 privacy attributes.
- We conduct a user study and analyze the diversity of users’ privacy preferences as well as the degree to which they succeed in following their privacy preferences on image data.
- We propose the first model for Privacy Attribute Prediction. 
We also extend it to directly estimate user-specific privacy risks. Finally, we show that our models outperform users in following their own privacy preferences on images. Related Work ============ Privacy is becoming an increasing concern [@young2009information; @dwyer2007trust], especially due to the rise of social networking websites allowing individuals to share personal information without explaining the consequences of these actions. In this section, we discuss work that highlights these concerns and explores consequences of such actions. We also discuss literature that deals with identifying private content in images and text. [ **Identifying Personal Information** ]{} There is a comparatively small body of work that aims to recognize personal information. Aura [@aura2006scanning] explore this in the context of electronic documents, where they propose a tool to remove user names, identifiers, organization names and other private information from text-based documents with metadata. [@bier2014detection; @geng2008using] study this in the context of textual email content. Bier [@bier2014detection] model this as a privacy-classification problem, whereas Geng [@geng2008using] detect four types of personal information: email addresses, telephone numbers, addresses and money. The closest related work to ours is [@tonge2015privacy], who are also motivated by unwanted disclosure and privacy violation on social media. They approach the task as classifying if an image is public or private based on features extracted from a Convolutional Neural Network and user-generated tags for the image. However, we later show that users have different notions of privacy, and hence the task cannot be modeled as a binary classification problem. Instead, we first tackle the more principled problem of predicting the privacy-sensitive elements present in images and use these in combination with users' preferences to estimate privacy risk. 
[ **Leakage and De-anonymization** ]{} A problem closely related to ours is *privacy leakage*, which deals with uncovering and analyzing methods leading to disclosure of personal information, rather than detection before such incidents. [@krishnamurthy2009leakage; @Krishnamurthy2011PrivacyLV] uncover privacy leakage when websites accidentally provide user information embedded in HTTP requests when contacting third-party aggregators. As leakages can be user-intended, Yang [@Yang2013AppIntentAS] explore this case in Android applications. Some works [@Narayanan2009DeanonymizingSN; @veiga2016privacy] study the case where users' identities, locations or other details can be de-anonymized when aggregating anonymized data across multiple social networks. In contrast to these, our approach is concerned with image content and privacy preferences. [ **Privacy Preferences and Social Networks** ]{} [@lenhart2007teens; @gross2005information; @krishnamurthy2008characterizing] study types of personal information disclosed on social networking websites. Other tasks include preserving one’s privacy while using social networks [@guha2008noyb; @zhou2008preserving; @li2011findu] and exploring privacy settings [@fang2010privacy; @danezis2009inferring; @liu2011analyzing]. However, in our user study, apart from collecting and analyzing users' privacy preferences for images, we additionally use them to train models based on image data. [ **Privacy and Computer Vision** ]{} Several works explore detecting individual privacy attributes such as license plates [@Zhou2012PrincipalVW; @Zhang2006LearningBasedLP; @Chang2004AutomaticLP], age estimation from facial photographs [@Bauckhage2010AgeRI], social relationships [@Wang2010SeeingPI], face detection [@Sun2017FaceDU; @Viola2001RobustRF], landmark detection [@Zheng2009TourTW] and occupation recognition [@shao2013you]. 
Apart from detecting attributes, some works introduce new privacy challenges in vision such as adversarial perturbations [@moosavi2016universal; @papernot2016distillation], privacy-preserving video capture [@aditya2016pic; @pittaluga2015privacy; @neustaedter2006blur; @raval2014markit], person re-identification [@ahmed2015improved; @mclaughlin2016recurrent], avoiding face detection [@wilber2016can; @harvey2012cv], full body re-identification [@oh2016faceless] and privacy-sensitive lifelogging [@Hoyle:2015:SLP:2702123.2702183; @screenavoider2016chi]. In this work, we present a new challenge in computer vision, designed to help users assess privacy risk before sharing images on social media, that encompasses a broad range of personal information in a single study. [ **Datasets for Privacy Tasks** ]{} Crucial to exploring privacy tasks are images revealing private details such as faces, names or opinions. However, many available datasets do not contain enough such images to effectively study privacy tasks. Although some datasets [@gallagher_cvpr_08_clothing] contain such information, they are either too small or not representative of images on social networks. The closest candidate is the PIPA dataset [@Zhang_2015_CVPR] with 37,107 Flickr images, proposed for people recognition in an unconstrained setting; however, it does not include images covering many other privacy aspects such as license plates, political views or official identification documents. In this paper, we introduce the first dataset of real-life images capturing important privacy-relevant attributes. The Visual Privacy (VISPR) Dataset {#sec:papds} ================================== Mobile devices and social media platforms provide privacy settings, so that users can communicate their privacy preferences on the disclosure of different types of textual information. How does this concept transfer to image data? 
We need to establish a similar concept of privacy-relevant information types, but now for [*images*]{}. This will allow us to query users about their privacy preferences on the disclosure of various information types, as we will do in the next section. Therefore, we propose in this section a categorization of personal information into 68 privacy attributes such as gender, tattoo, email address or fingerprint. We collect a dataset of $\sim$22k images that allows the study of privacy-relevant attributes in images and the training of automatic recognizers. ![image](fig/occurrences.pdf){width="\textwidth"} ### Privacy Attributes {#privacy-attributes .unnumbered} As motivated before, we need to categorize different types of personal content in images, akin to the privacy settings deployed in today’s devices and services. Therefore, we define a list of *privacy attributes* an image can disclose. The primary challenge here is the lack of a standard list of privacy attributes. We thus compile attributes from multiple sources. First, we consolidate relevant attributes from the guidelines for handling *Personally Identifiable Information* [@mccallister2010guide] provided in the EU Data Protection Directive 95/46/EC [@directive199595] and the US Privacy Act of 1974. Second, we add relevant attributes from the rules prohibiting sharing of personal information on various social networking websites (e.g., Twitter, Reddit, Flickr). Finally, we manually examine images that are shared on these websites and identify additional attributes. As a result, we draft an initial set of 104 potential privacy attributes. As discussed in the next section, these are reduced to 68 attributes (see ) after pruning. ### Annotation Setup {#annotation-setup .unnumbered} The annotation was set up as a multi-label task, with three annotators annotating independent sets of images. A web-based tool was provided to select multiple options corresponding to the 104 privacy attributes per image. 
Additionally, annotators could mark if they were unsure about their annotation. In case none of the provided privacy labels applied, they were instructed to label the image as *safe*, which we use as one of our privacy attributes. Images were discarded if annotators were unsure, or if the image contained a copyright watermark, was a historic photograph, contained primarily non-English text, or was of poor quality. ### Data Collection and Annotation Procedure {#data-collection-and-annotation-procedure .unnumbered} In this section, we discuss the steps taken to obtain the final set of 22k images annotated with 68 privacy attributes. [ **Seed Sample** ]{} We first gather 100k random images from the OpenImages dataset [@openimages], a collection of $\sim$9 million Flickr images. Using the definition and examples of the privacy attributes, the annotators annotate 10,000 images randomly selected from the downloaded images. [ **Handling Imbalance** ]{} Based on the label statistics from these 10,000 images, we add images to balance attributes with fewer than 100 occurrences. These additional images are added by querying relevant OpenImages labels that are likely representative of the underrepresented privacy attributes. [ **Extended Search for Rare Classes** ]{} In spite of using the above strategy, 37 attributes contain under 40 images. We manually add images for these attributes by querying relevant keywords on Flickr. We do not add multiple images from the same album. For credit cards, we manually obtain $\sim$50 high-quality images from Twitter, which are the only non-Flickr images in our dataset. [ **Selected Attributes** ]{} After annotating the dataset with the initial 104 labels, we discard 19 labels because either images were difficult to obtain manually (iris/retinal scan, insurance details) or the set of images did not clearly represent the attribute. We additionally merge groups of attributes which capture similar concepts (work and home phone number). 
In the end, we obtain a dataset of 22,167 images, each annotated with one or more of 68 privacy attributes.

[ **Curation** ]{} To reduce labeling mistakes, we organize the dataset into batches of images, with each batch corresponding to a privacy attribute. We curate attribute batches which either contain fewer than 500 images or are considered sensitive by users.

[ **Splits** ]{} We perform a random 45-20-35 split with 10,000 training, 4,167 validation and 8,000 test images. The final statistics of our dataset are presented in . The labels and their distribution in our dataset are shown in .

  Split              All       Train    Val      Test
  ------------------ --------- -------- -------- --------
  Images             22,167    10,000   4,167    8,000
  Labels             115,742   51,799   22,026   41,917
  Avg Labels/Image   5.22      5.18     5.29     5.24
  Max Images/Label   10,460    4,710    1,969    3,781
  Min Images/Label   44        20       7        12

  : Dataset Statistics[]{data-label="tab:dataset_statistics"}

Understanding Privacy Risks {#sec:papds_user_preferences}
===========================

In this section, we explore how users’ personal privacy preferences relate to the attributes in Section \[sec:privacy\_preferences\]. Furthermore, we study how well users enforce their own privacy preferences when making judgments based on image data in Section \[sec:user\_study\_img\].

Understanding Users’ Privacy Preferences {#sec:privacy_preferences}
----------------------------------------

In this section, we study the degree to which various users are sensitive to the privacy attributes discussed in Section \[sec:papds\].

[ **User Study** ]{} We present each user with a series of 72 questions in a randomized order. Each question corresponds either to exactly one of the 67 privacy attributes (excluding the *safe* attribute) or to a control question. In each question, the users are asked how much they would find their privacy violated if they accidentally shared details of a particular attribute publicly online.
Responses for each question are collected on a scale of 1 to 5, where: 1 - Privacy is not violated; 2 - Privacy is slightly violated; 3 - Privacy is somewhat violated; 4 - Privacy is violated; 5 - Privacy is extremely violated. We treat these responses as the user’s privacy preference for this particular privacy attribute.

[ **Participants** ]{} We collect responses from 305 unique AMT workers in this survey. Of the 305 respondents, 59% were male and 78% were under 40 years of age, with 57% from the USA and 38% from India. Additionally, 75% were regular Facebook users, while 80% and 44% reported to be aware of and have used Twitter and Flickr at least once.

![image](fig/user_profiles.pdf){width="\textwidth"}

[ **Analysis** ]{} In order to understand the diversity in users’ privacy preferences, we first cluster the users based on their preferences into *user privacy profiles*. We cluster using $K$-means and choose $K$ based on the silhouette score [@rousseeuw1987silhouettes], which considers both the distance between points within a cluster and the distance between points and their neighbouring cluster. We choose $K=30$ as this yields the best silhouette score. This enables visualizing the preferences over the attributes, as seen in , where each row represents the preferences for one of the 30 user profiles (ordered by the number of users associated with the profile). We make the following observations from this study:

- Users show a wide variety of preferences. This supports the need for user-specific privacy risk predictions.
- The *majority* of users (Profiles 1-4, 7-11, 13-14, 18-20 in ) display a similar order of sensitivity to the attributes.
- A *minority* of users (Profiles 21-30) are particularly sensitive to certain attributes, such as their political view, sexual orientation or religion.
- The *uniformly-sensitive users* (Profiles 5, 6, 12, 15, 17) are sensitive to all attributes, although to different degrees.
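To illustrate the model-selection criterion, here is a minimal numpy-only sketch of the mean silhouette coefficient. The paper clusters 305 users’ preference vectors with $K$-means; the 2-D toy data and the two candidate labelings below are purely illustrative:

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette coefficient over all points: for point i,
    a = mean intra-cluster distance, b = mean distance to the nearest
    other cluster, and s_i = (b - a) / max(a, b)."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    scores = []
    for i, li in enumerate(labels):
        same = labels == li
        same[i] = False
        if not same.any():              # singleton clusters score 0
            scores.append(0.0)
            continue
        a = D[i, same].mean()
        b = min(D[i, labels == lj].mean() for lj in set(labels) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Toy "preference vectors": two well-separated groups.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
good = np.array([0, 0, 1, 1])   # clustering matching the groups
bad = np.array([0, 1, 0, 1])    # clustering mixing the groups
print(silhouette(X, good) > silhouette(X, bad))  # True
```

Choosing $K$ then amounts to running $K$-means for several values of $K$ and keeping the one whose labeling maximizes this score.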
Users and Visual Privacy Judgment {#sec:user_study_img}
---------------------------------

In this study, we first ask participants to judge their personal privacy risk based on images representing an attribute (providing a visual privacy risk score) and afterwards ask for their actual privacy preferences for the same attribute (providing a desired, or explicit, privacy risk score). Hence, we study how well users can assess their personal privacy risks based on images.

[ **User Study** ]{} We split the survey into two parts. In the first part, the users are shown a group of 3-6 images. Given the sensitive nature of the attributes, we cannot obtain or ask users to rate their personal images, and hence use images from the dataset. They are asked how comfortable they are sharing such images publicly, assuming they are the subject in these images. Responses are collected on a scale of 1 to 5, where: 1 - Extremely comfortable; 2 - Slightly comfortable; 3 - Somewhat comfortable; 4 - Not comfortable; 5 - Extremely uncomfortable. Each group of images represents one of the 68 privacy attributes. In most cases, the attributes occur in isolation and are the most prominent visual cue in the image. We refer to these responses as the *human visual privacy score*. The second part is identical in questions and setting to the previous user study on privacy preferences. Each question is designed to obtain the privacy preference of the user for each attribute. As before, the user rates on a scale of 1 (Not Violated) to 5 (Extremely Violated). We refer to these responses as the *privacy preference score*.

[ **Participants** ]{} We split the study into two parts to prevent user fatigue. Each part contains only half of the attributes. We obtain 50 unique responses for this survey from AMT. In each of these parts, roughly 70% of the respondents were under 40 years of age, 57% were male and 87% were from the USA. Additionally, 80% responded that they use Facebook, 84% Twitter and 46% Flickr.
![Users are asked to rate on a scale of 1 (Not violated) to 5 (Extremely violated) how much an attribute affects their privacy. The $X$-axis denotes their desired privacy preference and the $Y$-axis denotes their evaluation of risk on images. The red markers indicate privacy attributes with highly underestimated or overestimated user ratings[]{data-label="fig:img_vs_attr"}](fig/img_vs_attr_2.pdf){width="0.7\linewidth"}

[ **Analysis** ]{} For each attribute, we compute the average privacy preference score and the average human visual privacy score, and visualize them as a scatter plot in . From the results, we observe:

- The off-diagonal data points show a clear inconsistency between the users’ desired privacy preferences and their judgment of privacy risk in images.
- For cases close to the diagonal, such as credit cards, passports and national identification documents, users display consistent behaviour on images and attributes.
- When photographs are natural scenes containing people or vehicles, users underestimate (below diagonal) the privacy score, such as in the case of family photographs or cars displaying license plate numbers. We speculate this reflects how commonly such personal photographs are shared online.
- Users overestimate (above diagonal) the privacy risk of some photographs showing their birth place or their name. We speculate this is because these photographs are often official documents, making users more cautious.

Predicting Privacy Risks {#sec:prs}
========================

In this section, we make a step towards our overall goal of a [*Visual Privacy Advisor*]{}. As illustrated in , we follow a paradigm similar to that of social networks, which define privacy risk based on both the content type and user-specific privacy settings. In our case, the content type is described by the (user-independent) attributes of the previous section. We combine these with the user-specific privacy preferences to determine if the image contains a privacy violation.
We describe our model for privacy attribute prediction in Section \[sec:pap\], followed by our approaches to personalized privacy risk prediction in Section \[sec:pprp\]. We conclude with a comparison of human judgment of privacy risks in images against the predictions of our proposed models in Section \[sec:hvm\].

![We learn an end-to-end model for user-specific privacy risk estimation.[]{data-label="fig:ppp_prcnn_overview"}](fig/prcnn_overview.pdf){width="1.0\linewidth"}

Privacy Attribute Prediction {#sec:pap}
----------------------------

In this section, we define the *user-independent* task of predicting privacy attributes from images. Then, we present and evaluate different methods on our new VISPR dataset.

[ **Task** ]{} We propose the task of *Privacy Attribute Prediction*, which is to predict one or more of 68 privacy attributes based on an image. This can be seen as a multi-label classification problem that recognizes different types of personal information in visual data and therefore has the potential to make this information explicit. shows multiple examples for this task. The task is challenging due to image diversity, subtle cues and high-level semantics.

[ **Metric** ]{} To assess the performance of methods for this task, we compute the Average Precision (AP) per class, which is the area under the Precision-Recall curve for the attribute. Additionally, the overall performance of a method is given by the Class-based Mean Average Precision (C-MAP), the average of the AP scores across all 68 attributes.
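The per-class metric can be computed directly from ranked scores. A minimal numpy sketch of one common formulation of AP, precision averaged at the ranks of the positive examples (the exact interpolation used in the evaluation is not specified here):

```python
import numpy as np

def average_precision(y_true, scores):
    """AP for one attribute: precision averaged at the ranks of the
    positive examples, in descending score order."""
    order = np.argsort(-scores)
    y = y_true[order]
    precision_at_k = np.cumsum(y) / np.arange(1, len(y) + 1)
    return float(precision_at_k[y == 1].mean())

def c_map(Y_true, S):
    """Class-based Mean Average Precision: mean AP over attribute columns."""
    return float(np.mean([average_precision(Y_true[:, a], S[:, a])
                          for a in range(Y_true.shape[1])]))

# One attribute, four images: positives ranked 1st and 3rd.
y = np.array([1, 0, 1, 0])
s = np.array([0.9, 0.8, 0.7, 0.1])
print(average_precision(y, s))  # (1/1 + 2/3) / 2 = 0.8333...
```

`c_map` then averages this quantity over the 68 attribute columns of the label matrix.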
[ @ c &gt; m[4cm]{} &gt; m[4cm]{} &gt; m[4cm]{} @]{} & True Positives & False Positives & False Negatives\ & & &\ Tattoo & ![image](fig/pap_qual/tattoo_tp_1.jpg){width="10.00000%"} ![image](fig/pap_qual/tattoo_tp_3.jpg){width="10.00000%"} & ![image](fig/pap_qual/tattoo_fp_1.jpg){width="10.00000%"} ![image](fig/pap_qual/tattoo_fp_2.jpg){width="10.00000%"} & ![image](fig/pap_qual/tattoo_fn_1.jpg){width="10.00000%"} ![image](fig/pap_qual/tattoo_fn_2.jpg){width="10.00000%"}\ Physical Disability & ![image](fig/pap_qual/pdis_tp_1.jpg){width="10.00000%"} ![image](fig/pap_qual/pdis_tp_2.jpg){width="10.00000%"} & ![image](fig/pap_qual/pdis_fp_1.jpg){width="10.00000%"} ![image](fig/pap_qual/pdis_fp_2.jpg){width="10.00000%"} & ![image](fig/pap_qual/pdis_fn_1.jpg){width="10.00000%"} ![image](fig/pap_qual/pdis_fn_2.jpg){width="10.00000%"}\ Landmark & ![image](fig/pap_qual/land_tp_1.jpg){width="10.00000%"} ![image](fig/pap_qual/land_tp_2.jpg){width="10.00000%"} & ![image](fig/pap_qual/land_fp_1.jpg){width="10.00000%"} ![image](fig/pap_qual/land_fp_2.jpg){width="10.00000%"} & ![image](fig/pap_qual/land_fn_1.jpg){width="10.00000%"} ![image](fig/pap_qual/land_fn_2.jpg){width="10.00000%"}\

  Training     Features    C-MAP
  ------------ ----------- -------
  SVM          CaffeNet    37.93
  SVM          GoogleNet   39.88
  SVM          Resnet-50   40.50
  Fine-tuned   CaffeNet    42.99
  Fine-tuned   GoogleNet   43.29
  Fine-tuned   Resnet-50   47.45

  : Accuracy of our methods given by Class-based Mean Average Precision, evaluated on the test set[]{data-label="tab:pap_cmap"}

[ **Methods** ]{} \[sec:pap\_methods\_eval\] We experiment with three types of visual features extracted from CNNs – CaffeNet [@jia2014caffe], GoogleNet [@szegedy2015going] and ResNet-50 [@he2016deep]. First, we train a linear SVM model using features from the layer preceding the last fully-connected layer of these CNNs. In a pilot study, we found that a multi-label SVM with smoothed hinge loss [@lapin2016loss] yields better results than SVM multi-label prediction [@crammer2003family] and a cross-entropy loss.
Second, we fine-tune the CNNs, initialized with pretrained ImageNet models, using a multi-label classification loss with sigmoid activations.

![image](fig/ap_class_scores.pdf){width="\textwidth"}

[ **Results** ]{} Quantitative results of our methods are shown in and qualitative results in (more are discussed in the supplementary material). We additionally present the Average Precision scores per class in . We make the following observations:

- The CNN performs well on attributes such as tickets, passports and medical treatment, which correlate well with scenes (airport, hospital). It also performs well in recognizing attributes which are human-centric, such as faces, gender and age.
- Fine-grained differences cause confusions, such as predicting student IDs for drivers licenses or differentiating between street signs and other signboards.
- We observe failure modes due to small details in the image, such as tattoos, marriage rings or a credit card in the hands of a child.
- Another shortcoming is the inability to recognize relationship-based attributes (e.g., personal or social relationships, vehicle ownership), which require reasoning over the interaction of multiple visual cues in an image rather than just their presence.

Personalizing Privacy Risk Prediction {#sec:pprp}
-------------------------------------

In the previous section, we discussed predicting privacy attributes in images, a task independent of user privacy preferences. In this section, we investigate *user-specific* visual privacy feedback. The goal is to compute a [*privacy risk score*]{} per image, representing the risk of privacy leakage for the particular user.

[ **Task** ]{} As illustrated in , we combine privacy attributes (user-independent) with the privacy preferences over these attributes (user-specific) to arrive at the privacy risk score. Since we allow users to give scores for each attribute based on their privacy preferences, we define the following [*privacy risk score*]{}.
[*Privacy Risk Score.*]{} For an image $\bm{x}$, attributes $\bm{y} \in [0, 1]^A$ and user preferences $\bm{u} \in [0, 5]^A$, the privacy risk score of image $\bm{x}$ containing attributes $\bm{y}$ for user $\bm{u}$ is $\max_a \bm{y}_a\bm{u}_a$. This represents the user-specific score of the most sensitive attribute that is most likely to be present in the image. As a result, the privacy risk score is comparable to the preference score: 1 (Not Sensitive) to 5 (Extremely Sensitive). As illustrated in , we compute the ground-truth privacy risk score based on the ground-truth attribute annotation for an image (represented as a $k$-hot vector $\bm{y} \in \{0, 1\}^A$) and the privacy preferences of users.

[ **Method: Attribute Prediction-Based Privacy Risk (AP-PR)** ]{} Our first method performs Attribute Prediction-Based Privacy Risk ([*AP-PR*]{}) estimation. As illustrated in , we combine the privacy attribute predictions and the profile’s privacy preferences (which we assume are provided by users at test time) to compute the privacy risk score as defined above.

[ **Method: Privacy Risk CNN (PR-CNN)** ]{} We propose a Privacy Risk CNN ([*PR-CNN*]{}) that does not directly use the user profile’s privacy preferences – but only indirectly via the ground-truth. The key observation is that AP-PR scores suffer from erroneous attribute predictions (see ). Therefore, we extend the privacy attribute prediction network with additional fully-connected layers to directly predict the privacy risk score. A parameter search yielded best results using two additional fully-connected hidden layers of 128 neurons each, followed by sigmoid activations. We fine-tune this network from our GoogleNet privacy attribute prediction network for the 30 user profiles described in Section \[sec:papds\_user\_preferences\], using a Euclidean loss.
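The risk-score definition and the AP-PR combination step above can be sketched in a few lines of numpy; the attribute vectors and user preferences here are illustrative toy values:

```python
import numpy as np

def privacy_risk(y, u):
    """Privacy risk score max_a y_a * u_a, for attribute likelihoods
    y in [0,1]^A and user preferences u in [0,5]^A."""
    return float(np.max(y * u))

# User preferences: cares most about attribute 1 (rated 5), least about 0.
u = np.array([1.0, 5.0, 3.0])

# Ground truth: attributes 0 and 2 present (k-hot vector).
y_true = np.array([1.0, 0.0, 1.0])
print(privacy_risk(y_true, u))  # 3.0: rating of the most sensitive present attribute

# AP-PR replaces y_true with the attribute predictor's probabilities.
y_pred = np.array([0.875, 0.25, 0.5])
print(privacy_risk(y_pred, u))  # 1.5 = max(0.875, 1.25, 1.5)
```

With ground-truth $k$-hot annotations the score reduces to the user’s rating of the most sensitive attribute present, which is exactly the ground truth against which AP-PR and PR-CNN are evaluated.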
  Method    $L_1$       MAP (1+)    MAP (2+)    MAP (3+)    MAP (4+)
  --------- ----------- ----------- ----------- ----------- -----------
  AP-PR     0.656       **94.94**   **94.27**   87.97       77.89
  PR-CNN    **0.637**   94.35       93.65       **88.14**   **78.38**

  : Evaluation of Personalized Privacy Risk: $L_1$ error and MAP at privacy-risk thresholds 1+ to 4+[]{data-label="tab:ppp_eval"}

![Performance of our approach in predicting Privacy Risks of images. Our approach performs better on high privacy-risk images.[]{data-label="fig:ppp_pr_graphs"}](fig/retrain-user-30p-max-128-128-sigmoid-overall.pdf){width="1.0\linewidth"}

[ **Evaluation** ]{} We use two metrics for evaluation. First, the $L_1$ error averaged over all images and profiles; it represents the mean absolute difference between the ratings. Second, we calculate Precision-Recall curves for varying thresholds of sensitivity, which indicates how well our models detect images above a certain true privacy risk. By calculating the area under the Precision-Recall curves over all user profiles, we additionally report the Mean Average Precision (MAP). In our experiments, we use the previously introduced user profiles instead of individual users in order to cater equally to the diverse privacy preferences seen in the previous section. We assign a privacy risk score of 0.5 to the *safe* attribute for all profiles. The evaluation of our approach on these metrics is presented in . Each graph in represents PR curves over the ground-truth thresholded to obtain a particular risk interval, such that any score above this threshold is considered private. This allows us to estimate the performance of the methods at various levels of sensitivity. We then obtain the PR curves for each sensitivity interval by thresholding the scores estimated by AP-PR and PR-CNN. From these results, we observe: PR-CNN performs better in predicting risk compared to using the intermediate attribute predictions.
Notably, the prediction is on average less than one step away from the true privacy risk on the scale from 1 to 5. Moreover, it is better at detecting high-risk images, as shown in . In particular, we notice better recall for high-risk images. We discuss profile-specific PR curves in the supplementary material.

Humans vs. Machine {#sec:hvm}
------------------

![The Precision-Recall curves of three risk estimations are displayed – users implicitly evaluating risk from images and our two methods AP-PR and PR-CNN. []{data-label="fig:human_cnn_pr"}](fig/retrain-user-30p-max-128-128-sigmoid-overall-profile-mean.pdf){width="1.0\linewidth"}

In Section \[sec:papds\_user\_preferences\], we have shown an inconsistency between users’ privacy preferences and their assessment of privacy risks in images. In this section, we compare our proposed approach for evaluating privacy risk against human judgments. In our second user study (), for each attribute, users first assessed their personal privacy risk on images (providing a visual privacy risk score) and later rated their privacy preference (providing a desired privacy risk score). We have computed scores with our privacy risk models AP-PR and PR-CNN on those very same images. As a result, for each image, we have (a) the user’s privacy preference, (b) the user’s privacy risk judgment from images, (c) our AP-PR privacy risk score from images, and (d) our PR-CNN privacy risk score from images. All these scores are on a scale of 1 (Not Sensitive) to 5 (Extremely Sensitive). Using the user’s desired preference as the ground-truth, we now ask: *who is better at reproducing the user’s desired privacy preference on images?* As in the previous section, we use precision-recall and the $L_1$ error as metrics to compare the desired preference score (a) against the predicted privacy risk scores (b, c, d). The precision-recall curves for the three candidates are presented in . Evaluation using the $L_1$ error is discussed in the supplementary material.
We observe: AP-PR achieves better precision-recall for the task than PR-CNN and – remarkably – is even [*consistently better than the users’ image-based judgment*]{}. On average, PR-CNN estimates privacy risks ($L_1$ error = 1.03) slightly better than the users’ image-based judgment ($L_1$ error = 1.1) and AP-PR ($L_1$ error = 1.27).

Conclusion
==========

We have extended the concept of privacy settings to visual content and have presented work towards a *Visual Privacy Advisor* that can provide feedback to users based on their privacy preferences. The significance of this research direction is highlighted by our user study, which shows that users often fail to enforce their own privacy preferences when judging image content. Our survey also captures typical privacy preference profiles that show a surprising level of diversity. Our new VISPR dataset allowed us to train visual models that recognize privacy attributes, predict privacy risk scores and detect images that conflict with a user’s privacy preferences. In particular, a final comparison of human vs. machine prediction of privacy risks on images shows an improvement of our model over human judgment. This highlights the feasibility and future opportunities of the overarching goal – a *Visual Privacy Advisor*.

Acknowledgement {#acknowledgement .unnumbered}
===============

This research was supported by the German Research Foundation (DFG CRC 1223). We thank Paarijaat Aditya, Philipp Müller and Julian Steil for advice on the user study. We also thank Dr. Mykhaylo Andriluka and Seong Joon Oh for valuable feedback on the paper.

Changelog {#changelog .unnumbered}
=========

#### Version 2 (31-July-2017) {#version-2-31-july-2017 .unnumbered}

- New teaser figure
- Improved writing
- Citing more related work
- Additional information on the user study
- Project web-page link

Privacy Attributes and Examples
===============================

A complete list of privacy attributes with descriptions and an example image is given in .
We consider all these cases when viewing the image in its original high-resolution form. We apply these definitions to any subject in the image – either in the foreground or background. Using these definitions, attributes can typically be inferred from an image in multiple ways:

- *Direct*: the attribute is explicitly mentioned, such as in a form or document (e.g., gender on an identity card).
- *Visual*: the attribute is inferred from visual cues (e.g., gender from clothing or facial features).
- *Reasoning*: the attribute is inferred by some additional reasoning (e.g., relationships based on age differences between multiple people).

The dataset is available on the project website: <https://tribhuvanesh.github.io/vpa/>. [p[0.1]{}p[0.2]{}p[0.45]{}m[0.12]{}]{} Group & Attribute & Description & Examples\ Personal Description & Gender & Subject’s gender is clearly visible using one or more gender-specific discriminative visual cues, such as more than 50% of the body being visible, clothing, facial/head hair or colored nails. & ![image](fig/def_fig/a4_gender.jpg){width="10.00000%"}\ & Eye Color & If eyes are visible and can be categorized as one of: brown, hazel, blue or green. & ![image](fig/def_fig/a5_eye_color.jpg){width="10.00000%"}\ & Hair Color & Subject’s head hair color is visible. & ![image](fig/def_fig/a6_hair_color.jpg){width="10.00000%"}\ & Fingerprint & Fingerprint is visible through either a close-up shot of one’s finger or an imprint on some surface. & ![image](fig/def_fig/a7_fingerprint.jpg){width="10.00000%"}\ & Signature & Complete signature is visible in an image, such as in a form or document. & ![image](fig/def_fig/a8_signature.jpg){width="10.00000%"}\ & Face (Complete) & A face is completely visible. Also includes photographs of faces on identity cards, documents or billboards. & ![image](fig/def_fig/a9_face_complete.jpg){width="10.00000%"}\ & Face (Partial) & Less than 70% of the face is visible or there is occlusion, such as when the subject is wearing sunglasses.
& ![image](fig/def_fig/a10_face_partial.jpg){width="10.00000%"}\ & Tattoo & Subject displays either a tattoo or body paint. & ![image](fig/def_fig/a11_tattoo.jpg){width="10.00000%"}\ & Nudity (Partial) & Subject appears in undergarments & ![image](fig/def_fig/a12_semi_nudity.jpg){width="10.00000%"}\ & Nudity (Complete) & Human subject appears without clothing & ![image](fig/def_fig/a13_full_nudity.jpg){width="10.00000%"}\ & Race & Any subject in the photograph can be categorized into one of Caucasian, Asian or Negroid. & ![image](fig/def_fig/a16_race.jpg){width="10.00000%"}\ & (Skin) Color & One’s skin color can be categorized into one of White, Brown or Black. & ![image](fig/def_fig/a17_color.jpg){width="10.00000%"}\ & Traditional Clothing & Subject appears in clothing which is indicative of a particular region or country dirndl, sari. & ![image](fig/def_fig/a18_ethnic_clothing.jpg){width="10.00000%"}\ [p[0.1]{}p[0.2]{}p[0.45]{}m[0.12]{}]{} Group & Attribute & Description & Examples\ & Full Name & A recognizable full name which appears in the context of a form, document or a badge. Also includes if the name can be inferred from a signature. & ![image](fig/def_fig/a19_name_full.jpg){width="10.00000%"}\ & Name (First) & Only if the first name is visible on a form, document, badge or clothing. & ![image](fig/def_fig/a20_name_first.jpg){width="10.00000%"}\ & Name (Last) & Only if the last name is visible on a form, document, badge or clothing. & ![image](fig/def_fig/a21_name_last.jpg){width="10.00000%"}\ & Place of Birth & Place of Birth is explicitly mentioned, such as in a form or in an identification document. & ![image](fig/def_fig/a23_birth_city.jpg){width="10.00000%"}\ & Date of Birth & Date of Birth is explicitly mentioned in writing. Includes year, month or the day of birth. & ![image](fig/def_fig/a24_birth_date.jpg){width="10.00000%"}\ & Nationality & A passport indicating country is clearly visible. 
Includes the case if a subject appears holding a country’s flag or wearing a uniform bearing the flag (such as a soldier or an international athlete). & ![image](fig/def_fig/a25_nationality.jpg){width="10.00000%"}\ & Handwriting & Hand-written text on any surface. & ![image](fig/def_fig/a26_handwriting.jpg){width="10.00000%"}\ & Marital status & A subject is wearing an engagement ring. Includes wedding photographs taken of the bride and groom. & ![image](fig/def_fig/a27_marital_status.jpg){width="10.00000%"}\ Documents & National Identification & Documents such as a Green Card or a European national identity card, not including passports. & ![image](fig/def_fig/a29_ausweis.jpg){width="10.00000%"}\ & Credit Card & Either the front or back of a credit card. Includes cases when the card is partially visible in someone’s hand or in a shredded form & ![image](fig/def_fig/a30_credit_card.jpg){width="10.00000%"}\ & Passport & A photograph of any page in the passport or its front cover. & ![image](fig/def_fig/a31_passport.jpg){width="10.00000%"}\ & Drivers License & Either front or back of a drivers license or a driving permit. & ![image](fig/def_fig/a32_drivers_license.jpg){width="10.00000%"}\ & Student ID & Front or back of a student identity card, with at least the name of a school, college or university clearly readable. & ![image](fig/def_fig/a33_student_id.jpg){width="10.00000%"}\ [p[0.1]{}p[0.2]{}p[0.45]{}m[0.12]{}]{} Group & Attribute & Description & Examples\ & Mail & Contents of a mail or the envelope. & ![image](fig/def_fig/a35_mail.jpg){width="10.00000%"}\ & Receipts & Purchase receipts indicating a financial transaction with an amount clearly visible, a restaurant receipt. & ![image](fig/def_fig/a37_receipt.jpg){width="10.00000%"}\ & Tickets & A travel, movie or concert ticket which specifies travel location or an event. 
& ![image](fig/def_fig/a38_ticket.jpg){width="10.00000%"}\ Health & Physical disability & Subject appears with a permanent physical disability, e.g., an amputee or a person in a wheelchair. & ![image](fig/def_fig/a39_disability_physical.jpg){width="10.00000%"}\ & Medical Treatment & Subject appears either with an injury or indicates hospital admittance. & ![image](fig/def_fig/a41_injury.jpg){width="10.00000%"}\ & Medical History & Photographs of medicine or medical prescriptions. & ![image](fig/def_fig/a43_medicine.jpg){width="10.00000%"}\ Employment & Occupation & Subject appears in a distinguishable occupation-specific uniform, e.g., doctor, policeman, construction worker. & ![image](fig/def_fig/a46_occupation.jpg){width="10.00000%"}\ & Work Occasion & Subject is photographed while giving a talk or presentation, or attending a work-related or broadcasting event. Includes photographs of people in formal attire in an office. & ![image](fig/def_fig/a48_occassion_work.jpg){width="10.00000%"}\ Personal Life & Religion & Subject appears associated with a distinguishable religious symbol, religion-specific clothing or at a religious location. & ![image](fig/def_fig/a55_religion.jpg){width="10.00000%"}\ & Sexual Orientation & Two subjects are photographed in an intimate setting. & ![image](fig/def_fig/a56_sexual_orientation.jpg){width="10.00000%"}\ & Culture & Subjects appear celebrating a traditional festival or attending an art or culture-related activity, e.g., concert, play. & ![image](fig/def_fig/a57_culture.jpg){width="10.00000%"}\ & Hobbies & A non-professional activity of a subject is visible, e.g., playing a musical instrument, taking photographs.
& ![image](fig/def_fig/a58_hobbies.jpg){width="10.00000%"}\ & Sports & Subject appears taking part in an indoor or outdoor sports activity. & ![image](fig/def_fig/a59_sports.jpg){width="10.00000%"}\ [p[0.1]{}p[0.2]{}p[0.45]{}m[0.12]{}]{} Group & Attribute & Description & Examples\ & Education history & Photograph contains cues indicating the subject’s education history, such as at a graduation ceremony, clothing indicating a university, or an academic or school certificate. & ![image](fig/def_fig/a70_education_history.jpg){width="10.00000%"}\ & Legal involvement & Photographs indicating the subject’s involvement with law-related activities, e.g., someone being arrested or in a court hearing. & ![image](fig/def_fig/a99_legal_involvement.jpg){width="10.00000%"}\ & Personal Occasion & Photographs of people celebrating a personal occasion with friends or family members, e.g., wedding, birthday. & ![image](fig/def_fig/a60_occassion_personal.jpg){width="10.00000%"}\ & General Opinion & Subject appears associated with a placard or clothing indicating an opinion on general issues, e.g., wars, taxes, LGBT rights. & ![image](fig/def_fig/a61_opinion_general.jpg){width="10.00000%"}\ & Political Opinion & Subject appears with either clothing, a placard or in a crowd at a political rally. & ![image](fig/def_fig/a62_opinion_political.jpg){width="10.00000%"}\ Relationships & Personal Relationships & Photographs of people in a visually-identifiable personal relationship, e.g., mother-son, husband-wife. & ![image](fig/def_fig/a64_rel_personal.jpg){width="10.00000%"}\ & Social Circle & Subjects of the same age group photographed in a casual setting, e.g., friends at a party or walking together on a street. & ![image](fig/def_fig/a65_rel_social.jpg){width="10.00000%"}\ & Professional Circle & A group of people who share an occupation (e.g., a group of policemen) or who are dressed for a professional event (e.g., a conference or meeting).
& ![image](fig/def_fig/a66_rel_professional.jpg){width="10.00000%"}\ & Competitors & A group of people taking part in team sports. Also includes the case when subjects belong to the same team. & ![image](fig/def_fig/a67_rel_competitors.jpg){width="10.00000%"}\ & Spectators & A group of people spectating an event such as a concert or play. & ![image](fig/def_fig/a68_rel_spectators.jpg){width="10.00000%"}\ & Similar view & A group of people at a rally or a protest who share opinions on a general issue. Only includes the case when placards or clothing denoting a cause or rallying for a political party is visible. & ![image](fig/def_fig/a69_rel_views.jpg){width="10.00000%"}\ Whereabouts & Visited Landmark & Photograph contains text indicating a business’ name, street sign or a well-known landmark. & ![image](fig/def_fig/a73_landmark.jpg){width="10.00000%"}\ [p[0.1]{}p[0.2]{}p[0.45]{}m[0.12]{}]{} Group & Attribute & Description & Examples\ & Visited Location (Complete) & Text indicating a *complete* address (restaurant receipt with the address of the restaurant) or a screen-shot of GPS-based location. & ![image](fig/def_fig/a74_address_current_complete.jpg){width="10.00000%"}\ & Visited Location (Partial) & Text which partially indicates the subject’s location, such as street name, city or country where the photograph was taken. & ![image](fig/def_fig/a75_address_current_partial.jpg){width="10.00000%"}\ & Home address (Complete) & Photograph containing a complete non-commercial postal address. & ![image](fig/def_fig/a78_address_home_complete.jpg){width="10.00000%"}\ & Home address (Partial) & Photograph containing a partial non-commercial postal address. & ![image](fig/def_fig/a79_address_home_partial.jpg){width="10.00000%"}\ & Date/Time of Activity & Photograph contains information of date and/or time of subject’s location or activity such as a time-stamp watermark in an image, or a clock in the photograph. 
& ![image](fig/def_fig/a82_date_time.jpg){width="10.00000%"}\ & Phone no. & A phone number that is visible in the photograph (either personal or commercial). & ![image](fig/def_fig/a49_phone.jpg){width="10.00000%"}\ Internet Activity & Username & A screen shot of a website which mentions any username or internet handles. & ![image](fig/def_fig/a85_username.jpg){width="10.00000%"}\ & Email address & Any complete valid email-address that appears in a photograph or a screen-shot. & ![image](fig/def_fig/a90_email.jpg){width="10.00000%"}\ & Email content & Screenshots of emails including the subject of the email, or parts of the email body content. & ![image](fig/def_fig/a92_email_content.jpg){width="10.00000%"}\ & Online conversations & Screenshots of online conversations, posts, tweets or internet activity by any user. & ![image](fig/def_fig/a97_online_conversation.jpg){width="10.00000%"}\ Automobile & Vehicle Ownership & Photograph of a person riding a motor vehicle. & ![image](fig/def_fig/a102_vehicle_ownership.jpg){width="10.00000%"}\ & License Plate (Complete) & A clearly visible license plate or registration number of any motor vehicle. & ![image](fig/def_fig/a103_license_plate_complete.jpg){width="10.00000%"}\ & License Plate (Partial) & A partial license plate or registration number of any motor vehicle. & ![image](fig/def_fig/a104_license_plate_partial.jpg){width="10.00000%"}\ Additional Details on User Study {#sec:appendix_user_study} ================================ In this section, we provide additional details on the user study discussed in Section 4. Understanding Users’ Privacy Preferences {#understanding-users-privacy-preferences} ---------------------------------------- The task in this user study is to obtain user preferences over the 67 privacy attributes (excluding the attribute *safe*).
The questionnaire describes a fictitious website (similar to Flickr or Twitter) where posted content is by default visible to everyone else on the platform. By unintentionally posting information about a particular attribute, the user exposes private information compromising his/her anonymity. Each question is a verbal description of one of the attributes (Figure \[fig:p1\_1\]). We collect responses on a scale of 1-5 indicating how much the user finds his/her privacy violated as a consequence of this action. ![Questions from the user study to understand privacy preferences[]{data-label="fig:p1_1"}](fig/p1_1.jpg){width="\linewidth"} ### Instructions provided to the Users {#instructions-provided-to-the-users .unnumbered} In this academic survey we want to understand how sensitive you are to certain details of your personal or private life. For instance, are you more comfortable sharing your full name, gender or details on your personal relationships? We refer to these details of your personal or private life as “Personally Identifiable Information” (PII). PII is information that can be used on its own or with other information to identify, contact, or locate a single person, or to identify an individual. Such information could be one or more of your: Full Name, Home Address, Political Opinion, etc. Following this description is a list of PIIs. For each of these PIIs, consider the following situation: On an online public platform, you create an anonymous account. On this platform, once you post something, you cannot delete it. Only the moderators can delete this post. However, they can be extremely slow and unresponsive. One day, you unintentionally shared/posted this PII about yourself. Immediately, you realize that you cannot delete this post. On a scale of 1-5, please rate how much you feel your privacy is violated by this action, where:\ 1 - I feel my privacy is not violated. So, I wouldn’t care.\ 2 - I feel my privacy is slightly violated.
However, it’s not worth taking any action.\ 3 - I feel my privacy is somewhat violated. I will message the moderator. In case there’s no response, I will give up.\ 4 - I feel my privacy is violated. I will inform the moderator and follow up for a few days. In case there’s no response after that, I will give up.\ 5 - I feel my privacy is extremely violated. I will not give up until this post is deleted. Users and Visual Privacy Judgment {#users-and-visual-privacy-judgment} --------------------------------- In order to understand how good users are at identifying privacy risks from images, we conduct this user study in two parts. In the first part, we describe a fictitious photo-sharing website, where images shared are publicly available. For each of the 68 privacy attributes, we present a question on a group of images from the dataset representing this attribute (Figure \[fig:p2\_3\]). The user responds with how comfortable he/she is posting such images on the website. The exact instructions for this part are provided below. In the second part, we obtain user preferences over the attributes following the exact instructions in the previous section. ![Questions from the user study to evaluate user privacy judgment[]{data-label="fig:p2_3"}](fig/p2_3.jpg "fig:"){width="\linewidth"} ![Questions from the user study to evaluate user privacy judgment[]{data-label="fig:p2_3"}](fig/p2_5.jpg "fig:"){width="\linewidth"} ### Instructions provided to the Users {#instructions-provided-to-the-users-1 .unnumbered} In this academic survey we want to understand your comfort level sharing things on the internet. Following this description are groups of images. For each of these groups of images, consider the following situation: On an online public platform, you create an account. On this platform, you are allowed to post photographs, which anyone can view. Moreover, you can also interact with other users who shared their photographs and can comment on or like them.
Important: For each of the below groups of images, picture yourself as either being the subject in the photograph, or the one who took the photograph of a family-member. On a scale of 1-5, rate how comfortable you are sharing such photographs, where:\ 1 - You are extremely comfortable sharing such photographs\ 2 - You are slightly comfortable sharing such photographs\ 3 - You are somewhat comfortable sharing such photographs\ 4 - You are not comfortable sharing such photographs\ 5 - You are extremely uncomfortable sharing such photographs Additional Qualitative Examples for Privacy Attribute Prediction {#sec:appendix_pap_qual} ================================================================ In Section 5.1 we discussed our approach to *Privacy Attribute Prediction* – a user-independent method of predicting multiple privacy attributes given an image. In this section, in addition to Figure 6, we present additional qualitative examples in . Each row represents images of a particular privacy attribute. The True Positives column indicates cases where this attribute is in both the ground-truth and predicted set of privacy attributes. The False Positives column indicates images where the attribute is incorrectly predicted. The False Negatives column indicates images where the attribute is in the ground truth, but is not predicted. We observe that our method associates privacy attributes with distinctive visual cues such as clothing (for occupation and ethnic clothing), exposed skin (for tattoos, nudity), metallic objects with wheels (for physical disability, license plates) and text (for names, drivers license, username, handwriting). As a result, apart from correct predictions, we find that this also leads to incorrectly predicting attributes (predicting card-shaped identification documents as drivers licenses, cars for license plates) or failing to recognize attributes in a different context (handwriting on a wall instead of documents, new types of drivers licenses).
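The per-attribute categorization used in these columns can be sketched as a simple set comparison between the ground-truth and predicted attribute sets of each image (a minimal sketch; the attribute names and example sets below are illustrative, not drawn from the dataset):

```python
# Categorize an image as TP / FP / FN (or TN) for a single attribute,
# given its ground-truth and predicted attribute sets.
# A sketch; attribute names and image annotations below are hypothetical.

def categorize(attribute, ground_truth, predicted):
    """Return 'TP', 'FP', 'FN', or 'TN' for one image and one attribute."""
    in_gt, in_pred = attribute in ground_truth, attribute in predicted
    if in_gt and in_pred:
        return "TP"   # attribute correctly predicted
    if in_pred:
        return "FP"   # attribute predicted but absent from ground truth
    if in_gt:
        return "FN"   # attribute present but missed by the model
    return "TN"

# Illustrative example for the attribute "license_plate_complete":
images = [
    ({"license_plate_complete", "vehicle_ownership"}, {"license_plate_complete"}),
    ({"vehicle_ownership"}, {"license_plate_complete"}),  # car, no visible plate
    ({"license_plate_complete"}, set()),                  # plate in unusual context
]
labels = [categorize("license_plate_complete", gt, pred) for gt, pred in images]
print(labels)  # ['TP', 'FP', 'FN']
```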
We also observe that our approach underperforms when differentiating between full, first and last names, or usernames and email addresses (which requires text-based reasoning), identifying relationships and sexual orientation (which requires interpreting interaction between multiple people) and differentiating occupations, religion and ethnic clothing (which requires fine-grained recognition). [ @ c &gt; m[4cm]{} &gt; m[4cm]{} &gt; m[4cm]{} @]{} & True Positives & False Positives & False Negatives\ Credit Card & ![image](fig/pap_qual/cc_tp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/cc_tp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/cc_fp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/cc_fp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/cc_fn_1.jpg){width="11.00000%"} ![image](fig/pap_qual/cc_fn_2.jpg){width="11.00000%"}\ Ethnic Clothing & ![image](fig/pap_qual/etc_tp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/etc_tp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/etc_fp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/etc_fp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/etc_fn_1.jpg){width="11.00000%"} ![image](fig/pap_qual/etc_fn_2.jpg){width="11.00000%"}\ Full Name & ![image](fig/pap_qual/fname_tp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/fname_tp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/fname_fp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/fname_fp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/fname_fn_1.jpg){width="11.00000%"} ![image](fig/pap_qual/fname_fn_2.jpg){width="11.00000%"}\ Hobbies & ![image](fig/pap_qual/hobbies_tp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/hobbies_tp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/hobbies_fp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/hobbies_fp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/hobbies_fn_1.jpg){width="11.00000%"} ![image](fig/pap_qual/hobbies_fn_2.jpg){width="11.00000%"}\ Passport &
![image](fig/pap_qual/pport_tp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/pport_tp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/pport_fp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/pport_fp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/pport_fn_1.jpg){width="11.00000%"} ![image](fig/pap_qual/pport_fn_2.jpg){width="11.00000%"}\ Sexual Orientation & ![image](fig/pap_qual/sexo_tp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/sexo_tp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/sexo_fp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/sexo_fp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/sexo_fn_1.jpg){width="11.00000%"} ![image](fig/pap_qual/sexo_fn_2.jpg){width="11.00000%"}\ Medical History & ![image](fig/pap_qual/med_tp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/med_tp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/med_fp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/med_fp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/med_fn_1.jpg){width="11.00000%"} ![image](fig/pap_qual/med_fn_2.jpg){width="11.00000%"}\ Drivers License & ![image](fig/pap_qual/dl_tp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/dl_tp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/dl_fp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/dl_fp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/dl_fn_1.jpg){width="11.00000%"} ![image](fig/pap_qual/dl_fn_2.jpg){width="11.00000%"}\ Handwriting & ![image](fig/pap_qual/hw_tp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/hw_tp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/hw_fp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/hw_fp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/hw_fn_1.jpg){width="11.00000%"} ![image](fig/pap_qual/hw_fn_2.jpg){width="11.00000%"}\ Occupation & ![image](fig/pap_qual/occup_tp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/occup_tp_2.jpg){width="11.00000%"} &
![image](fig/pap_qual/occup_fp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/occup_fp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/occup_fn_1.jpg){width="11.00000%"} ![image](fig/pap_qual/occup_fn_2.jpg){width="11.00000%"}\ Personal Relationships & ![image](fig/pap_qual/relp_tp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/relp_tp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/relp_fp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/relp_fp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/relp_fn_1.jpg){width="11.00000%"} ![image](fig/pap_qual/relp_fn_2.jpg){width="11.00000%"}\ Username & ![image](fig/pap_qual/uname_tp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/uname_tp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/uname_fp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/uname_fp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/uname_fn_1.jpg){width="11.00000%"} ![image](fig/pap_qual/uname_fn_2.jpg){width="11.00000%"}\ License Plate (Complete) & ![image](fig/pap_qual/lpc_tp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/lpc_tp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/lpc_fp_1.jpg){width="11.00000%"} ![image](fig/pap_qual/lpc_fp_2.jpg){width="11.00000%"} & ![image](fig/pap_qual/lpc_fn_1.jpg){width="11.00000%"} ![image](fig/pap_qual/lpc_fn_2.jpg){width="11.00000%"}\ Additional Results for Personalized Privacy Prediction {#sec:appendix_profile_pr} ====================================================== ![image](fig/mean_l1_distance_image_us_2.pdf){width="\textwidth"} Qualitative Results ------------------- In this section, we discuss additional results for Section 5.2: Personalizing Privacy Risk Prediction. presents qualitative results for our approach to user-specific *Personalized Privacy Risk Prediction* discussed in Section 5.2. To visualize the qualitative results over all 30 user profiles simultaneously, we present a scatter plot of ground-truth vs.
predicted scores for each image. Each point in the scatter plot represents one user-profile. In these plots, points closer to the diagonal (dotted line) indicate lower errors. Points above the diagonal indicate risk over-estimation and points below the diagonal indicate risk under-estimation. We make the following observations w.r.t. each row in : (First row) presents examples with correct high-confidence attribute predictions according to the posterior probability. Here, both AP-PR and PR-CNN perform equally well. (Second row) presents examples where attribute predictions are noisy. In these, PR-CNN outperforms AP-PR. (Third row) Both AP-PR and PR-CNN are challenged by difficult images (low contrast, unnatural angles, low lighting, occlusion). However, we see that PR-CNN often performs slightly better than AP-PR in these cases. (Fourth row) presents examples where AP-PR with correct attribute predictions performs better than PR-CNN. -------------------------------------------------------- -- -------------------------------------------------------- -- -------------------------------------------------------- -- -------------------------------------------------------- ![image](fig/ppp_qual/25_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/38_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/74_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/192_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/25_plot.pdf){width="15.00000%"} ![image](fig/ppp_qual/38_plot.pdf){width="15.00000%"} ![image](fig/ppp_qual/74_plot.pdf){width="15.00000%"} ![image](fig/ppp_qual/192_plot.pdf){width="15.00000%"} ![image](fig/ppp_qual/106_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/142_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/278_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/383_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/106_plot.pdf){width="15.00000%"} ![image](fig/ppp_qual/142_plot.pdf){width="15.00000%"} ![image](fig/ppp_qual/278_plot.pdf){width="15.00000%"}
![image](fig/ppp_qual/383_plot.pdf){width="15.00000%"} ![image](fig/ppp_qual/114_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/389_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/438_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/874_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/114_plot.pdf){width="15.00000%"} ![image](fig/ppp_qual/389_plot.pdf){width="15.00000%"} ![image](fig/ppp_qual/438_plot.pdf){width="15.00000%"} ![image](fig/ppp_qual/874_plot.pdf){width="15.00000%"} ![image](fig/ppp_qual/87_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/279_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/500_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/341_fig.jpg){width="15.00000%"} ![image](fig/ppp_qual/87_plot.pdf){width="15.00000%"} ![image](fig/ppp_qual/279_plot.pdf){width="15.00000%"} ![image](fig/ppp_qual/500_plot.pdf){width="15.00000%"} ![image](fig/ppp_qual/341_plot.pdf){width="15.00000%"} -------------------------------------------------------- -- -------------------------------------------------------- -- -------------------------------------------------------- -- -------------------------------------------------------- ![image](fig/profile_vs_users.pdf){width="\textwidth"} ![image](fig/retrain-user-30p-max-128-128-sigmoid-v2-selected-Q1.pdf){width="\textwidth"} ![image](fig/retrain-user-30p-max-128-128-sigmoid-v2-selected-Q2.pdf){width="\textwidth"} ![image](fig/retrain-user-30p-max-128-128-sigmoid-v2-selected-Q3.pdf){width="\textwidth"} ![image](fig/retrain-user-30p-max-128-128-sigmoid-v2-selected-Q4.pdf){width="\textwidth"} Precision-Recall Curves for User Profiles ----------------------------------------- Section 5.2 discussed Precision-Recall curves evaluated over all profiles. These were obtained by treating privacy risk prediction as a binary classification problem, where images above a certain risk score (3+ and 4+ previously) are considered private per user profile.
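The binarization step described above can be sketched as follows. The scores and the prediction cutoff below are synthetic placeholders; the ground-truth thresholds 3 and 4 mirror the paper’s “3+” and “4+” settings:

```python
# Binarize per-profile user risk scores (1-5) into private/safe labels at a
# ground-truth threshold, then compute precision and recall of the model's
# risk estimates at a given cutoff. Scores below are synthetic.

def precision_recall(gt_scores, pred_scores, gt_thresh, pred_cutoff):
    gt = [s >= gt_thresh for s in gt_scores]        # image is "private" per profile
    pred = [s >= pred_cutoff for s in pred_scores]  # image flagged by the model
    tp = sum(g and p for g, p in zip(gt, pred))
    fp = sum((not g) and p for g, p in zip(gt, pred))
    fn = sum(g and (not p) for g, p in zip(gt, pred))
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

gt_scores = [1, 2, 3, 4, 5, 5, 2, 4]                      # user preferences (synthetic)
pred_scores = [1.2, 2.8, 3.4, 3.9, 4.6, 4.2, 3.1, 2.5]    # model estimates (synthetic)

p, r = precision_recall(gt_scores, pred_scores, gt_thresh=3, pred_cutoff=3)
print(p, r)  # 0.8 0.8
```

A full Precision-Recall curve follows by sweeping `pred_cutoff` over the range of model scores.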
In , we present the Precision-Recall curves evaluated over groups of profiles and additional risk thresholds. To generate the curves in these figures, we first create four groups of profiles, with an equal number of profiles in each group. We refer to these groups as quartiles Q1-Q4. We then obtain the Precision-Recall curves for each of these quartiles. We observe that PR-CNN displays better performance for high-risk images over *all* quartiles of the 30 user profiles, hence contributing to better overall performance. Additionally, we observe a similar pattern with the $L_1$-error metric (the absolute difference in scores), where PR-CNN (error = 0.67) incurs lower error in scores for private images compared to AP-PR (error = 0.84). However, AP-PR (error = 0.34) performs better for safe images in comparison to PR-CNN (error = 0.58). Additional Results for Humans vs. Machine {#sec:appendix_humans_machines} ========================================= In Section 5.3, we discussed the performance of our Privacy Risk Evaluation Methods when compared to the users themselves. The performance evaluation was primarily with Precision-Recall curves. In this section, we discuss performance when evaluated using $L_1$ as a distance metric between the ground-truth privacy scores (user’s specified preferences) and the privacy risk estimation using three approaches (user’s visual risk assessment and our two proposed approaches – AP-PR and PR-CNN). The $L_1$ distance here measures the absolute difference in risk score (where risk scores are between 1–5). presents these errors per attribute. We observe from these results: On average (horizontal lines), PR-CNN estimates privacy risks ($L_1$ error = 1.03) slightly better than the user’s image-based judgment ($L_1$ error = 1.1). Users often misjudge the risk (right end of figure) from natural-looking images such as cars with visible license plates or family photographs depicting relationships.
In these cases, PR-CNN is better at evaluating risks. Considering the attributes in which AP-PR incurs high errors (relationships, addresses, username, signature, credit card), we see that PR-CNN outperforms AP-PR in all these cases by bypassing the incorrect attribute predictions. [^1]: Refer to project website: <https://tribhuvanesh.github.io/vpa/>
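The $L_1$ metric used throughout this comparison is simply the mean absolute difference between ground-truth and estimated risk scores; a minimal sketch (the scores below are synthetic, not taken from the study):

```python
# Mean L1 error between ground-truth privacy scores (1-5) and risk estimates.
# The scores below are synthetic illustrations.

def mean_l1(gt, est):
    assert len(gt) == len(est)
    return sum(abs(g - e) for g, e in zip(gt, est)) / len(gt)

gt = [5, 4, 2, 1, 3]             # user's specified preferences (synthetic)
est = [4.2, 4.5, 2.9, 1.1, 4.0]  # estimated risks (synthetic)

err = mean_l1(gt, est)
print(round(err, 2))  # 0.66
```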
--- abstract: 'Properties of X-ray radiation emitted from the polar caps of a radio pulsar depend not only on the cap temperature, size, and position, but also on the surface chemical composition, magnetic field, and neutron star’s mass and radius. Fitting the spectra and the light curves with neutron star atmosphere models enables one to infer these parameters. As an example, we present here results obtained from the analysis of the pulsed X-ray radiation of a nearby millisecond pulsar J0437–4715. In particular, we show that stringent constraints on the mass-to-radius ratio can be obtained if orientations of the magnetic and rotation axes are known, e.g., from the radio polarization data.' author: - 'G. G. Pavlov' - 'V. E. Zavlin' title: 'Mass-to-Radius Ratio for the Millisecond Pulsar J0437–4715' --- Introduction ============ Virtually all of the different models of radio pulsars (e. g., Cheng & Ruderman 1980; Arons 1981; Michel 1991; Beskin, Gurevich & Istomin 1993) predict a common phenomenon: the presence of polar caps (PCs) around the neutron star (NS) magnetic poles heated up to X-ray temperatures by the backward accretion of relativistic particles and gamma-quanta from the pulsar magnetosphere. A typical size of the PC is estimated to be close to the radius within which the open magnetic field lines originate from the NS surface, $\Rpc\sim (2\pi R^3/Pc)^{1/2}~(\sim 0.1 - 3$ km for the period $P\sim 2~{\rm s}-2$ ms). However, expected PC temperatures, $\Tpc\sim 5\times 10^5 - 5\times 10^6$ K, and luminosities, $L_{\rm pc} \sim 10^{28}-10^{32}$ erg s$^{-1}$, are much less certain and strongly depend on the specific pulsar model. Studying X-ray radiation from the PCs is particularly useful to discriminate between different models.
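The polar-cap radius estimate above is a one-line computation in CGS units; the sketch below reproduces the quoted $\sim 0.1$–$3$ km range, and also evaluates it for J0437–4715’s period of 5.75 ms:

```python
# Polar-cap radius R_pc = sqrt(2*pi*R^3 / (P*c)) for a neutron star of
# radius R = 10 km, in CGS units. Reproduces the ~0.1-3 km range quoted
# for periods between 2 s and 2 ms.
import math

C = 2.998e10   # speed of light, cm/s
R = 1.0e6      # NS radius, cm (10 km)

def r_pc_km(period_s):
    """Polar-cap radius in km for a given spin period in seconds."""
    return math.sqrt(2 * math.pi * R**3 / (period_s * C)) / 1e5  # cm -> km

print(round(r_pc_km(2.0), 2))      # 0.1 km for P = 2 s
print(round(r_pc_km(2.0e-3), 1))   # 3.2 km for P = 2 ms
print(round(r_pc_km(5.75e-3), 2))  # 1.91 km for P = 5.75 ms (J0437-4715)
```

The last value matches the theoretical estimate of 1.9 km cited later for this pulsar.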
The best candidates for the investigation of the PC radiation are nearby, old pulsars of ages $\tau \gtrsim 10^6$ yr, including very old millisecond pulsars, for which the NS surface outside the PCs is expected to be so cold, $T \lesssim 10^5$ K, that its thermal radiation is negligibly faint in the soft X-ray range. Indeed, available observational data allow one to assume the PC origin of soft X-rays detected from, e. g., PSR B1929+10 (Yancopoulos, Hamilton & Helfand 1994; Wang & Halpern 1997), B0950+08 (Manning & Willmore 1994; Wang & Halpern 1997), and J0437–4715 (Becker & Trümper 1993). Moreover, there are some indications that hard tails of the X-ray spectra of younger pulsars, B0656+14 and B1055–52, may contain a thermal PC component (Greiveldinger et al. 1996). On the other hand, nonthermal (e. g., magnetospheric) X-ray radiation may dominate even in very old pulsars (cf. Becker & Trümper 1997). For example, $ASCA$ observations of the millisecond pulsar B1821–24 (Saito et al. 1997), whose luminosity in the 0.5–10 keV range exceeds that predicted by PC models by a few orders of magnitude (see discussion in Zavlin & Pavlov 1997; hereafter ZP97), proved its X-rays to be of a magnetospheric origin. Thus, thorough investigations are needed in each specific case to ascertain the nature of the observed radiation. The closest known millisecond pulsar J0437–4715 ($P=5.75$ ms, $\tau = P/2\dot{P} = 5\times 10^9$ yr, $\dot{E}=4\times 10^{33}$ erg s$^{-1}$, $B\sim 3\times 10^8$ G, $d=180$ pc) is of special interest. Important data from this pulsar have been collected with the $ROSAT$ (Becker & Trümper 1993; Becker et al. 1997), $EUVE$ (Edelstein, Foster & Bowyer 1995; Halpern, Martin & Marshall 1996) and $ASCA$ (Kawai, Tamura & Saito 1996) space observatories. Becker & Trümper (1993) and Halpern et al.
(1996) showed that the spectral data can be fitted with power-law or blackbody plus power-law models. The latter fit indicates that the X-ray emission may be, at least partly, of a thermal (PC) origin. Rajagopal & Romani (1996) applied more realistic models of radiation emitted by NS atmospheres to fitting the thermal component and concluded that the iron atmospheres do not fit the observed spectrum. The observations made with the $ROSAT$ and $EUVE$ missions revealed smooth pulsations of soft X-rays with the pulsed fraction $\fp\sim 25-50\%$ apparently growing with photon energy in the 0.1–2.4 keV $ROSAT$ range. Since the pulsed fraction of the nonthermal radiation is not expected to vary significantly in the narrow energy range, it is natural to attribute the observed radiation to the pulsar PCs. The PC radiation should be inevitably pulsed unless the rotation axis coincides with either the line of sight or the magnetic axis. If it were the blackbody (isotropic) radiation, the pulsed fraction would remain the same at all photon energies. The energy dependence of $\fp$ can be caused by anisotropy (limb-darkening) of thermal radiation emitted from NS atmospheres (Pavlov et al. 1994; Zavlin, Pavlov & Shibanov 1996). Zavlin et al. (1996) showed that even in the case of low magnetic fields characteristic of millisecond pulsars ($B\sim 10^8 - 10^9$ G) the anisotropy strongly depends on photon energy and chemical composition of NS surface layers. To interpret the $ROSAT$ and $EUVE$ observations of , ZP97 applied NS atmosphere models that take into account the energy-dependent limb-darkening and the effects of gravitational redshift and bending of photon trajectories (Zavlin, Shibanov & Pavlov 1995).
Assuming the viewing angle (between the rotation axis and line of sight), $\zeta=40^\circ$, and the magnetic inclination (angle between the magnetic and rotation axes), $\alpha=35^\circ$, inferred by Manchester & Johnston (1995) from the phase dependence of the position angle of the radio polarization, ZP97 showed that both the spectra and the light curves (pulse profiles) of the [*entire*]{} soft X-ray radiation detected by $ROSAT$ and $EUVE$ can be interpreted as thermal radiation from two hydrogen-covered PCs, whereas neither the blackbody nor iron atmosphere models fit the observations. The approach of ZP97 differs substantially from that of Rajagopal & Romani (1996) who did not take into account the energy-dependent anisotropy of the emergent radiation and gravitational bending (hence, they could not analyze the pulse profiles), made the unrealistic assumption that pulsed and unpulsed flux components are of completely separate origin, and consequently obtained quite different parameters of the radiating region. The simplest, single-temperature PC model of ZP97 provides a satisfactory fit with typical PC radius $\Rpc\sim 1$ km (reasonably close to the theoretical estimate of $1.9$ km) and temperature $\Tpc\sim 1\times 10^6$ K. The corresponding bolometric luminosity of the two PCs, $L_{\rm bol} = (1.0-1.6)\times 10^{30}$ erg s$^{-1}$, comprises $\sim (2-4)\times 10^{-4}$ of the pulsar total energy loss $\dot{E}$. This value of $L_{\rm bol}$ is in excellent agreement with the predictions of the slot-gap pulsar model by Arons (1981). An even better fit to the observational data is provided by a model with a non-uniform temperature distribution along the PC surface. In addition, the inferred interstellar hydrogen column density towards the , $n_H\sim (1-3)\times 10^{19}$ cm$^{-2}$, was demonstrated to be well consistent with the ISM properties obtained from observations of other stars in the vicinity of the pulsar.
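The quoted fraction of the spin-down luminosity follows directly from the fitted numbers, using $\dot{E}=4\times 10^{33}$ erg s$^{-1}$ given earlier:

```python
# Check that the fitted bolometric PC luminosity, L_bol = (1.0-1.6)e30 erg/s,
# is a few 1e-4 of the pulsar spin-down luminosity Edot = 4e33 erg/s.
EDOT = 4.0e33              # erg/s, total energy loss of J0437-4715
L_BOL = (1.0e30, 1.6e30)   # erg/s, fitted range for the two polar caps

fractions = tuple(L / EDOT for L in L_BOL)
print(fractions)  # both values lie within the quoted (2-4)e-4 range
```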
All these results allow one to conclude that the X-ray radiation observed from   is indeed of the thermal (PC) origin. The results of ZP97 were obtained for fixed orientations of the rotation and magnetic axes (angles $\zeta$ and $\alpha$) and for standard NS mass $M=1.4 \Ms$ and radius $R=10$ km. The inferred PC radius, temperature and luminosity are almost insensitive to these four parameters. Their main effect is on the shape and pulsed fraction of the light curves. For instance, $\fp$ decreases with increasing mass-to-radius ratio, unless the observer can see the PC in the center of the back hemisphere of the NS, which is possible at $\alpha \simeq \zeta$ and $\mr > 1.93$, where $M_*=M/\Ms$ and $\R10=R/(10$ km$)$ (e. g., Zavlin et al. 1995). The angles $\alpha$ and $\zeta$ cannot be precisely evaluated from the radio polarization measurements because of the complicated variation of the polarization position angle across the eight-component mean radio pulse of  (Manchester & Johnston 1995), and the true NS mass and radius may differ from the canonical values. On the other hand, the fact that the set of angles and $M/R$ adopted by ZP97 fits the data does not mean that a better fit cannot be obtained for another set, and it tells us nothing about the allowed domain of these parameters. Hence, fitting the light curves for variable $M/R$, $\alpha$, and $\zeta$ enables one to constrain the mass-to-radius ratio and/or the magnetic inclination and viewing angle, so that the present paper is complementary to ZP97. We describe our approach in §2 and present the results on  in §3. Method ====== Since the shape of the light curves depends on energy, the light curve fitting is coupled to the spectral fitting.
We fit the count rate spectrum (total $\simeq 3200$ counts) collected by the $ROSAT$ Position Sensitive Proportional Counter (PSPC) with the phase-integrated model spectrum emitted from two identical, uniformly heated PCs $180^\circ$ apart, assuming the hydrogen composition of the surface layers (ZP97). The fitting is carried out on a grid of angles $\zeta$ and $\alpha$ between $0^\circ$ and $90^\circ$ at different mass-to-radius ratios $\mr$ in a range allowed by equations of state of the superdense NS matter. As shown in ZP97, both the $ROSAT$ and $EUVE$ observations of   are consistent with the applied model at the interstellar hydrogen column density of $\sim 1\times 10^{19}$ cm$^{-2}$; therefore we freeze $n_H$ at this value and obtain $\Tpc$ and $\Rpc$ from the spectral fits for each set of $\zeta$, $\alpha$ and $\mr$. With these $\Tpc$ and $\Rpc$, we calculate the model spectral fluxes for various phases of the pulsar period and fold each of the spectra with the PSPC response matrix; this gives us the model light curve as a function of phase $\phi$ for given $\zeta$, $\alpha$ and $\mr$. This light curve is then compared with the observed PSPC light curve. For this comparison, we used the PSPC light curve for the total $ROSAT$ energy range, 0.1–2.4 keV; its pulsed fraction (for 17 phase bins) was determined to be $\fp=30\pm 4\%$ (ZP97). We bin the model light curve to the same number of phase bins ($K=17$) and calculate the $\chi^2$ value, $$\chi^2=\sum_{k=1}^{K} \frac{(N_{k, \rm o} - N_{k, \rm m})^2} {N_{k,\rm o}}~,$$ for each set of $\zeta$, $\alpha$ and $\mr$ ($N_{k,\rm o}$ and $N_{k,\rm m}$ are the observed and model numbers of counts in the $k$-th bin). This allows us to find the best-fit parameters (which correspond to the minimum of $\chi^2$) and confidence levels in the parameter space. 
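The $\chi^2$ statistic defined above is straightforward to evaluate for a binned light curve; a minimal sketch (the bin counts are synthetic, and only $K=3$ bins are used for brevity rather than the paper’s $K=17$):

```python
# Chi-square between observed and model binned light curves:
# chi2 = sum_k (N_obs_k - N_mod_k)^2 / N_obs_k.
# Counts below are synthetic, with K = 3 bins for brevity (the paper uses K = 17).

def chi_square(n_obs, n_mod):
    assert len(n_obs) == len(n_mod)
    return sum((o - m) ** 2 / o for o, m in zip(n_obs, n_mod))

n_obs = [200.0, 180.0, 220.0]   # observed counts per phase bin (synthetic)
n_mod = [190.0, 185.0, 225.0]   # model counts per phase bin (synthetic)

chi2 = chi_square(n_obs, n_mod)
print(round(chi2, 3))  # 0.753
```

Minimizing this quantity over the $(\zeta, \alpha, M_*/R_{10})$ grid yields the best-fit parameters and confidence contours.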
Results ======= Figure 1 shows the 68%, 90% and 99% confidence regions for the above-described light curve models in the $\zeta$-$\alpha$ plane at several values of the mass-to-radius ratio, $\mr=1.1, 1.2, \ldots, 1.6$. The ratio determines the parameter $g_r=\sqrt{1 - 0.295\mr}$ responsible for the effects of gravitational redshift and bending of photon trajectories (e. g., Zavlin et al. 1995). For the $\mr$ values in Figure 1, the parameter $g_r$ varies between 0.82 and 0.73, making visible from 74% to 91% of the whole NS surface, so that a distant observer can detect the radiation from both PCs simultaneously during almost the whole pulsar period. The plots in Figure 1 are clearly symmetrical with respect to the transformation $\zeta\leftrightarrow \alpha$ because the model light curves depend only on the angle $\theta$ between the observer’s direction and the magnetic axis: $\cos\theta = \cos\zeta~\cos\alpha + \sin\zeta~\sin\alpha~\cos\phi$, where $\phi$ is the rotational phase. The minimum value of the reduced $\chi^2$ ($\chi^2_\nu=1.07$ for 17 degrees of freedom) was obtained at $\zeta=47^\circ$, $\alpha=18^\circ$ (or vice versa) and $\mr=1.2$ ($g_r=0.80$). In Figure 1 we also show the lines of constant model pulsed fraction. The lines for the pulsed fractions compatible with the detected value, $f_p=30\pm 4\%$, are close to the confidence contours unless $\zeta$ and/or $\alpha$ are close to $90^\circ$; at these large angles the model light curves have a complicated shape (e. g., two maxima per rotational period) inconsistent with what is observed. The increase of the mass-to-radius ratio enhances the gravitational bending and leads to a greater contribution from the secondary PC (that on the back NS hemisphere), which suppresses the model pulsations. As a result, the allowed regions in Figure 1 completely vanish at $\mr > 1.6$ ($g_r < 0.73$)[^1].
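The redshift parameter and the viewing geometry used above can be evaluated directly; the sketch below reproduces the quoted $g_r$ range for $M_*/R_{10}$ between 1.1 and 1.6, and the angle $\theta(\phi)$ for the best-fit geometry ($\zeta=47^\circ$, $\alpha=18^\circ$):

```python
# Gravitational-redshift parameter g_r = sqrt(1 - 0.295 * M*/R10) and the
# angle theta between the line of sight and the magnetic axis, using the
# formulas and best-fit values quoted in the text.
import math

def g_r(m_over_r):
    """g_r for m_over_r = (M/Msun) / (R/10 km)."""
    return math.sqrt(1.0 - 0.295 * m_over_r)

def cos_theta(zeta_deg, alpha_deg, phase):
    """cos(theta) at rotational phase `phase` (radians)."""
    z, a = math.radians(zeta_deg), math.radians(alpha_deg)
    return math.cos(z) * math.cos(a) + math.sin(z) * math.sin(a) * math.cos(phase)

print(round(g_r(1.1), 2), round(g_r(1.6), 2))  # 0.82 0.73: the quoted range

# Best-fit geometry: theta sweeps from zeta - alpha to zeta + alpha per period.
print(round(math.degrees(math.acos(cos_theta(47, 18, 0.0))), 1))      # 29.0 deg
print(round(math.degrees(math.acos(cos_theta(47, 18, math.pi))), 1))  # 65.0 deg

# Consequence of the constraint M*/R10 < 1.6: minimum radius at M = 1.4 Msun,
# quoted in the text (rounded) as R > 8.8 km.
r_min = 10 * 1.4 / 1.6  # km
```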
With decreasing $\mr$, the confidence regions shift towards the bottom-left corner of the $\zeta$-$\alpha$ plane, reaching a limiting position at $\mr\simeq 0.3$ ($g_r\simeq 0.95$), when the effect of the gravitational bending becomes negligible and the observer detects radiation only from the primary PC (on the front hemisphere). Figure 1 provides obvious constraints on the pulsar mass-to-radius ratio. For instance, if there were no observational information about the $\zeta$ and $\alpha$ values, the only constraint would be $M < 1.6\, \Ms\, (R/10~{\rm km})$, or $R>8.8\, (M/1.4 \Ms)$ km, at a 99% confidence level. If, however, we adopt $\zeta=40^\circ$ and $\alpha=35^\circ$, as given by Manchester & Johnston (1995), then $1.4 < \mr <1.6$. Figure 2 shows the corresponding domain in the NS mass-radius diagram, restricted by the $M(R)$ dependences for soft ($\pi$) and hard (TI) equations of state of the superdense matter (Shapiro & Teukolsky 1983). It follows from this picture, for example, that the radius of a NS with the canonical mass $M=1.4 \Ms$ is within the range $8.8<R<10.0$ km. Another set of angles, $\zeta=24^\circ$ and $\alpha=20^\circ$, was suggested by Gil & Krawczyk (1997). This set falls within the 99% confidence region only for very low mass-to-radius ratios, $\mr < 0.3$, when the gravitational effects become negligible. This corresponds to very low masses, $M < 0.5 \Ms$ at any $R$ allowed by the equations of state (Fig. 2). Note that for these angles and $M_*/R_{10} < 1.8$ the secondary PC remains invisible during the whole pulsar period. Conclusions =========== We have demonstrated that the analysis of the soft X-ray radiation emitted by PCs of radio pulsars in terms of NS atmosphere models provides a new tool to constrain the NS mass and radius, and consequently the equation of state of the superdense matter in the NS interiors. 
The constraints become more stringent if this analysis is combined with complementary data on the pulsar magnetic inclination and viewing angle, as we have shown using the millisecond pulsar J0437–4715 as an example. In principle, these angles can be inferred from the phase dependence of the radio polarization position angle. However, in the case of J0437–4715 this dependence is too complicated to be described by the simple rotating vector model of Radhakrishnan & Cooke (1969), so that the inferred angles are very uncertain. Once an adequate model for the radio emission is found, the statistical analysis of the polarization data would result in a domain of allowed angles in the $\zeta$-$\alpha$ plane, and the constraints should be based upon the overlap of the confidence regions obtained from the X-ray light curve and from the radio polarization data. This may constrain not only the $M/R$ ratio, but also the pulsar geometry. The allowed $M(R)$ domain can be further restricted if additional information on the NS mass is available. For instance, since the pulsar is in a binary system with a white dwarf companion, it is possible to independently estimate an upper limit on the NS mass. Sandhu et al. (1997) found upper limits on the white dwarf mass, $M_{\rm wd}\le 0.32\Ms$, and on the orbital inclination, $i\le 43^\circ$, which yield the restriction $M < 2.5\Ms$ (see their Fig. 2). If further observations of the pulsar reduce the limit on $i$, or a lower upper limit on $M_{\rm wd}$ is obtained from white dwarf cooling models, a more stringent constraint on $M$ would follow, thus narrowing the allowed domains of the other pulsar parameters ($R$, $\zeta$ and $\alpha$). The above-described analysis for   is simplified by the pulsar’s low magnetic field, which does not affect the properties of the X-ray radiation. The spectra and, particularly, the angular distribution of radiation emerging from strongly magnetized NS atmospheres depend significantly on the magnetic field (Pavlov et al. 
1994). Thus, a similar, albeit more complicated, analysis of X-ray radiation from pulsars with strong magnetic fields, $B\sim 10^{11}-10^{13}$ G (e. g., PSR B1929+10 and 0950+08), would also enable one to constrain the magnetic field strength at their magnetic poles. We thank Werner Becker for providing us with the $ROSAT$ PSPC light curve. We are grateful to Joachim Trümper for stimulating discussions. The work was partially supported through NASA grant NAG5-2807, INTAS grant 94-3834 and DFG-RBRF grant 96-02-00177G. VEZ acknowledges the Max-Planck fellowship. Arons, J. 1981, ApJ, 248, 1099 Becker, W., & Trümper, J. 1993, Nat., 365, 528 Becker, W., & Trümper, J. 1997, A&A, in press Becker, W., et al. 1997, in preparation Beskin, V. S., Gurevich, A. F., & Istomin, Ya. N. 1993, Physics of Pulsar Magnetosphere. Cambridge Univ. Press, Cambridge Cheng, A. F., & Ruderman, M. A. 1980, ApJ, 235, 576 Edelstein, J., Foster, R. S., & Bowyer, S. 1995, ApJ, 454, 442 Gil, J., & Krawczyk, A. 1997, MNRAS, 285, 561 Greiveldinger, C., et al. 1996, ApJ, 465, L35 Halpern, J. P., Martin, C., & Marshall, H. L. 1996, ApJ, 462, 908 Kawai, N., Tamura, K., & Saito, Y. 1996, in ESA’s Report to the 31st COSPAR Meeting (ESA-SP 1194), ed. W. R. Burke. Noordwijk, ESA, in press Manchester, R. N., & Johnston, S. 1995, ApJ, 441, L65 Manning, R. A., & Willmore, A. P. 1994, MNRAS, 266, 635 Michel, F. C. 1991, Theory of Neutron Star Magnetospheres. Univ. of Chicago Press, Chicago Pavlov, G. G., Shibanov, Yu. A., Ventura, J., & Zavlin, V. E. 1994, A&A, 289, 837 Radhakrishnan, V., & Cooke, D. J. 1969, Astrophys. Lett., 3, 225 Rajagopal, M., & Romani, R. W. 1996, ApJ, 461, 327 Saito, Y., et al. 1997, ApJ, 477, L37 Sandhu, J. S., et al. 1997, ApJ, 478, L95 Shapiro, S., & Teukolsky, S. 1983, Black Holes, White Dwarfs and Neutron Stars. Wiley, New York Wang, F. Y.-H., & Halpern, J. P. 1997, ApJ, 482, L159 Yancopoulos, S., Hamilton, T. T., & Helfand, D. 1994, ApJ, 429, 832 Zavlin, V. E., & Pavlov, G. G. 
1997, A&A, in press (ZP97) Zavlin, V. E., Shibanov, Yu. A., & Pavlov, G. G. 1995, Astron. Lett., 21, 149 Zavlin, V. E., Pavlov, G. G., & Shibanov, Yu. A. 1996, A&A, 315, 141 [^1]: In fact, at $\mr \gtrsim 1.93$ ($g_r \lesssim 0.66$) the model pulsations may grow because of the appearance of strong narrow peaks from the PC at $\theta\simeq 180^\circ$. However, there are no such peaks in the observed light curve.
--- abstract: | In the [Bounded Degree Matroid Basis Problem]{}, we are given a matroid and a hypergraph on the same ground set, together with costs for the elements of that set as well as lower and upper bounds $f({\varepsilon})$ and $g({\varepsilon})$ for each hyperedge ${\varepsilon}$. The objective is to find a minimum-cost basis $B$ such that $f({\varepsilon}) \leq |B \cap {\varepsilon}| \leq g({\varepsilon})$ for each hyperedge ${\varepsilon}$. Kir[á]{}ly et al. (Combinatorica, 2012) provided an algorithm that finds a basis of cost at most the optimum value which violates the lower and upper bounds by at most $2 \Delta-1$, where $\Delta$ is the maximum degree of the hypergraph. When only lower or only upper bounds are present for each hyperedge, this additive error is decreased to $\Delta-1$. We consider an extension of the matroid basis problem to generalized polymatroids, or g-polymatroids, and additionally allow element multiplicities. The [Bounded Degree g-polymatroid Element Problem with Multiplicities]{} takes as input a g-polymatroid $Q(p,b)$ instead of a matroid, and besides the lower and upper bounds, each hyperedge ${\varepsilon}$ has element multiplicities $m_{\varepsilon}$. Building on the approach of Kir[á]{}ly et al., we provide an algorithm for finding a solution of cost at most the optimum value, having the same additive approximation guarantee. As an application, we develop a $1.5$-approximation for the metric [Many-Visits TSP]{}, where the goal is to find a minimum-cost tour that visits each city $v$ a positive $r(v)$ number of times. Our approach combines our algorithm for the [Bounded Degree g-polymatroid Element Problem with Multiplicities]{} with the principle of Christofides’ algorithm from 1976 for the (single-visit) metric TSP, whose approximation guarantee it matches. **Keywords:** Generalized polymatroids, degree constraints, traveling salesman problem. 
author: - 'Krist[ó]{}f B[é]{}rczi[^1]' - 'Andr[é]{} Berger[^2]' - 'Matthias Mnich[^3]' - 'Roland Vincze[^4]' bibliography: - 'mvtsp\_apx.bib' title: | Degree-Bounded Generalized Polymatroids and\ Approximating the Metric Many-Visits TSP[^5] --- \[0pt\]\[0pt\][![image](BMBF_gefoerdert_2017_en)]{} Introduction {#sec:introduction} ============ In this paper we consider polymatroidal optimization problems with degree constraints. An illustrative example is the [Minimum Bounded Degree Spanning Tree problem]{}, where the goal is to find a minimum cost spanning tree in a graph with lower and upper bounds on the degree at each vertex. Checking feasibility of a degree-bounded spanning tree contains the $\mathsf{NP}$-hard Hamiltonian path problem; therefore, efficiently finding spanning trees that only slightly violate the degree constraints is of interest. Several algorithms were given that balance the cost of the spanning tree against the violation of the degree bounds [@ChaudhuriEtAl2009; @ChaudhuriEtAl2009a; @FurerRaghavachari1994; @KonemannRavi2003; @KonemannRavi2002]. Goemans [@Goemans2006] gave a polynomial-time algorithm that finds a spanning tree of cost at most the optimum value that violates each degree bound by at most $2$. Singh and Lau [@SinghLau2007] improved the additive approximation guarantee to $1$ by extending the iterative rounding method of Jain [@Jain2001] with a relaxation step. Zenklusen [@Zenklusen2012] considered an extension of the problem where for every vertex $v$, the edges adjacent to $v$ have to be independent in a given matroid. Motivated by a problem on binary matroids posed by Frieze, a matroidal generalization called the [Minimum Bounded Degree Matroid Basis Problem]{} was introduced by Kir[á]{}ly, Lau and Singh [@KiralyEtAl2012] in 2012. 
The problem takes as input a matroid $M=(S,r)$, a cost function $c:S \rightarrow \mathbb{R}$, a hypergraph $H=(S, \mathcal{E})$ and lower and upper bounds $f,g:\mathcal{E}\rightarrow\mathbb{Z}_{\geq 0}$; the objective is to find a minimum-cost basis $B$ of $M$ such that $f({\varepsilon}) \leq |B \cap {\varepsilon}| \leq g({\varepsilon})$ for each ${\varepsilon}\in \mathcal{E}$. For this problem, the authors developed an approximation algorithm that is based on the iterative relaxation method and a clever token-counting argument of Chaudhuri et al. [@ChaudhuriEtAl2009] and Singh and Lau [@SinghLau2007]. Let us denote the maximum degree of the hypergraph $H$ by $\Delta$. When both lower bounds and upper bounds are present, their algorithm returns a basis $B$ of cost at most the optimum value such that $f({\varepsilon}) - 2\Delta + 1 \leq |B \cap {\varepsilon}| \leq g({\varepsilon}) + 2\Delta - 1$ holds for each ${\varepsilon}\in \mathcal{E}$. Based on a technique of Bansal et al. [@BansalEtAl2009], they showed that the additive error can be improved when only lower bounds (or only upper bounds) are present, thus finding a basis $B$ of cost at most the optimum value such that $|B\cap {\varepsilon}|\leq g({\varepsilon})+\Delta-1$ (respectively, $f({\varepsilon})-\Delta+1\leq |B\cap {\varepsilon}|$) for each ${\varepsilon}\in\mathcal{E}$. Bansal et al. [@BansalEtAl2013] considered extensions of the [Minimum Bounded Degree Matroid Basis Problem]{} to contra-polymatroid intersection and to crossing lattice polyhedra. In all of these cases, the solution for the problem is a $0{-}1$ vector defined on the ground set. Our results {#our-results .unnumbered} ----------- In this paper we consider a different generalization of the [Bounded Degree Matroid Basis Problem]{}. The generalization deals with generalized polymatroids (or g-polymatroids) instead of matroids, and additionally allows multiplicities of the hyperedges. 
Formally, the problem takes as input a g-polymatroid $Q(p,b)=(S,p,b)$ with a cost function $c:S \rightarrow \mathbb{R}$, and a hypergraph $H=(S, \mathcal{E})$ on the same ground set with lower and upper bounds $f, g:\mathcal{E}\rightarrow\mathbb{Z}_{\geq 0}$ and multiplicity vectors $m_{\varepsilon}:S\rightarrow\mathbb{Z}_{\geq 0}$ for ${\varepsilon}\in{\mathcal{E}}$ satisfying $m_{\varepsilon}(s)=0$ for $s\in S-{\varepsilon}$. The objective is to find a minimum-cost element $x$ of $Q(p,b)$ such that $f({\varepsilon}) \leq \sum_{s\in {\varepsilon}}m_{\varepsilon}(s)x(s) \leq g({\varepsilon})$ for each ${\varepsilon}\in \mathcal{E}$. We call this problem the [Bounded Degree g-polymatroid Element Problem with Multiplicities]{}. Our first main algorithmic result is the following: \[thm:matroid1\] There is a polynomial-time algorithm for the [Bounded Degree g-polymatroid Element Problem with Multiplicities]{} which returns an element $x$ of $Q(p,b)$ of cost at most the optimum value such that $f({\varepsilon})- 2\Delta+1 \leq \sum_{s\in {\varepsilon}} m_{\varepsilon}(s) x(s) \leq g({\varepsilon})+2\Delta-1$ for each ${\varepsilon}\in{\mathcal{E}}$, where $\Delta=\max_{s\in S}\left\{\sum_{{\varepsilon}\in{\mathcal{E}}:s\in {\varepsilon}} m_{\varepsilon}(s)\right\}$. Theorem \[thm:matroid1\] extends the result of Kir[á]{}ly et al. [@KiralyEtAl2012] from matroids to g-polymatroids. It turns out that, when upper bounds are present, there is a significant difference when g-polymatroids are considered instead of matroids. Adapting the algorithm of Kir[á]{}ly et al. is not immediate, as a crucial step of their approach is to relax the problem by deleting a constraint corresponding to a hyperedge ${\varepsilon}$ with small $g({\varepsilon})$ value. This step is feasible when the solution is a $0$-$1$ vector, as in those cases the violation on ${\varepsilon}$ is upper bounded by the size of the hyperedge. 
This does not hold for g-polymatroids (or even for polymatroids), where an integral element might have coordinates larger than 1. However, we show that after the first round of our algorithm, the problem can be restricted to the unit cube and so upper bounds remain tractable. When only lower bounds (or only upper bounds) are present, we call the problem [Lower (Upper) Bounded Degree g-polymatroid Element Problem with Multiplicities]{}. In this case, we show a similar result with an improved additive error: \[thm:matroid2\] There is an algorithm for the [Lower Bounded Degree g-polymatroid Element Problem with Multiplicities]{} that runs in polynomial time and returns an element $x$ of $Q(p,b)$ of cost at most the optimum value such that $f({\varepsilon})- \Delta+1 \leq \sum_{s\in {\varepsilon}} m_{\varepsilon}(s) x(s)$ for each ${\varepsilon}\in{\mathcal{E}}$. An analogous result holds for the [Upper Bounded Degree g-polymatroid Element Problem]{}, where $\sum_{s\in {\varepsilon}} m_{\varepsilon}(s) x(s) \leq g({\varepsilon}) + \Delta - 1$. While being interesting by itself, the algorithm alluded to in Theorem \[thm:matroid2\] serves as the key ingredient for our second main algorithmic result. It concerns an extension of the [Traveling Salesman Problem]{} (TSP), one of the cornerstones of combinatorial optimization. In TSP, we are given a set of $n$ cities with their pairwise non-negative symmetric distances, and we seek a tour of minimum overall length that visits every city exactly once and returns to the origin. For the metric variant, when distances obey the triangle inequality, Christofides [@Christofides1976] in 1976 gave a polynomial-time algorithm that returns a 1.5-approximation to the optimal tour. The algorithm was independently discovered by Serdyukov [@Serdyukov1978]. For more than 40 years, no polynomial-time algorithm with better approximation guarantee has been discovered. 
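The weighted degree $\Delta$ and the additive guarantee of Theorem \[thm:matroid2\] are easy to verify for a candidate solution; a sketch with toy data (the instance below is illustrative, not produced by the algorithm):

```python
def weighted_degree(hyperedges):
    """Delta = max over elements s of the sum of multiplicities
    m_eps(s) over the hyperedges containing s."""
    totals = {}
    for m_eps in hyperedges:
        for s, mult in m_eps.items():
            totals[s] = totals.get(s, 0) + mult
    return max(totals.values())

def satisfies_lower_guarantee(x, hyperedges, f):
    """Check f(eps) - Delta + 1 <= sum_s m_eps(s) * x(s) for every
    hyperedge, i.e. the guarantee of the lower-bounded variant."""
    delta = weighted_degree(hyperedges)
    return all(
        f_eps - delta + 1 <= sum(m * x[s] for s, m in m_eps.items())
        for m_eps, f_eps in zip(hyperedges, f)
    )

# Toy instance: hyperedges given as dicts element -> multiplicity.
H = [{'a': 2, 'b': 1}, {'b': 3, 'c': 1}]   # Delta = max(2, 1+3, 1) = 4
x = {'a': 1, 'b': 1, 'c': 0}               # hyperedge sums are 3 and 3
f = [3, 6]                                 # 3 >= 3-4+1 and 3 >= 6-4+1
ok = satisfies_lower_guarantee(x, H, f)
```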
In the generalization of the TSP, known as the [Many-Visits TSP]{}, each city $v$ is equipped with a request $r(v)\in\mathbb{Z}_{\geq 1}$, and we seek a tour of minimum overall length that visits city $v$ exactly $r(v)$ times and returns to the origin. Note that a loop might have a positive cost at any city in this case. The [Many-Visits TSP]{} was first considered in 1966 by Rothkopf [@Rothkopf1966]. The problem is clearly $\mathsf{NP}$-hard as it generalizes the TSP. In 1980, Psaraftis [@Psaraftis1980] gave a dynamic programming algorithm with time complexity $\mathcal O(n^2\prod_{i=1}^n (r_i+1))$; observe that this value may be as large as $(r/n+ 1)^n$, which is prohibitive even for moderately large values of $r = \sum_{i=1}^n r_i$. In 1984, Cosmadakis and Papadimitriou [@CosmadakisPapadimitriou1984] designed a family of algorithms, the fastest of which has run time[^6] $\mathcal O^\star(n^{2n}2^n + \log\sum r_i)$. The analysis of the algorithm is highly non-trivial, combining graph-theoretic insights and involved estimates of various combinatorial quantities. The usefulness of the Cosmadakis-Papadimitriou algorithm is limited by its superexponential dependence on $n$ in the run time, as well as its superexponential space requirement. Recently, Berger et al. [@BergerEtAl2019] simultaneously improved the run time to $2^{\mathcal O(n)}\cdot \log \sum r_i$ and reduced the space complexity to polynomial. As it is a generalization of the TSP, the [Many-Visits TSP]{} is of fundamental interest. This framework can be used for modeling *high-multiplicity* scheduling problems [@Psaraftis1980; @HochbaumShamir1991; @BraunerEtAl2005; @vanderVeenZhang1996]. In such problems, every job belongs to a job type, and two jobs of the same type are considered to be identical. One notable example of such problems is the *aircraft sequencing problem*. Airplanes are categorized into a small number of different classes. 
Two airplanes belonging to the same class need the same amount of time to land. In addition, there is a minimum time that should pass between the arrival of two planes. The amount of this time only depends on the classes of the two airplanes, and the aim is to minimize the time when the last plane lands. At the Hausdorff Workshop on Combinatorial Optimization in 2018, Rico Zenklusen asked[^7] for a polynomial-time approximation algorithm for [Many-Visits TSP]{} with metric cost functions. The cost function being metric implies that the cost $c_{ii}$ of each loop is at most $2c_{ij}$ for any other city $j$, that is, at most the cost of leaving city $i$ for $j$ and returning. The assumption of metric costs is necessary, as the TSP, and therefore the [Many-Visits TSP]{}, does not admit any non-trivial approximation for unrestricted cost functions. Our next algorithmic result answers Zenklusen’s question in a very strong form. Namely, we give a polynomial-time algorithm that matches the approximation guarantee of Christofides and Serdyukov for the single-visit case. \[thm:tsp1\] There is a polynomial-time $1.5$-approximation for the metric [Many-Visits TSP]{}. Let us remark that the requirements $r(v)$ are encoded in binary. The TSP can also be formulated for directed graphs, where the cost function is asymmetric. In a recent breakthrough, Svensson et al. [@SvenssonEtAl2018] gave the first constant-factor approximation for the metric ATSP. We can show the following: \[thm:tsp2\] There is a polynomial-time $\mathcal O(1)$-approximation for the metric [Many-Visits ATSP]{}. The rest of the paper is organized as follows. In Sect. \[sec:pre\], we give an overview of the notation and definitions. In Sect. \[sec:simple52approximation\], we provide a simple 2.5-approximation for the metric [Many-Visits TSP]{} that runs in polynomial time, and a polynomial-time constant-factor approximation for the metric [Many-Visits ATSP]{}. Thereafter, in Sect. 
\[sec:bp\], we give the necessary background on g-polymatroids. Sect. \[sec:approxpolymatroid\] describes the approximation algorithm for the [Bounded Degree g-polymatroid Element Problem with Multiplicities]{}. The 1.5-approximation for the metric [Many-Visits TSP]{} is given in Sect. \[sec:approx\]. We conclude in Sect. \[sec:discussion\]. Preliminaries {#sec:pre} ============= Throughout the paper, we let $G=(V,E)$ be a finite, undirected complete graph on $n$ vertices, whose edge set $E$ also contains a self-loop at every vertex $v\in V$. For a subset $F\subseteq E$ of edges, the *set of vertices covered by $F$* is denoted by $V(F)$. The *number of connected components* of the graph $(V(F),F)$ is denoted by $\operatorname{comp}(F)$. For a subset $X\subseteq V$ of vertices, the *set of edges spanned by $X$* is denoted by $E(X)$. The set of edges incident to a vertex $v$ is denoted by $\delta(v)$. For a vector $x\in\mathbb{R}^{|E|}$, we denote the sum of the $x$-values on the edges incident to $v$ by $d_x(v)$. Note that the $x$-value of the self-loop at $v$ is counted twice in $d_x(v)$. Given two graphs $H_1,H_2$ on the same vertex set, $H_1+H_2$ denotes the multigraph on the same vertex set obtained by taking the union of the edge sets of $H_1$ and $H_2$. Given a vector $x\in\mathbb{R}^{|S|}$ and a set $Z\subseteq S$, we use $x(Z)=\sum_{s\in Z} x(s)$. The *lower integer part of $x$* is denoted by $\floor{x}$, so $\floor{x}(s)=\floor{x(s)}$ for every $s\in S$. This notation extends to sets as well, therefore by $\floor{x}(Z)$ we mean $\sum_{s \in Z} \floor{x}(s)$. The *support of $x$* is denoted by $\operatorname{supp}(x)$, that is, $\operatorname{supp}(x)=\{s\in S:x(s)\neq 0\}$. The *difference of set $B$ from set $A$* is denoted by $A-B=\{s\in A : s\notin B\}$. We denote a single-element set $\{s\}$ by $s$ and, with a slight abuse of notation, write $A-s$ to indicate $A- \{s\}$. The *characteristic vector* of a set $A$ is denoted by $\chi_A$. 
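A small sketch of these conventions, under our own dictionary encoding of vectors indexed by edges:

```python
def d_x(x, v):
    """Sum of x-values on edges incident to v; the x-value of a
    self-loop (v, v) is counted twice, as in the text."""
    total = 0
    for (u, w), val in x.items():
        if u == v and w == v:
            total += 2 * val
        elif u == v or w == v:
            total += val
    return total

def supp(x):
    """Support of x: the positions carrying a nonzero value."""
    return {k for k, val in x.items() if val != 0}

# Toy vector on edges of a 3-vertex graph, including a self-loop at a.
x = {('a', 'a'): 1, ('a', 'b'): 2, ('b', 'c'): 4, ('a', 'c'): 0}
```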
Let ${\mathcal{T}}$ be a collection of subsets of $S$. We call $\mathcal{L} \subseteq {\mathcal{T}}$ an *independent laminar system* if for any pair $X, Y \in \mathcal{L}$: (i) they do not properly intersect, i.e. either $X \subseteq Y$, $Y \subseteq X$ or $X \cap Y = \emptyset$, (ii) the characteristic vectors $\chi_Z$ of the sets $Z \in \mathcal{L}$ are independent. A *maximal* independent laminar system $\mathcal{L}$ with respect to ${\mathcal{T}}$ is an independent laminar system in ${\mathcal{T}}$ such that for any $Y \in {\mathcal{T}}-\mathcal{L}$ the system $\mathcal{L} \cup \{Y\}$ is not independent laminar. In other words, if we include any set $Y$ from ${\mathcal{T}}-\mathcal{L}$, it will properly intersect at least one member of $\mathcal{L}$, or $\chi_Y$ can be given as a linear combination of $\{ \chi_Z: Z \in \mathcal{L} \}$. Given a laminar system $\mathcal{L}$ and a set $X\subseteq S$, the set of maximal members of $\mathcal{L}$ lying inside $X$ is denoted by $\mathcal{L}^{\max}(X)$, that is, $\mathcal{L}^{\max}(X)=\{Y\in\mathcal{L}:\ Y\subset X,\ \not\exists Y'\in\mathcal{L}\ \text{s.t.}\ Y\subset Y'\subset X\}$. The cost functions $c:E\rightarrow\mathbb{R}_{\geq 0}$ are assumed to satisfy the triangle inequality. The *minimum cost of an edge incident to a vertex $v$* is denoted by $c_v^{\min} := \min_{u \in V} c(uv)$. Note that $u=v$ is allowed in the definition, therefore the minimum takes into account the cost of the self-loop at $v$ as well. The triangle inequality holds for self-loops, too, meaning that $c(vv) \leq 2 \cdot c_v^{\min}$ for all $v\in V$. In the [Many-Visits TSP]{}, each vertex $v\in V$ is additionally equipped with a request $r(v)\in\mathbb{Z}_{\geq 1}$ encoded in binary. The goal is to find a minimum-cost closed walk (or *tour*) on the edges of the graph that visits each vertex $v\in V$ exactly $r(v)$ times. 
Listing all the edges of such a walk might be exponential in the size of the input, hence we always consider *compact representations* of the solution and the multigraphs that arise in our algorithms. That is, rather than storing an $r(V)$-long sequence of edges, for every edge $e$ we store its multiplicity $z(e)$ in the solution. As there are at most $n^2$ different edges in the solution each having multiplicity at most $\max_{v \in V} r(v)$, the space needed to store a feasible solution is $\mathcal O(n^2\log r(V))$. Therefore a vector $z \in \mathbb{Z}_{\geq 0}^{E}$ represents a feasible tour if $d_z(v)=2\cdot r(v)$ for every $v\in V$ and $\operatorname{supp}(z)$ is a connected subgraph of $G$. From this compact representation, one can compute a collection $\mathcal{C}$ of pairs $(C, \mu_C)$, where each $C$ is a simple closed walk (cycle) and $\mu_C$ is the corresponding integer denoting the number of copies of $C$. The number of such cycles $C$ is polynomial in $n$, and one can compute $\mathcal{C}$ in polynomial time (see, e.g., the procedure in Sect. 2 of Grigoriev and van de Klundert [@Grigoriev2006]). One can obtain the explicit order of the vertices from $(C, \mu_C)$ the following way: traverse $\mu_C$ copies of an arbitrary cycle $C$, and whenever a vertex $u$ is reached for the first time, traverse $\mu_{C'}$ copies of every cycle $C' \neq C$ containing $u$. Note that while the size of $\mathcal{C}$ is polynomial in $n$, the size of the explicit order of the vertices is exponential, hence the time complexity of the last step is also exponential in $n$. Denote by ${\mathsf{T}}^\star_{c,r}$ an optimal solution for an instance $(G,c,r)$ of the [Many-Visits TSP]{}, and by ${\mathsf{T}}^\star_{c,1}$ an optimal tour for the single-visit TSP (i.e., when $r(v)=1$ for each $v\in V$). 
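The two feasibility conditions on a compact representation can be checked directly; a sketch under the same edge-multiplicity encoding (the toy data is illustrative, not a tour produced by the algorithms of this paper):

```python
def is_feasible_tour(z, r):
    """Feasibility test for a compact representation z of a tour:
    d_z(v) = 2 * r(v) for every vertex v, and supp(z) is connected.
    z maps edges (u, w) to multiplicities; u == w encodes a self-loop,
    which contributes twice its multiplicity to the degree."""
    deg = {v: 0 for v in r}
    for (u, w), m in z.items():
        deg[u] += m
        deg[w] += m      # for a self-loop u == w this adds 2*m in total
    if any(deg[v] != 2 * r[v] for v in r):
        return False
    # Connectivity of supp(z), via depth-first search.
    adj = {}
    for (u, w), m in z.items():
        if m > 0:
            adj.setdefault(u, set()).add(w)
            adj.setdefault(w, set()).add(u)
    if not adj:
        return False
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        for nb in adj[stack.pop()]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return seen == set(adj)

# Toy instance: r(a) = 2, r(b) = 1; edge ab used twice plus a loop at a.
r = {'a': 2, 'b': 1}
z_good = {('a', 'b'): 2, ('a', 'a'): 1}
z_disconnected = {('a', 'a'): 2, ('b', 'b'): 1}
```

Since $r(v) \geq 1$ forces every vertex to be covered by the degree condition, checking connectivity over the covered vertices suffices.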
Relaxing the connectivity requirement for solutions of the [Many-Visits TSP]{} yields Hitchcock’s transportation problem, which is solvable in polynomial time [@EdmondsKarp1970] and whose optimal solution we denote by ${\mathsf{TP}}^\star_{c,r}$. A Simple 2.5-Approximation for the Metric Many-Visits TSP {#sec:simple52approximation} ========================================================= In this section we give a simple $2.5$-approximation algorithm for the metric [Many-Visits TSP]{}; see Algorithm \[alg:apx\_tp\]. **Input:** A complete undirected graph $G$, costs $c:E\rightarrow\mathbb{R}_{\geq 0}$ satisfying the triangle inequality, requirements $r:V\rightarrow\mathbb{Z}_{\geq 1}$. **Output:** A tour that visits each $v \in V$ exactly $r(v)$ times. Calculate an $\alpha$-approximate solution ${\mathsf{T}}^\alpha_{c,1}$ for the single-visit metric TSP instance $(G,c,1)$. \[st:i\] Calculate an optimal solution ${\mathsf{TP}}^\star_{c,r-1}$ for the transportation problem with prescriptions $r(v)-1$ for $v\in V$. \[st:ii\] **return** $T = {\mathsf{T}}^\alpha_{c,1} + {\mathsf{TP}}^\star_{c,r-1}$ \[st:iii\] \[thm:simple\] The multigraph $T$ returned by Algorithm \[alg:apx\_tp\] is a feasible solution to the metric [Many-Visits TSP]{} instance $(G,c,r)$. The cost of the tour $T$ is at most $(\alpha+1)\cdot c({\mathsf{T}}^\star_{c,r})$. The degree of each vertex $v\in V$ is $2$ in ${\mathsf{T}}^\alpha_{c,1}$, and is $2\cdot(r(v)-1)$ in ${\mathsf{TP}}^\star_{c,r-1}$; hence the total degree of $v$ in $T = {\mathsf{T}}^\alpha_{c,1} + {\mathsf{TP}}^\star_{c,r-1}$ is $2\cdot r(v)$, as required. Since ${\mathsf{T}}^\alpha_{c,1}$ is connected, $T = {\mathsf{T}}^\alpha_{c,1} + {\mathsf{TP}}^\star_{c,r-1}$ is also connected, implying that it is a feasible solution to the problem. The cost of the tour $T$ constructed by Algorithm \[alg:apx\_tp\] is equal to $c(T) = c({\mathsf{T}}^\alpha_{c,1}) + c({\mathsf{TP}}^\star_{c,r-1})$. 
The cost of ${\mathsf{T}}^\alpha_{c,1}$ is at most $\alpha\cdot c({\mathsf{T}}^\star_{c,1})$. Note that $c({\mathsf{T}}^\star_{c,1})\leq c({\mathsf{T}}^\star_{c,r})$, as the cost function satisfies the triangle inequality. Again, by the triangle inequality, $c({\mathsf{TP}}^\star_{c,r-1})\leq c({\mathsf{TP}}^\star_{c,r})$. Hence we get $$\begin{aligned} c(T) &= c({\mathsf{T}}^\alpha_{c,1}) + c({\mathsf{TP}}^\star_{c,r-1})\\ & \leq \alpha\cdot c({\mathsf{T}}^\star_{c,1}) + c({\mathsf{TP}}^\star_{c,r-1})\\ & \leq \alpha\cdot c({\mathsf{T}}^\star_{c,r}) + c({\mathsf{TP}}^\star_{c,r}) \\ &\leq(\alpha+1)\cdot c({\mathsf{T}}^\star_{c,r}), \end{aligned}$$ proving the approximation guarantee stated in the theorem. Christofides’ algorithm [@Christofides1976] for the single-visit metric TSP provides an approximate solution with $\alpha=1.5$; thus we get the following: There is a polynomial-time algorithm that provides a $2.5$-approximation for the metric [Many-Visits TSP]{}. The approximation ratio follows immediately; it remains to argue that the algorithm runs in polynomial time. Finding an approximate solution for the single-visit TSP in Step \[st:i\] requires $\mathcal O(n^3)$ operations [@Christofides1976]. The transportation problem in Step \[st:ii\] can be solved in $\mathcal O(n^3\log r(V))$ operations using the Edmonds-Karp scaling method [@EdmondsKarp1970]. Finally, Step \[st:iii\] takes $\mathcal O(n^2 \log r(V))$ operations, therefore the total time complexity of the algorithm is $\mathcal O(n^3 \log r(V))$. For the metric [Many-Visits ATSP]{}, in Step \[st:i\] of Algorithm \[alg:apx\_tp\] we can apply the $\mathcal O(1)$-approximation for metric ATSP due to Svensson et al. [@SvenssonEtAl2018]. This leads to the proof of Theorem \[thm:tsp2\]. Polyhedral background {#sec:bp} ===================== In what follows, we make use of some basic notions and theorems of the theory of generalized polymatroids. 
For background, see for example the paper of Frank and Tardos [@frank1988generalized] or Chapter 14 in the book by Frank [@frank2012connections]. Given a ground set $S$, a set function $b:2^S\rightarrow\mathbb{Z}$ is *submodular* if $$b(X)+b(Y)\geq b(X\cap Y) + b(X\cup Y)$$ holds for every pair of subsets $X,Y\subseteq S$. A set function $p:2^S\rightarrow\mathbb{Z}$ is *supermodular* if $-p$ is submodular. As a generalization of matroid rank functions, Edmonds introduced the notion of polymatroids [@Edmonds1970]. A set function $b$ is a *polymatroid function* if $b(\emptyset)=0$, $b$ is non-decreasing, and $b$ is submodular. We define $$P(b):=\{x\in\mathbb{R}^{S}_{\geq 0}: x(Y)\leq b(Y)\ \text{for every}\ Y\subseteq S\} \enspace.$$ The set of integral elements of $P(b)$ is called a *polymatroidal set*. Similarly, the *base polymatroid* $B(b)$ is defined by $$B(b):=\{x\in\mathbb{R}^{S}: x(Y)\leq b(Y)\ \text{for every}\ Y\subseteq S, \, x(S)=b(S)\} \enspace .$$ Note that a base polymatroid is just a face of the polymatroid $P(b)$, namely the one determined by $x(S)=b(S)$. In both cases, $b$ is called the *border function* of the polyhedron. Although non-negativity of $x$ is not assumed in the definition of $B(b)$, this follows by the monotonicity of $b$ and the definition of $B(b)$: $x(s)=x(S)-x(S-s) \geq b(S)-b(S-s)\geq 0$ holds for every $s\in S$. The set of integral elements of $B(b)$ is called a *base polymatroidal set*. Edmonds [@Edmonds1970] showed that the vertices of a polymatroid or a base polymatroid are integral, thus $P(b)$ is the convex hull of the corresponding polymatroidal set, while $B(b)$ is the convex hull of the corresponding base polymatroidal set. For this reason, we will call the sets of integral elements of $P(b)$ and $B(b)$ simply a polymatroid and a base polymatroid. 
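As a toy illustration (our own example, not from the paper), the integral elements of $B(b)$ can be enumerated by brute force for the rank function of the uniform matroid $U_{2,3}$; the base polymatroid then consists exactly of the characteristic vectors of the three bases of that matroid:

```python
from itertools import combinations, product

def integral_elements_of_B(b, S, box=3):
    """Brute-force enumeration of the integral elements of B(b):
    vectors x >= 0 with x(Y) <= b(Y) for all Y and x(S) = b(S).
    Searches the finite box {0, ..., box-1}^S, enough for tiny examples."""
    S = sorted(S)
    all_subsets = [frozenset(c) for r in range(len(S) + 1)
                   for c in combinations(S, r)]
    elements = []
    for vals in product(range(box), repeat=len(S)):
        x = dict(zip(S, vals))
        if all(sum(x[s] for s in Y) <= b(Y) for Y in all_subsets) \
                and sum(x.values()) == b(frozenset(S)):
            elements.append(tuple(vals))
    return elements

S = {1, 2, 3}
b = lambda X: min(len(X), 2)       # rank function of U_{2,3}
bases = integral_elements_of_B(b, S)
```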
We say that a pair $(p,b)$ of set functions is a *paramodular pair* if $p(\emptyset)=b(\emptyset)=0$, $p$ is supermodular, $b$ is submodular, and the *cross-inequality* $$b(X) - p(Y) \geq b(X - Y) - p(Y - X)$$ holds for every pair of subsets $X,Y\subseteq S$. A *generalized polymatroid*, or *g-polymatroid*, is a polyhedron of the form $$Q(p,b):=\left\{ x \in \mathbb{R}^{S}: p(Y) \leq x(Y) \leq b(Y) \ \text{for every}\ Y \subseteq S \right\} \enspace ,$$ where $(p,b)$ is a paramodular pair. Here $(p,b)$ is called the *border pair* of the polyhedron. It is known [@frank2012connections] that a g-polymatroid defined by an integral paramodular pair is a non-empty integral polyhedron. A special g-polymatroid is a box $T(\ell,u)=\{x\in \mathbb{R}^{S}: \ell\leq x\leq u\}$ where $\ell:S\rightarrow \mathbb{Z} \cup \{-\infty \}$, $u:S\rightarrow \mathbb{Z}\cup \{\infty \}$ with $\ell\leq u$. Another illustrative example is given by base polymatroids. Indeed, given a polymatroid function $b$ with finite $b(S)$, its *complementary set function* $p$ is defined for $X\subseteq S$ by $p(X):=b(S)-b(S-X)$. It is not difficult to check that $(p,b)$ is a paramodular pair and that $B(b)=Q(p,b)$. The intersection $Q'$ of a g-polymatroid $Q=Q(p,b)$ and a box $T=T(\ell,u)$ is non-empty if and only if $\ell(Y)\leq b(Y)$ and $p(Y)\leq u(Y)$ hold for every $Y\subseteq S$. 
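These definitions can be sanity-checked by brute force on a small ground set; the sketch below verifies that the complementary function of a toy polymatroid function indeed forms a paramodular pair with it (the example is ours):

```python
from itertools import combinations

def subsets(S):
    S = sorted(S)
    return [frozenset(c) for r in range(len(S) + 1)
            for c in combinations(S, r)]

def complementary(b, S):
    """Complementary set function p(X) = b(S) - b(S - X) of a
    polymatroid function b, as in the text."""
    S = frozenset(S)
    return lambda X: b(S) - b(S - frozenset(X))

def is_paramodular(p, b, S):
    """Brute-force check that (p, b) is a paramodular pair:
    p(0) = b(0) = 0, p supermodular, b submodular, and the
    cross-inequality b(X) - p(Y) >= b(X - Y) - p(Y - X)."""
    subs = subsets(S)
    if p(frozenset()) != 0 or b(frozenset()) != 0:
        return False
    for X in subs:
        for Y in subs:
            if b(X) + b(Y) < b(X | Y) + b(X & Y):
                return False
            if p(X) + p(Y) > p(X | Y) + p(X & Y):
                return False
            if b(X) - p(Y) < b(X - Y) - p(Y - X):
                return False
    return True

# Toy polymatroid function: the rank function of U_{2,3} on S = {1, 2, 3}.
S = {1, 2, 3}
b = lambda X: min(len(X), 2)
p = complementary(b, S)
```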
When $Q'$ is non-empty, its unique border pair $(p',b')$ is given by $$p'(Z) = \max\{ p(Z') - u(Z'-Z)+ \ell(Z-Z') : Z'\subseteq S\},$$ $$b'(Z) = \min\{ b(Z') - \ell(Z'-Z)+ u(Z-Z') : Z'\subseteq S\} \enspace .$$ Given a g-polymatroid $Q(p, b)$ and $Z\subseteq S$, by *deleting* $Z$ from $Q(p,b)$ we obtain a g-polymatroid $Q(p, b)\setminus Z$ defined on set $S - Z$ by the restrictions of $p$ and $b$ to $S - Z$, that is, $$Q(p, b)\setminus Z:=\{x\in\mathbb{R}^{S-Z}: p(Y) \leq x(Y)\leq b(Y)\ \text{for every}\ Y\subseteq S - Z\} \enspace .$$ In other words, $Q(p, b)\setminus Z$ is the projection of $Q(p, b)$ to the coordinates in $S - Z$. Extending the notion of contraction is not immediate. A set can be naturally identified with its characteristic vector, that is, contraction is basically an operation defined on $0{-}1$ vectors. In our proof, we will need a generalization of this to the integral elements of a g-polymatroid. However, such an element might have coordinates larger than one as well, hence finding the right definition is not straightforward. In the case of matroids, the most important property of contraction is the following: $I$ is an independent set of $M/Z$ if and only if $F\cup I$ is independent in $M$ for any maximal independent set $F$ of $Z$. With this property in mind, we define the g-polymatroid obtained by the contraction of an integral vector $z\in Q(p,b)$ to be the g-polymatroid $Q(p',b'):=Q(p,b)/z$ on the same ground set $S$ with the border functions $$\begin{aligned} p'(X) &:= p(X) - z(X) \\ b'(X) &:= b(X) - z(X) \enspace .\end{aligned}$$ Observe that $p'$ is obtained as the difference of a supermodular and a modular function, implying that it is supermodular. Similarly, $b'$ is submodular. Moreover, $p'(\emptyset)=b'(\emptyset)=0$, and $$\begin{aligned} b'(X)-p'(Y) {}&{}= b(X)-z(X)-p(Y)+z(Y)\\ {}&{}\geq b(X-Y)-p(Y-X)-z(X-Y)+z(Y-X)\\ {}&{}= b'(X-Y)-p'(Y-X),\end{aligned}$$ hence $(p',b')$ is indeed a paramodular pair.
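The contraction above is a purely arithmetic operation on the border functions, so it is easy to sketch. The following Python fragment (names `in_gpolymatroid` and `contract` are ours; the toy border pair $p\equiv 0$, $b(Y)=2|Y|$ is a hypothetical example) builds the contracted pair $(p',b')$:

```python
from itertools import chain, combinations

def subsets(S):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(S, k) for k in range(len(S) + 1))]

def in_gpolymatroid(x, p, b, S):
    """Check p(Y) <= x(Y) <= b(Y) for every subset Y (x given as a dict)."""
    return all(p(Y) <= sum(x[s] for s in Y) <= b(Y) for Y in subsets(S))

def contract(p, b, z):
    """Border pair of Q(p,b)/z for an integral z: p' = p - z, b' = b - z."""
    zf = lambda Y: sum(z[s] for s in Y)
    return (lambda Y: p(Y) - zf(Y)), (lambda Y: b(Y) - zf(Y))

# Hypothetical toy g-polymatroid 0 <= x(Y) <= 2|Y| and a vector z inside it.
S = frozenset({1, 2})
p = lambda Y: 0
b = lambda Y: 2 * len(Y)
z = {1: 1, 2: 0}
p2, b2 = contract(p, b, z)
print(in_gpolymatroid(z, p, b, S), b2(S))  # True 3
```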
The main reason for defining the contraction of an element $z\in Q(p,b)$ is shown by the following lemma. \[lem:contraction\] Let $Q(p',b')$ be the g-polymatroid obtained by contracting $z\in Q(p,b)$. Then $x+z\in Q(p,b)$ for every $x\in Q(p',b')$. Let $x\in Q(p',b')$. By definition, this implies $p'(Y)\leq x(Y)\leq b'(Y)$ for every $Y\subseteq S$. Thus $p(Y)=p'(Y)+z(Y)\leq x(Y)+z(Y)\leq b'(Y)+z(Y)=b(Y)$, concluding the proof. Formally, the [Bounded Degree g-polymatroid Element Problem]{} takes as input a g-polymatroid $Q(p,b)$ with a cost function $c:S \rightarrow \mathbb{R}$, and a hypergraph $H=(S, {\mathcal{E}})$ on the same ground set with lower and upper bounds $f,g:{\mathcal{E}}\rightarrow\mathbb{Z}_{\geq 0}$ and multiplicity vectors $ m_{\varepsilon}:S\rightarrow\mathbb{Z}_{\geq0}$ for ${\varepsilon}\in{\mathcal{E}}$ satisfying $m_{\varepsilon}(s)=0$ for $s\in S-{\varepsilon}$. The objective is to find a minimum-cost element $x$ of $Q(p,b)$ such that $f({\varepsilon}) \leq \sum_{s\in {\varepsilon}} m_{\varepsilon}(s) x(s) \leq g({\varepsilon})$ for each ${\varepsilon}\in {\mathcal{E}}$. Approximating the Bounded Degree g-polymatroid Element Problem with Multiplicities {#sec:approxpolymatroid} ================================================================================== The aim of this section is to prove Theorems \[thm:matroid1\] and \[thm:matroid2\].
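Before turning to the LP-based algorithm, the problem just defined can be solved exactly by exhaustive search on very small instances, which is useful for testing. The sketch below (names and the toy instance are ours, not from the paper) enumerates integral vectors up to a coordinate cap:

```python
from itertools import chain, combinations, product

def subsets(S):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(S, k) for k in range(len(S) + 1))]

def brute_force_element(S, p, b, c, hyperedges, f, g, m, ub):
    """Cheapest integral x in Q(p,b) with f(e) <= sum_s m_e(s) x(s) <= g(e).
    Coordinates are capped at ub; exponential search, for tiny sanity checks
    only (the paper's algorithm uses LP-based iterative rounding instead)."""
    elems = sorted(S)
    best = None
    for vals in product(range(ub + 1), repeat=len(elems)):
        x = dict(zip(elems, vals))
        if not all(p(Y) <= sum(x[s] for s in Y) <= b(Y) for Y in subsets(elems)):
            continue
        if not all(f[e] <= sum(m[e][s] * x[s] for s in e) <= g[e]
                   for e in hyperedges):
            continue
        cost = sum(c[s] * x[s] for s in elems)
        if best is None or cost < best[0]:
            best = (cost, x)
    return best

# Hypothetical toy instance: Q given by 0 <= x(Y) <= 2|Y|, one hyperedge.
S = {1, 2}
p, b = (lambda Y: 0), (lambda Y: 2 * len(Y))
c = {1: 1, 2: 3}
e = frozenset(S)
print(brute_force_element(S, p, b, c, [e], {e: 3}, {e: 4}, {e: {1: 1, 2: 1}}, 2))
# (5, {1: 2, 2: 1})
```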
We start by formulating a linear programming relaxation for the [Bounded Degree g-polymatroid Element Problem]{}: $$\begin{aligned} \label{eq:lp_poly} \text{minimize} \qquad \sum_{s \in S} c(s) \ &x(s) \\ \text{subject to} \qquad p(Z) \leq \ &x(Z) \leq b(Z) &\forall Z \subseteq S \tag{LP}\\ \qquad f({\varepsilon}) \leq \sum_{s\in {\varepsilon}} \ m_{\varepsilon}(s) \, &x(s) \leq g({\varepsilon}) &\forall {\varepsilon}\in {\mathcal{E}}\end{aligned}$$ Although the program has an exponential number of constraints, it can be separated in polynomial time using submodular minimization [@iwata2001; @mccormick2005; @schrijver2000]. Algorithm \[alg:matd\] generalizes the approach by Kir[á]{}ly et al. [@KiralyEtAl2012]. We iteratively solve the linear program, delete elements which get a zero value in the solution, update the solution values and perform a contraction on the g-polymatroid, or remove constraints arising from the hypergraph. There is a significant difference between the first round of the algorithm and the later ones. In the first round, the bounds on the coordinates solely depend on $p$ and $b$, while in the subsequent rounds the whole problem is restricted to the unit cube. It is somewhat surprising that this restriction affects neither the solvability of the problem nor the additive error. Intuitively, the very first step of the algorithm fixes most of each coordinate, and the following steps change its value by at most $1$. **Input:** A g-polymatroid $Q(p,b)$ on ground set $S$, cost function $c:S\rightarrow\mathbb{R}$, a hypergraph $H = (S, {\mathcal{E}})$, lower and upper bounds $f,g:{\mathcal{E}}\rightarrow\mathbb{Z}_{\geq 0}$, multiplicities $m_{\varepsilon}:S\rightarrow\mathbb{Z}_{\geq 0}$ for ${\varepsilon}\in {\mathcal{E}}$ satisfying $m_{\varepsilon}(s)=0$ for $s\in S-{\varepsilon}$. **Output:** $z\in Q(p,b)$ of cost at most $\textsc{OPT}_{LP}$, violating the hyperedge constraints by at most $2\Delta-1$.
Initialize $z(s) \leftarrow 0$ for every $s\in S$. Compute a basic optimal solution $x$ for the current linear program.\ (Note: starting from the second iteration, $0\leq x \leq 1$.) Delete any element $s$ with $x(s)=0$. Update each hyperedge ${\varepsilon}\leftarrow {\varepsilon}-s$ and $m_{\varepsilon}(s)\leftarrow 0$. Update the g-polymatroid $Q(p,b)\leftarrow Q(p,b)\setminus s$ by deletion. \[st:del\] For all $s\in S$ update $z(s) \leftarrow z(s) + \floor{x}(s)$.\ Apply g-polymatroid contraction $Q(p,b)\leftarrow Q(p,b)/\floor{x}$, that is, redefine $p(Y) := p(Y) - \floor{x}(Y)$ and $b(Y) := b(Y) - \floor{x}(Y)$ for every $Y \subseteq S$.\ Update $f({\varepsilon}) \leftarrow f({\varepsilon}) - \displaystyle\sum_{s\in {\varepsilon}}\, m_{\varepsilon}(s) \floor{x}(s)$ and $g({\varepsilon}) \leftarrow g({\varepsilon}) - \displaystyle\sum_{s\in {\varepsilon}}\, m_{\varepsilon}(s) \floor{x}(s)$ for each ${\varepsilon}\in {\mathcal{E}}$.\[st:inc\] If $m_{\varepsilon}({\varepsilon}) \leq 2\Delta-1$, let ${\mathcal{E}}\leftarrow {\mathcal{E}}- {\varepsilon}$. \[st:rem\] **if** it is the first iteration **then** \[st:first\]\ Take the intersection of $Q(p,b)$ and the unit cube $[0,1]^S$, that is, $p(Y):=\max\{ p(Y') - |Y'-Y| : Y'\subseteq S\}$ and $b(Y) := \min\{ b(Y')+ |Y-Y'| : Y'\subseteq S\}$ for every $Y\subseteq S$. **return** $z$ #### **Correctness** First we show that if the algorithm terminates then the returned solution $z$ satisfies the requirements of the theorem. In a single iteration, the g-polymatroid $Q(p,b)$ is updated to $(Q(p,b)\setminus D)/\floor{x}$, where $D$ is the set of deleted elements. In the first iteration, the g-polymatroid thus obtained is further intersected with the unit cube. By Lemma \[lem:contraction\], the vector $x-\lfloor x\rfloor$ restricted to $S-D$ remains a feasible solution for the modified linear program in the next iteration.
Note that this vector is contained in the unit cube as its coordinates are between $0$ and $1$. This remains true when a lower degree constraint is removed in Step \[st:rem\] as well, therefore the cost of $z$ plus the cost of an optimal LP solution does not increase throughout the procedure. Hence the cost of the output $z$ is at most the cost of the initial LP solution, which is at most the optimum. By Lemma \[lem:contraction\], the vector $x-\lfloor x \rfloor+z$ is contained in the original g-polymatroid, although it might violate some of the lower and upper bounds on the hyperedges. We only remove the constraints corresponding to the lower and upper bounds for a hyperedge ${\varepsilon}$ when $m_{\varepsilon}({\varepsilon}) \leq 2\Delta-1$. As the g-polymatroid is restricted to the unit cube after the first iteration, these constraints are violated by at most $2\Delta-1$, as the total value of $\sum_{s\in{\varepsilon}}m_{\varepsilon}(s)z(s)$ can change by a value between $0$ and $2\Delta-1$ in the remaining iterations. It remains to show that the algorithm terminates successfully. The proof is based on similar arguments as in Kir[á]{}ly et al. [@KiralyEtAl2012 proof of Theorem 2]. #### **Termination** Suppose, for sake of contradiction, that the algorithm does not terminate. Then there is some iteration after which none of the simplifications in Steps \[st:del\]-\[st:rem\] can be performed. This implies that for the current basic LP solution $x$ it holds that $0<x(s)<1$ for each $s \in S$ and $m_{\varepsilon}({\varepsilon})\geq 2\Delta$ for each ${\varepsilon}\in {\mathcal{E}}$. We say that a set $Y$ is *p-tight* (or *b-tight*) if $x(Y)=p(Y)$ (or $x(Y)=b(Y)$), and let ${\mathcal{T}}^p=\{Y\subseteq S:x(Y)=p(Y)\}$ and ${\mathcal{T}}^b=\{Y\subseteq S:x(Y)=b(Y)\}$ denote the collections of $p$-tight and $b$-tight sets with respect to solution $x$. Let ${\mathcal{L}}$ be a maximal independent laminar system in ${\mathcal{T}}^p \cup {\mathcal{T}}^b$.
\[claim:uncrossing\] $\operatorname{span}{(\{\chi_Z: Z \in {\mathcal{L}}\})} = \operatorname{span}{(\{\chi_Z: Z \in {\mathcal{T}}^p \cup {\mathcal{T}}^b\})}$ The proof uses an uncrossing argument. Suppose, for contradiction, that there is a set $R$ from ${\mathcal{T}}^p \cup {\mathcal{T}}^b$ for which $\chi_R \notin \operatorname{span}{(\{\chi_Z: Z \in {\mathcal{L}}\})}$. Choose this set $R$ so that it is incomparable to as few sets of ${\mathcal{L}}$ as possible. Without loss of generality, we may assume that $R \in {\mathcal{T}}^p$. Now choose a set $T \in {\mathcal{L}}$ that is incomparable to $R$. Note that such a set necessarily exists as the laminar system is maximal. We distinguish two cases. **Case 1.** $T \in {\mathcal{T}}^p$. Because of the supermodularity of $p$, we have $$\begin{aligned} x(R) + x(T) &= p(R) + p(T) \leq p(R \cup T) + p(R \cap T) \leq x(R \cup T) + x(R \cap T)\\ &= x(R) + x(T) \enspace , \end{aligned}$$ hence equality holds throughout. That is, $R \cup T$ and $R \cap T$ are in ${\mathcal{T}}^p$ as well. In addition, since $\chi_R + \chi_T = \chi_{R \cup T} + \chi_{R \cap T}$ and $\chi_R$ is not in $\operatorname{span}{(\{\chi_Z: Z \in {\mathcal{L}}\})}$, either $\chi_{R \cup T}$ or $\chi_{R \cap T}$ is not contained in $\operatorname{span}{(\{\chi_Z: Z \in {\mathcal{L}}\})}$. However, both $R \cup T$ and $R \cap T$ are incomparable with fewer sets of ${\mathcal{L}}$ than $R$, which is a contradiction. **Case 2.** $T \in {\mathcal{T}}^b$. Because of the cross-inequality, we have $$\begin{aligned} x(T) - x(R) &= b(T) - p(R) \geq b(T \setminus R) - p(R \setminus T) \geq x(T \setminus R) - x(R \setminus T)\\ &= x(T) - x(R) \enspace , \end{aligned}$$ implying $T \setminus R \in {\mathcal{T}}^b$ and $R \setminus T \in {\mathcal{T}}^p$.
Since $\chi_R + \chi_{T \setminus R} = \chi_{R \setminus T} + \chi_T$, where $\chi_T \in \operatorname{span}{(\{\chi_Z: Z \in {\mathcal{L}}\})}$ while $\chi_R$ is not, one of the vectors $\chi_{R \setminus T}$ and $\chi_{T \setminus R}$ is not contained in $\operatorname{span}{(\{\chi_Z: Z \in {\mathcal{L}}\})}$. However, both of these sets are incomparable with fewer sets of ${\mathcal{L}}$ than $R$, which is a contradiction. The case when $R \in {\mathcal{T}}^b$ is analogous to the above. This completes the proof of the Claim. We say that a hyperedge ${\varepsilon}\in {\mathcal{E}}$ is *tight* if $f({\varepsilon})=\sum_{s\in {\varepsilon}} m_{\varepsilon}(s) x(s)$ or $g({\varepsilon})=\sum_{s\in {\varepsilon}} m_{\varepsilon}(s) x(s)$. As $x$ is a basic solution, there is a set ${\mathcal{E}}'\subseteq{\mathcal{E}}$ of tight hyperedges such that $\{m_{\varepsilon}: {\varepsilon}\in {\mathcal{E}}'\}\cup \{\chi_Z: Z \in {\mathcal{L}}\}$ are linearly independent vectors with $|{\mathcal{E}}'|+|{\mathcal{L}}|=|S|$. We derive a contradiction using a token-counting argument. We assign $2\Delta$ tokens to each element $s \in S$, accounting for a total of $2\Delta |S|$ tokens. The tokens are then redistributed in such a way that each hyperedge in ${\mathcal{E}}'$ and each set in ${\mathcal{L}}$ collects at least $2\Delta$ tokens, while at least one extra token remains. This implies that $2\Delta |S|>2\Delta|{\mathcal{E}}'|+2\Delta|{\mathcal{L}}|$, leading to a contradiction. We redistribute the tokens as follows. Each element $s$ gives $\Delta$ tokens to the smallest member of ${\mathcal{L}}$ it is contained in, and $m_{\varepsilon}(s)$ tokens to each hyperedge ${\varepsilon}\in{\mathcal{E}}'$ it is contained in. As $\sum_{{\varepsilon}\in{\mathcal{E}}:s\in {\varepsilon}} m_{\varepsilon}(s)\leq \Delta$ holds for every element $s \in S$, we redistribute at most $2\Delta$ tokens per element and so the redistribution step is valid. Now consider any set $U\in{\mathcal{L}}$.
Recall that ${\mathcal{L}}^{\max}(U)$ consists of the maximal members of ${\mathcal{L}}$ lying inside $U$. Then $U-\bigcup_{W\in{\mathcal{L}}^{\max}(U)} W\neq\emptyset$, as otherwise $\chi_U=\sum_{W\in{\mathcal{L}}^{\max}(U)} \chi_W$, contradicting the independence of ${\mathcal{L}}$. For every set $Z$ in ${\mathcal{L}}$, $x(Z)$ is an integer, meaning that $x(U - \bigcup_{W\in{\mathcal{L}}^{\max}(U)} W)$ is an integer. But also $0 < x(s) < 1$ for every $s \in S$, which means that $U - \bigcup_{W\in{\mathcal{L}}^{\max}(U)} W$ contains at least $2$ elements. Therefore, each set $U$ in ${\mathcal{L}}$ receives at least $2 \Delta$ tokens, as required. By assumption, $m_{\varepsilon}({\varepsilon}) \geq 2 \Delta$ for every hyperedge ${\varepsilon}\in {\mathcal{E}}'$, which means that each hyperedge in ${\mathcal{E}}'$ receives at least $2\Delta$ tokens, as required. If $\sum_{{\varepsilon}\in{\mathcal{E}}':s\in {\varepsilon}} m_{\varepsilon}(s)< \Delta$ holds for some $s\in S$ or $\mathcal{L}^{\max}(S)$ is not a partition of $S$, then an extra token exists. Otherwise, $\sum_{{\varepsilon}\in{\mathcal{E}}'}m_{{\varepsilon}}=\Delta\cdot\chi_S=\Delta\cdot\sum_{W\in{\mathcal{L}}^{\max}(S)}\chi_W$, contradicting the independence of $\{m_{\varepsilon}: {\varepsilon}\in {\mathcal{E}}'\}\cup \{\chi_Z: Z \in {\mathcal{L}}\}$. #### **Time complexity** Let us now prove that the running time of the algorithm is polynomial in the input size. Solving an LP, as well as removing an element from a hyperedge in Step \[st:del\] or removing a hyperedge in Step \[st:rem\], can be done in polynomial time. Now let us turn to the g-polymatroid contraction in Step \[st:inc\] and taking the intersection with the unit cube in Step \[st:first\]. The function value is not recalculated for every subset $Y \subseteq S$, as there is an exponential number of such subsets. Instead, we calculate the value of the current functions $p$ and $b$ for a set $Y$ only when it is needed during the ellipsoid method.
We keep track of the vectors $\floor{x}$ that arise during contraction steps (there is only a polynomial number of them), and every time a query for $p$ or $b$ happens, it takes into account every contraction and removal that occurred until that point. Let us now bound the number of iterations. In every iteration at least one of Steps \[st:del\]-\[st:rem\] is executed. Clearly, Step \[st:del\] can be repeated at most $|S|$ times, while Step \[st:rem\] can be repeated at most $|{\mathcal{E}}|$ times. Starting from the second iteration, we are working in the unit cube. That is, when Step \[st:inc\] adds the integer part of a variable $x(s)$ to $z(s)$ and reduces the problem, then the given variable will be $0$ in the next iteration and so element $s$ is deleted. This means that the total number of iterations of Step \[st:inc\] is $\mathcal O(|S|)$. We therefore showed that the number of iterations, as well as the time complexity of each step taken by the algorithm, can be bounded by a polynomial in the input size, meaning the algorithm runs in polynomial time. Now we turn to the proof of the case when only lower or only upper bounds are given. The proof is similar to the proof of Theorem \[thm:matroid1\], the main difference appears in the counting argument. When only lower bounds are present, the condition in Step \[st:rem\] changes: we delete a hyperedge ${\varepsilon}$ if $f({\varepsilon})\leq\Delta-1$. Suppose, for the sake of contradiction, that the algorithm does not terminate. Then there is an iteration after which none of the simplifications in Steps \[st:del\]-\[st:rem\] can be performed. This implies that in the current basic solution $0 < x(s) < 1$ holds for each $s \in S$ and $f({\varepsilon}) \geq \Delta$ for each ${\varepsilon}\in {\mathcal{E}}$. We choose a subset ${\mathcal{E}}'\subseteq{\mathcal{E}}$ and a maximal independent laminar system ${\mathcal{L}}$ of tight sets the same way as in the proof of Theorem \[thm:matroid1\].
Recall that $|{\mathcal{E}}'| + |{\mathcal{L}}| = |S|$. Let $Z_1, \dots, Z_k$ denote the members of the laminar system ${\mathcal{L}}$. As ${\mathcal{L}}$ is an independent system, $Z_i-\bigcup_{W\in\mathcal{L}^{\max}(Z_i)}W\neq\emptyset$. Since $x(s)<1$ for all $s\in S$, $x(Z_i-\bigcup_{W\in\mathcal{L}^{\max}(Z_i)}W)<|Z_i-\bigcup_{W\in\mathcal{L}^{\max}(Z_i)}W|$. As we have integers on both sides of this inequality, we get $$|Z_i-\!\!\!\bigcup_{W\in\mathcal{L}^{\max}(Z_i)}\!\!\!\!\!\!W|-x(Z_i-\!\!\!\bigcup_{W\in\mathcal{L}^{\max}(Z_i)}\!\!\!\!\!\!W)\geq 1\quad\text{for all}\ i=1,\dots,k \enspace .$$ Moreover, $\sum_{s\in{\varepsilon}}m_{{\varepsilon}}(s)x(s)\geq f({\varepsilon})\geq\Delta$ for all hyperedges; therefore, $$\begin{aligned} |{\mathcal{E}}'| + |{\mathcal{L}}| {}&{}\leq \sum_{{\varepsilon}\in {\mathcal{E}}'} \frac{\sum_{s \in {\varepsilon}} m_{\varepsilon}(s) x(s)}{\Delta} + \sum_{i=1}^k \left[ |Z_i - \!\!\! \bigcup_{W \in \mathcal{L}^{\max}(Z_i)} \!\!\!\!\!\! W| - x(Z_i - \!\!\! \bigcup_{W \in \mathcal{L}^{\max}(Z_i)} \!\!\!\!\!\! W) \right] \\ {}&{}= \sum_{s \in S} \frac{x(s)}{\Delta} \sum_{\substack{{\varepsilon}\in {\mathcal{E}}' \\ s\in {\varepsilon}}} m_{\varepsilon}(s) + \sum_{W \in \mathcal{L}^{\max}(S)}|W| - \sum_{W \in \mathcal{L}^{\max}(S)} x(W) \leq |S| \enspace . \end{aligned}$$ In the last line, the first term is at most $x(S)$ since $\sum_{{\varepsilon}\in{\mathcal{E}}:s\in {\varepsilon}} m_{\varepsilon}(s)\leq\Delta$ holds for each element $s \in S$. From $x(S)- \sum_{W \in \mathcal{L}^{\max}(S)} x(W)\leq |S|-\sum_{W \in \mathcal{L}^{\max}(S)}|W|$ the upper bound of $|S|$ follows. As $|S| = |{\mathcal{L}}| + |{\mathcal{E}}'|$, we have equality throughout. This implies that $\sum_{{\varepsilon}\in {\mathcal{E}}'} m_{\varepsilon}= \Delta \cdot \chi_S=\Delta\cdot\sum_{W\in\mathcal{L}^{\max}(S)}\chi_W$, contradicting linear independence.
If only upper bounds are present, we remove a hyperedge ${\varepsilon}$ in Step \[st:rem\] when $g({\varepsilon})+\Delta-1 \geq m_{\varepsilon}({\varepsilon})$. Suppose, for the sake of contradiction, that the algorithm does not terminate. Then there is an iteration after which none of the simplifications in Steps \[st:del\]-\[st:rem\] can be performed. This implies that in the current basic solution $0 < x(s) < 1$ holds for each $s \in S$ and $m_{\varepsilon}({\varepsilon})-g({\varepsilon}) \geq \Delta$ for each ${\varepsilon}\in {\mathcal{E}}$. Again, we choose a subset ${\mathcal{E}}'\subseteq{\mathcal{E}}$ and a maximal independent laminar system ${\mathcal{L}}$ of tight sets the same way as in the proof of Theorem \[thm:matroid1\]. Let $Z_1, \dots, Z_k$ denote the members of the laminar system ${\mathcal{L}}$. As ${\mathcal{L}}$ is an independent system, $Z_i-\bigcup_{W\in\mathcal{L}^{\max}(Z_i)}W\neq\emptyset$ and so $$x(Z_i - \!\!\! \bigcup_{W \in \mathcal{L}^{\max}(Z_i)} \!\!\!\!\!\! W) \geq 1 \enspace .$$ By $\sum_{s \in {\varepsilon}} m_{\varepsilon}(s) x(s) \leq g({\varepsilon})$, we get $\sum_{s \in {\varepsilon}} m_{\varepsilon}(s)-\sum_{s \in {\varepsilon}} m_{\varepsilon}(s) x(s) \geq m_{\varepsilon}({\varepsilon})-g({\varepsilon}) \geq \Delta$. Therefore, $$\begin{aligned} |{\mathcal{E}}'| + |{\mathcal{L}}| {}&{}\leq \sum_{{\varepsilon}\in {\mathcal{E}}'} \frac{\sum_{s \in {\varepsilon}} m_{\varepsilon}(s)-\sum_{s \in {\varepsilon}} m_{\varepsilon}(s) x(s)}{\Delta} + \sum_{i=1}^k x(Z_i - \!\!\! \bigcup_{W \in \mathcal{L}^{\max}(Z_i)} \!\!\!\!\!\! W) \\ {}&{}= \sum_{s \in S} \frac{1-x(s)}{\Delta} \sum_{\substack{{\varepsilon}\in {\mathcal{E}}' \\ s\in {\varepsilon}}} m_{\varepsilon}(s) + \sum_{W \in \mathcal{L}^{\max}(S)} x(W) \\ {}&{}\leq \sum_{s \in S} \frac{1-x(s)}{\Delta} \sum_{\substack{{\varepsilon}\in {\mathcal{E}}' \\ s\in {\varepsilon}}} m_{\varepsilon}(s) + x(S) \leq |S| \enspace .
\end{aligned}$$ In the last line, the first term is at most $|S|-x(S)$ since $\sum_{{\varepsilon}\in{\mathcal{E}}:s\in {\varepsilon}} m_{\varepsilon}(s)\leq\Delta$ holds for every element $s \in S$. Therefore, the upper bound of $|S|$ follows. As $|S| = |{\mathcal{L}}| + |{\mathcal{E}}'|$, we have equality throughout. This implies that $\sum_{{\varepsilon}\in {\mathcal{E}}'} m_{\varepsilon}= \Delta \cdot \chi_S=\Delta\cdot\sum_{W\in\mathcal{L}^{\max}(S)}\chi_W$, contradicting linear independence. We have seen in Sect. \[sec:bp\] that base polymatroids are special cases of g-polymatroids. This implies that the results of Theorem \[thm:matroid2\] immediately apply to base polymatroids. Let us first formally define the problem. In the [Lower Bounded Degree Polymatroid Basis Problem with Multiplicities]{}, we are given a base polymatroid $B(b)=(S,b)$ with a cost function $c:S \rightarrow \mathbb{R}$, and a hypergraph $H=(S, {\mathcal{E}})$ on the same ground set. The input contains lower bounds $f: {\mathcal{E}}\rightarrow \mathbb{Z}_{\geq 0}$ and multiplicity vectors $m_{\varepsilon}: {\varepsilon}\rightarrow \mathbb{Z}_{\geq 1}$ for every hyperedge ${\varepsilon}\in {\mathcal{E}}$. The objective is to find a minimum-cost element $x \in B(b)$ such that $f({\varepsilon}) \leq \sum_{s \in {\varepsilon}} m_{\varepsilon}(s) x(s)$ holds for each ${\varepsilon}\in {\mathcal{E}}$. \[thm:polym\] There is an algorithm for the [Lower Bounded Degree Polymatroid Basis Problem with Multiplicities]{} that runs in polynomial time and returns an element $x$ of $B(b)$ of cost at most the optimum value such that $f({\varepsilon})- \Delta+1 \leq \sum_{s \in {\varepsilon}} m_{\varepsilon}(s) x(s)$ for each ${\varepsilon}\in{\mathcal{E}}$. A 1.5-Approximation for the Metric Many-Visits TSP {#sec:approx} ================================================== In this section we design a polynomial-time $1.5$-approximation for the [Metric Many-Visits TSP]{}.
Our approach is along similar lines as Christofides’ algorithm [@Christofides1976] for the metric single-visit TSP. It constructs a solution in three steps: (i) it computes a minimum cost spanning tree that ensures the connectivity of the solution, then (ii) it adds a minimum cost matching on the set of vertices of odd degree in order to obtain an Eulerian subgraph, and finally (iii) it forms a Hamiltonian circuit from an Eulerian circuit by shortcutting repeated vertices. In our setting of many visits, we make use of the following formulation of the metric [Many-Visits TSP]{}: Given a complete undirected graph $G$ with non-negative cost function $c:E\rightarrow \mathbb{Z}_{\geq 0}$ and requirements $r:V\rightarrow\mathbb{Z}_{\geq 1}$, find a vector $x\in\mathbb{Z}^{E}_{\geq 0}$ minimizing $c^Tx$ such that $d_x(v)=2r(v)$ for every $v\in V$, and $\operatorname{supp}(x)$ is connected. From now on we use $\hat{r}=r(V)-|V|+1$. The high-level idea of the algorithm is the following. We first show that the set of integral vectors $\{x\in\mathbb{Z}^{E}_{\geq 0}:x(E)=r(V), ~\operatorname{supp}(x)\ \text{is connected}\}$ forms the set of integral points of a base polymatroid. We apply Corollary \[thm:polym\] to this base polymatroid to obtain a vector $x\in\mathbb{Z}^{E}_{\geq 0}$ with $c^Tx$ no more than the optimum, such that $d_x(v)\geq 2r(v)-1$ for $v\in V$. Then we add a minimum-cost matching on the set of vertices of odd $d_x(v)$-value. Finally, by shortcutting vertices with degree higher than prescribed, we obtain a tour that satisfies the requirements on the number of visits at every vertex. \[lem:bdef\] Let $b$ denote the following function defined on edge sets $F\subseteq E$: $$\label{eq:b} b(F) = \begin{cases} |V(F)|-\operatorname{comp}(F)+\hat{r} & \text{if $F\neq\emptyset$,}\\ 0 & \text{otherwise.} \end{cases}$$ Then $b$ is a polymatroid function. By definition, $b(\emptyset)=0$ and $b$ is non-decreasing. It remains to show that $b$ is submodular.
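The border function above can be evaluated efficiently; a minimal sketch using union-find to count the components of $(V(F),F)$ (the function name `b_value` is ours, chosen for this illustration):

```python
def b_value(F, r_hat):
    """b(F) = |V(F)| - comp(F) + r_hat for a non-empty edge set F.
    Edges are pairs (u, v); a self-loop has u == v."""
    if not F:
        return 0
    parent = {}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    verts = {v for e in F for v in e}
    for v in verts:
        parent[v] = v
    comp = len(verts)
    for u, v in F:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comp -= 1
    return len(verts) - comp + r_hat

# Toy check: a triangle on {1, 2, 3} with r_hat = 2.
F = [(1, 2), (2, 3), (1, 3)]
print(b_value(F, 2))  # 3 - 1 + 2 = 4
```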
Let $X,Y\subseteq E$. The submodular inequality clearly holds if one of $X$ and $Y$ is empty. If neither $X$ nor $Y$ is empty, the submodular inequality follows from the fact that $|V(F)|-\operatorname{comp}(F)$ is the rank function of the graphic matroid. Consider the base polymatroid $B(b)$ determined by the border function defined in . Let us define the set $B=\{x\in\mathbb{Z}^{E}_{\geq 0}:x(E)=r(V), ~ \operatorname{supp}(x)\ \text{is connected}\}$. \[lem:description\] $B=B(b)\cap\mathbb{Z}^{E}_{\geq 0}$. Take an integral element $x\in B(b)$ and let $C\subseteq E$ be an arbitrary cut between $V_1$ and $V_2$ for some partition $V_1\uplus V_2$ of $V$. Then $$\begin{aligned} x(C) {}&{}= x(E)-x(E(V_1)\cup E(V_2))\\ {}&{}\geq |V|-1+\hat{r}-(|V_1|+|V_2|-\operatorname{comp}(E(V_1)\cup E(V_2))+\hat{r})\\ {}&{}\geq 1, \end{aligned}$$ thus $\operatorname{supp}(x)$ is connected. As $x(E)=|V|-1+\hat{r}=r(V)$, we obtain $x\in B$, showing that $B(b)\subseteq B$. To see the other direction, take an element $x\in B$. As $\operatorname{supp}(x)$ is connected, $x(E - F)\geq\operatorname{comp}(F)+|V|-|V(F)|-1$ for every $F\subseteq E$. That is, $$\begin{aligned} x(F) {}&{} = x(E) - x(E - F)\\ {}&{}\leq r(V)-(|V-V(F)| +\operatorname{comp}(F)-1)\\ {}&{} = |V(F)|-\operatorname{comp}(F)+\hat{r}, \end{aligned}$$ thus $x(F)\leq b(F)$. As $x(E) = r(V) = |V|-1+\hat{r}$, we obtain $x\in B(b)$, showing $B\subseteq B(b)$. **Input:** A complete undirected graph $G$, costs $c:E\rightarrow\mathbb{R}_{\geq 0}$ satisfying the triangle inequality, requirements $r:V\rightarrow\mathbb{Z}_{\geq 1}$. **Output:** A tour that visits each $v \in V$ exactly $r(v)$ times.
Construct the base polymatroid $B(b)=(S,b)$, where $S:=E$ and $b$ is defined as in Equation .\[st:1\] Construct a hypergraph $H=(S,{\mathcal{E}})$ with ${\mathcal{E}}=\{\delta(v):v\in V\}$ and \[st:2\] $\circ$ for every ${\varepsilon}\in {\mathcal{E}}$ and $s\in{\varepsilon}$, set $m_{\varepsilon}(s)=2$ if $s$ is a self-loop and $m_{\varepsilon}(s)=1$ otherwise, $\circ$ for every ${\varepsilon}\in {\mathcal{E}}$, set $f({\varepsilon})=2 \cdot r(v)$, where ${\varepsilon}=\delta(v)$. Run Algorithm \[alg:matd\] with $B(b),c,H,f$ and the $m_{\varepsilon}$’s as input. Let $z\in B(b)$ denote the output.\[st:3\] Calculate a minimum-cost matching $M$ with respect to $c$ on the vertices of $V$ with odd $d_z(v)$ values.\[st:4\] Determine a tour $T = \{C, \mu_C\}_{C \in \mathcal{C}}$ from $z$ and $\chi_M$.\[st:5\] Do shortcuts in $T$ and obtain a solution $T'$, such that $T'$ visits every city $v$ exactly $r(v)$ times (that is, $d_{T'}(v) = 2\, r(v)$ for every vertex $v \in V$). \[st:6\] **return** $T'$. Our algorithm is presented as Algorithm \[alg:apx\_matd\]. First, we construct a base polymatroid $B(b)$ and a hypergraph $H$, such that their common ground set $S$ consists of the edges of the graph $G$ in our <span style="font-variant:small-caps;">Many-Visits TSP</span> instance. The border function $b$ of the polymatroid is defined in Equation . For each vertex $v$ of $G$, there is a hyperedge ${\varepsilon}$ in the hypergraph that contains all edges of $G$ incident to $v$, including the self-loop at $v$. We set the multiplicity of an element $s \in S$ to $2$ if it corresponds to a self-loop in $G$, and to $1$ otherwise. The motivation is that a self-loop contributes two to the degree of a vertex, while a regular edge contributes one to the degree of each of its endpoints.
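The degree hypergraph of Step \[st:2\] is mechanical to build. The following sketch (function and variable names are ours; edges are encoded as tuples, with a self-loop written $(v,v)$) constructs the hyperedges, multiplicities and lower bounds:

```python
def build_degree_hypergraph(vertices, edges, r):
    """Hyperedge delta(v): all edges incident to v (incl. the self-loop at v).
    Self-loops get multiplicity 2, other edges 1; lower bound f = 2 r(v)."""
    H, m, f = {}, {}, {}
    for v in vertices:
        eps = [e for e in edges if v in e]
        H[v] = eps
        m[v] = {e: (2 if e[0] == e[1] else 1) for e in eps}
        f[v] = 2 * r[v]
    return H, m, f

# Toy instance: two vertices, their self-loops, and one connecting edge.
V = [1, 2]
E = [(1, 1), (1, 2), (2, 2)]
r = {1: 3, 2: 1}
H, m, f = build_degree_hypergraph(V, E, r)
print(f[1], m[1][(1, 1)], m[1][(1, 2)])  # 6 2 1
```

With this encoding each edge contributes exactly $2$ in total, so the weighted maximum element frequency is $\Delta=2$, as used below.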
Note that an element $s \in S$ is contained in exactly one hyperedge if it corresponds to a self-loop, and it is contained in exactly two hyperedges otherwise; therefore the total contribution of each edge adds up to two. Now we are ready to prove Theorem \[thm:tsp1\]. First, let us show that Algorithm \[alg:apx\_matd\] provides a feasible solution for the given instance $(G,c,r)$ of the metric [Many-Visits TSP]{}. By Lemma \[lem:description\], the solution $z$ provided by Algorithm \[alg:matd\] in Step \[st:3\] is such that $c^T z\leq c({\mathsf{T}}^\star_{c,r})$, $z(E)=r(V)$ and $\operatorname{supp}(z)$ is connected. Furthermore, by Corollary \[thm:polym\], $f({\varepsilon})-1\leq\sum_{s \in {\varepsilon}}m_{\varepsilon}(s)z(s)$ for each ${\varepsilon}\in {\mathcal{E}}$. Note that in our case $\Delta=\max_{s\in S}\{\sum_{{\varepsilon}\in {\mathcal{E}}:s\in {\varepsilon}}m_{\varepsilon}(s)\}=2$, so this inequality translates to $2\cdot r(v)-1\leq d_z(v)$ for every $v\in V$. That is, $z$ corresponds to a multigraph of cost at most $c({\mathsf{T}}^\star_{c,r})$ violating the degree prescriptions from below by at most one. Note that this means the total violation from above is at most $|V|-1$. In Step \[st:4\] we calculate a matching $M$ that provides one extra degree to each odd-degree vertex. That is, in the union of the multigraph defined by $z$ and $M$, every vertex $v$ has an even degree. #### **Constructing a tour and shortcutting** In Step \[st:5\], we construct a *compact representation* of a tour from the vector $z$ and matching $M$, and we denote it by $T$. We use the algorithm described in Grigoriev et al. [@Grigoriev2006], which takes the edge multiplicities as input, and outputs a collection $\mathcal{C}$ of pairs $(C, \mu_C)$. Here $C$ is a simple closed walk, and $\mu_C$ is the corresponding integer denoting the number of copies of the walk $C$ in $T$.
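The matching step only needs the parity of the $d_z$-values. On a toy instance this can be checked with a brute-force matching routine (names and the degree sequence below are ours; a real implementation would use a polynomial-time matching algorithm):

```python
def odd_degree_vertices(deg):
    """Vertices of odd degree; by the handshake lemma there is an even number."""
    return [v for v, d in deg.items() if d % 2 == 1]

def min_cost_perfect_matching(odd, cost):
    """Exact minimum-cost perfect matching by recursion -- exponential,
    fine only for tiny sanity checks."""
    if not odd:
        return 0, []
    v, rest = odd[0], odd[1:]
    best = None
    for i, u in enumerate(rest):
        c, M = min_cost_perfect_matching(rest[:i] + rest[i + 1:], cost)
        c += cost[frozenset((v, u))]
        if best is None or c < best[0]:
            best = (c, M + [(v, u)])
    return best

# Hypothetical degree sequence of the multigraph defined by z.
deg = {1: 3, 2: 2, 3: 1, 4: 4}
cost = {frozenset((1, 3)): 5}
print(min_cost_perfect_matching(odd_degree_vertices(deg), cost))  # (5, [(1, 3)])
```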
From the pairs $(C, \mu_C)$ it is possible to construct an implicit order of the vertices the following way. Let us construct an auxiliary multigraph $A$ on the vertex set $V$ by taking the edges of each cycle $C$ exactly once. Note that parallel edges are allowed in $A$ if an edge appears in multiple cycles $C$. Due to the construction, every vertex has an even degree in $A$, which means that there exists an Eulerian circuit in $A$. Moreover, there are $\mathcal O(n^2)$ distinct cycles [@Grigoriev2006], hence the total number of edges in $A$ is $\mathcal O(n^3)$. Consequently, using Hierholzer’s algorithm, we can compute an Eulerian circuit $\eta$ in $A$ in $\mathcal O(n^3)$ time [@Hierholzer1873; @Fleischer1991]. The circuit $\eta$ covers the edges of each cycle $C$ once. Now an implicit order of the vertices in the [Many-Visits TSP]{} tour $T$ is the following. Traverse the vertices of the Eulerian circuit $\eta$ in order. When a vertex $u$ appears for the first time, traverse each cycle $C$ that contains $u$ a further $\mu_{C}-1$ times. Denote this circuit by $\eta'$. It is easy to see that the sequence $\eta'$ is a sequence of vertices that uses the edges of each cycle $C$ exactly $\mu_C$ times, meaning this is a feasible sequence of the vertices in the tour $T$. Moreover, the order itself takes polynomial space, as it is enough to store indices of $\mathcal O(n^3)$ vertices and $\mathcal O(n^2)$ cycles. Now let us consider the set $W$ of vertices $w$ that have more visits than $r(w)$ in the tour $T$. Denote the surplus of visits of a vertex $w\in W$ by $\gamma(w) := d_T(w)/2 - r(w)$. In Step \[st:6\], we remove the last $\gamma(w)$ occurrences of every vertex $w \in W$ from $T$, by doing shortcuts: if an occurrence of $w$ is preceded by $u$ and followed by $v$ in $T$, we replace the edges $uw$ and $wv$ by $uv$ in the sequence.
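Hierholzer's algorithm admits a compact iterative implementation. The sketch below (our own, working on an explicit edge list rather than the compact cycle representation used in the paper) returns an Eulerian circuit as a vertex sequence:

```python
from collections import defaultdict

def eulerian_circuit(edges, start):
    """Hierholzer's algorithm on a connected multigraph with all degrees even.
    edges: list of (u, v) pairs; returns the circuit as a vertex sequence."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        # discard adjacency entries whose edge was already traversed
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()
        if adj[v]:
            u, i = adj[v].pop()
            used[i] = True
            stack.append(u)
        else:
            circuit.append(stack.pop())
    return circuit[::-1]

# Toy multigraph: two triangles sharing vertex 1.
edges = [(1, 2), (2, 3), (3, 1), (1, 4), (4, 5), (5, 1)]
print(eulerian_circuit(edges, 1))  # a closed walk using every edge exactly once
```

The running time is linear in the number of edges, matching the $\mathcal O(n^3)$ bound stated above for the auxiliary multigraph $A$.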
This can be done by traversing the compact representation of $\eta'$ backwards, and removing the vertex $w$ from the last $\gamma(w)$ cycles $C^{(w)}_{r(w)-\gamma(w)+1}, \dots, C^{(w)}_{r(w)}$. As $\sum_w \gamma(w)$ can be bounded by $\mathcal O(n)$, this operation makes $\mathcal O(n)$ new cycles, keeping the space required by the new sequence of vertices and cycles polynomial. Moreover, since the edge costs are metric, making shortcuts the way described above cannot increase the total cost of the edges in $T$. Finally, using a similar argument as in the algorithm of Christofides, the shortcutting does not make the tour disconnected. The resulting graph is therefore a tour $T'$ that visits every vertex $v$ exactly $r(v)$ times, that is, a feasible solution for the instance $(G,c,r)$. #### **Cost and complexity** The cost of the edges in $z$ is at most $c({\mathsf{T}}^\star_{c,r})$, and as the cost function satisfies the triangle inequality, the cost of the matching $M$ found in Step \[st:4\] is at most $c({\mathsf{T}}^\star_{c,1})/2$. Moreover, taking shortcuts at vertices does not increase the cost of the solution, hence the cost of the output is at most $c^T(z+\chi_M)\leq c({\mathsf{T}}^\star_{c,r})+c({\mathsf{T}}^\star_{c,1})/2\leq 1.5\cdot c({\mathsf{T}}^\star_{c,r})$, using that $c({\mathsf{T}}^\star_{c,1})\leq c({\mathsf{T}}^\star_{c,r})$ in the metric setting. Now we turn to the complexity analysis. All edge multiplicities during the algorithm are stored as integer numbers in binary, therefore the space needed for any variable representing multiplicities of edges can be bounded by $\mathcal O(n^2 \, \log\sum r(v))$. Steps \[st:1\]-\[st:2\] can be performed in time that is polynomial in the input size. The function $b$ is defined in Lemma \[lem:bdef\] and can be computed efficiently. Therefore, according to Corollary \[thm:polym\], the algorithm in Step \[st:3\] also runs in polynomial time. Step \[st:5\] can also be done in polynomial time [@Grigoriev2006], and the number of closed walks can be bounded by $\mathcal O(n^2)$.
Moreover, the total surplus of degrees in $T$ is at most $n-1$, therefore the number of shortcutting operations is also bounded by $n$. This completes the proof. It is worth considering what Algorithm \[alg:apx\_matd\] does when applied to the single-visit TSP, that is, when $r(v)=1$ for each $v\in V$. The output of Algorithm \[alg:matd\] in Step \[st:3\] is a connected multigraph with $r(V)=n$ edges. Note that the guarantee that each vertex $v$ has degree at least $2\cdot r(v)-1=1$ does not add anything, as this already follows from connectivity. Such a graph is essentially the union of a spanning tree and a single edge (where the edge may also be part of the spanning tree, that is, have multiplicity 2 in the solution); we call such a graph a *1-tree*. The rest of the algorithm mimics Christofides’ algorithm: a minimum-cost matching is added on the set of vertices of odd degree to obtain an Eulerian graph, and then a Hamiltonian circuit is formed by shortcutting repeated vertices in an Eulerian circuit. That is, when applied to a single-visit TSP instance, our algorithm is almost identical to that of Christofides, except that instead of a spanning tree we start with a 1-tree. However, the 1-tree we start with is not necessarily a cheapest one among all possible choices; we only know that its cost is at most the cost of the optimal single-visit TSP tour. Discussion {#sec:discussion} ========== In this work we developed an approximation algorithm for the minimum-cost degree-bounded g-polymatroid element problem with multiplicities. The approximation algorithm yields a solution of cost at most the optimum that violates the lower bounds only by a constant factor depending on the weighted maximum element frequency $\Delta$. We then demonstrated the usefulness of our result by developing a polynomial-time $1.5$-approximation algorithm for the metric many-visits traveling salesman problem.
This way, we match the famous Christofides-Serdyukov bound for the single-visit TSP. **Acknowledgements.** The authors are grateful to Tamás Király and Gyula Pap for the helpful discussions. Kristóf was supported by the János Bolyai Research Fellowship of the Hungarian Academy of Sciences and by the ÚNKP-19-4 New National Excellence Program of the Ministry for Innovation and Technology. Project no. NKFI-128673 has been implemented with the support provided from the National Research, Development and Innovation Fund of Hungary, financed under the FK\_18 funding scheme. This research was supported by Thematic Excellence Programme, Industry and Digitization Subprogramme, NRDI Office, 2019. [^1]: MTA-ELTE Egerváry Research Group, Department of Operations Research, E[ö]{}tv[ö]{}s Lor[á]{}nd University, Hungary. Email: `[email protected]`. [^2]: Department of Quantitative Economics, Maastricht University, The Netherlands. Email: `[email protected]`. [^3]: Universit[ä]{}t Bonn *and* Technische Universit[ä]{}t Hamburg, Germany. Email: `[email protected]`. [^4]: Department of Quantitative Economics, Maastricht University, The Netherlands *and* Technische Universit[ä]{}t Hamburg, Germany. Email: `[email protected]`. [^5]: Supported by DAAD with funds of the Bundesministerium f[ü]{}r Bildung und Forschung (BMBF) and by DFG project MN 59/4-1. [^6]: The $\mathcal{O}^\star$ notation suppresses the factors polynomial in $n$. [^7]: The fourth author thanks Rico Zenklusen for posing the problem and initial discussions on the subject.
--- author: - 'C. Soubiran, O. Bienaymé, T.V. Mishenina, V.V. Kovtyukh' date: 'Received : October 4, 2007 / Accepted : November 30, 2007 ' subtitle: 'IV - AMR and AVR from clump giants ' title: 'Vertical distribution of Galactic disk stars [^1] [^2]' --- Introduction ============ This paper is the continuation of previous papers (Soubiran et al. [@sou03], hereafter Paper I and Siebert et al. [@sie03], hereafter Paper II) where we investigated the vertical distribution of disk stars with local and distant samples of clump giants. Our main result in Paper I was a new characterization of the thick disk, showing a rotational lag of $-51 \pm 5\,\mathrm{km\,s}^{-1}$ with respect to the Sun, a velocity ellipsoid of $(\sigma_U, \sigma_V, \sigma_W)=(63\pm 6, 39\pm 4, 39\pm 4) \,\mathrm{km\,s}^{-1}$, a mean metallicity of \[Fe/H\] =$-0.48\,\pm$0.05 and a high local normalization of 15$\pm$7%. We have also determined in Paper II the gravitational force perpendicular to the galactic plane and the mass density in the galactic plane ($\Sigma = 67 M_\odot{\rm pc}^{-2}$) and thickness of the disk ($390^{+330}_{-120}$ pc). We found no vertex deviation for old stars, consistent with an axisymmetric Galaxy. After these two papers, we have enlarged and improved our samples in order to go further into the study of the local thin disk. We have observed a large sample of local Hipparcos clump giants at high spectral resolution and high signal-to-noise ratio, and measured their metallicity and elemental abundances (Mishenina et al. [@mish06]). Combined with a compilation of other studies providing metallicities of nearby clump giants, we have built a large unbiased sample of local giants to investigate the kinematical and chemical distributions of these stars. Our previous sample of distant giants was based on high resolution, low signal-to-noise spectra for 387 stars, spanning distances up to z=800 pc above the galactic plane, in the direction of the North Galactic Pole (NGP).
The new distant sample now includes 523 stars up to z=1 kpc, with improved distance and metallicity determinations. These two improved samples, local and distant, have also been used for other purposes, presented in separate papers. Kovtyukh et al. ([@kov06]) use the local sample to establish an accurate temperature scale for giants using line-depth ratios. Mishenina et al. ([@mish06]) investigate mixing processes in the atmosphere of clump giants. Finally, Bienaymé et al. ([@bie05]), hereafter Paper III, apply two-parameter models on the combination of the local and distant samples to derive a realistic estimate of the total surface mass density within 0.8 kpc and 1.1 kpc from the Galactic plane, respectively $\Sigma_{\rm0.8\,kpc}$=59-67 $\,\mathrm{M}_{\sun}\mathrm{pc}^{-2}$ and $\Sigma_{\rm1.1\,kpc}$=59-77 $\,\mathrm{M}_{\sun}\mathrm{pc}^{-2}$. Here we use these new data to focus on local properties of the thin disk which are essential to constrain its chemical and dynamical evolution: metallicity distribution, vertical metallicity gradient, age - metallicity relation (AMR) and age - velocity relation (AVR). Numerous studies of these properties have been published, although with considerable disagreements reflecting the variety of tracers (open clusters, planetary nebulae, field dwarfs), discrepant metallicity scales, different age determinations, or selection biases. A major contribution on the subject comes from the Geneva-Copenhagen survey of the Solar neighbourhood by Nordström et al. ([@nor04]), which includes stellar parameters similar to ours, but for a much larger sample of dwarfs, and with less reliable, photometric metallicities. In the present work, the use of distant giants allows us to probe larger distances above the galactic plane where kinematical distributions are no longer affected by local streams and moving groups, as studied by Famaey et al. ([@fam05]). Moreover, giants are well suited for age determinations, as shown in da Silva et al.
([@dasil06]). We use their Bayesian method with isochrone fitting to compute ages and, similarly to them, we use the complete resulting probability distribution function of each star to bin the age axis. The combination of this pertinent method with the fact that we use spectroscopic metallicities for a large, homogeneous and complete sample, with well defined boundaries in magnitude and colour, should ensure that the new relations that we obtain are quite reliable. We have also computed for each star its probability of belonging, on kinematical criteria, to the thin disk, the thick disk, the Hercules stream and the halo, in order to reject the most probable non-thin-disk stars. Sections \[s:hip\_sample\] and \[s:pgn\_sample\] describe the local and distant samples. We give details on the TGMET method and the new reference library which have been used to improve the determination of $T_{\rm eff}$, $\log\,g$, $\mathrm{[Fe/H]}$, and $M_{\rm v}$ for the distant giants observed at high spectral resolution, but low signal-to-noise (Section \[s:TGMET\]). Sections \[s:ages\] and \[s:pop\] describe the determination of ages, Galactic orbits and population membership. Then we select the most probable thin disk clump giants and demonstrate the existence of a vertical metallicity gradient (Section \[s:grad\]). We present the AMR derived from the same stars in Section \[s:AMR\], while in Section \[s:AVR\] we discuss the AVR in $U$, $V$ and $W$ derived from a larger sample of clump giants where the most probable thick disk, Hercules stream and halo members have been rejected. The local sample of Hipparcos giants {#s:hip_sample} ==================================== The sample of local giants, dominated by clump giants, consists of the 381 single Hipparcos field stars which satisfy the criteria: $$\pi \ge 10\, \rm{mas}$$ $$\delta_{ICRS} \ge -20^\circ$$ $$0.7 \le B-V \le 1.2$$ $$M_{\rm V} \le 1.6$$ where $\pi$ is the Hipparcos parallax and $\delta_{ICRS}$ the declination.
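The selection cuts above can be summarised in code; this is a minimal sketch with hypothetical helper names, taking the Johnson $B-V$ colour and apparent $V$ magnitude as given, and computing the absolute magnitude from the standard distance modulus with the parallax in mas.

```python
import math

def absolute_magnitude(v_mag, parallax_mas):
    """M_V from apparent magnitude and parallax in mas:
    M_V = V + 5*log10(parallax[mas] / 100)."""
    return v_mag + 5.0 * math.log10(parallax_mas / 100.0)

def in_local_sample(parallax_mas, dec_deg, b_v, m_v):
    """Selection cuts of the local Hipparcos giant sample:
    parallax >= 10 mas, declination >= -20 deg,
    0.7 <= B-V <= 1.2, M_V <= 1.6."""
    return (parallax_mas >= 10.0 and dec_deg >= -20.0
            and 0.7 <= b_v <= 1.2 and m_v <= 1.6)
```

The parallax cut ($\pi \ge 10$ mas) restricts the sample to stars within 100 pc, which is what makes it a *local* sample.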
It is thus a complete sample. The Johnson B-V colour has been obtained from the Tycho2 $B_{\rm T}-V_{\rm T}$ colour applying Eq. 1.3.20 from ESA ([@esa97]): $$B-V = 0.850 \,(B{\rm _T}-V{\rm _T})$$ Absolute magnitudes $M_{\rm{v}}$ were computed with V apparent magnitudes resulting from the transformation of Hipparcos magnitudes $H_{\rm p}$ to the Johnson system, calibrated by Harmanec ([@har98]). Radial velocities have been mainly compiled from observations on the ELODIE spectrograph at Observatoire de Haute-Provence (OHP). Some 177 local giants have been observed for this project (Mishenina et al. [@mish06]), while radial velocities of other stars were retrieved from the ELODIE library (Prugniel & Soubiran [@pru01], [@pru04]) and the ELODIE archive (Moultaka et al. [@mou04]). For the remaining stars, we found radial velocities in Famaey et al. ([@fam05]) and Barbier-Brossat et al. ([@bar00]). In summary, we have retrieved radial velocities for 220 stars in the various ELODIE datasets, for 54 stars in Famaey et al.’s catalogue, and for 107 stars in Barbier-Brossat et al.’s catalogue. We have also retrieved from these different sources information about the binarity of the stars. We have flagged 30 suspected spectroscopic binaries presenting an enlarged or double peak of their cross-correlation function. Atmospheric parameters ($T_{\rm eff}$, $\log\,g$, \[Fe/H\]) have been compiled from the \[Fe/H\] catalogue (Cayrel de Strobel et al. [@cay01]) updated with a number of recent references. The \[Fe/H\] catalogue is a bibliographical compilation which lists determinations of atmospheric parameters relying on high resolution, high signal-to-noise spectroscopic observations and published in the main astronomical journals. We have added to the compilation effective temperatures determined by Alonso et al. ([@alo01]), di Benedetto ([@diben98]), Blackwell & Lynas-Gray ([@blac98]) and Ramírez & Meléndez ([@ram05]).
A number of other recent references providing spectroscopic ($T_{\rm eff}$, $\log\,g$, \[Fe/H\]) determinations have been added to the \[Fe/H\] catalogue in an effort to keep it up to date. For the present work, the largest contributions come from Mishenina et al. ([@mish06]) for 177 stars and da Silva et al. ([@dasil06]) for 14 stars. For the older references, which were already in Cayrel de Strobel et al. ([@cay01]), the largest contribution comes from McWilliam ([@mcw90]) for 233 stars. This compilation provided \[Fe/H\] for 363 stars, adopting a weighted average when several values were available for a given star (a higher weight was given to the most recent references). For 5 remaining stars, an ELODIE spectrum was available, enabling the determination of atmospheric parameters with the TGMET method (see next section). We thus have just 13 stars which lack atmospheric parameters, representing 3% of the whole local sample. Combining atmospheric parameters from different sources can be a source of errors if some verifications are not made. Not all authors of spectroscopic analyses use the same temperature scales, Fe lines, and atomic data, so that systematic differences may occur in the resulting metallicities. In the present work, our narrow ranges in colour and luminosity suggest we deal with a very limited range of atmospheric parameters where temperature determinations from different methods usually agree well. This is confirmed in our sample where 99 stars have at least two different determinations of $T_{\rm eff}$. Computing the mean $T_{\rm eff}$ for each of these 99 stars, we find standard deviations ranging from 0 to 140 K, with a median value of 40 K, which is below the commonly admitted external error on effective temperatures ($\sim$ 50-80 K). Only 6 stars show $T_{\rm eff}$ determinations deviating by more than 100 K.
Similar verifications were made on \[Fe/H\]: the median value of standard deviations around the mean for stars having at least two determinations is 0.09 dex. Hipparcos proper motions and parallaxes have been combined with radial velocities through the equations of Johnson & Soderblom ([@joh87]) to compute the 3 velocity components $(U,V,W)$ with respect to the Sun (the $U$ axis points towards the Galactic Center). Figure \[f:loc\] shows the distribution of this sample in the planes $M_{\rm v}$ vs $T_{\rm eff}$, $M_{\rm v}$ vs \[Fe/H\] and $V$ vs $U$. ![Our local sample in the $M_{\rm v}$ vs $T_{\rm eff}$, $M_{\rm v}$ vs \[Fe/H\] and $V$ vs $U$ diagrams[]{data-label="f:loc"}](loc_TeffMv.pdf "fig:"){width="8cm"} ![Our local sample in the $M_{\rm v}$ vs $T_{\rm eff}$, $M_{\rm v}$ vs \[Fe/H\] and $V$ vs $U$ diagrams[]{data-label="f:loc"}](loc_FeHMv.pdf "fig:"){width="8cm"} ![Our local sample in the $M_{\rm v}$ vs $T_{\rm eff}$, $M_{\rm v}$ vs \[Fe/H\] and $V$ vs $U$ diagrams[]{data-label="f:loc"}](loc_UV.pdf "fig:"){width="8cm"} The distant NGP sample {#s:pgn_sample} ====================== The distant sample has been drawn from the Tycho2 catalogue (H[ø]{}g et al. [@hog00]). We have applied criteria similar to those in Soubiran et al. ([@sou03]) to build the list of red clump candidates, just extending the limiting apparent magnitudes to fainter stars. A detailed description of the sample can be found in Paper III. The resulting sample consists of 523 different stars on a 720 square degree field close to the NGP. The Tycho2 catalogue provides accurate proper motions and $V$ magnitudes. High resolution spectroscopic observations on ELODIE allowed us to measure radial velocities, spectroscopic distances and metallicities. Spectroscopic observations, radial velocities --------------------------------------------- The observations were carried out with the echelle spectrograph ELODIE on the 1.93-m telescope at the Observatoire de Haute Provence.
The performance of this instrument is described in Baranne et al. ([@bar96]). Compared to our previous study in Paper I, 141 additional spectra have been obtained in February and March 2003. The resulting 540 spectra cover the full range 390 – 680 nm at a resolving power of 42000. The reduction has been made at the telescope with the on-line software which performs the spectrum extraction, wavelength calibration and measurement of radial velocities by cross-correlation with a numerical mask. The radial velocity accuracy is better than for the considered stars (K stars). Our sample spans radial velocities from –139 to 85 $\mathrm{km\,s}^{-1}$ with a mean value of $-12.8$ $\mathrm{km\,s}^{-1}$. The mean S/N of the spectra at 550 nm is 22. Some 17 stars have been observed twice. For 13 stars, the correlation peak was enlarged or double, indicating the probable binarity of these stars, which were flagged. Stellar parameters ($T_{\rm eff}$, $\log\,g$, $\mathrm{[Fe/H]}$, $M_{\rm v}$) {#s:TGMET} ----------------------------------------------------------------------------- We have performed the determination of stellar parameters $T_{\rm eff}$, $\log\,g$, $\mathrm{[Fe/H]}$ and $M_{\rm v}$ from ELODIE spectra using the code TGMET (Katz et al. [@kat98]), as in Paper I. TGMET relies on the comparison by minimum distance of target spectra to a library of stars with well known parameters, also observed with ELODIE (Soubiran et al. [@sou98], Prugniel & Soubiran [@pru01]). As compared to Paper I, we have improved the content of the TGMET library because we were aware that the quality of the TGMET results is strongly dependent on the quality of the empirical library which is used as reference. We present in this section the library that we built for the present study dealing with clump giants. We also present the tests which have been performed to assess the reliability of the TGMET parameters.
The TGMET library must be built with reference spectra representative of the parameter space occupied by the target stars, with a coverage as dense as possible. The parameters of the reference spectra must be known as accurately as possible. Since our previous study of clump giants at the NGP, in papers I and II, the TGMET library has been improved considerably. Many stars with well determined atmospheric parameters, compiled from the literature, and with accurate Hipparcos parallaxes, have been added to the library as reference stars for $T_{\rm eff}$, $\log\,g$, \[Fe/H\] and $M_{\rm{V}}$. In particular, the Hipparcos giants observed with ELODIE to build the local sample and analysed by Mishenina et al. ([@mish06]) have been added to the library. Fig. \[f:bib\_feHMv\] shows the distribution of the TGMET library used for this study in the plane (\[Fe/H\], $M_{\rm{V}}$). The clump area is densely covered down to \[Fe/H\] = $-0.80$. A small part of the TGMET library is presented in Table \[t:TGMET\_lib\]. The full Table is only available in electronic form, at the CDS. The calibrated Echelle spectra can be retrieved from the ELODIE archive[^3]. ----------- ---------- ----------------- ----------- ---------- -------------- ---------- ------- ------------------------------ ------- ------------- HD/BD date $T_{\rm eff}$ $\log\,g$ \[Fe/H\] $M_{\rm{V}}$ qt qf qm S/N $RV$ $B-V$ ST K dex [$\mathrm{\,km\,s}^{-1}$ ]{} BD+430699 20040203 4760 4.68 -0.41 6.916 424 130.6 7.41 0.972 K2 BD+522815 20040902 7.914 003 95.9 -52.66 1.164 K5 BD-004234 19970826 4574 4.32 -0.84 6.237 233 79.2 -127.58 0.968 K3Ve+... HD001227 19970822 5037 2.65 +0.25 0.465 422 101.1 -0.04 0.910 G8II-III HD002506 20001216 1.245 003 72.0 -59.33 0.933 G4III HD002910 20031101 4745 2.75 +0.10 0.904 434 218.8 -13.46 1.074 K0III HD003546 19961003 4878 2.38 -0.69 0.780 344 120.9 -84.14 0.843 G5III...
HD003651 20011125 5192 4.42 +0.14 5.650 444 136.1 -33.06 0.849 K0V HD003712 19970822 4594 2.14 -0.10 -1.973 423 376.0 -4.49 1.182 K0II-IIIvar HD003765 19951030 5067 4.45 +0.10 6.158 434 135.8 -63.32 0.953 K2V HD004188 20031101 4816 2.79 +0.04 0.734 434 156.0 -0.41 1.006 K0IIIvar HD004256 20040903 4930 4.80 +0.34 6.299 324 111.1 9.33 1.006 K2V HD004482 20021023 4917 2.65 +0.02 0.991 424 151.7 -2.61 0.977 G8II HD004628 20010813 5040 4.64 -0.25 6.360 444 147.1 -10.35 0.876 K2V HD004635 20031103 5129 6.072 303 155.1 -31.75 0.916 K0 ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ----------- ---------- ----------------- ----------- ---------- -------------- ---------- ------- ------------------------------ ------- ------------- ![Distribution of the TGMET library used in this study (724 reference stars observed with ELODIE) in the plane metallicity - absolute magnitude.[]{data-label="f:bib_feHMv"}](bib_FeHMv.pdf){width="8cm"} In order to verify the TGMET results, we have run the code on ELODIE spectra of stars chosen in the library, with the best known parameters, degraded to a S/N typical of our target spectra (i.e. S/N=20). We have applied a bootstrap method: each test spectrum was removed in turn from the library, degraded to S/N=20, and its parameters determined by comparison to the rest of the library. To check results on $M_{\rm{V}}$, we have selected the 158 stars of the library with a relative error on their Hipparcos parallax lower than 10% and with $0.9 \le B-V \le 1.1$. For \[Fe/H\] we have selected 199 stars with $0.9 \le B-V \le 1.1$ having the most reliable spectroscopic metallicity determinations found in the literature. $M_{\rm{V}}$ and \[Fe/H\] determined from TGMET were then compared to their Hipparcos and literature counterparts, as shown in Figs. \[f:compare\_Mv\] and \[f:compare\_FeH\].
The rms values of the comparison, 0.25 mag on $M_{\rm{V}}$ and 0.13 dex on \[Fe/H\], measure the accuracy of the TGMET results at S/N=20. The rms on $M_{\rm{V}}$ corresponds to an error in distance of 12%. ![Comparison of TGMET absolute magnitudes from degraded spectra to those deduced from Hipparcos parallaxes for a subset of 158 reference stars.[]{data-label="f:compare_Mv"}](compare_Mv.pdf){width="8cm"} ![Comparison of TGMET metallicities from degraded spectra to those from the literature for a subset of 199 reference stars.[]{data-label="f:compare_FeH"}](compare_FeH.pdf){width="8cm"} In order to test the internal precision of TGMET on \[Fe/H\], we have compared the results obtained for the 17 stars observed twice (Fig. \[f:internal\_FeH\]). As can be seen, the agreement is very good (rms=0.05 dex). ![Comparison of the TGMET metallicities obtained for the 17 target stars observed twice (rms=0.05 dex).[]{data-label="f:internal_FeH"}](pgn_bis_FeH.pdf){width="8cm"} An important verification has to be made to check that TGMET does not introduce a bias in the absolute magnitude and metallicity distributions of giants. In the following sections, parameters of distant giants, relying on TGMET, will be compared to parameters of local giants, relying on literature and Hipparcos data. We thus have to ensure that these parameters are on the same scales. Fig. \[f:bib\_Mv\_histo\] shows the histograms of absolute magnitudes of the library’s giants deduced from Hipparcos and deduced from the bootstrap test on degraded spectra, in 0.25 mag bins. Similarly, Fig. \[f:bib\_feh\_histo\] shows the two metallicity histograms, from the literature and from the bootstrap test. These histograms are perfectly aligned and present similar dispersions, which guarantees the lack of bias in the TGMET results.
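The leave-one-out bootstrap test described above can be sketched schematically; this is a toy illustration in which generic feature vectors stand in for spectra and the degradation to low S/N is mimicked by additive Gaussian noise (the function names are hypothetical, not the actual TGMET code).

```python
import random

def nearest_reference(target, library):
    """Minimum-distance match: return the parameters of the library
    entry whose feature vector is closest to `target` (TGMET-style).
    `library` is a list of (vector, parameters) pairs."""
    best = min(library, key=lambda entry: sum(
        (a - b) ** 2 for a, b in zip(target, entry[0])))
    return best[1]

def leave_one_out(library, noise, rng):
    """For each entry: remove it from the library, degrade its vector
    with Gaussian noise, and re-derive its parameters from the rest.
    Returns the absolute error of each re-derived parameter."""
    errors = []
    for i, (vec, params) in enumerate(library):
        rest = library[:i] + library[i + 1:]
        noisy = [x + rng.gauss(0.0, noise) for x in vec]
        errors.append(abs(nearest_reference(noisy, rest) - params))
    return errors
```

The rms of the returned errors plays the role of the 0.25 mag and 0.13 dex figures quoted above.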
![Absolute magnitude histograms of the library’s giants deduced from Hipparcos (filled) and deduced from the bootstrap test on degraded spectra (red line).[]{data-label="f:bib_Mv_histo"}](bib_Mv_histo.pdf){width="8cm"} ![Metallicity histograms of the library’s clump giants deduced from the literature (filled) and deduced from the bootstrap test on degraded spectra (red line).[]{data-label="f:bib_feh_histo"}](bib_feh_histo.pdf){width="8cm"} Distances, spatial velocities ----------------------------- Distances have been computed for all the target stars from the TGMET $M_{\textrm v}$ and Tycho2 $V_T$ magnitude transformed into Johnson $V$. No correction of interstellar absorption was applied since it is expected to be very low in the NGP direction. Proper motions, distances and radial velocities have been combined to compute the 3 velocity components $(U,V,W)$ with respect to the Sun. Figure \[f:pgn\] shows the distribution of the 523 target stars in the planes $M_{\rm v}$ vs $T_{\rm eff}$, $M_{\rm v}$ vs \[Fe/H\] and $V$ vs $U$. ![The NGP sample in the $M_{\rm v}$ vs $T_{\rm eff}$, $M_{\rm v}$ vs \[Fe/H\] and $V$ vs $U$ diagrams.[]{data-label="f:pgn"}](pgn_TeffMv.pdf "fig:"){width="8cm"} ![The NGP sample in the $M_{\rm v}$ vs $T_{\rm eff}$, $M_{\rm v}$ vs \[Fe/H\] and $V$ vs $U$ diagrams.[]{data-label="f:pgn"}](pgn_FeHMv.pdf "fig:"){width="8cm"} ![The NGP sample in the $M_{\rm v}$ vs $T_{\rm eff}$, $M_{\rm v}$ vs \[Fe/H\] and $V$ vs $U$ diagrams.[]{data-label="f:pgn"}](pgn_UV.pdf "fig:"){width="8cm"} Ages, Galactic orbits {#s:ages} ===================== Ages have been computed with the code PARAM developed by L. Girardi, available via an interactive web form[^4]. The method was initially developed by J[ø]{}rgensen & Lindegren ([@jor05]) and slightly modified as described in da Silva et al. ([@dasil06]). It is a Bayesian estimation method which uses theoretical isochrones computed by Girardi et al. ([@gir00]) taking into account mass loss along the red giant branch.
A convincing application of the method to derive the fundamental parameters of evolved stars in an open cluster is presented in Biazzo et al. ([@bia07]). Inputs to be given to the code are the observed effective temperatures, absolute magnitudes, metallicities and related errors. The output for each star is a probability distribution function (PDF) of the age (and other parameters which are not used here). As shown in da Silva et al. ([@dasil06]), in their Fig. 5, the PDF of ages can be asymmetric or even double-peaked, especially in the case of red clump giants. As a consequence, ages are accurate for only a small part of our sample. This should be kept in mind for the use of individual ages. Nevertheless, the ages have significance when used statistically. As evidence, the age-metallicity plot for the 891 stars (Fig. \[f:ld\_af\]) shows a regular trend and a remarkably low dispersion as compared to other studies (e.g. Nordström et al. [@nor04], da Silva et al. [@dasil06]). The 143 stars (83 local, 60 distant) with relative age errors $<$ 25% have been highlighted in Fig. \[f:ld\_af\]. Considering only these stars, we measure a mean metallicity of -0.06 with a dispersion of 0.10 dex for stars younger than 2 Gyr, whereas the mean metallicity of older stars (age $>$ 8 Gyr) is -0.44 with a dispersion of 0.27 dex. There is no young star with a metallicity lower than -0.32, and no old star with a metallicity higher than -0.13, contrary to common findings in samples of dwarfs, as for instance in Feltzing et al. ([@fel01]) and Nordström et al. ([@nor04]). It is important to note this property of our sample, because the existence of old metal-rich stars is often mentioned to explain the large dispersion of the AMR (Haywood [@hay06]). We come back to the AMR of the thin disk in Sect. \[s:clump\]. ![Age - metallicity diagram for the 891 stars.
Stars (83 local, 60 distant) with relative age errors lower than 25 % are highlighted as large filled circles.](age_feh_loc-dist.pdf "fig:"){width="8cm"} \[f:ld\_af\] The orbital parameters have been computed by integrating the equations of motion in the galactic model of Allen & Santillan ([@allen]), adopting a default value of 4 Gyr as the integration time. The adopted velocity of the Sun with respect to the LSR is (9.7, 5.2, 6.7) [$\mathrm{\,km\,s}^{-1}$ ]{}(Bienaymé 1999), the solar galactocentric distance ${\mathrm R}_{\odot}=8.5$ kpc and circular velocity ${V_{\rm LSR}}=220$ [$\mathrm{\,km\,s}^{-1}$ ]{}. Population membership {#s:pop} ===================== The $U$ vs $V$ velocity distributions of the local and distant samples can be compared from Figures \[f:loc\] and \[f:pgn\]. It is clear, from these plots, that the two samples contain different kinematical populations. In the local sample, the velocities are clumpy and reflect moving groups and superclusters that dominate the kinematics in the solar neighbourhood. Compared to Fig. 9 of Famaey et al. ([@fam05]), we can identify the Hercules stream at $(U,V)\simeq (-40,-50)$ [$\mathrm{\,km\,s}^{-1}$ ]{}, the Hyades-Pleiades supercluster at $(U,V)\simeq (-30,-20)$ [$\mathrm{\,km\,s}^{-1}$ ]{}, and the Sirius moving group at $(U,V)\simeq (0,0)$[$\mathrm{\,km\,s}^{-1}$ ]{}. There are very few high velocity stars that could correspond to the thick disk. On the contrary, the velocities of the distant sample are better mixed, with higher dispersions. This reflects the dynamical heating of the disk together with the growing number of thick disk stars with increasing distance to the plane. In order to build a sample of pure thin disk stars, we have performed the classification of all the stars into different kinematical populations.
We have taken into account the Hercules stream because its velocity ellipsoid is just intermediate between that of the thin disk and the thick disk, and is likely to contaminate both populations. We did not attempt to distinguish the other groups of the thin disk.\ We assign to each star its probability of belonging to the thin disk, the thick disk, the Hercules stream and the halo on the basis of its $(U,V,W)$ velocity and the velocity ellipsoids of these populations, in the same way as Soubiran & Girard (2005) and with similar kinematical parameters of the populations. In the distant sample we find that 305 stars and 65 stars have a probability higher than 80% to belong to the thin disk and the thick disk respectively. In the local sample, the numbers are 304 and 11. One important question that we can immediately investigate thanks to this kinematical classification is whether the thin disk and the thick disk overlap in age and metallicity. Our data strongly suggest that this is the case. Fig. \[f:tt\_af\] shows with different symbols the age-metallicity diagram for the most probable thin disk and the thick disk stars, restricted to relative age errors lower than 25 % (suspected binaries rejected). It is clear that the oldest thin disk stars and thick disk stars overlap in the metallicity range -0.70 $\leq$ \[Fe/H\] $\leq$ -0.30, and age range 8-10 Gyr. It is also worth noticing that there are no young thick disk stars. ![Age - metallicity diagram for stars with relative age errors lower than 25 % and belonging to the thin disk (crosses) and the thick disk (filled circles).](age_feh_thin-thick.pdf "fig:"){width="8cm"} \[f:tt\_af\] All the parameters which have been determined as described in the previous sections are presented in Table \[t:full\]. The full table with all 891 stars is only available in electronic form at the CDS. The file with the age PDFs is also available upon request.
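The membership classification above amounts to evaluating Gaussian velocity ellipsoids for each population and normalising. A minimal sketch follows; the thick disk mean rotation and dispersions are taken from Paper I, while the remaining population fractions, means and dispersions are illustrative placeholders, not the values actually used in this work (the Hercules stream is also omitted here for brevity).

```python
import math

# (fraction, mean (U, V, W), dispersions (sU, sV, sW)) in km/s.
# Thick disk values follow Paper I; the others are placeholders.
POPULATIONS = {
    "thin disk":  (0.90, (0.0,  -15.0, 0.0), (35.0, 25.0, 18.0)),
    "thick disk": (0.08, (0.0,  -51.0, 0.0), (63.0, 39.0, 39.0)),
    "halo":       (0.02, (0.0, -220.0, 0.0), (130.0, 100.0, 85.0)),
}

def membership(u, v, w):
    """Posterior probability of each population given (U, V, W),
    assuming diagonal Gaussian velocity ellipsoids."""
    likes = {}
    for name, (frac, mean, sig) in POPULATIONS.items():
        chi2 = sum(((x - m) / s) ** 2
                   for x, m, s in zip((u, v, w), mean, sig))
        norm = (2.0 * math.pi) ** 1.5 * sig[0] * sig[1] * sig[2]
        likes[name] = frac * math.exp(-0.5 * chi2) / norm
    total = sum(likes.values())
    return {name: p / total for name, p in likes.items()}
```

A star is then kept as a "most probable" member of a population when its posterior probability exceeds the 80% threshold used in the text.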
  ID            $M_{\rm v}$   $B-V$   \[Fe/H\]   U       V        W       Rmin   Rmax    $|$Zmax$|$   ecc    d     age    $\sigma_{\rm age}$   p1     p2     p3     p4     SB
                              dex                [$\mathrm{\,km\,s}^{-1}$ ]{}  [$\mathrm{\,km\,s}^{-1}$ ]{}  [$\mathrm{\,km\,s}^{-1}$ ]{}  kpc   kpc   kpc   pc   Gyr   Gyr
  ------------- ------------- ------- ---------- ------- -------- ------- ------ ------- ------------ ------ ----- ------ -------------------- ------ ------ ------ ------ ----
  .....................
  HD166229      1.471   1.165   0.01    50.3    -38.6     6.9   5.80   9.40   0.17   0.24    64   2.68   0.96   0.89   0.11   0.00   0.00
  HD169913      1.510   1.050   0.00   -25.4    -5.6      0.4   7.96   8.87   0.08   0.05   100   1.32   0.39   0.98   0.02   0.00   0.00
  HD171994      1.549   0.882  -0.23   -45.1   -15.9     -4.6   7.10   9.18   0.03   0.13    90   1.74   0.54   0.97   0.03   0.00   0.00
  HD180610      1.525   1.160  -0.01    14.7   -25.6    -10.4   6.90   8.74   0.05   0.12    50   7.01   2.02   0.96   0.04   0.00   0.00
  HD192836      1.321   1.040   0.01     3.5    -9.2     -8.3   7.99   8.66   0.02   0.04    91   1.85   1.18   0.98   0.02   0.00   0.00
  HD196134      1.564   0.984  -0.14    25.1    -2.9    -17.9   7.77   9.58   0.14   0.10    97   4.46   1.94   0.98   0.02   0.00   0.00
  HD198431      1.453   1.061  -0.37   -52.3   -42.4    -20.3   5.76   8.94   0.17   0.22    77   9.55   1.69   0.35   0.08   0.57   0.00
  HD211006      1.461   1.175   0.07   -22.1   -23.4     -8.7   7.11   8.59   0.04   0.09    77   4.88   1.79   0.96   0.03   0.01   0.00
  HD212943      1.336   1.040  -0.34    34.9   -16.1    -82.9   7.36   9.85   1.69   0.15    49   8.72   1.94   0.06   0.92   0.00   0.02
  HD214995      1.363   1.101  -0.09   -29.7   -36.6     -4.9   6.28   8.64   0.05   0.16    82   6.36   2.36   0.73   0.06   0.21   0.00
  HD221833      1.585   1.155   0.02    24.0     3.5     -3.0   7.94   9.95   0.10   0.11    95   5.57   2.35   0.98   0.02   0.00   0.00
  T0880-00075   0.693   0.918  -0.49   -40.4   -33.9     81.8   7.05   9.10   2.21   0.13   423   4.62   2.72   0.02   0.97   0.00   0.01
  T0880-00132   0.941   1.062  -0.11   -30.6   -20.8    -22.9   7.27   8.72   0.35   0.09   266   3.93   1.67   0.92   0.07   0.01   0.00
  T0880-00746   2.416   0.996   0.14     8.1   -33.2      3.8   6.51   8.62   0.24   0.14   197   3.71   1.14   0.90   0.07   0.03   0.00
  T0881-00374   1.327   1.085  -0.10     3.5   -38.4    -38.5   6.33   8.56   0.67   0.15   425   6.87   2.21   0.61   0.27   0.12   0.00
  T0881-00435   0.816   1.000  -0.27   -46.7   -16.7      6.3   7.18   9.20   0.37   0.12   316   4.69   2.70   0.95   0.05   0.00   0.00
  T0881-00494   0.948   1.049  -0.21    44.2   -69.6     -5.9   4.47   8.99   0.27   0.34   272   4.28   2.71   0.18   0.80   0.01   0.00
  T0885-00642   1.105   1.044  -0.10   -54.7   -11.7    -13.5   7.22   9.57   0.35   0.14   322   4.03   1.73   0.94   0.06   0.00   0.00
  T0888-00115   0.680   1.016  -0.28    -6.2    -3.6      5.9   8.40   8.72   0.33   0.02   283   3.94   2.37   0.97   0.03   0.00   0.00
  T0888-00875   1.427   0.983  -0.20    18.3    -1.3    -33.6   7.89   9.61   0.45   0.10   210   8.33   2.11   0.93   0.07   0.00   0.00
  T0889-01220   0.737   0.995  -0.30   -66.1   -34.2      7.5   6.04   9.38   0.30   0.22   223   4.47   2.51   0.55   0.12   0.33   0.00
  T0897-00666   0.679   0.955  -0.44   -78.2   -31.4      1.3   5.98   9.81   0.24   0.24   205   4.25   2.11   0.63   0.15   0.21   0.00
  T0897-00860   1.278   1.053  -0.01     5.7   -10.7     -5.4   7.80   8.75   0.38   0.06   378   3.03   1.16   0.97   0.03   0.00   0.00
  T1442-00319   0.902   0.946  -0.64    49.1     5.2    -22.5   7.40  11.23   0.54   0.21   397   5.93   2.88   0.94   0.06   0.00   0.00   b
  T1442-00453   1.404   0.941  -0.56    27.8  -104.7    -28.4   3.17   8.71   0.39   0.47   247   7.23   2.96   0.00   0.99   0.00   0.01
  .....................
  .....................
  ------------- ------------- ------- ---------- ------- -------- ------- ------ ------- ------------ ------ ----- ------ -------------------- ------ ------ ------ ------ ----

  : Stellar parameters of the programme stars derived in this work. The four columns p1, p2, p3 and p4 refer to the probability of belonging to the thin disk, the thick disk, the Hercules stream and the halo respectively. SB=b indicates a suspected spectroscopic binary.
[]{data-label="t:full"}

The thin disk traced by clump giants {#s:clump}
====================================

Among the many studies that can be done with the new sample presented here, we focus on the properties of the thin disk probed for the first time up to large distances above the Galactic plane, from a complete stellar sample and with 3D kinematics and spectroscopic metallicities. In order to work with a homogeneous sample, with well defined boundaries in both its local and distant counterparts, we have selected clump giants on the basis of a colour and absolute magnitude restriction: $0.9 \le B-V \le 1.1$, $0.0 \le M_{\rm v} \le 1.6$. According to Koen & Lombard (2003), this ensures the lowest contamination by other giants. Rejecting suspected binaries, 597 stars fall into these limits. We further restrict the sample to the 396 stars having a probability higher than 80% to belong to the thin disk. In this section we investigate some basic distributions of this sample.

Raw metallicity distribution and vertical gradient {#s:grad}
--------------------------------------------------

We compare the metallicity distributions of the local and distant clump giants in Fig. \[f:feh\_histo\_thin\]. The local sample has an average of \[Fe/H\]=-0.11 and a standard deviation of $\sigma_{[Fe/H]}=0.15$ whereas the distant sample has an average of \[Fe/H\]=-0.21 and a standard deviation of $\sigma_{[Fe/H]}=0.17$. The metallicity distribution of the thin disk is thus significantly shifted towards lower values at larger distance above the galactic plane. This is not due to the comparison of metallicities coming from the literature for the local sample and from TGMET for the distant sample since we have verified that the two scales are consistent (Sect. \[s:TGMET\]). More likely this difference indicates a vertical metallicity gradient which is represented in Fig.
\[f:feh\_grad\_thin\], using as the distance the maximum height above the plane, Zmax, reached by the star in its galactic orbit. A linear fit indicates a gradient of $\partial \mathrm{[Fe/H]} / \partial Z = -0.31 \pm 0.03$ dex kpc$^{-1}$. Taking for each star its current distance from the plane, instead of Zmax, leads to a consistent result of $\partial \mathrm{[Fe/H]} / \partial z = -0.30 \pm 0.03$ dex kpc$^{-1}$. According to numerous previous studies, the existence of a vertical metallicity gradient in the thin disk seems to be firmly established. However the value of its amplitude, constrained by the observation of different kinds of tracers at various distances from the Sun, still ranges between $\sim$ -0.25 and -0.35 dex kpc$^{-1}$. Using open clusters, Piatti et al. ([@pia95]) find -0.34 dex kpc$^{-1}$ whereas Carraro et al. ([@car98]) measure -0.25 dex kpc$^{-1}$ and Chen et al. ([@chen03]) measure $-0.295 \pm 0.050$ dex kpc$^{-1}$. Other studies are based, like ours, on field stars and have used kinematical information to select thin disk stars. This is the case of Marsakov & Borkova ([@mar06]) who have selected the most probable thin disk stars in their compilation of spectroscopic abundances, using their 3D velocities and orbital parameters. They measure a gradient of -0.29 $\pm$ 0.06 dex kpc$^{-1}$. Bartašiūtė et al. ([@bar03]) have observed 650 stars at high galactic latitude, up to 1.1 kpc, and identified thin and thick disk stars on the basis of their rotational lag. They measure a gradient of -0.23 $\pm$ 0.04 dex kpc$^{-1}$ in the thin disk. The direct comparison of the metallicity distribution of our sample with other distributions probing different galactic volumes would require a scaleheight correction. The reason is that metal-poor stars, which have hotter kinematics, have a larger scaleheight than more metal-rich stars, and may be under-represented in local samples.
A correction, relying on a mass model of the disk, would thus increase the number of metal-poor stars with hotter kinematics which would have been missed in our sample. In contrast, stars more metal-rich than the Sun are expected to be over-represented in local samples (see for instance Fig. 3 in Haywood [@hay06]). We have not attempted to correct for this bias in our sample and we restrict the discussion here to a qualitative comparison between dwarfs and giants. When we compare the metallicity distribution of clump giants to that of dwarfs, as presented by Haywood ([@hay02]), we find a good agreement on the metal-poor side. We confirm with this new sample Haywood’s finding that the thin disk is not an important contributor of stars with \[Fe/H\] $<$ -0.5. We find that 2.5% of our sample has \[Fe/H\] $<$ -0.5, with the most metal-poor thin disk giant at \[Fe/H\]=-0.71. According to Fig. 3 in Haywood ([@hay06]), the scaleheight correction factor lies between $\sim$ 1.5 and 3.5 in the metallicity range -0.70 $<$ \[Fe/H\] $<$ -0.50. Taking this correction into account would not substantially change our findings. In contrast, we find a significant difference between clump giants and dwarfs on the metal-rich side of the \[Fe/H\] histogram. Haywood ([@hay02]) finds that 40-50% of long-lived dwarfs have a metallicity higher than \[Fe/H\]=0 whereas the proportion is only 20% in our local sample and 13% in our distant sample. Super Metal-Rich (\[Fe/H\] $>$ +0.20) FGK dwarfs are quite common in the Solar Neighbourhood whereas we have only two thin disk clump giants, at \[Fe/H\]=+0.21 and \[Fe/H\]=+0.27. Our first guess was that such low fractions of both metal-poor and metal-rich stars were related to the colour cuts that we used to restrict the sample to clump giants. We have verified that this is not the case by comparing the metallicity histograms of local giants ($0 \le M_{\rm v} \le 1.6$) in the B-V intervals \[0.9; 1.1\] and \[0.7; 1.2\].
We found that the metal-poor sides are similar. The fraction of metal-rich stars turns out to be slightly higher in the extended colour interval: 24% instead of 20%. We conclude that our adopted colour cutoff affects the metallicity distribution in such a way that metal-rich stars are slightly under-represented. This bias is however not sufficient to reconcile the metallicity distribution of clump giants with that of dwarfs. Pasquini et al. ([@pas07]) have also noticed a difference between the metallicity distributions of giants and dwarfs hosting planets. They propose as an interpretation the pollution of stellar atmospheres, which causes a metal excess visible in the thin atmosphere of dwarfs but diluted in the extended envelope of giants. Our sample suggests that the difference is not limited to stars hosting planets, so the pollution hypothesis should be investigated in a more general context. If validated in the general case, it would imply that dwarfs are not appropriate tracers of the chemical history of the Galaxy.

![Metallicity distribution of thin disk clump giants of the local (filled) and distant samples (red line). ](feh_histo_thin.pdf "fig:"){width="8cm"} \[f:feh\_histo\_thin\]

![Vertical gradient in the metallicity distribution of thin disk clump giants. ](feh_grad_thin.pdf "fig:"){width="8cm"} \[f:feh\_grad\_thin\]

Age - metallicity relation {#s:AMR}
--------------------------

We use the same method as da Silva et al. ([@dasil06]) to determine the AMR of the sample. For each time interval ($\Delta_t$=1 Gyr), we measure the cumulative \[Fe/H\] by adding the measured \[Fe/H\] of each star weighted by its PDF. With such a method, a given star contributes to several bins, which are consequently not independent. However it is a good way to take errors on ages into account. The mean metallicity and dispersion per age bin are presented in Fig. \[f:AMR\] and Table \[t:AMR\]. A remarkable result is the low dispersion obtained at all ages.
Subtracting the estimated observational error (0.09 dex for local stars, 0.13 dex for distant stars) yields a cosmic scatter in \[Fe/H\] lower than 0.12 dex. A transition occurs around 4 Gyr in both the mean metallicity and the dispersion. From 10 Gyr to 4 Gyr, we see a very smooth and regular increase of the mean metallicity, 0.01 dex per Gyr, with constant spread, which characterizes a homogeneous interstellar medium. An upturn occurs at 4 Gyr, with a steeper metallicity rise at younger ages. What is the state of the art of the AMR in the Solar Neighbourhood and how does our relation compare with previous ones? Despite numerous studies on the subject over nearly 30 years, there is still no consensus on the existence of a slope in the AMR, nor on the amplitude of the cosmic scatter. Major contributions like Edvardsson et al. ([@edv93]), Feltzing et al. ([@fel01]) and Nordström et al. ([@nor04]), using classical isochrone ages, find little evidence for a slope in the relation of \[Fe/H\] with age, and a broad dispersion ($\sigma_{\rm [Fe/H]} > 0.20$ dex). In contrast, Rocha-Pinto et al. ([@roc00]), using chromospheric ages, find a significant trend in the AMR, with lower dispersion ($\sigma_{\rm [Fe/H]} \sim 0.12$ dex). Pont & Eyer ([@pon04]) have re-analysed Edvardsson et al.’s sample with a Bayesian approach and also find a significant trend, with a dispersion $\sigma_{\rm [Fe/H]} < 0.15$ dex. We note that all these studies involve nearby dwarf stars. To our knowledge, the only AMR based on giants is that of da Silva et al. ([@dasil06]). Despite the small size of their sample, they find, like us, metallicities rising from \[Fe/H\] $\sim$ -0.23 at 10.5 Gyr to \[Fe/H\] $\sim$ 0.00 at 0.5 Gyr. The shape of their AMR is however different from ours, shallower at young ages and steeper at old ages. The dispersion of their AMR is also much larger than ours, reaching 0.30 dex in the oldest age bins.
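Assuming the observational error is subtracted in quadrature from the observed dispersion (the standard procedure; the formula is not stated explicitly in the text), the quoted limit can be reproduced from a typical per-bin dispersion of 0.15 dex:

```python
import math

def cosmic_scatter(sigma_obs, sigma_err):
    """Intrinsic scatter left after removing the measurement error
    in quadrature from the observed dispersion (both in dex)."""
    return math.sqrt(sigma_obs ** 2 - sigma_err ** 2)

# typical per-bin dispersion of 0.15 dex with the quoted errors:
# local stars (0.09 dex error)   -> 0.12 dex
# distant stars (0.13 dex error) -> ~0.07 dex
print(cosmic_scatter(0.15, 0.09))
print(cosmic_scatter(0.15, 0.13))
```

Both values are consistent with the quoted upper limit of 0.12 dex on the cosmic scatter.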
We notice that the rather large metallicity variation that we observe in the 4 youngest bins of our AMR is also visible in the AMRs derived by Nordström et al. ([@nor04]) and by Feltzing et al. ([@fel01]). Both studies interpret this feature as a bias against young metal-poor dwarfs due to a colour cut. This explanation is not valid for our sample since we have verified that our colour cuts only affect very slightly the metal-rich part of the metallicity distribution (see previous Sect.). We thus conclude that this peculiar shape of the AMR is real. Piatti et al. ([@pia95]) and Carraro et al. ([@car98]) have corrected their AMRs for the positional dependency, justified by the use of open clusters. Open clusters have a wide spatial distribution and trace different histories of the chemical evolution, depending on their galactocentric distances. The AMR thus has to be corrected for the observed radial metallicity gradient, which has an amplitude of 0.07 dex kpc$^{-1}$ according to Piatti et al. ([@pia95]), or 0.09 dex kpc$^{-1}$ according to Carraro et al. ([@car98]). Field stars are also expected to be affected by a radial metallicity gradient. A consequence of orbital diffusion is that samples of nearby stars may include stars born in the inner or outer parts of the disk, where the chemical enrichment may have been different from that of the Solar Neighbourhood. Such stars are easily recognized by their orbital parameters Rmin and Rmax, respectively the perigalactic and apogalactic radii, which differ from those of true local stars. It is worth mentioning that Edvardsson et al. ([@edv93]) have studied the AMR for stars restricted to the solar circle and still found a large and significant scatter. Our sample of thin disk clump giants is free from the influence of stars from other galactocentric distances since our kinematical selection has naturally eliminated stars on eccentric orbits.
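The PDF-weighted binning used for the AMR (each star contributing to every age bin its PDF overlaps, following da Silva et al.) can be sketched as follows; the array layout, grid and function name are illustrative assumptions:

```python
import numpy as np

def amr_from_pdfs(feh, age_grid, age_pdfs, dt=1.0, t_max=10.0):
    """Bin an age-metallicity relation using each star's full age PDF.

    feh      : (n_stars,) measured [Fe/H] of each star
    age_grid : (n_ages,) ages [Gyr] on which the PDFs are sampled
    age_pdfs : (n_stars, n_ages) normalized age PDFs
    Returns bin centres, weighted mean [Fe/H], dispersion and fractional
    star counts per bin. A star contributes to every bin its PDF
    overlaps, so adjacent bins are not independent.
    """
    edges = np.arange(0.0, t_max + dt, dt)
    centres = 0.5 * (edges[:-1] + edges[1:])
    mean, sigma, n_eff = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (age_grid >= lo) & (age_grid < hi)
        w = age_pdfs[:, in_bin].sum(axis=1)   # each star's weight in this bin
        n = w.sum()
        if n == 0:                            # empty bin: no contribution
            mean.append(np.nan); sigma.append(np.nan); n_eff.append(0.0)
            continue
        m = np.sum(w * feh) / n
        mean.append(m)
        sigma.append(np.sqrt(np.sum(w * (feh - m) ** 2) / n))
        n_eff.append(n)
    return centres, np.array(mean), np.array(sigma), np.array(n_eff)
```

With delta-like PDFs this reduces to ordinary histogram binning; broad PDFs spread a star's weight over several bins, which is how age errors are propagated into the per-bin mean and dispersion.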
The question whether the AMR should be corrected for the vertical metallicity gradient is more difficult to assess. We note that Carraro et al. ([@car98]) have not attempted to correct their open cluster AMR for the observed vertical metallicity gradient. Moreover they argue that “In the case of field stars, orbital diffusion is expected to be effective enough to smooth out a vertical metallicity gradient within a single-age population, so that the vertical structure of the disk is dominated by the different scaleheights of different age populations”. We also note that, in the case of field star AMRs, while radial migration is often referred to (Edvardsson et al. [@edv93], Haywood [@hay06]), the influence of the vertical metallicity gradient is not discussed.

![Age - metallicity relation of thin disk clump giants. The error bars represent the dispersion in each bin, including observational errors and cosmic scatter.](AMR.pdf "fig:"){width="8cm"} \[f:AMR\]

  $<{\rm t}>$ (Gyr)   $<{\rm [Fe/H]}>$   $\sigma_{\rm [Fe/H]}$   $<{\rm N}>$
  ------------------- ------------------ ----------------------- -------------
  0.5                 +0.01              0.10                    43.8
  1.5                 -0.07              0.13                    79.3
  2.5                 -0.16              0.15                    54.6
  3.5                 -0.19              0.15                    41.9
  4.5                 -0.22              0.15                    34.1
  5.5                 -0.23              0.15                    28.1
  6.5                 -0.24              0.15                    28.5
  7.5                 -0.25              0.15                    22.2
  8.5                 -0.26              0.16                    15.3
  9.5                 -0.27              0.16                    15.7

  : \[t:AMR\] Age-metallicity relation derived from our sample of thin disk clump giants. N is the number of stars contributing to each age bin. It is fractional because we use the complete probability function of each star to bin the age axis (see text).

Age - velocity relation {#s:AVR}
-----------------------

The thin disk AVR has been revisited recently by Seabroke & Gilmore ([@sea07]) using the data of Nordström et al. ([@nor04]) and Famaey et al. ([@fam05]). They show that the kinematical streams in these local samples do not permit one to safely constrain the relations in the $U$ and $V$ directions, contrary to the $W$ direction where the samples are well mixed.
Our sample of clump giants, spanning larger distances from the Galactic plane, is well suited to investigate these relations. However, for such a purpose, we cannot work with the thin disk sample which was built to study the metallicity and age distributions in Sections \[s:grad\] and \[s:AMR\]. Our selection of thin disk stars on a kinematical criterion has favoured stars in the central parts of the velocity ellipsoid, with moderate velocities, resulting in a serious kinematical bias. In order to study how the velocity dispersions increase with time, we also need to work with the warmer part of the thin disk, while excluding as well as possible stars which do not follow the kinematical behaviour of the thin disk. To do so, we consider our distant sample of clump giants and reject stars having a probability higher than 80% to belong to the thick disk, the Hercules stream or the halo, resulting in 320 stars. Results are presented in Fig. \[f:AVR\] and Table \[t:AVR\]. An important question is whether the dispersions saturate at a given age of the thin disk. Seabroke & Gilmore ([@sea07]) have shown that local data are in agreement with several models of disk heating: continuous, or with saturation at 4.5, 5.5 or 6.5 Gyr. Our data show evidence for a transition at $\sim$ 5 Gyr, with saturation occurring in $V$ at 29 [$\mathrm{\,km\,s}^{-1}$ ]{}and in $W$ at 24 [$\mathrm{\,km\,s}^{-1}$ ]{}. The velocity dispersion in $U$ seems to increase smoothly, reaching 46 [$\mathrm{\,km\,s}^{-1}$ ]{}at 9.5 Gyr. A consequence is that the velocity ellipsoid axis ratios $\sigma_V / \sigma_U$ and $\sigma_W / \sigma_U$ are not constant. The ratio $\sigma_V / \sigma_U$ is related to the Oort constants and is expected to be $\sim$0.5. Here this ratio varies from 0.55 at 1-2 Gyr to a maximum value of 0.68 at 4-5 Gyr. The ratio $\sigma_W / \sigma_U$ is related to the scattering process responsible for the dynamical heating of the disk.
With our data, it has a maximum value of 0.56 at 4-5 Gyr. Although these ratios are supposed to be constant in an axisymmetric Galaxy, there are previous reports of variations related to colour or spectral type (e.g. Mignard [@mi00]). We mention the study by Vallenari et al. ([@val06]) who have also probed the thin disk kinematics towards the NGP. Their method is however significantly different from ours since they analyse, through a galactic model, proper motions and the colour-magnitude diagram of $\sim$ 15000 stars down to V=20. Their best-fit values for the velocity dispersions of the thin disk, presented in 4 age bins, differ significantly from ours, especially in the oldest age bin (7-10 Gyr) where their values are lower by 3$\sigma$. Simple statistics on our sample give $(\sigma_U,\sigma_V,\sigma_W)=(41.5, 26.4, 22.1)$[$\mathrm{\,km\,s}^{-1}$ ]{}, significantly higher than values determined from late-type Hipparcos stars (e.g. Bienaymé [@bien99], Mignard [@mi00]). Although we cannot rule out some contamination of the sample by thick disk stars, this compares nicely with recent results by de Souza & Teixeira ([@desou07]) who show that Mignard’s sample is better explained by the superposition of two velocity ellipsoids, the hotter one having $(\sigma_U,\sigma_V,\sigma_W)=(41.0, 27.0, 19.0)$[$\mathrm{\,km\,s}^{-1}$ ]{}. It is also worth noticing in Table \[t:AVR\] that the mean $U$ and $W$ are roughly constant at all ages whereas $V$ declines from $\sim$ -14 [$\mathrm{\,km\,s}^{-1}$ ]{}to -21 [$\mathrm{\,km\,s}^{-1}$ ]{}. We recover for $U$ and $V$ the Solar motion with respect to late-type stars, as determined by Mignard ([@mi00]), although we find a significant difference in $W$. We get a mean value of $W_{\odot}=11.5$ [$\mathrm{\,km\,s}^{-1}$ ]{}, whereas he finds values around 7 [$\mathrm{\,km\,s}^{-1}$ ]{}.
We recall that our $W$ velocities of the distant stars at the NGP rely mainly on radial velocities, which have an accuracy better than 1 [$\mathrm{\,km\,s}^{-1}$ ]{}, and thus are not affected by uncertainties on distances and proper motions.

![Age - velocity relations of distant clump giants, the most probable thick disk, Hercules stream and halo stars being excluded. ](AVR.pdf "fig:"){width="8cm"} \[f:AVR\]

  t (Gyr)   $U$     $\sigma_U$   $V$     $\sigma_V$   $W$     $\sigma_W$   N
  --------- ------- ------------ ------- ------------ ------- ------------ ------
  0.50      -11.3   22.3         -14.2   19.9         -14.2   16.4         10.3
  1.50      -11.6   36.2         -13.9   19.9         -10.6   18.7         37.9
  2.50      -9.6    40.4         -15.5   24.8         -10.6   21.7         44.7
  3.50      -9.3    41.0         -16.7   27.0         -10.7   22.6         37.8
  4.50      -9.2    41.9         -17.5   28.3         -11.1   23.5         31.9
  5.50      -9.3    42.6         -17.9   28.8         -11.6   23.5         27.0
  6.50      -9.8    43.4         -18.6   29.2         -12.2   23.2         28.8
  7.50      -9.7    44.4         -19.5   29.3         -12.8   23.1         23.7
  8.50      -9.2    45.3         -20.3   29.3         -13.1   23.2         17.1
  9.50      -8.7    46.0         -21.0   29.2         -13.1   23.6         18.2

  : \[t:AVR\] Age-velocity relation derived from 320 distant clump giants, the most probable thick disk, Hercules stream and halo stars being excluded.

Summary
=======

The data presented here are the result of several years of effort to obtain high resolution spectra for a large and complete sample of clump giants. Besides our own observations, on the ELODIE spectrograph at OHP, we have also taken advantage of other available material like the Hipparcos and Tycho2 catalogues, the \[Fe/H\] catalogue (Cayrel de Strobel et al. [@cay01]) updated with a number of new references and the PARAM code to derive ages (da Silva et al. [@dasil06]). We have described how these data were combined to provide a catalogue of stellar parameters for 891 stars, mainly giants, giving atmospheric parameters with spectroscopic metallicities, absolute magnitudes and distances, galactic velocities $(U,V,W)$, orbits, ages and population membership probabilities.
Our main motivation in conducting this project was to probe the Galactic disk using an unbiased and significant sample, with high quality data, in particular with spectroscopic metallicities and accurate distances and radial velocities. We have chosen to observe giants in the direction of the NGP in order to reach distances from the galactic plane, up to 1 kpc, which are not covered by spectroscopic surveys, usually limited to the closer Solar Neighbourhood. Clump giants are particularly well suited for this purpose. Compared to previous studies on the subject, our analysis presents several improvements, which are briefly outlined:

- for binning the age axis, we have considered for each star its entire age PDF, instead of averaging it, following da Silva et al. ([@dasil06])

- we have considered several kinematical populations likely to be present in our sample: the thin disk, the thick disk, the Hercules stream and the halo

- in order to study the thin disk metallicity and age distributions, we have taken care to select stars with the highest probability of belonging to this population

- in order to study the thin disk velocity distribution, we have taken care to reject the most probable non thin disk stars

Our results are summarized as follows:

- we do not find any young metal-poor stars nor old metal-rich stars, contrary to common findings in dwarf samples

- the old thin disk and the thick disk overlap in the metallicity range -0.70 $\leq$ \[Fe/H\] $\leq$ -0.30 and age range 8-10 Gyr

- among stars with accurate individual ages, we do not find any young thick disk stars

- the metallicity distribution of our sample of thin disk clump giants extends down to \[Fe/H\]$\simeq$-0.70, but the fraction of stars with \[Fe/H\]$\leq $-0.50 is only 2.5%

- the metallicity distributions of giants and dwarfs differ significantly on the metal-rich side: metal-rich giants are less frequent

- a vertical metallicity gradient is measured in the thin disk: $\partial \mathrm{[Fe/H]} / \partial Z = -0.31 \pm 0.03$ dex kpc$^{-1}$

- the AMR of the thin disk presents a low dispersion, implying a cosmic scatter lower than 0.12 dex, in agreement with previous findings by Rocha-Pinto et al. ([@roc00]) and Pont & Eyer ([@pon04])

- two regimes are visible in the AMR of the thin disk: from 10 Gyr to 4 Gyr, the metallicity increases smoothly by 0.01 dex per Gyr, while for younger stars the rise of \[Fe/H\] is steeper

- in the thin disk, the $V$ and $W$ dispersions saturate at 29 and 24 [$\mathrm{\,km\,s}^{-1}$ ]{}respectively at $\sim$ 4-5 Gyr, whereas $U$ shows continuous heating

- the Solar motion is found to be nearly constant in $U$ and $W$ with respect to stars of all ages, while the amplitude of the asymmetric drift increases from 14 [$\mathrm{\,km\,s}^{-1}$ ]{}for young stars to 21 [$\mathrm{\,km\,s}^{-1}$ ]{}for old stars

We are grateful to L. Girardi for computing the ages for the 891 stars of this sample. This research has made use of the SIMBAD and VIZIER databases, operated at CDS, Strasbourg, France. It is based on data from the ESA [*Hipparcos*]{} satellite (Hipparcos and Tycho2 catalogues).

Allen, C., & Santillan, A. 1993, RMxAA, 25, 39

Alonso, A., Arribas, S., & Martínez-Roger, C. 2001, A&A, 376, 1039

Baranne, A., Queloz, D., Mayor, M., et al. 1996, A&AS, 119, 373

Barbier-Brossat, M., & Figon, P. 2000, A&AS, 142, 217

Bartašiūtė, S., Aslan, Z., Boyle, R. P., Kharchenko, N. V., Ossipkov, L. P., & Sperauskas, J. 2003, Baltic Astron., 12, 539

Biazzo, K., Pasquini, L., Girardi, L., et al. 2007, A&A, 475, 981

Blackwell, D. E., & Lynas-Gray, A. E. 1998, A&AS, 129, 505

Bienaymé, O. 1999, A&A, 341, 86

Bienaymé, O., Soubiran, C., Mishenina, T. V., Kovtyukh, V. V., & Siebert, A. 2005, A&A, 456, 1109 (Paper III)

Carraro, G., Ng, Y. K., & Portinari, L. 1998, MNRAS, 296, 1045

Cayrel de Strobel, G., Soubiran, C., & Ralite, N. 2001, A&A, 373, 159

Chen, L., Hou, J. L., & Wang, J. J. 2003, AJ, 125, 1397

da Silva, L., Girardi, L., Pasquini, L., et al. 2006, A&A, 458, 609

de Souza, R. E., & Teixeira, R. 2007, A&A, 471, 475

di Benedetto, G. P. 1998, A&A, 339, 858

Edvardsson, B., Andersen, J., Gustafsson, B., Lambert, D. L., Nissen, P. E., & Tomkin, J. 1993, A&A, 275, 101

ESA 1997, The Hipparcos and Tycho Catalogues, ESA SP-1200 (Noordwijk)

Famaey, B., Jorissen, A., Luri, X., et al. 2005, A&A, 430, 165

Feltzing, S., Holmberg, J., & Hurley, J. R. 2001, A&A, 377, 911

Girardi, L., Bressan, A., Bertelli, G., & Chiosi, C. 2000, A&AS, 141, 371

Harmanec, P. 1998, A&A, 335, 173

Haywood, M. 2002, MNRAS, 337, 151

Haywood, M. 2006, MNRAS, 371, 1760

Høg, E., Fabricius, C., Makarov, V. V., et al. 2000, A&A, 363, 385

Johnson, D. R. H., & Soderblom, D. R. 1987, AJ, 93, 864

Jørgensen, B. R., & Lindegren, L. 2005, A&A, 436, 127

Koen, C., & Lombard, F. 2003, MNRAS, 343, 241

Katz, D., Soubiran, C., Cayrel, R., et al. 1998, A&A, 338, 151

Kovtyukh, V. V., Soubiran, C., Bienaymé, O., Mishenina, T. V., & Belik, S. I. 2006, MNRAS, 371, 879

Marsakov, V. A., & Borkova, T. V. 2006, AstL, 32, 376

McWilliam, A. 1990, ApJS, 74, 1075

Mignard, F. 2000, A&A, 354, 522

Mishenina, T. V., Bienaymé, O., Gorbaneva, T. I., Soubiran, C., Charbonnel, C., Korotin, S. A., & Kovtyukh, V. V. 2006, A&A, 456, 1109

Moultaka, J., Ilovaisky, S. A., Prugniel, P., & Soubiran, C. 2004, PASP, 116, 693

Nordström, B., Mayor, M., Andersen, J., et al. 2004, A&A, 418, 989

Pasquini, L., Döllinger, M. P., Weiss, A., et al. 2007, A&A, 473, 979

Piatti, A., Claria, J. J., & Abadi, M. G. 1995, AJ, 110, 2813

Pont, F., & Eyer, L. 2004, MNRAS, 351, 487

Prugniel, P., & Soubiran, C. 2004, astro-ph/0409214

Prugniel, P., & Soubiran, C. 2001, A&A, 369, 1048

Ramírez, I., & Meléndez, J. 2005, ApJ, 626, 446

Rocha-Pinto, H. J., Maciel, W. J., Scalo, J., & Flynn, C. 2000, A&A, 358, 850

Seabroke, G. M., & Gilmore, G. 2007, MNRAS, 380, 1348

Siebert, A., Bienaymé, O., & Soubiran, C. 2003, A&A, 399, 531 (Paper II)

Soubiran, C., Bienaymé, O., & Siebert, A. 2003, A&A, 398, 141 (Paper I)

Soubiran, C., Katz, D., & Cayrel, D. 1998, A&A, 133, 221

Vallenari, A., Pasetto, S., Bertelli, G., Chiosi, C., Spagna, A., & Lattanzi, M. 2006, A&A, 451, 125

[^1]: Based on observations made at the Observatoire de Haute Provence (OHP, France). Data only available in electronic form at the CDS (Strasbourg, France)

[^2]: Full Tables \[t:TGMET\_lib\] and \[t:full\] are only available electronically at the CDS

[^3]: http://atlas.obs-hp.fr/elodie/

[^4]: http://stev.oapd.inaf.it/ lgirardi/cgi-bin/param
---
title: Dielectric Coatings for IACT Mirrors
---

Introduction
============

Imaging Atmospheric Cherenkov Telescopes (IACTs) for very-high-energy (VHE) gamma-ray astronomy image the Cherenkov light of particle showers in the atmosphere onto a photosensitive detector. The wavelength range of interest is roughly between 300 and 550 nm. Typically, IACTs have tessellated mirrors with facet areas of the order of 1 m$^2$ and larger. The current standard (e.g. in H.E.S.S., VERITAS and partially MAGIC) is mirrors with glass surfaces, coated on the front surface with aluminium (Al), which is protected by a single protective layer (e.g. SiO$_2$, Al$_2$O$_3$). Not being protected by a dome, the mirrors are constantly exposed to the environment and show a loss of reflectance of a few per cent per year. This requires re-coating of all mirrors after a few years of operation. For the future CTA observatory (see [@Hofmann:2010]), with a total planned mirror area of about 10,000 m$^2$, this would mean a significant maintenance effort. Coatings which increase the lifetime of the mirrors can therefore play a major role in keeping the maintenance costs of the observatory low. In addition, coatings with a higher reflectance in the relevant wavelength range compared to the classical Al + SiO$_2$ coatings will increase the sensitivity of the instrument, while a reduced reflectance above roughly 550 nm can help to suppress sensitivity to background light from the night sky.

Coatings under Investigation
============================

Aluminium coatings with a single SiO$_2$ layer typically show a reflectance of 80 to 90% between 300 and 550 nm. To improve the reflectance and the durability of the mirrors, two commercially available coating options are currently under investigation in this study (further coating designs are being investigated at the University of Tübingen and are presented in [@Bonardi:2013]).

\(a) A three-layer protective coating (SiO$_2$ + HfO$_2$ + SiO$_2$) on top of an Al coating.
This already enhances the reflectance by about 5%. Fig. \[fig:fig001\] shows the reflectance of this coating in comparison to the reflectance of an Al + SiO$_2$ coating.

\(b) A dielectric coating, consisting of a stack of many alternating layers of two materials with different refractive indices, without any metallic layer. Properly optimized, this results in a pure interference mirror that allows a box-shaped reflectance curve to be custom-made, with $>95\%$ reflectance in a defined wavelength range and $<30\%$ elsewhere. The first attempts were designed such that a range of 300 to 600 nm was covered. This is called “version 1” in the following. For “version 2” the design was adjusted such that a cut-off around 550 nm allows the reduction of the night-sky background contribution (first emission line around 556 nm). The latter might become important in combination with a possible future replacement of the current photomultiplier tubes in the photodetectors of the telescopes (which are not particularly susceptible to night-sky background) by silicon detectors that have a good quantum efficiency for wavelengths above 550 nm. The reflectance curves of these coatings are shown in Fig. \[fig:fig001\] as well. Most of the durability testing reported on in the following has been performed on version 1 and is currently ongoing for version 2. Nevertheless, the materials and the technology used are exactly the same for both versions; the only difference is the number of layers.

Application to Large Substrates and at Low Temperatures
=======================================================

The largest mirrors needed for CTA will be of hexagonal shape with 1.5 m distance from flat to flat, resulting in a mirror area of roughly 2 m$^2$. While the Al-based coatings are in principle commercially available for substrates of this size, applying the dielectric coatings to such large surfaces was more challenging and needed additional development effort.
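The behaviour of such an alternating-index stack can be illustrated with the standard characteristic-matrix (transfer-matrix) method of thin-film optics. The refractive indices, design wavelength and number of layer pairs below are arbitrary illustrative choices, not the actual design of the coatings discussed here:

```python
import numpy as np

def stack_reflectance(wavelength_nm, layers, n_in=1.0, n_sub=1.52):
    """Normal-incidence reflectance of a dielectric multilayer,
    computed with the characteristic-matrix method.
    layers: sequence of (refractive_index, physical_thickness_nm),
    listed from the incidence medium towards the substrate."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2.0 * np.pi * n * d / wavelength_nm  # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B = M[0, 0] + M[0, 1] * n_sub
    C = M[1, 0] + M[1, 1] * n_sub
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# A quarter-wave stack of two alternating indices tuned to lam0:
# ten high/low pairs already push the reflectance close to unity there.
lam0, n_hi, n_lo = 425.0, 2.3, 1.46
stack = [(n_hi, lam0 / (4 * n_hi)), (n_lo, lam0 / (4 * n_lo))] * 10
print(stack_reflectance(lam0, stack))
```

Scanning `wavelength_nm` over 300-700 nm traces the box-shaped high-reflectance band around `lam0`; real designs tune the individual layer thicknesses to shape the band edges and the cut-off.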
Given that these are interference coatings consisting of many layers of materials with different refractive indices, a homogeneous thickness of each layer over the full mirror size is important. Fig. \[fig:fig002\] shows the uniformity achieved in the reflectance over the diameter of such a large mirror. The second challenge was created by the mirror substrates themselves. Many substrate technologies under development for CTA are sandwich structures of different materials that are glued together [@Canestrari:2013; @Brun:2013; @Foerster:2013]. Most of these glues cannot be heated to temperatures above 80$^{\circ}$ C without damage. While Al-based coatings can be applied without heating the substrate and with only limited heat impact from the evaporation sources during the coating process, a special process was developed for the dielectric coating to keep the substrate temperature below the required limit. This comes at the expense of longer coating times and therefore higher costs. In parallel, the option is being investigated to coat only the front glass sheet and then to construct the sandwich with this coated sheet, rather than coating the final mirror. This way, the more costly low-temperature process would not be needed.

Durability Testing
==================

A series of durability tests have been performed in the laboratory with small glass samples coated with the different coatings to evaluate their resistance to environmental impact.\
[**Temperature and humidity cycling:**]{} Samples of the three-layer coating as well as of version 1 of the dielectric coating have been exposed to overlapping cycles in temperature (-10$^{\circ}$ C $<$ T $<$ 60$^{\circ}$ C; 5 h cycle duration) and in humidity (5% to 95%; 8 h cycle duration) for a total of approximately 8000 h. The different cycle durations were chosen to expose the samples to all possible combinations of temperature and humidity.
The reflectance as a function of wavelength of the samples has been measured with a spectrophotometer (angle of incidence 7$^{\circ}$) before and after the cycling. The results of these measurements are shown in Fig. \[fig:fig003\]. The classical Al + SiO$_2$ coating shows a significant loss of reflectance. The Al coating with the three-layer protective coating exhibits a much smaller change in reflectance after the cycling, and the dielectric coating has not changed its reflective properties at all within the accuracy of the measurement. Samples of version 2 of the dielectric coating optimized for the cutoff at 550 nm are currently undergoing the same test.\ [**Adhesion tests:**]{} Coating adhesion was tested by applying a clear tape with an adhesion power of 16 N per 25 mm to the coated surface of the samples and removing it at a rate of $<1$ s per 25 mm (so-called “snap test”). All tested samples survived this procedure without any removal of coating, including the new coatings under investigation and the classical Al+SiO$_2$ coating.\ [**Abrasion tests:**]{} Three different abrasion tests have been performed on samples with all three coatings: a\) A standard cheesecloth test using a force of 10 N and 50 strokes on the coated surface was performed. The Al+SiO$_2$ reference samples showed mild to moderate abrasion under this test (defined as few to many visible scratches left behind after the test), the three-layer coating showed very mild to mild abrasion (one or two to few scratches) and the dielectric coating none to very mild abrasion (zero to one or two scratches).\ b) In a more severe test, an eraser was used to perform 20 strokes with a force of 10 N.
After this test all three coatings showed signs of abrasion, but again to very different levels: the reference samples coated with SiO$_2$ showed serious to severe abrasion (removal of some to most of the coating), the three-layer coating moderate to severe abrasion (many scratches to removal of some coating), and the dielectric coating only showed very mild to mild abrasion (one or two to few scratches). Figure \[fig:fig004\] shows a few samples after this severe abrasion test. \ c) Samples with all three coatings were exposed to a sand-blasting test. The abrading medium used was silicon carbide with a grade of 220 $\mu$m. The flow rate was approximately 25 g/min and the total amount of abrading medium used per sample was 125 g. The setup was operated using an air pressure of 15 kPa and the air was fed in at a rate of 50 l/min. The sample was placed at an angle of 45$^{\circ}$ under the abrasive jet nozzle. This test results in an ellipse on the coated surface in which the coating is partially and/or fully removed. The sizes of these ellipses are a measure of how easily the coating is abraded. Figure \[fig:fig005\] shows three samples after the sand-blasting test; at the top the Al + SiO$_2$ coating, in the middle the Al + SiO$_2$ + HfO$_2$ + SiO$_2$ coating and at the bottom the dielectric coating. Three elliptical areas have been measured to quantify the different abrasion levels of the three coatings: the “clear area”, meaning the central region in which the coating has been fully removed, the darker “partially clear area” around it, and the penumbra, being the reach of the silicon carbide, which shows as a light ‘halo’. The results are given in Tab. \[table:ellipses\] and demonstrate that the abrasion resistance of the dielectric coating is significantly higher than that of the Al-based coatings.
  Coating      Clear Area        Partially Clear   Penumbra
  ------------ ----------------- ----------------- --------------
  Al+SiO$_2$   $103 \pm 7$       $144 \pm 1$       $420 \pm 40$
  3-layer      $98.9 \pm 0.05$   $136 \pm 4$       $400 \pm 20$
  dielectric   $0$               $114 \pm 3$       $320 \pm 10$

  : Average areas of abrasion ellipses in mm$^2$ resulting from the sand-blasting test on the three different types of coatings. Given are mean values and standard deviations over all tested samples of each coating type.[]{data-label="table:ellipses"}

\ [**Artificial Bird Faeces:**]{} Samples of all coatings have been treated with pancreatin, a pancreas enzyme that is regularly used to simulate the effects of bird faeces on lacquers and other materials. A 1:2 mixture of pancreatin and de-ionized water was applied to the coated surfaces of the samples and they were “baked” for 4 weeks at 40$^{\circ}$ C in a climate chamber to simulate the effect of bird faeces staying on the mirror surface for some time in a hot and dry environment, as is typical for locations of Cherenkov telescopes. No influence on any of the three coatings was observed after cleaning the samples and repeated reflectance measurements.\

Conclusions and Outlook
=======================

In the laboratory tests described above, the three-layer protective coating on top of an aluminium coating performs slightly better than the standard Al + SiO$_2$ coating. The dielectric coating shows a significantly better performance still. Nevertheless, the predictive power of these laboratory tests for the real outdoor performance is not clearly established and additional outdoor experience is needed. Over the last 2 years all of the approximately 1520 mirrors of the 4 original telescopes of the H.E.S.S. experiment in Namibia have been exchanged and refurbished. 380 mirrors have the standard Al + SiO$_2$ coating, roughly 1040 the Al + three-layer coating, and about 100 mirrors the dielectric coating. This way data on long-term outdoor exposure will become available.
One problem of the three-layer protective coating directly applied on top of the Al layer has been noted for these mirrors: it was observed that in the first months after coating the interference minimum around 280 nm (see Figs. \[fig:fig001\] and \[fig:fig003\]) was getting deeper, slightly affecting the reflectance at 300 nm as well. To solve this problem, an additional protection layer between the Al and the three-layer coating is now being applied to prevent this effect. Detailed tests of this improved coating are ongoing. A significant problem of the dielectric coating has been detected in a study that intended to compare different substrate technologies in terms of the probability to form condensation on the reflective surface [@Chadwick:2013]. It was observed that the same substrates are more likely to mist over if coated with the dielectric coating rather than with an Al-based coating. Laboratory tests have associated this effect with a much higher emissivity of the dielectric coatings in the infrared (8-14 $\rm{\mu}$m). Investigations are ongoing to solve this problem by applying an additional layer below the dielectric coating that does not reflect in the regime of the night-sky background but has a high reflectance in the mid-infrared. To conclude, the Al + SiO$_2$ + HfO$_2$ + SiO$_2$ as well as the dielectric coating are now readily available alternatives to the standard Al + SiO$_2$ coating. The three-layer coating provides about 5% better reflectance and a slightly better performance in durability tests in the laboratory at no significant extra cost. The dielectric coating can now be applied up to substrate areas of 2 m$^2$ and at substrate temperatures $<80^{\circ}$ C during the coating process. This covers the largest mirrors foreseen for CTA and is suitable for the application on glued sandwich substrates.
It provides a significantly better reflectance in the desired wavelength range, a significant suppression of the night-sky background and a significantly better performance in the durability tests, but it needs further improvement concerning the high emissivity in the infrared that leads to a higher probability of forming condensation on the mirrors.

Acknowledgements {#acknowledgements .unnumbered}
================

We gratefully acknowledge support from the agencies and organisations listed on this page: http://www.cta-observatory.org/?q=node/22.

W. Hofmann et al., arXiv:1008.3702 (2010)
A. Bonardi et al., these proceedings (2013), contribution 0207
R. Canestrari et al., Optical Engineering 52 (2013) 051204
P. Brun et al., NIM A 714 (2013) 58
A. F[ö]{}rster et al., these proceedings (2013), contribution 0747
P. Chadwick et al., these proceedings (2013), contribution 0847
---
abstract: 'We consider quantum-memory-assisted protocols for discriminating quantum channels. We show that for optimal discrimination of memory channels, memory-assisted protocols are needed. This leads to a new notion of distance for channels with memory. For optimal discrimination and estimation of sets of independent unitary channels, memory-assisted protocols are not required.'
author:
- Giulio Chiribella
- 'Giacomo M. D’Ariano'
- Paolo Perinotti
title: Memory effects in quantum channel discrimination
---

The problem of discrimination between quantum channels has been recently considered in quantum information [@darloppar; @acin; @sacchi1; @sacchi2; @feng; @chefles]. For example, in Ref. [@chefles] an application of discrimination of unitary channels as oracles in quantum algorithms is suggested. The optimal discrimination is achieved by applying the unknown channel locally on some bipartite input state of the system with an ancilla, and then performing some measurement at the output. A natural extension to multiple uses is obtained by applying the uses in [*parallel*]{} to a global input state. However, more generally, one can apply the uses partly in parallel and partly in series, even intercalated with other fixed transformations, as in Ref. [@combs]. Indeed, due to its intrinsic causally ordered structure, the memory channel can be used either in parallel or in a causal fashion (see Fig. \[uno\]). In this Letter we show that this [*causal*]{} scheme is necessary when the multiple uses are correlated—i.e., for memory channels—whereas it is not needed for independent uses of unitary channels (the case of non-unitary channels remains an open problem). Memory channels [@palmacc; @manc; @kretwer; @nila; @plenio] have attracted increasing attention in recent years. They are quantum channels whose action on the input state at the $n$-th use can depend on the previous $n-1$ uses through a quantum ancilla.
The problem of optimal discriminability of two memory channels is relevant for assessing that a cryptographic protocol is concealing [@bitcomm] and for minimization of oracle calls in quantum algorithms. We will provide an example showing that a pair of memory channels can be perfectly discriminable, even though they never provide orthogonal output states when applied to the same global input state. This new causal setup provides the most general discrimination scheme for multiple quantum channels, and this fact leads to a new notion of distance between channels. In the case of two unitary channels, optimal parallel discrimination with $N$ uses was derived in Refs. [@darloppar; @acin], and in Ref. [@feng] a causal scheme without entanglement was proved to be equivalently optimal. In the following, we will prove the optimality of both schemes for discrimination of unitaries. We will generalize this result to discrimination of sequences of unitaries, and to estimation with multiple copies. In contrast to the case of memory channels, we will prove that for all these examples causal schemes are not necessary. It is convenient to represent a channel $\map C$ by means of its Choi operator $C$ defined as follows $$C:=\map C\otimes\map I(|I\kk\bb I|),$$ for a channel $\map C$ with input/output states in $\sH_{\mathrm{in}/\mathrm{out}}$, respectively, where $|I\kk:=\sum_{n}|n\>|n\>\in\sH_\mathrm{in}^{\otimes2}$, $\{|n\>\}$ being an orthonormal basis for $\sH_\mathrm{in}$. In this representation complete positivity of $\map C$ is simply $C\geq0$ and the trace-preserving constraint is $\Tr_\mathrm{out}[C]=I_\mathrm{in}$. In a memory channel with $N$ inputs and $N$ outputs labeled as in Fig.
\[uno\], the causal independence of output $2n+1$ from input $2m$ with $m>n$ is translated to the following recursive property [@combs] of the Choi operator $C=:C^{(N)}$ $$\Tr_{2n-1}[C^{(n)}]=I_{2n-2}\otimes C^{(n-1)},\quad\forall 1\leq n\leq N, \label{caus}$$ where conventionally $C^{(0)}=1$. A [*tester*]{} is a set of positive operators $P_i\geq0$ such that the probability of outcome $i$ while testing the channel $\map C$ is provided by the generalized Born rule $$p(i|\map C):=\Tr[P_i C].$$ The notion of tester is an extension of that of POVM, which describes the statistics of customary measurements on quantum states. The normalization of probabilities for testers on memory channels with $N$ input-output systems is equivalent to the following recursive property, analogous to that in Eq.  $$\begin{split} &\sum_iP_i=I_{2N-1}\otimes \Xi^{(N)},\\ &\Tr_{2n-2}[\Xi^{(n)}]=I_{2n-3}\otimes \Xi^{(n-1)},\quad\forall 2\leq n\leq N,\\ &\Tr[\Xi^{(1)}]=1. \end{split} \label{tester}$$ One can prove [@combs] that any tester can be realized by a concrete measurement scheme of the class represented in Fig. \[genersch\]. Mathematical structures analogous to Eqs. and have been introduced in Ref. [@watgut] to describe strategies in a quantum game. Every tester $\{P_i\}$ can be written in terms of a usual POVM $\{\tilde P_i\}$ as follows $$P_i=(I\otimes\Xi^{(N)\frac12})\tilde P_i (I\otimes\Xi^{(N)\frac12}), \label{povm}$$ and for every memory channel $\map C$ the generalized Born rule takes the usual form in terms of the state $$\label{C} \tilde C:=(I\otimes\Xi^{(N)\frac12})C (I\otimes\Xi^{(N)\frac12}).$$ The state $\tilde C$ corresponds to the output system-ancilla state in Fig. \[genersch\] after the evolution through all boxes of both the tester and the memory channel, on which the final POVM $\{\tilde P_i\}$ is performed [@delayed]. The standard discriminability criterion for channels is the following.
Two channels $\map C_0$ and $\map C_1$ on a $d$-dimensional system are perfectly discriminable if there exists a pure state $|\Psi\kk$ in dimension $d^2$ such that $\map C_i\otimes\map I(|\Psi\kk\bb \Psi|)$ with $i=0,1$ are orthogonal (every joint mixed state with an ancilla of any dimension can be purified with an ancilla of dimension $d$). Here we use the notation $|\Psi\kk:=\sum_{m,n}|m\>|n\>$ which associates an operator $\Psi$ to a bipartite vector. It is easy to see that the orthogonality between the two output states is equivalent to the following condition [@noteort] $$C_0(I\otimes \rho)C_1=0, \label{discchan}$$ where $\rho:=\Psi^*\Psi^T$, with $\Psi^*$ and $\Psi^T$ denoting the complex conjugate and transpose of $\Psi$ in the canonical basis $\{|n\>\}$, respectively. The criterion in Eq.  is too restrictive for memory channels. Indeed, the correct condition for perfect discriminability of two memory channels $\map C_i$ with $i=0,1$ is equivalent to the existence of a tester $\{P_i\}$ with $i=0,1$, such that $$\label{PC} \Tr[P_i C_j]=\delta_{ij},$$ which means that the two channels can be perfectly discriminated by a measurement scheme as that of Fig. \[genersch\]. Using Eqs. (\[povm\]) and (\[C\]), Eq. (\[PC\]) becomes $\Tr[\tilde P_i \tilde C_j]=\delta_{ij}$, whence the states $\tilde C_i$ with $i=0,1$ are orthogonal, and the same derivation as for Eq. (\[discchan\]) leads to $$C_0\left(I\otimes\Xi^{(N)}\right)C_1=0, \label{condiscr}$$ with $\Xi^{(N)}$ as in Eq. . In Eq.  the identity operator acts only on space $2N-1$, differently from Eq.  where it acts on all output spaces. It is interesting to analyze the special case of memory channels made of sequences of independent channels $\{\map C_{ij}\}_{1\leq j\leq N}$ and $i=0,1$ (in Fig. \[genersch\], the memory channel is replaced by an array of channels without the ancillas $A_1$ and $A_2$). The condition for perfect discriminability is the same as Eq.  
with $C_0$ and $C_1$ replaced by $\bigotimes_j C_{ij}$ for $i=0,1$, respectively. In terms of a Kraus form $\map C_i=\sum_jK_{ij}\cdot K_{ij}^\dag$, Eq. (\[condiscr\]) becomes the orthogonality condition $\bb K_{0j}|\left(I\otimes\Xi^{(N)}\right)|K_{1k}\kk=0$, which for the sequences of maps becomes $$\bigotimes_{l=1}^N\bb K^{l}_{0j_l}|\left(I\otimes\Xi^{(N)}\right)\bigotimes_{m=1}^N |K^{m}_{1k_m}\kk=0$$ for all choices of indices $(j),(k)$, where $K^m_{ij}$ are the Kraus operators for the channel $\map C_{im}$. For sets composed of single channels $\map C_i$ with $i=0,1$, the condition becomes simply the existence of a state $\rho$ such that $$\Tr[\rho K^\dag_{0j} K_{1k}]=0,\quad\forall j,k,$$ and the minimum rank of such state $\rho$ determines the amount of entanglement required for discrimination. We now provide an example of memory channels that cannot be discriminated by a parallel scheme, but can be discriminated with a tester. Each memory channel has two uses, and is denoted as $\map C_i=\map W_i\circ\map Z_i$ for $i=0,1$, where the two uses $\map W_i$ and $\map Z_i$ are connected only through the ancilla $A$, and $\map W_i$ has input $0$ and output $A$ and $1$, and $\map Z_i$ has input $A$ and $2$ and output $3$. The first use $\map W_0$ of $\map C_0$ is the channel with $d$-dimensional input and fixed output $$\map W_0(\rho)=\frac{1}{d^2}\sum_{p,q=0}^{d-1}|p,q\>\<p,q|\otimes |p,q\>\<p,q|,$$ $|p,q\>$ being an orthonormal basis in a $d^2$-dimensional Hilbert space. The second use $\map Z_0$ of $\map C_0$ is given by $$\map Z_0(\rho)=\sum_{p,q=0}^{d-1}W_{p,q}\Tr_{A}[\rho(I_2\otimes |p,q\>\<p,q|)]W^\dag_{p,q},$$ where the unitaries $W_{p,q}:=Z^p U^q$ are the customary shift-and-multiply operators, with $Z|n\>=|n+1\>$ and $U|n\>=e^{\frac{2\pi i}d n}|n\>$.
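The shift-and-multiply operators are easy to construct explicitly; a minimal numerical sketch (variable names are ours, not the paper's notation):

```python
import numpy as np

d = 3
Z = np.roll(np.eye(d), 1, axis=0)             # shift: Z|n> = |n+1 mod d>
U = np.diag(np.exp(2j*np.pi*np.arange(d)/d))  # multiply: U|n> = w^n |n>

def W(p, q):
    """Shift-and-multiply operator W_{p,q} = Z^p U^q."""
    return np.linalg.matrix_power(Z, p) @ np.linalg.matrix_power(U, q)

# The d^2 operators W_{p,q} are unitary and mutually orthogonal in the
# Hilbert-Schmidt inner product: Tr[W_{p,q}^dag W_{p',q'}] = d * delta.
for p in range(d):
    for q in range(d):
        Wpq = W(p, q)
        assert np.allclose(Wpq @ Wpq.conj().T, np.eye(d))
for (p, q) in [(0, 1), (1, 0), (2, 2)]:
    assert abs(np.trace(W(0, 0).conj().T @ W(p, q))) < 1e-12
```

This orthogonality is what allows the causal scheme below to undo $W_{\bar p,\bar q}$ exactly once the measurement outcome is known.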
The second channel $\map C_1$ is given by $$\map W_1(\rho)=\frac I{d^2},\quad\map Z_1(\rho)=|0\>\<0|.$$ We will now show that the two channels are discriminable with a causal setup and not with a parallel one. Their Choi operators are $$\begin{split} C_0&=\frac{1}{d^2}\sum_{p,q=0}^{d-1}|p,q\>\<p,q|_{1}\otimes|W_{p,q}\kk\bb W_{p,q}|_{32}\otimes I_0,\\ C_1&=\frac{1}{d^2}\;I^{\otimes 2}_{1}\otimes |0\>\<0|_3\otimes I_{02}, \end{split} \label{chois}$$ where the output spaces $1,3$ have dimension $d^2$ and $d$, respectively. Suppose that the channels are perfectly discriminable; then, by Eq. (\[discchan\]), there exists $\rho$ such that $$C_0 (I_{13}\otimes\rho_{02}) C_1=C_0 C_1(I_{13}\otimes\rho_{02}) =0,$$ where the second equality comes from the expression of $C_1$ in Eq. . Tracing both sides over the output spaces 1 and 3, one has $\Tr_{13}[C_0C_1]\rho=0$. However, $$\Tr_{13}[C_0C_1]=\frac{I}{d^2}$$ whence $\rho=0$. This proves by contradiction that the criterion in Eq. —corresponding to parallel discrimination schemes—is not satisfied by channels $\map C_0$ and $\map C_1$. We will now show a simple causal scheme which allows perfect discrimination of the same channels. The first use of the channel is applied to any state $|\psi\>\<\psi|$, then the measurement with POVM $\{|p,q\>\<p,q|\}$ is performed at the output on system 1. Depending on the outcome $\bar p,\bar q$, the second use of the channel is applied to the state $W^\dag_{\bar p,\bar q}|1\>\<1|W_{\bar p,\bar q}$. It is clear that the output of channel $\map Z_0$ is the state $|1\>\<1|$, whereas the output of $\map Z_1$ is $|0\>\<0|$. This example highlights the need for a causal scheme to discriminate between memory channels. The causal discriminability criterion implies a notion of distance between memory channels different from the usual distance between channels.
Indeed, the discriminability criterion (\[discchan\]) between channels corresponds to the cb-norm distance [@paulsen; @kita; @notanorm]. The latter can be rewritten as follows (see, e.g., Ref. [@sacchi1]) $$\begin{split} &D_{cb}(\map C_0,\map C_1)=\max_{\rho}\N{\left(I\otimes\rho^{\frac12}\right)\Delta \left(I\otimes\rho^{\frac12}\right)}_1,\\ &\Delta:=C_0-C_1, \end{split} \label{distcb}$$ where the maximum is over all states $\rho$, and $\N{X}_1:=\Tr[\sqrt{X^\dag X}]$ denotes the trace-norm. One has $D_{cb}(\map C_0,\map C_1)\leq 2$, with equality for perfectly discriminable channels. For memory channels the discriminability criterion (\[discchan\]) corresponds to the new distance $$D(\map C_0,\map C_1):=\max_{\Xi^{(N)}}\N{\left(I\otimes\Xi^{(N)\frac12}\right)\Delta \left(I\otimes\Xi^{(N)\frac12}\right)}_1,$$ where the maximum is over all $\Xi^{(N)}$ satisfying conditions . For $N=1$ this notion reduces to the usual distance in Eq. . The easiest application of testers is the discrimination of sequences of unitary channels $(T_j)$ and $(V_j)$, with $j=1,\dots,N$. Without loss of generality we can always reduce to the discrimination of the sequence $(U_j):=(T^\dag_jV_j)$ from the constant sequence $(I)$. Let us first consider the case of sequences of two unitaries. By referring to the scheme in Fig. \[genersch\] we can restate the problem as the discrimination of $W^\dag(U_1\otimes I)W(U_2\otimes I)$ from $I$ on a bipartite system, where $W$ describes the interaction with an ancillary system. It is well known that optimal discriminability of a unitary $X$ from the identity is related to the angular spread $\Theta(X)$, defined as the maximum relative phase between two eigenvalues of $X$ [@darloppar; @acin]. Apart from the degenerate case in which $X$ has only two different eigenvalues, the discriminability of $X$ from $I$ is given by the quantity $\max\{0,\cos[\Theta(X)/2]\}\geq0$, which is zero for $\Theta(X)\geq\pi$, corresponding to perfect discriminability.
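The angular spread can be computed directly from the eigenvalue phases. A minimal numerical sketch (assuming $\Theta$ is read as the length of the smallest arc of the unit circle containing all eigenvalue phases; all names are illustrative):

```python
import numpy as np

def angular_spread(U):
    """Theta(U): length of the smallest arc of the unit circle that
    contains all eigenvalue phases of U (zero iff U ~ identity phase)."""
    phases = np.sort(np.angle(np.linalg.eigvals(U)))   # phases in (-pi, pi]
    gaps = np.diff(np.concatenate([phases, [phases[0] + 2*np.pi]]))
    return 2*np.pi - gaps.max()                        # 2*pi minus largest gap

rng = np.random.default_rng(0)

def small_unitary(d, eps):
    """Random d-dim unitary with small spread: V exp(i*eps*D) V^dag."""
    A = rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d))
    w, V = np.linalg.eigh((A + A.conj().T)/2)
    return V @ np.diag(np.exp(1j*eps*w)) @ V.conj().T

X, Y = small_unitary(4, 0.05), small_unitary(4, 0.05)
# Subadditivity under products and additivity under tensor products
# (for spreads small enough that no wrap-around occurs):
assert angular_spread(X @ Y) <= angular_spread(X) + angular_spread(Y) + 1e-9
assert np.isclose(angular_spread(np.kron(X, Y)),
                  angular_spread(X) + angular_spread(Y))
```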
Since unitary conjugation preserves $\Theta(X)$, since the angular spread of the product of two unitaries $X,Y$ satisfies the bound [@presk] $$\Theta(XY)\leq\Theta(X)+\Theta(Y), \label{spredis}$$ and since $\Theta(X\otimes Y)=\Theta(X)+\Theta(Y)$, one has $\Theta[W^\dag(U_1\otimes I)W(U_2\otimes I)]\leq\Theta(U_1\otimes U_2)$; hence no causal scheme can outperform the parallel one. By induction, one can prove that this is true for sequences of any length $N$. Indeed, defining $X_{N-1}$ as the product of the tester unitaries alternated with $U_j\otimes I$ for $1\leq j<N$, if $\Theta(X_{N-1})\leq\Theta(\bigotimes_{j=1}^{N-1}U_j)$ holds true, then it holds also for $N$, due to Eq. . By the same argument, one can also prove that the sequential scheme of Ref. [@feng] matches the performance of the parallel scheme, since there always exists $T$ such that $\Theta(UTVT^\dag)=\Theta(U\otimes V)$ (indeed it is sufficient that $T$ transforms the eigenbasis of $V$ into that of $U$, suitably matching the eigenvalues). Therefore, the schemes of Refs. [@darloppar; @acin; @feng] are optimal also for discriminating sequences of unitaries. Notice that this also includes the case of discrimination of two different permutations of a sequence of unitary transformations. Another situation in which a parallel scheme already performs optimally is the case of estimation of unitary transformations $U_g$, $g\in G$, which form a unitary representation of the group $G$.
For $N$ uses of the unitary $U_g$ the Choi operator in this case is $$R_g^{(N)}=R_g^{\otimes N},\; R_g=(U_g\otimes I)|I\kk\bb I|(U_g^\dag\otimes I).$$ The probability density of estimating $h$ for actual element $g$ is $$p(h|g)=\Tr[P_h R_g^{(N)}].$$ As a figure of merit for estimation one typically considers a cost function $c(h,g)$ averaged on $h$, with $c(h,g)=c(fh,fg)$ $\forall f\in G$ (the cost depends only on distance, not on specific location) $$C_g(p)=\int_G\mu(\d h)c(h,g)p(h|g),$$ where $\mu(\d g)$ is the invariant Haar measure on $G$. The optimal density $p$ is the one minimizing $\hat C(p):=\max_{g\in G}C_g(p)$. For every density $p(h|g)$ there exists a [ *covariant*]{} one $p_c(h|g)=p_c(fh|fg)$ $\forall f\in G$ which can be obtained as the average $p_c(h|g):=\overline{p(fh|fg)}$ over $f\in G$ (practically this corresponds to randomly transforming the input before measuring and processing the output accordingly). Since $\hat C(p_c)=\overline{C}(p)\leq\hat C(p)$, then the optimal density minimizing both costs $\hat C$ and $\overline{C}$ can be chosen as covariant. Now, since $p_c(h|g)=p_c(e|gh^{-1})$ ($e$ denoting the identity element in $G$), this means that the optimal tester must be of the covariant form $$P_h=(U_h\otimes I)^{\otimes N}P_e(U_h^\dag \otimes I)^{\otimes N}.$$ For such $P_h$, the normalization $\int_G\mu(\d h)P_h=I\otimes\Xi^{(N)}$ implies the commutation $[I\otimes \Xi^{(N)},(U_h\otimes I)^{\otimes N}]=0$, whence the POVM $\tilde P_h$ in Eq.  is itself covariant. The optimal tester problem is then equivalent to the optimal state estimation in the orbit $(I\otimes \Xi^{(N)\frac12})R^{(N)}_g(I\otimes \Xi^{(N)\frac12})$. This proves that the optimal estimation of $U_g$ with $g\in G$ compact group can be reduced to a covariant state estimation problem, and the parallel scheme of Ref. [@entest] is optimal. The possibility of achieving the same optimal estimation using a sequential scheme as in Ref. 
[@feng] remains an open problem, as does, more generally, the possibility of minimizing the amount of entanglement used by the tester. In conclusion, we considered the role of memory effects in the discrimination of memory channels and of customary channels with multiple uses. We used the new notion of [*tester*]{} [@combs], which describes any possible scheme with parallel, sequential, and combined setup of the tested channels. We provided an example of discrimination of memory channels which cannot be achieved by a parallel scheme, and for which the optimal discrimination is achieved by a sequential scheme. The new testing of memory channels corresponds to a new notion of distance between channels. Finally, we showed that for the purpose of unitary channel discrimination and estimation with multiple uses, memory effects are not needed. This work has been supported by the EC through the project SECOQC. [99]{} G. M. D’Ariano, P. Lo Presti, and M. G. A. Paris, Phys. Rev. Lett. [**87**]{}, 270404 (2001). A. Acín, Phys. Rev. Lett. [**87**]{}, 177901 (2001). M. F. Sacchi, Phys. Rev. A [**71**]{}, 062340 (2005). G. M. D’Ariano, M. F. Sacchi, and J. Kahn, Phys. Rev. A [**72**]{}, 052302 (2005). R. Duan, Y. Feng, and M. Ying, Phys. Rev. Lett. [**98**]{}, 100503 (2007). A. Chefles, A. Kitagawa, M. Takeoka, M. Sasaki, and J. Twamley, to appear in J. Phys. A: Math. Theor. G. Chiribella, G. M. D’Ariano, and P. Perinotti, arXiv:0712.1325. C. Macchiavello and G. M. Palma, Phys. Rev. A [**65**]{}, 050301(R) (2002). G. Bowen and S. Mancini, Phys. Rev. A [**69**]{}, 012306 (2004). D. Kretschmann and R. F. Werner, Phys. Rev. A [**72**]{}, 062323 (2005). N. Datta and T. C. Dorlas, J. Phys. A: Math. Theor. [**40**]{}, 8147-8164 (2007). M. B. Plenio and S. Virmani, Phys. Rev. Lett. [**99**]{}, 120504 (2007). G. M. D’Ariano, D. Kretschmann, D. Schlingemann, and R. F. Werner, Phys. Rev. A [**76**]{}, 032328 (2007). G. Gutoski and J. 
Watrous, in [*Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing (STOC)*]{}, pp. 565-574 (2007). This is the [*delayed measurement principle*]{}, stating that for any quantum setup involving measurements, there is an equivalent one in which all measurements are postponed to the final stage. The orthogonality condition is $0=(I\otimes\Psi^T)C_0(I\otimes\Psi^*\Psi^T)C_1(I\otimes\Psi^*)=H_0^\dag H_0 H_1^\dag H_1$, with $H_i=C_i^{\frac{1}{2}}(I\otimes\Psi^*)$, which holds iff $H_0 H_1^\dag=0$. V. I. Paulsen, [*Completely Bounded Maps and Operator Algebras*]{}, Pitman Research Notes in Math. 146 (Longman Scientific & Technical, Harlow, 1996). D. Aharonov, A. Kitaev, and N. Nisan, in [*Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing (STOC)*]{}, pp. 20-30 (1997). This distance is referred to in the literature as the cb-norm distance, since it is induced by the norm of complete boundedness [@paulsen] (cb-norm for short, also called the diamond norm in Ref. [@kita]). A. M. Childs, J. Preskill, and J. Renes, J. Mod. Opt. [**47**]{}, 155-176 (2000). In the degenerate case, if $\Theta(U)+\Theta(V)>\pi$, it is always possible to find $T$ such that $\Theta(UTVT^\dag)=\pi$. G. Chiribella, G. M. D’Ariano and M. F. Sacchi, Phys. Rev. A [**72**]{}, 042338 (2005).
--- abstract: | In this paper we consider the problem of full-duplex multiple-input multiple-output (MIMO) relaying between multi-antenna source and destination nodes. The principal difficulty in implementing such a system is that, due to the limited attenuation between the relay’s transmit and receive antenna arrays, the relay’s outgoing signal may overwhelm its limited-dynamic-range input circuitry, making it difficult—if not impossible—to recover the desired incoming signal. While explicitly modeling transmitter/receiver dynamic-range limitations and channel estimation error, we derive tight upper and lower bounds on the end-to-end achievable rate of decode-and-forward-based full-duplex MIMO relay systems, and propose a transmission scheme based on maximization of the lower bound. The maximization requires us to (numerically) solve a nonconvex optimization problem, for which we detail a novel approach based on bisection search and gradient projection. To gain insights into system design tradeoffs, we also derive an analytic approximation to the achievable rate and numerically demonstrate its accuracy. We then study the behavior of the achievable rate as a function of signal-to-noise ratio, interference-to-noise ratio, transmitter/receiver dynamic range, number of antennas, and training length, using optimized half-duplex signaling as a baseline. *Keywords:* MIMO relays, full-duplex relays, limited dynamic range, channel estimation. author: - 'Brian P. Day, Adam R. Margetts, Daniel W. Bliss, and Philip Schniter [^1] [^2] [^3] [^4] [^5]' bibliography: - 'macros\_abbrev.bib' - 'stc.bib' - 'books.bib' - 'comm.bib' - 'misc.bib' - 'multicarrier.bib' title: 'Full-Duplex MIMO Relaying: Achievable Rates under Limited Dynamic Range' --- Introduction {#sec:intro} ============ We consider the problem of communicating from a source node to a destination node through a relay node. 
Traditional relay systems operate in a half-duplex mode, whereby the time-frequency signal-space used for the source-to-relay link is kept orthogonal to that used for the relay-to-destination link, such as with non-overlapping time periods or frequency bands. Half-duplex operation is used to avoid the high levels of relay self-interference that are faced with full-duplex[^6] operation (see [Fig. \[fig:relay\_phy\]]{}), where the source and relay share a common time-frequency signal-space. For example, it is not unusual for the ratio between the relay’s self-interference power and desired incoming signal power to exceed 100 dB [@Hua:MIL:10], or—in general—some value larger than the dynamic range of the relay’s front-end hardware, making it impossible to recover the desired signal. The importance of *limited dynamic-range* (DR) cannot be overstressed; notice that, even if the self-interference signal was perfectly known, limited-DR renders perfect cancellation impossible. ![Full-duplex MIMO relaying from source to destination. Solid lines denote desired propagation and dashed lines denote interference.[]{data-label="fig:relay_phy"}](figures/relay_phy.eps "fig:"){width="2.25in"} Recently, multiple-input multiple-output (MIMO) relaying has been proposed as a means of increasing spectral efficiency (e.g., [@Wang:TIT:05; @Simoens:TSP:09]). By MIMO relaying, we mean that the source, relay, and destination each use multiple antennas for both reception and transmission. MIMO relaying brings the possibility of full-duplex operation through *spatial* self-interference suppression (e.g., ). As a simple example, one can imagine using the relay’s transmit array to form spatial nulls at a subset of the relay’s receive antennas, which are then free of self-interference and able to recover the desired signal. 
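The nulling idea just described can be illustrated numerically: restricting the relay's transmit signal to the null space of the self-interference channel rows belonging to a protected antenna subset keeps those antennas interference-free. A rough sketch (dimensions and all names are illustrative, not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr = 4, 4                       # relay transmit / receive antennas
# Self-interference channel from relay-transmit to relay-receive array:
H_self = rng.normal(size=(Nr, Nt)) + 1j*rng.normal(size=(Nr, Nt))

protected = [0, 1]                  # receive antennas to keep interference-free
# Precoder columns spanning the null space of the protected rows of H_self:
_, _, Vh = np.linalg.svd(H_self[protected, :])
P = Vh.conj().T[:, len(protected):]            # Nt x (Nt - |protected|)

# Any transmit signal x = P @ s produces zero self-interference at the
# protected antennas, regardless of the data symbols s.
s = rng.normal(size=(P.shape[1],)) + 1j*rng.normal(size=(P.shape[1],))
y_self = H_self @ (P @ s)
assert np.allclose(y_self[protected], 0)
```

Note that `P` has only `Nt - len(protected)` columns: the transmit degrees of freedom that remain for the relay-to-destination link after nulling.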
In forming these nulls, however, it can be seen that the relay consumes spatial degrees-of-freedom that could have been used in communicating data to the destination. Thus, maximizing the end-to-end throughput involves navigating a tradeoff between the source-to-relay link and relay-to-destination link. Of course, maximizing end-to-end throughput is more involved than simply protecting an arbitrary subset of the relay’s receive antennas; one also needs to consider which subset to protect, and the degree to which each of those antennas are protected, given the source-to-relay and relay-to-destination MIMO channel coefficients, the estimation errors on those coefficients, and the DR limitations of the various nodes. These considerations motivate the following fundamental questions about full-duplex MIMO relaying in the presence of self-interference: *1) What is the maximum achievable end-to-end throughput under a transmit power constraint? 2) How can the system be designed to achieve this throughput?* In this paper, we aim to answer these two fundamental questions while paying special attention to the effects of both limited-DR and channel estimation error. 1. Limited-DR is a natural consequence of non-ideal amplifiers, oscillators, analog-to-digital converters (ADCs), and digital-to-analog converters (DACs). To model the effects of limited receiver-DR, we inject, at each receive antenna, an additive white Gaussian “receiver distortion” with variance $\beta$ times the energy impinging on that receive antenna (where $\beta\ll 1$). Similarly, to model the effects of limited transmitter-DR, we inject, at each transmit antenna, an additive white Gaussian “transmitter noise” with variance $\kappa$ times the energy of the intended transmit signal (where $\kappa\ll 1$). Thus, $\kappa^{-1}$ and $\beta^{-1}$ characterize the transmitter and receiver dynamic ranges, respectively. 2. 
Imperfect CSI can result for several reasons, including channel time-variation, additive noise, and DR limitations. We focus on CSI imperfections that result from the use of pilot-aided least-squares (LS) channel estimation performed in the presence of limited-DR. Moreover, we consider regenerative relays that decode-and-forward (as in [@Bliss:SSP:07; @PLarsson:VTC:09; @Riihonen:ASIL:09; @Riihonen:ASIL:10; @Hua:MIL:10; @Riihonen:CISS:11]), as opposed to simpler non-regenerative relays that only amplify-and-forward. The contributions of this paper are as follows. For the full-duplex MIMO relaying problem, an explicit model for transmitter/receiver-DR limitations is proposed; pilot-aided least-squares MIMO-channel estimation, under DR limitations, is analyzed; the residual self-interference, from DR limitations and channel-estimation error, is analyzed; lower and upper bounds on the achievable rate are derived; a transmission scheme is proposed based on maximizing the achievable-rate lower bound subject to a power constraint, requiring the solution of a nonconvex optimization problem, to which we apply bisection search and Gradient Projection; an analytic approximation of the maximum achievable rate is proposed; and, the achievable rate is numerically investigated as a function of signal-to-noise ratio, interference-to-noise ratio, transmitter/receiver dynamic range, number of antennas, and number of pilots. The paper is structured as follows. In [Section \[sec:model\]]{}, we state our channel model, limited-DR model, and assumptions on the transmission protocol. Then, in [Section \[sec:analysis\]]{}, we derive upper and lower bounds on the achievable rate under pilot-aided channel estimation and partial self-interference cancellation at the relay.
In [Section \[sec:approach\]]{}, we propose a novel transmission scheme that is based on maximizing the achievable-rate lower-bound subject to a power constraint and, in [Section \[sec:approx\]]{}, we derive a closed-form approximation of the optimized achievable rate whose accuracy is numerically verified. Then, in [Section \[sec:sims\]]{}, we numerically investigate achievable rate as a function of the SNRs $(\rho{_\textsf{r}},\rho{_\textsf{d}})$, the INRs $(\eta{_\textsf{r}},\eta{_\textsf{d}})$, the dynamic range parameters $(\kappa,\beta)$, the number of antennas $(N{_\textsf{r}},N{_\textsf{d}})$, and the training length $T$, and we also investigate the gain of full-duplex signaling (over half-duplex) and partial self-interference cancellation. Finally, in [Section \[sec:conclusion\]]{}, we conclude. *Notation*: We use $(\cdot){^{\textsf{T}}}$ to denote transpose, $(\cdot)^*$ conjugate, and $(\cdot){^{\textsf{H}}}$ conjugate transpose. For matrices ${\ensuremath{\boldsymbol{A}}},{\ensuremath{\boldsymbol{B}}}\in{{\mathbb{C}}}^{M\times N}$, we use $\operatorname{tr}({\ensuremath{\boldsymbol{A}}})$ to denote trace, $\det({\ensuremath{\boldsymbol{A}}})$ to denote determinant, ${\ensuremath{\boldsymbol{A}}}\odot{\ensuremath{\boldsymbol{B}}}$ to denote elementwise (i.e., Hadamard) product, $\operatorname{sum}({\ensuremath{\boldsymbol{A}}})\in{{\mathbb{C}}}$ to denote the sum over all elements, $\operatorname{vec}({\ensuremath{\boldsymbol{A}}})\in{{\mathbb{C}}}^{MN}$ to denote vectorization, $\operatorname{diag}({\ensuremath{\boldsymbol{A}}})$ to denote the diagonal matrix with the same diagonal elements as ${\ensuremath{\boldsymbol{A}}}$, $\operatorname{Diag}({\ensuremath{\boldsymbol{a}}})$ to denote the diagonal matrix whose diagonal is constructed from the vector ${\ensuremath{\boldsymbol{a}}}$, and $[{\ensuremath{\boldsymbol{A}}}]_{m,n}$ to denote the element in the $m^{th}$ row and $n^{th}$ column of ${\ensuremath{\boldsymbol{A}}}$. 
We denote expectation by $\operatorname{E}\{\cdot\}$, covariance by $\operatorname{Cov}\{\cdot\}$, statistical independence by $\operatorname{\perp\!\!\!\perp}$, the circular complex Gaussian pdf with mean vector ${\ensuremath{\boldsymbol{m}}}$ and covariance matrix ${\ensuremath{\boldsymbol{Q}}}$ by ${\ensuremath{\mathcal{CN}}}({\ensuremath{\boldsymbol{m}}},{\ensuremath{\boldsymbol{Q}}})$, and the Kronecker delta sequence by $\delta_k$. Finally, ${\ensuremath{\boldsymbol{I}}}$ denotes the identity matrix, ${{\mathbb{C}}}$ the complex field, and ${{\mathbb{Z}}}^+$ the positive integers. System Model {#sec:model} ============== We will use ${N{_\textsf{s}}}$ and ${N{_\textsf{r}}}$ to denote the number of transmit antennas at the source and relay, respectively, and ${M{_\textsf{r}}}$ and ${M{_\textsf{d}}}$ to denote the number of receive antennas at the relay and destination, respectively. Here and in the sequel, we use subscript-$\textsf{s}$ for source, subscript-$\textsf{r}$ for relay, and subscript-$\textsf{d}$ for destination. Similarly, we will use subscript-$\textsf{sr}$ for source-to-relay, subscript-$\textsf{rd}$ for relay-to-destination, subscript-$\textsf{rr}$ for relay-to-relay, and subscript-$\textsf{sd}$ for source-to-destination. At times, we will omit the subscripts when referring to common quantities. For example, we will use ${\ensuremath{\boldsymbol{s}}}(t)\in{{\mathbb{C}}}^{N}$ to denote the time $t\!\in\!{{\mathbb{Z}}}^+$ noisy signals radiated by the transmit antenna arrays, and ${\ensuremath{\boldsymbol{u}}}(t)\in{{\mathbb{C}}}^{M}$ to denote the time-$t$ undistorted signals collected by the receive antenna arrays.
More specifically, the source’s and relay’s radiated signals are ${\ensuremath{\boldsymbol{s}}}{_\textsf{s}}(t)\in{{\mathbb{C}}}^{{N{_\textsf{s}}}}$ and ${\ensuremath{\boldsymbol{s}}}{_\textsf{r}}(t)\in{{\mathbb{C}}}^{{N{_\textsf{r}}}}$, respectively, while the relay’s and destination’s collected signals are ${\ensuremath{\boldsymbol{u}}}{_\textsf{r}}(t)\in{{\mathbb{C}}}^{{M{_\textsf{r}}}}$ and ${\ensuremath{\boldsymbol{u}}}{_\textsf{d}}(t)\in{{\mathbb{C}}}^{{M{_\textsf{d}}}}$, respectively. Propagation Channels {#sec:chan} ---------------------- We assume that propagation between each transmitter-receiver pair can be characterized by a Rayleigh-fading MIMO channel ${\ensuremath{\boldsymbol{H}}}\in{{\mathbb{C}}}^{M\times N}$ corrupted by additive white Gaussian noise (AWGN) ${\ensuremath{\boldsymbol{n}}}(t)$. By “Rayleigh fading,” we mean that $\operatorname{vec}({\ensuremath{\boldsymbol{H}}})\sim{\ensuremath{\mathcal{CN}}}({\ensuremath{\boldsymbol{0}}},{\ensuremath{\boldsymbol{I}}}_{MN})$, and by “AWGN,” we mean that ${\ensuremath{\boldsymbol{n}}}(t) \sim {\ensuremath{\mathcal{CN}}}({\ensuremath{\boldsymbol{0}}},{\ensuremath{\boldsymbol{I}}}_{M})$.
The time-$t$ radiated signals ${\ensuremath{\boldsymbol{s}}}(t)$ are then related to the received signals ${\ensuremath{\boldsymbol{u}}}(t)$ via $$\begin{aligned} {\ensuremath{\boldsymbol{u}}}{_\textsf{r}}(t) &= \sqrt{\rho{_\textsf{r}}} {\ensuremath{\boldsymbol{H}}}{_\textsf{sr}}{\ensuremath{\boldsymbol{s}}}{_\textsf{s}}(t) + \sqrt{\eta{_\textsf{r}}} {\ensuremath{\boldsymbol{H}}}{_\textsf{rr}}{\ensuremath{\boldsymbol{s}}}{_\textsf{r}}(t) + {\ensuremath{\boldsymbol{n}}}{_\textsf{r}}(t) \label{eq:uR}\\ {\ensuremath{\boldsymbol{u}}}{_\textsf{d}}(t) &= \sqrt{\rho{_\textsf{d}}} {\ensuremath{\boldsymbol{H}}}{_\textsf{rd}}{\ensuremath{\boldsymbol{s}}}{_\textsf{r}}(t) + \sqrt{\eta{_\textsf{d}}} {\ensuremath{\boldsymbol{H}}}{_\textsf{sd}}{\ensuremath{\boldsymbol{s}}}{_\textsf{s}}(t) + {\ensuremath{\boldsymbol{n}}}{_\textsf{d}}(t) . \label{eq:uD}\end{aligned}$$ In [(\[eq:uR\])]{}-[(\[eq:uD\])]{}, $\rho{_\textsf{r}}>0$ and $\rho{_\textsf{d}}>0$ denote the signal-to-noise ratio (SNR) at the relay and destination, while $\eta{_\textsf{r}}>0$ and $\eta{_\textsf{d}}>0$ denote the interference-to-noise ratio (INR) at the relay and destination. (As described in the sequel, the destination treats the source-to-destination link as interference). The INR $\eta{_\textsf{r}}$ will depend on the separation between, and orientation of, the relay’s transmit and receive antenna arrays [@Riihonen:CISS:11], whereas the INR $\eta{_\textsf{d}}$ will depend on the separation between source and destination modems, so that typically $\eta{_\textsf{d}}\ll \eta{_\textsf{r}}$. We emphasize that [(\[eq:uR\])]{}-[(\[eq:uD\])]{} models the channels ${\ensuremath{\boldsymbol{H}}}{_\textsf{sr}}$, ${\ensuremath{\boldsymbol{H}}}{_\textsf{rr}}$, ${\ensuremath{\boldsymbol{H}}}{_\textsf{rd}}$, and ${\ensuremath{\boldsymbol{H}}}{_\textsf{sd}}$, as time-invariant quantities. 
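To make the channel model concrete, the following sketch draws the four Rayleigh-fading channels and AWGN terms and forms one time-$t$ realization of [(\[eq:uR\])]{}-[(\[eq:uD\])]{}; the array sizes and the SNR/INR values are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

def cn(*shape):
    # i.i.d. CN(0,1) samples: Rayleigh-fading entries / AWGN
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

N_s, N_r, M_r, M_d = 2, 2, 2, 2                    # hypothetical array sizes
rho_r, rho_d, eta_r, eta_d = 10.0, 10.0, 1e4, 0.1  # illustrative SNRs / INRs (linear)

H_sr, H_rr, H_rd, H_sd = cn(M_r, N_s), cn(M_r, N_r), cn(M_d, N_r), cn(M_d, N_s)
s_s, s_r = cn(N_s), cn(N_r)                        # radiated signals at time t

# Received signals: desired term + interference term + AWGN
u_r = np.sqrt(rho_r) * H_sr @ s_s + np.sqrt(eta_r) * H_rr @ s_r + cn(M_r)
u_d = np.sqrt(rho_d) * H_rd @ s_r + np.sqrt(eta_d) * H_sd @ s_s + cn(M_d)
```

Here $\eta_{\textsf r}\gg\rho_{\textsf r}$ reflects the typical dominance of relay self-interference before any suppression.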
Transmission Protocol {#sec:protocol} ----------------------- For full-duplex decode-and-forward relaying, we partition the time indices $t=0,1,2,\dots$ into a sequence of communication epochs $\{{\ensuremath{\mathcal{T}}}_i\}_{i=0}^\infty$ where, during epoch ${\ensuremath{\mathcal{T}}}_i\subset{{\mathbb{Z}}}^+$, the source communicates the $i^{th}$ information packet to the relay, while simultaneously the relay communicates the $(i\!-\!1)^{th}$ information packet to the destination. Before the first data communication epoch, we assume the existence of a training epoch ${\ensuremath{\mathcal{T}}}{_\textsf{train}}$ during which the modems estimate the channel state. From the estimated channel state, the data communication design parameters are optimized and the resulting parameters are used for every data communication epoch. Since the design and analysis will be identical for every data-communication epoch (as a consequence of channel time-invariance), we suppress the index $i$ in the sequel and refer to an arbitrary data communication epoch as ${\ensuremath{\mathcal{T}}}{_\textsf{data}}$. The training epoch is partitioned into two equal-length periods (i.e., ${\ensuremath{\mathcal{T}}}{_\textsf{train}}[1]$ and ${\ensuremath{\mathcal{T}}}{_\textsf{train}}[2]$) to avoid self-interference when estimating the channel matrices. Each data epoch is also partitioned into two periods (i.e., ${\ensuremath{\mathcal{T}}}{_\textsf{data}}[1]$ and ${\ensuremath{\mathcal{T}}}{_\textsf{data}}[2]$) of normalized duration $\tau\in[0,1]$ and $1-\tau$, respectively, over which the transmission parameters can be independently optimized. As we shall see in the sequel, such flexibility is critical when the INR $\eta{_\textsf{r}}$ is large relative to the SNR $\rho{_\textsf{r}}$. Moreover, this latter partitioning allows us to formulate both half- and full-duplex schemes as special cases of a more general transmission protocol. 
For use in the sequel, we find it convenient to define $\tau[1] {\triangleq}\tau$ and $\tau[2]{\triangleq}1-\tau$. Within each of these periods, we assume that the transmitted signals are zero-mean and wide-sense stationary. Limited Transmitter Dynamic Range {#sec:lim_tdr} ----------------------------------- We model the effect of limited transmitter dynamic range (DR) by injecting, per transmit antenna, an independent zero-mean Gaussian “transmitter noise” whose variance is $\kappa$ times the energy of the *intended* transmit signal at that antenna. In particular, say that ${\ensuremath{\boldsymbol{x}}}(t)\in{{\mathbb{C}}}^{N}$ denotes the transmitter’s intended time-$t$ transmit signal, and say ${\ensuremath{\boldsymbol{Q}}}{\triangleq}\operatorname{Cov}\{{\ensuremath{\boldsymbol{x}}}(t)\}$ over the relevant time period (e.g., $t\in{\ensuremath{\mathcal{T}}}{_\textsf{data}}[1]$). We then write the time-$t$ noisy radiated signal as $${\ensuremath{\boldsymbol{s}}}(t) = {\ensuremath{\boldsymbol{x}}}(t) + {\ensuremath{\boldsymbol{c}}}(t) ~ \text{s.t.} \left\{ \begin{array}{l} {\ensuremath{\boldsymbol{c}}}(t) \sim {\ensuremath{\mathcal{C}}}{\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{0}}},\kappa \operatorname{diag}({\ensuremath{\boldsymbol{Q}}})) \\ {\ensuremath{\boldsymbol{c}}}(t) \operatorname{\perp\!\!\!\perp}{\ensuremath{\boldsymbol{x}}}(t)\\ {\ensuremath{\boldsymbol{c}}}(t) \operatorname{\perp\!\!\!\perp}{\ensuremath{\boldsymbol{c}}}(t'){\ensuremath{\text{\raisebox{-0.5mm}{$\bigl|_{t'\neq t}$}}}} \quad , \end{array} \right. \label{eq:tx}$$ where ${\ensuremath{\boldsymbol{c}}}(t)\in{{\mathbb{C}}}^{N}$ denotes transmitter noise and $\operatorname{\perp\!\!\!\perp}$ statistical independence. Typically, $\kappa\ll 1$. 
As shown by measurements of various hardware setups (e.g., [@Santella:TVT:98; @Suzuki:JSAC:08]), the independent Gaussian noise model in [(\[eq:tx\])]{} closely approximates the combined effects of additive power-amp noise, non-linearities in the DAC and power-amp, and oscillator phase noise. Moreover, the dependence of the transmitter-noise variance on intended signal power in [(\[eq:tx\])]{} follows directly from the definition of limited dynamic range. Limited Receiver Dynamic Range {#sec:lim_rdr} -------------------------------- We model the effect of limited receiver-DR by injecting, per receive antenna, an independent zero-mean Gaussian “receiver distortion” whose variance is $\beta$ times the energy collected by that antenna. In particular, say that ${\ensuremath{\boldsymbol{u}}}(t)\in{{\mathbb{C}}}^{M}$ denotes the receiver’s undistorted time-$t$ received vector, and say ${\ensuremath{\boldsymbol{\Phi}}}{\triangleq}\operatorname{Cov}\{{\ensuremath{\boldsymbol{u}}}(t)\}$ over the relevant time period (e.g., $t\in{\ensuremath{\mathcal{T}}}{_\textsf{data}}[1]$). We then write the distorted post-ADC received signal as $${\ensuremath{\boldsymbol{y}}}(t) = {\ensuremath{\boldsymbol{u}}}(t) + {\ensuremath{\boldsymbol{e}}}(t) ~ \text{s.t.} ~ \left\{ \begin{array}{l} {\ensuremath{\boldsymbol{e}}}(t) \sim {\ensuremath{\mathcal{C}}}{\ensuremath{\mathcal{N}}}({\ensuremath{\boldsymbol{0}}},\beta \operatorname{diag}({\ensuremath{\boldsymbol{\Phi}}})) \\ {\ensuremath{\boldsymbol{e}}}(t) \operatorname{\perp\!\!\!\perp}{\ensuremath{\boldsymbol{u}}}(t)\\ {\ensuremath{\boldsymbol{e}}}(t) \operatorname{\perp\!\!\!\perp}{\ensuremath{\boldsymbol{e}}}(t'){\ensuremath{\text{\raisebox{-0.5mm}{$\bigl|_{t'\neq t}$}}}} \quad , \end{array} \right. \label{eq:rx}$$ where ${\ensuremath{\boldsymbol{e}}}(t)\in{{\mathbb{C}}}^{M}$ is additive distortion. Typically, $\beta\ll 1$. 
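A minimal numerical sketch of both injection models [(\[eq:tx\])]{} and [(\[eq:rx\])]{} follows; the dimensions, sample count, white intended signal, and values of $\kappa,\beta$ are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
kappa, beta = 1e-3, 1e-3      # illustrative DR parameters (both << 1)
N, M, T = 4, 4, 100_000       # antennas and number of time samples (hypothetical)

def cn(shape, var=1.0):
    # i.i.d. CN(0, var) samples
    return np.sqrt(var / 2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

x = cn((N, T))                # intended transmit signal with Q = Cov{x(t)} = I
Q = np.eye(N)

# Transmitter noise c(t) ~ CN(0, kappa*diag(Q)):
c = cn((N, T), var=kappa)
s = x + c                     # noisy radiated signal

H = cn((M, N))
u = H @ s                     # undistorted received signal
Phi_diag = (1 + kappa) * np.real(np.diag(H @ Q @ H.conj().T))  # diag of Cov{u(t)}

# Receiver distortion e(t) ~ CN(0, beta*diag(Phi)):
e = np.sqrt(beta * Phi_diag / 2)[:, None] * (
        rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
y = u + e                     # distorted post-ADC received signal
```

Empirically, the per-antenna distortion powers match $\kappa\operatorname{diag}(\boldsymbol{Q})$ and $\beta\operatorname{diag}(\boldsymbol{\Phi})$, as the model prescribes.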
From a theoretical perspective, automatic gain control (AGC) followed by dithered uniform quantization [@Gray:TIT:93] yields quantization errors whose statistics closely match the model [(\[eq:rx\])]{}. More importantly, studies (e.g., [@Namgoong:TWC:05]) have shown that the independent Gaussian distortion model [(\[eq:rx\])]{} accurately captures the combined effects of additive AGC noise, non-linearities in the ADC and gain-control, and oscillator phase noise in practical hardware. [Figure \[fig:relay\_direct\]]{} summarizes our model. The dashed lines indicate that the distortion levels are proportional to mean energy levels and not to the instantaneous value.
![Our model of full-duplex MIMO relaying under limited transmitter/receiver-DR. The dashed lines denote statistical dependence.[]{data-label="fig:relay_direct"}](figures/relay_direct.eps){width="\figsizein"} Analysis of Achievable Rate {#sec:analysis} ============================= Pilot-Aided Channel Estimation {#sec:chan_est} -------------------------------- In this section, we describe the pilot-aided channel estimation procedure that is used to learn the channel matrices ${\ensuremath{\boldsymbol{H}}}$. In our protocol, the training epoch consists of two periods, ${\ensuremath{\mathcal{T}}}{_\textsf{train}}[1]$ and ${\ensuremath{\mathcal{T}}}{_\textsf{train}}[2]$, each spanning $T N$ channel uses (for some $T\in{{\mathbb{Z}}}^+$). For all times $t\in{\ensuremath{\mathcal{T}}}{_\textsf{train}}[1]$, we assume that the source transmits a known pilot signal and the relay remains silent, while, for all $t\in{\ensuremath{\mathcal{T}}}{_\textsf{train}}[2]$, the relay transmits and the source remains silent. Moreover, we construct the pilot sequence ${\ensuremath{\boldsymbol{X}}} = [{\ensuremath{\boldsymbol{x}}}(1),\dots,{\ensuremath{\boldsymbol{x}}}(TN)]\in{{\mathbb{C}}}^{N\times TN}$ to satisfy $\frac{1}{2T}{\ensuremath{\boldsymbol{X}}} {\ensuremath{\boldsymbol{X}}}{^{\textsf{H}}}= {\ensuremath{\boldsymbol{I}}}_{N}$, where the scaling has been chosen to satisfy a per-period power constraint of the form $\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}) = 2$, consistent with the data power constraints that will be described in the sequel.
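One concrete pilot meeting $\frac{1}{2T}\boldsymbol{X}\boldsymbol{X}^{\mathsf{H}}=\boldsymbol{I}_N$ is $T$ repetitions of $\sqrt{2}\,\boldsymbol{I}_N$. The sketch below builds such a pilot and applies the least-squares estimator $\sqrt{\alpha}\hat{\boldsymbol{H}}=\frac{1}{2T}\boldsymbol{Y}\boldsymbol{X}^{\mathsf{H}}$ of this section; the dimensions and link gain are illustrative, and the DR distortion terms are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, T = 3, 3, 64            # hypothetical antenna counts and training length
alpha = 10.0                  # link gain (one of rho_r, eta_r, rho_d, eta_d)

# Pilot X in C^{N x TN} with (1/(2T)) X X^H = I_N (per-use power tr(Q) = 2):
X = np.sqrt(2) * np.tile(np.eye(N), T)

H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
W = (rng.standard_normal((M, T * N)) + 1j * rng.standard_normal((M, T * N))) / np.sqrt(2)

Y = np.sqrt(alpha) * H @ X + W                       # observation (DR terms omitted)
H_hat = (Y @ X.conj().T) / (2 * T * np.sqrt(alpha))  # least-squares estimate

# Relative error scales like 1/(2*T*alpha), so it shrinks with the training length:
err = np.linalg.norm(H_hat - H) ** 2 / np.linalg.norm(H) ** 2
```

The design choice of repeating a scaled identity keeps the pilot columns trivially orthogonal; any scaled unitary block (e.g., a DFT matrix) would serve equally well.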
Our limited transmitter/receiver-DR model implies that the (distorted) space-time pilot signal observed by a given receiver takes the form $$\begin{aligned} {\ensuremath{\boldsymbol{Y}}} &= \sqrt{\alpha} {\ensuremath{\boldsymbol{H}}}({\ensuremath{\boldsymbol{X}}} + {\ensuremath{\boldsymbol{C}}}) + {\ensuremath{\boldsymbol{N}}} + {\ensuremath{\boldsymbol{E}}} , \label{eq:Y}\end{aligned}$$ where $\alpha\in\{\rho{_\textsf{r}},\eta{_\textsf{r}},\rho{_\textsf{d}}, \eta{_\textsf{d}}\}$ for ${\ensuremath{\boldsymbol{H}}}\in\{{\ensuremath{\boldsymbol{H}}}{_\textsf{sr}},{\ensuremath{\boldsymbol{H}}}{_\textsf{rr}},{\ensuremath{\boldsymbol{H}}}{_\textsf{rd}}, {\ensuremath{\boldsymbol{H}}}{_\textsf{sd}}\}$, respectively. In [(\[eq:Y\])]{}, ${\ensuremath{\boldsymbol{C}}}, {\ensuremath{\boldsymbol{E}}}$ and ${\ensuremath{\boldsymbol{N}}}$ are $N\times TN$ matrices of transmitter noise, receiver distortion, and AWGN, respectively. At the conclusion of training, we assume that each receiver uses least-squares (LS) to estimate the corresponding channel ${\ensuremath{\boldsymbol{H}}}$ as $$\begin{aligned} \sqrt{\alpha}{\ensuremath{\Hat{\boldsymbol{H}}}} &\triangleq \frac{1}{2T} {\ensuremath{\boldsymbol{Y}}} {\ensuremath{\boldsymbol{X}}}{^{\textsf{H}}}, \label{eq:Hhat}\end{aligned}$$ and communicates this estimate to the transmitter.[^7] In the sequel, it will be useful to decompose the channel estimate into the true channel plus an estimation error. 
In [Appendix \[app:chan\_est\]]{}, it is shown that such a decomposition takes the form $$\begin{aligned} \sqrt{\alpha}{\ensuremath{\Hat{\boldsymbol{H}}}} &= \sqrt{\alpha} {\ensuremath{\boldsymbol{H}}} + {\ensuremath{\boldsymbol{D}}}^{\frac{1}{2}} {\ensuremath{\Tilde{\boldsymbol{H}}}}, \label{eq:Htilde}\end{aligned}$$ where the entries of ${\ensuremath{\Tilde{\boldsymbol{H}}}}$ are i.i.d. ${\ensuremath{\mathcal{CN}}}(0,1)$, and where $$\begin{aligned} {\ensuremath{\boldsymbol{D}}} &= \frac{1}{2T} \bigg( (1 + \beta){\ensuremath{\boldsymbol{I}}} + \alpha \frac{2\kappa}{N} {\ensuremath{\boldsymbol{H}}}{\ensuremath{\boldsymbol{H}}}{^{\textsf{H}}}\nonumber\\&\quad + \alpha \frac{2\beta}{N} (1 + \kappa) \operatorname{diag}\Big( {\ensuremath{\boldsymbol{H}}} {\ensuremath{\boldsymbol{H}}}{^{\textsf{H}}}\Big) \bigg) \label{eq:chan_est_err}\end{aligned}$$ characterizes the spatial covariance of the estimation error. Using $\beta \ll 1$ and $\kappa \ll 1$, this covariance reduces to $$\begin{aligned} {\ensuremath{\boldsymbol{D}}} &\approx \frac{1}{2T} \bigg( {\ensuremath{\boldsymbol{I}}} + \alpha \frac{2\kappa}{N} {\ensuremath{\boldsymbol{H}}}{\ensuremath{\boldsymbol{H}}}{^{\textsf{H}}}+ \alpha \frac{2\beta}{N} \operatorname{diag}\Big( {\ensuremath{\boldsymbol{H}}} {\ensuremath{\boldsymbol{H}}}{^{\textsf{H}}}\Big) \bigg) . \end{aligned}$$ Interference Cancellation and Equivalent Channel {#sec:cancellation} -------------------------------------------------- We now describe how the relay partially cancels its self-interference, and construct a simplified model for the result. Recall that the data communication period is partitioned into two periods, ${\ensuremath{\mathcal{T}}}{_\textsf{data}}[1]$ and ${\ensuremath{\mathcal{T}}}{_\textsf{data}}[2]$, and that—within each—the transmitted signals are wide-sense stationary.
Thus, at any time $t\in{\ensuremath{\mathcal{T}}}{_\textsf{data}}[l]$, the relay’s (instantaneous, distorted) observed signal takes the form $$\begin{aligned} {\ensuremath{\boldsymbol{y}}}{_\textsf{r}}(t) &= (\sqrt{\rho{_\textsf{r}}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}- {\ensuremath{\boldsymbol{D}}}{_\textsf{sr}}^{\frac{1}{2}}{\ensuremath{\Tilde{\boldsymbol{H}}}}{_\textsf{sr}}) ({\ensuremath{\boldsymbol{x}}}{_\textsf{s}}(t) + {\ensuremath{\boldsymbol{c}}}{_\textsf{s}}(t)) + {\ensuremath{\boldsymbol{n}}}{_\textsf{r}}(t) + {\ensuremath{\boldsymbol{e}}}{_\textsf{r}}(t) \nonumber\\&\quad + (\sqrt{\eta{_\textsf{r}}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}- {\ensuremath{\boldsymbol{D}}}{_\textsf{rr}}^{\frac{1}{2}}{\ensuremath{\Tilde{\boldsymbol{H}}}}{_\textsf{rr}}) ({\ensuremath{\boldsymbol{x}}}{_\textsf{r}}(t) + {\ensuremath{\boldsymbol{c}}}{_\textsf{r}}(t)) , \label{eq:y}\end{aligned}$$ as implied by [Fig. \[fig:relay\_direct\]]{} and [(\[eq:Htilde\])]{}. Defining the aggregate noise term $$\begin{aligned} {\ensuremath{\boldsymbol{v}}}{_\textsf{r}}(t) &{\triangleq}\sqrt{\rho{_\textsf{r}}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{\ensuremath{\boldsymbol{c}}}{_\textsf{s}}(t) - {\ensuremath{\boldsymbol{D}}}^{\frac{1}{2}}{_\textsf{sr}}{\ensuremath{\Tilde{\boldsymbol{H}}}}{_\textsf{sr}}({\ensuremath{\boldsymbol{x}}}{_\textsf{s}}(t) + {\ensuremath{\boldsymbol{c}}}{_\textsf{s}}(t)) + {\ensuremath{\boldsymbol{n}}}{_\textsf{r}}(t) \nonumber\\&\quad + {\ensuremath{\boldsymbol{e}}}{_\textsf{r}}(t) + \sqrt{\eta{_\textsf{r}}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{\ensuremath{\boldsymbol{c}}}{_\textsf{r}}(t) - {\ensuremath{\boldsymbol{D}}}{_\textsf{rr}}^{\frac{1}{2}}{\ensuremath{\Tilde{\boldsymbol{H}}}}{_\textsf{rr}}({\ensuremath{\boldsymbol{x}}}{_\textsf{r}}(t) + {\ensuremath{\boldsymbol{c}}}{_\textsf{r}}(t)) , \label{eq:v}\end{aligned}$$ we can write the observed signal as ${\ensuremath{\boldsymbol{y}}}{_\textsf{r}}(t) = 
\sqrt{\rho{_\textsf{r}}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{\ensuremath{\boldsymbol{x}}}{_\textsf{s}}(t) + \sqrt{\eta{_\textsf{r}}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{\ensuremath{\boldsymbol{x}}}{_\textsf{r}}(t) + {\ensuremath{\boldsymbol{v}}}{_\textsf{r}}(t)$, where the self-interference term $\sqrt{\eta{_\textsf{r}}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{\ensuremath{\boldsymbol{x}}}{_\textsf{r}}(t)$ is known and thus can be canceled. The interference-canceled signal ${\ensuremath{\boldsymbol{z}}}{_\textsf{r}}(t){\triangleq}{\ensuremath{\boldsymbol{y}}}{_\textsf{r}}(t) - \sqrt{\eta{_\textsf{r}}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{\ensuremath{\boldsymbol{x}}}{_\textsf{r}}(t)$ can then be written as $$\begin{aligned} {\ensuremath{\boldsymbol{z}}}{_\textsf{r}}(t) &= \sqrt{\rho{_\textsf{r}}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{\ensuremath{\boldsymbol{x}}}{_\textsf{s}}(t) + {\ensuremath{\boldsymbol{v}}}{_\textsf{r}}(t) . \label{eq:z}\end{aligned}$$ Equation [(\[eq:z\])]{} shows that, in effect, the information signal ${\ensuremath{\boldsymbol{x}}}{_\textsf{s}}(t)$ propagates through a known channel $\sqrt{\rho{_\textsf{r}}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}$ corrupted by an aggregate (possibly non-Gaussian) noise ${\ensuremath{\boldsymbol{v}}}{_\textsf{r}}(t)$, whose $({\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}},{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}})$-conditional covariance we denote as ${\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{r}}[l]{\triangleq}\operatorname{Cov}\{{\ensuremath{\boldsymbol{v}}}{_\textsf{r}}(t){\,|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}},{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\}_{t\in{\ensuremath{\mathcal{T}}}{_\textsf{data}}[l]}$, recalling that $l\in\{1,2\}$ indexes the data-period. 
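The cancellation step can be sketched numerically as follows (all array sizes, gains, and the channel-estimation-error level are illustrative assumptions); with accurate estimates, the residual after subtracting $\sqrt{\eta_{\textsf r}}\hat{\boldsymbol{H}}_{\textsf{rr}}\boldsymbol{x}_{\textsf r}(t)$ sits far below the uncanceled self-interference.

```python
import numpy as np

rng = np.random.default_rng(4)
M_r, N_s, N_r = 4, 4, 4       # hypothetical antenna counts
rho_r, eta_r = 10.0, 1e4      # illustrative SNR / INR at the relay

def cn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H_sr, H_rr = cn(M_r, N_s), cn(M_r, N_r)
sig = 1e-3                    # assumed estimation-error std (~ 1/sqrt(2*T*alpha))
Hh_sr, Hh_rr = H_sr + sig * cn(M_r, N_s), H_rr + sig * cn(M_r, N_r)

x_s, x_r = cn(N_s), cn(N_r)
y_r = np.sqrt(rho_r) * H_sr @ x_s + np.sqrt(eta_r) * H_rr @ x_r + cn(M_r)

# Subtract the known self-interference term, leaving the aggregate noise:
z_r = y_r - np.sqrt(eta_r) * Hh_rr @ x_r

resid = np.linalg.norm(z_r - np.sqrt(rho_r) * H_sr @ x_s)   # residual noise level
raw   = np.linalg.norm(np.sqrt(eta_r) * H_rr @ x_r)         # uncanceled interference
```

Because the estimation error multiplies $\sqrt{\eta_{\textsf r}}$, the residual still grows with the INR, which is why the training length $T$ matters even under perfect-DR assumptions.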
In [Appendix \[app:cancellation\]]{}, we show that $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{r}}[l] &\approx {\ensuremath{\boldsymbol{I}}} + \kappa\rho{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[l]){\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{^{\textsf{H}}}+ {\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{sr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[l]) \nonumber\\&\quad + \kappa\eta{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l]){\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{^{\textsf{H}}}+ {\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{rr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l]) \label{eq:sigma} \nonumber\\&\quad + \beta\rho{_\textsf{r}}\operatorname{diag}( {\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[l]{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{^{\textsf{H}}}) \nonumber\\&\quad + \beta\eta{_\textsf{r}}\operatorname{diag}( {\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l]{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{^{\textsf{H}}}) ,\end{aligned}$$ where ${\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{sr}}{\triangleq}\operatorname{E}\{{\ensuremath{\boldsymbol{D}}}{_\textsf{sr}}{\,|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}\}$ and ${\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{rr}}{\triangleq}\operatorname{E}\{{\ensuremath{\boldsymbol{D}}}{_\textsf{rr}}{\,|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\}$ obey $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{D}}}} &\approx \frac{1}{2T} \bigg( {\ensuremath{\boldsymbol{I}}} + \alpha \frac{2\kappa}{N} {\ensuremath{\Hat{\boldsymbol{H}}}}{\ensuremath{\Hat{\boldsymbol{H}}}}{^{\textsf{H}}}+ \alpha \frac{2\beta}{N} \operatorname{diag}\Big( {\ensuremath{\Hat{\boldsymbol{H}}}} 
{\ensuremath{\Hat{\boldsymbol{H}}}}{^{\textsf{H}}}\Big) \bigg) \label{eq:Dhat}\end{aligned}$$ and where the approximations in [(\[eq:sigma\])]{}-[(\[eq:Dhat\])]{} follow from $\kappa\ll 1$ and $\beta\ll 1$. We note, for later use, that the channel estimation error terms ${\ensuremath{\Hat{\boldsymbol{D}}}}$ can be made arbitrarily small through appropriate choice of $T$. The effective channel from the relay to the destination can be similarly stated as $$\begin{aligned} {\ensuremath{\boldsymbol{y}}}{_\textsf{d}}(t) &= \sqrt{\rho{_\textsf{d}}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rd}}{\ensuremath{\boldsymbol{x}}}{_\textsf{r}}(t) + {\ensuremath{\boldsymbol{v}}}{_\textsf{d}}(t) \label{eq:yD}\\ {\ensuremath{\boldsymbol{v}}}{_\textsf{d}}(t) &{\triangleq}\sqrt{\rho{_\textsf{d}}}{\ensuremath{\boldsymbol{H}}}{_\textsf{rd}}{\ensuremath{\boldsymbol{c}}}{_\textsf{r}}(t) - {\ensuremath{\boldsymbol{D}}}^{\frac{1}{2}}{_\textsf{rd}}{\ensuremath{\Tilde{\boldsymbol{H}}}}{_\textsf{rd}}{\ensuremath{\boldsymbol{x}}}{_\textsf{r}}(t) + {\ensuremath{\boldsymbol{n}}}{_\textsf{d}}(t) + {\ensuremath{\boldsymbol{e}}}{_\textsf{d}}(t) \nonumber\\&\quad + \sqrt{\eta}{_\textsf{d}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sd}}\big( {\ensuremath{\boldsymbol{x}}}{_\textsf{s}}(t) + {\ensuremath{\boldsymbol{c}}}{_\textsf{s}}(t)\big) - {\ensuremath{\boldsymbol{D}}}^{\frac{1}{2}}{_\textsf{sd}}{\ensuremath{\Tilde{\boldsymbol{H}}}}{_\textsf{sd}}\big({\ensuremath{\boldsymbol{x}}}{_\textsf{s}}(t) \nonumber\\&\quad + {\ensuremath{\boldsymbol{c}}}{_\textsf{s}}(t)\big) \label{eq:vD},\end{aligned}$$ and an expression similar to [(\[eq:sigma\])]{} can be derived for the destination’s aggregate noise covariance, ${\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{d}}[l] 
{\triangleq}\operatorname{Cov}\{{\ensuremath{\boldsymbol{v}}}{_\textsf{d}}(t){\,|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rd}},{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sd}}\}_{t\in{\ensuremath{\mathcal{T}}}{_\textsf{data}}[l]}$ during data-period $l\in\{1,2\}$. Unlike the relay node, however, the destination node does not cancel the interference term $\sqrt{\eta}{_\textsf{d}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sd}}{\ensuremath{\boldsymbol{x}}}{_\textsf{s}}(t)$, but rather lumps it in with the aggregate noise ${\ensuremath{\boldsymbol{v}}}{_\textsf{d}}(t)$. The latter practice is well motivated under the assumption that $\eta{_\textsf{d}}\ll\rho{_\textsf{d}}$, i.e., that the source-to-destination link is much weaker than the relay-to-destination link. [Figure \[fig:relay\_equiv\]]{} summarizes the equivalent system model. ![Equivalent model of full-duplex MIMO relaying.[]{data-label="fig:relay_equiv"}](figures/relay_equiv.eps){width="2.8in"} Bounds on Achievable Rate {#sec:bounds} --------------------------- The end-to-end mutual information can be written, for a given time-sharing parameter $\tau$, as [@Wang:TIT:05] $$\begin{aligned} I_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}) &= \min\left\{\sum_{l=1}^2 \tau[l]
I{_\textsf{sr}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l]), \sum_{l=1}^2 \tau[l] I{_\textsf{rd}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l])\right\}, \label{eq:I}\end{aligned}$$ where $I{_\textsf{sr}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l])$ and $I{_\textsf{rd}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l])$ are the period-$l$ mutual informations of the source-to-relay channel and relay-to-destination channel, respectively, and where ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l]{\triangleq}\big( {\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[l], {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l]\big)$ and ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}{\triangleq}\big( {{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[1], {{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[2]\big)$. To analyze $I{_\textsf{sr}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l])$ and $I{_\textsf{rd}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l])$, we leverage the equivalent system model shown in [Fig. \[fig:relay\_equiv\]]{}, which includes channel-estimation error and relay-self-interference cancellation, and treats the source-to-destination link as a source of noise. The mutual-information analysis is, however, still complicated by the fact that the aggregate noises ${\ensuremath{\boldsymbol{v}}}{_\textsf{r}}(t)$ and ${\ensuremath{\boldsymbol{v}}}{_\textsf{d}}(t)$ are generally non-Gaussian, as a result of the channel-estimation-error components in [(\[eq:v\])]{} and [(\[eq:vD\])]{}. However, it is known that, among all noise distributions of a given covariance, the Gaussian one is worst from a mutual-information perspective [@Hassibi:TIT:03]. 
In particular, treating the noise as Gaussian yields the lower bounds $I{_\textsf{sr}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l]) \geq \underline{I}{_\textsf{sr}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l])$ and $I{_\textsf{rd}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l]) \geq \underline{I}{_\textsf{rd}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l])$, where [@Tse:Book:05] $$\begin{aligned} \lefteqn{\underline{I}{_\textsf{sr}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l])}\nonumber\\ &= \log\det\Big( {\ensuremath{\boldsymbol{I}}} + \rho{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[l]{\ensuremath{\Hat{\boldsymbol{H}}}}{^{\textsf{H}}}{_\textsf{sr}}{\ensuremath{\Hat{\boldsymbol{\Sigma}}}}^{-1}{_\textsf{r}}[l] \Big) \label{eq:IsR}\\ &= \log\det\Big( \rho{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[l]{\ensuremath{\Hat{\boldsymbol{H}}}}{^{\textsf{H}}}{_\textsf{sr}}+ {\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{r}}[l] \Big) -\log\det({\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{r}}[l]) \end{aligned}$$ and $$\begin{aligned} \lefteqn{ \underline{I}{_\textsf{rd}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l]) }\nonumber\\ &= \log\det\Big( {\ensuremath{\boldsymbol{I}}} + \rho{_\textsf{d}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rd}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l]{\ensuremath{\Hat{\boldsymbol{H}}}}{^{\textsf{H}}}{_\textsf{rd}}{\ensuremath{\Hat{\boldsymbol{\Sigma}}}}^{-1}{_\textsf{r}}[l] \Big) \label{eq:IRD} \\ &= \log\det\Big( \rho{_\textsf{d}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rd}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l]{\ensuremath{\Hat{\boldsymbol{H}}}}{^{\textsf{H}}}{_\textsf{rd}}+ {\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{d}}[l] \Big) -\log\det({\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{d}}[l]) , 
\end{aligned}$$ and thus a lower bound on the end-to-end $\tau$-specific achievable-rate is $$\begin{aligned} \underline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}) &= \min\Bigg\{ \underbrace{ \sum_{l=1}^2 \tau[l] \underline{I}{_\textsf{sr}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l]) }_{\displaystyle {\triangleq}\underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})},\, \underbrace{ \sum_{l=1}^2 \tau[l] \underline{I}{_\textsf{rd}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l]) }_{\displaystyle {\triangleq}\underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})} \Bigg\}. \label{eq:mutinfo}\end{aligned}$$ Moreover, the rate $\underline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$ bits[^8]-per-channel-use (bpcu) can be achieved via independent Gaussian codebooks at the transmitters and maximum-likelihood detection at the receivers [@Tse:Book:05]. A straightforward achievable-rate upper bound $\overline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$ results from the case of perfect CSI (i.e., ${\ensuremath{\Hat{\boldsymbol{D}}}}={\ensuremath{\boldsymbol{0}}}$), where ${\ensuremath{\boldsymbol{v}}}{_\textsf{r}}(t)$ and ${\ensuremath{\boldsymbol{v}}}{_\textsf{d}}(t)$ are Gaussian. Moreover, the lower bound $\underline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$ converges to the upper bound $\overline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$ as the training $T\rightarrow \infty$. 
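The equality of the two expressions for $\underline{I}{_\textsf{sr}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}[l])$ above rests on the determinant identity $\det({\ensuremath{\boldsymbol{I}}}+{\ensuremath{\boldsymbol{A}}}{\ensuremath{\boldsymbol{B}}}^{-1})=\det({\ensuremath{\boldsymbol{A}}}+{\ensuremath{\boldsymbol{B}}})/\det({\ensuremath{\boldsymbol{B}}})$. The following minimal sketch verifies this numerically on a real-valued $2\times 2$ example; all numerical values of $\rho$, ${\ensuremath{\boldsymbol{H}}}$, ${\ensuremath{\boldsymbol{Q}}}$, and ${\ensuremath{\boldsymbol{\Sigma}}}$ below are hypothetical.

```python
import math

# Minimal 2x2 real-matrix helpers; a matrix is a list of rows.
def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv(A):
    d = det(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def T(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
rho = 10.0                              # hypothetical SNR
H = [[1.0, 0.3], [0.2, 0.8]]            # hypothetical channel estimate
Q = [[0.6, 0.1], [0.1, 0.4]]            # hypothetical transmit covariance
Sigma = [[1.2, 0.1], [0.1, 0.9]]        # hypothetical noise covariance

HQHt = mmul(mmul(H, Q), T(H))
X = [[rho * HQHt[i][j] for j in range(2)] for i in range(2)]  # rho * H Q H^H

# Two forms of the log-det lower bound: they agree up to rounding.
lower_bound_form1 = math.log2(det(madd(I2, mmul(X, inv(Sigma)))))
lower_bound_form2 = math.log2(det(madd(X, Sigma))) - math.log2(det(Sigma))
print(lower_bound_form1, lower_bound_form2)
```

In the complex Hermitian case the same identity holds with conjugate transposes in place of transposes.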
Transmit Covariance Optimization {#sec:approach} ================================== We would now like to find the transmit covariance matrices ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}$ that maximize the achievable-rate lower bound $\underline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$ in [(\[eq:mutinfo\])]{} subject to the per-link power constraint ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}\in{\mathbb{Q}_{\tau}}$, where $$\begin{aligned} {\mathbb{Q}_{\tau}}{\triangleq}\bigg\{ &{{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}\text{~s.t.~} \sum_{l=1}^2 \!\tau[l] \operatorname{tr}\big({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[l]\big) \!\leq\! 1,\, \sum_{l=1}^2 \!\tau[l] \operatorname{tr}\big({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l]\big) \!\leq\! 1, \nonumber\\& {\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[l] = {\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}{^{\textsf{H}}}[l] \geq 0,\, {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l] = {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{^{\textsf{H}}}[l] \geq 0 \bigg\} , \label{eq:constraint}\end{aligned}$$ and subsequently optimize the time-sharing parameter $\tau$. We note that optimizing the transmit covariance matrices is equivalent to jointly optimizing the transmission beam-patterns and power levels. In the sequel, we denote the optimal (i.e., maximin) rate, for a given $\tau$, by $${\underline{I}_{*,\tau}}{\triangleq}\max_{{\ensuremath{\boldsymbol{{{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}}}}\in{\mathbb{Q}_{\tau}}} \min\big\{ \underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}), \underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}) \big\}, \label{eq:opt}$$ and we use ${\mathbb{Q}_{*,\tau}}$ to denote the corresponding set of maximin covariance designs ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}$ (which are, in general, not unique). 
Then, with $\tau_*{\triangleq}\arg\max_{\tau\in[0,1]} {\underline{I}_{*,\tau}}$, the optimal rate is ${\underline{I}_*}{\triangleq}\underline{I}_{*,\tau_*}$, and the corresponding set of maximin designs is ${\mathbb{Q}_*}{\triangleq}\mathbb{Q}_{*,\tau_*}$. Weighted-Sum-Rate Optimization ------------------------------ It is important to realize that, within the maximin design set ${\mathbb{Q}_{*,\tau}}$, there exists at least one “link-equalizing” design, i.e., $ \exists {{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}\in{\mathbb{Q}_{*,\tau}}~~\text{s.t.}~~ \underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}) = \underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}). $ To see why this is the case, notice that, given any maximin design ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}$ such that $\underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}) > \underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$, a simple scaling of ${\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[l]$ can yield $\underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}) = \underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$, and thus an equalizing design. A similar argument can be made when $\underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}) > \underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$. 
Referring to the set of *all* link-equalizing designs (maximin or otherwise), for a given $\tau$, as $${\mathbb{Q}_{=,\tau}}{\triangleq}\left\{ {{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}\in{\mathbb{Q}_{\tau}}~~\text{s.t.}~~ \underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}) = \underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}) \right\} ,$$ the maximin equalizing design can be found by solving either $\arg\max_{{{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}\in{\mathbb{Q}_{=,\tau}}} \underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$ or $\arg\max_{{{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}\in{\mathbb{Q}_{=,\tau}}} \underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$, where the equivalence is due to the equalizing property. More generally, the maximin equalizing design can be found by solving $$ \arg\max_{{{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}\in{\mathbb{Q}_{=,\tau}}} \underline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}},\zeta) \label{eq:desired}$$ with *any* fixed $\zeta\in[0,1]$ and the $\zeta$-weighted sum-rate $$\underline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}},\zeta) {\triangleq}\zeta \underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}) + (1-\zeta) \underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}) .$$ To find the maximin equalizing design, we propose relaxing the constraint on ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}$ from ${\mathbb{Q}_{=,\tau}}$ to ${\mathbb{Q}_{\tau}}$, yielding the $\zeta$-weighted-sum-rate optimization problem $${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}(\zeta)=\arg\max_{{{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}\in{\mathbb{Q}_{\tau}}}
\underline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}},\zeta) . \label{eq:weighted}$$ Now, *if* there exists ${\zeta_=}\in[0,1]$ such that the solution ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}({\zeta_=})$ to [(\[eq:weighted\])]{} is link-equalizing, then, because ${\mathbb{Q}_{=,\tau}}\subset{\mathbb{Q}_{\tau}}$, we know that ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}({\zeta_=})$ must also solve the problem [(\[eq:desired\])]{}, implying that ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}({\zeta_=})$ is maximin. [Figure \[fig:weighted\]]{}(a) illustrates the case where such a ${\zeta_=}$ exists. It may be, however, that no $\zeta\in[0,1]$ yields a link-equalizing solution ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}(\zeta)$, as illustrated in [Fig. \[fig:weighted\]]{}(b). This case occurs when $\underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}(\zeta))>\underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}(\zeta))$ for all $\zeta\in[0,1]$, such as when $\rho{_\textsf{r}}\gg \rho{_\textsf{d}}$. In this latter case, the maximin rate reduces to ${\underline{I}_{*,\tau}}= \lim_{\zeta\rightarrow 0} \underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}(\zeta))$. 
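A scalar caricature of the weighted-sum-rate approach may help build intuition. Below, a single power-split variable $p\in[0,1]$ stands in for the covariance design, a grid search stands in for the covariance optimization, and bisection on $\zeta$ locates a link-equalizing weight; the per-link gains $a$ and $b$ are hypothetical.

```python
import math

a, b = 50.0, 20.0                      # hypothetical per-link gains
grid = [i / 1000 for i in range(1001)]

def weighted_opt(zeta):
    # argmax over p of the zeta-weighted sum rate (grid search stands in for GP)
    best = max(grid, key=lambda p: zeta * math.log2(1 + a * p)
                                   + (1 - zeta) * math.log2(1 + b * (1 - p)))
    return math.log2(1 + a * best), math.log2(1 + b * (1 - best))  # (I_sr, I_rd)

lo, hi = 0.0, 1.0
for _ in range(40):                    # bisection on the weight zeta
    zeta = (lo + hi) / 2
    i_sr, i_rd = weighted_opt(zeta)
    if i_rd > i_sr:
        lo = zeta                      # discard the left sub-interval
    else:
        hi = zeta                      # discard the right sub-interval
i_sr, i_rd = weighted_opt((lo + hi) / 2)
print(zeta, i_sr, i_rd)   # the two rates are (nearly) equalized
```

In this toy, concavity of both rates in $p$ guarantees that an equalizing weight exists at $\zeta_= = b/(a+b)$; when no equalizing weight exists, the same bisection drifts toward $\zeta=0$.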
![Illustrative examples of $\tau$-specific $\zeta$-weighted sum-rate optimization in the case (a) when a link-equalizing solution exists and (b) when one does not exist. Here, $\underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$ and $\underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$ are the source-to-relay and relay-to-destination rates, respectively, $\underline{I}_\tau({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}},\zeta)=\zeta\underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})+(1-\zeta)\underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$ is the $\zeta$-weighted sum-rate, and ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}(\zeta)$ is the set of optimal covariance matrices for a given time-share $\tau$ and weight $\zeta$.
[]{data-label="fig:weighted"}](figures/weighted.eps){width="3.0in"} Whether or not ${\zeta_=}\in[0,1]$ actually exists, we propose to search for ${\zeta_=}$ using bisection, leveraging the fact that $\underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}(\zeta))$ is non-increasing in $\zeta$ and $\underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}(\zeta))$ is non-decreasing in $\zeta$. To perform the bisection search, we initialize the search interval ${\ensuremath{\mathcal{I}}}$ at $[0,1]$, and bisect it at each step after testing the condition $\underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}(\zeta)) > \underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}(\zeta))$ at the midpoint location $\zeta$ in ${\ensuremath{\mathcal{I}}}$; if the condition holds true, we discard the left sub-interval of ${\ensuremath{\mathcal{I}}}$, else we discard the right sub-interval. We stop bisecting when $|\underline{I}{_{\textsf{rd},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}(\zeta)) - \underline{I}{_{\textsf{sr},\tau}}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}_{*,\tau}}(\zeta))|$ falls below a threshold or a maximum number of iterations has elapsed. Notice that, even when there exists no ${\zeta_=}\in[0,1]$, bisection converges towards the desired weight $\zeta=0$. Subsequently, we optimize over $\tau\in[0,1]$ using a grid-search. Gradient Projection ------------------- At each bisection step, we use Gradient Projection (GP) to solve[^9] the $\tau$-specific, $\zeta$-weighted-sum-rate optimization problem [(\[eq:weighted\])]{}. The GP algorithm [@Bertsekas:Book:99] is defined as follows. 
For the generic problem of maximizing a function $f({\ensuremath{\boldsymbol{x}}})$ over ${\ensuremath{\boldsymbol{x}}}\in \mathcal{X}$, the GP algorithm starts with an initialization ${\ensuremath{\boldsymbol{x}}}{^{(0)}}$ and iterates the following steps for $k=0,1,2,3,\dots$ $$\begin{aligned} {\ensuremath{\Tilde{\boldsymbol{x}}}}{^{(k)}} &= {\ensuremath{\mathcal{P}}}_{{\ensuremath{\mathcal{X}}}}\big( {\ensuremath{\boldsymbol{x}}}{^{(k)}} + s{^{(k)}} \nabla f({\ensuremath{\boldsymbol{x}}}{^{(k)}}) \big) \label{eq:GPalg2}\\ {\ensuremath{\boldsymbol{x}}}{^{(k+1)}} &= {\ensuremath{\boldsymbol{x}}}{^{(k)}} + \gamma{^{(k)}}({\ensuremath{\Tilde{\boldsymbol{x}}}}{^{(k)}} - {\ensuremath{\boldsymbol{x}}}{^{(k)}}) \label{eq:GPalg1} ,\end{aligned}$$ where ${\ensuremath{\mathcal{P}}}_{{\ensuremath{\mathcal{X}}}}(\cdot)$ denotes projection onto the set $\mathcal{X}$ and $\nabla f(\cdot)$ denotes the gradient of $f(\cdot)$. The parameters $\gamma{^{(k)}} \in (0,1]$ and $s{^{(k)}}$ act as stepsizes. In the sequel, we assume $s{^{(k)}}=1~\forall k$. In applying GP to the optimization problem [(\[eq:weighted\])]{}, we first take gradient steps for ${\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[1]$ and ${\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[2]$, and then project onto the constraint set [(\[eq:constraint\])]{}. Next, we take gradient steps for ${\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[1]$ and ${\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[2]$, and then project onto the constraint set. 
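As a minimal illustration of the generic iteration [(\[eq:GPalg2\])]{}-[(\[eq:GPalg1\])]{}, consider maximizing the concave toy objective $f(x)=-(x_1-2)^2-(x_2+1)^2$ over the box $[0,1]^2$, whose projection is a per-coordinate clip; the objective, stepsize, and iteration count below are illustrative.

```python
def grad_f(x):
    # gradient of f(x) = -(x0 - 2)^2 - (x1 + 1)^2
    return [-2 * (x[0] - 2), -2 * (x[1] + 1)]

def project(x):
    # projection onto the feasible box [0,1]^2
    return [min(max(xi, 0.0), 1.0) for xi in x]

x = [0.5, 0.5]
gamma = 0.5                    # fixed stepsize (an Armijo rule could adapt this)
for _ in range(60):
    g = grad_f(x)
    # gradient step with s = 1, then project (eq:GPalg2)
    x_tilde = project([x[i] + g[i] for i in range(2)])
    # interpolate toward the projected point (eq:GPalg1)
    x = [x[i] + gamma * (x_tilde[i] - x[i]) for i in range(2)]
print(x)   # -> approaches [1.0, 0.0], the constrained maximizer
```

The constrained maximizer is the point of the box closest to the unconstrained optimum $(2,-1)$, and the iterates converge to it geometrically.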
In summary, denoting the relay gradient by ${\ensuremath{\boldsymbol{G}}}{_\textsf{r}}[l] {\triangleq}\nabla_{{\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l]}\underline{I}_\tau({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}},\zeta)$, our GP algorithm iterates the following steps to convergence: $$\begin{aligned} {\ensuremath{\boldsymbol{P}}}{_\textsf{r}}{^{(k)}}[1] &= {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{^{(k)}}[1] + {\ensuremath{\boldsymbol{G}}}{_\textsf{r}}{^{(k)}}[1] \label{eq:ourGPbegin} \\ {\ensuremath{\boldsymbol{P}}}{_\textsf{r}}{^{(k)}}[2] &= {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{^{(k)}}[2] + {\ensuremath{\boldsymbol{G}}}{_\textsf{r}}{^{(k)}}[2] \\ \hspace{-2mm} \big({\ensuremath{\Tilde{\boldsymbol{Q}}}}{_\textsf{r}}{^{(k)}}[1],{\ensuremath{\Tilde{\boldsymbol{Q}}}}{_\textsf{r}}{^{(k)}}[2]\big) &= {\ensuremath{\mathcal{P}}}_{{\ensuremath{\mathcal{X}}}} \big({\ensuremath{\boldsymbol{P}}}{_\textsf{r}}{^{(k)}}[1],{\ensuremath{\boldsymbol{P}}}{_\textsf{r}}{^{(k)}}[2]\big) \\ {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{^{(k+1)}}[1] &= {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{^{(k)}}[1] + \gamma{^{(k)}} \big( {\ensuremath{\Tilde{\boldsymbol{Q}}}}{_\textsf{r}}{^{(k)}}[1] - {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{^{(k)}}[1] \big) \\ {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{^{(k+1)}}[2] &= {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{^{(k)}}[2] + \gamma{^{(k)}} \big( {\ensuremath{\Tilde{\boldsymbol{Q}}}}{_\textsf{r}}{^{(k)}}[2] - {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{^{(k)}}[2] \big) \label{eq:ourGPend}\end{aligned}$$ and then repeats similar steps for ${\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[1]$ and ${\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[2]$. An outer loop then repeats this pair of inner loops until the maximum change in ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}$ is below a small positive threshold $\epsilon$. We now provide additional details on the GP steps. 
As for the gradient, [Appendix \[app:grad\_proj\]]{} shows that the gradient ${\ensuremath{\boldsymbol{G}}}{_\textsf{r}}[l]$ can be written as in [(\[eq:G\])]{}, at the top of the next page, $$\begin{aligned} {\ensuremath{\boldsymbol{G}}}{_\textsf{r}}[l] &= (1-\zeta)\tau[l]\rho{_\textsf{d}}\Big\{ {\ensuremath{\Hat{\boldsymbol{H}}}}{^{\textsf{H}}}{_\textsf{rd}}\Big( {\ensuremath{\boldsymbol{S}}}^{-1}{_\textsf{d}}[l] + \beta\operatorname{diag}\big({\ensuremath{\boldsymbol{S}}}^{-1}{_\textsf{d}}[l] - {\ensuremath{\Hat{\boldsymbol{\Sigma}}}}^{-1}{_\textsf{d}}[l]\big) \Big) {\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rd}}+ \kappa\operatorname{diag}\Big( {\ensuremath{\Hat{\boldsymbol{H}}}}{^{\textsf{H}}}{_\textsf{rd}}\big( {\ensuremath{\boldsymbol{S}}}^{-1}{_\textsf{d}}[l] - {\ensuremath{\Hat{\boldsymbol{\Sigma}}}}^{-1}{_\textsf{d}}[l] \big) {\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rd}}\Big) \Big\} \nonumber\\ &\quad + (1-\zeta)\tau[l]\operatorname{tr}\Big( {\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{rd}}^{*} \big({\ensuremath{\boldsymbol{S}}}^{-1}{_\textsf{d}}[l] - {\ensuremath{\Hat{\boldsymbol{\Sigma}}}}^{-1}{_\textsf{d}}[l]\big)\Big) {\ensuremath{\boldsymbol{I}}} + \zeta\tau[l]\operatorname{tr}\Big( {\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{rr}}^{*} \big({\ensuremath{\boldsymbol{S}}}^{-1}{_\textsf{r}}[l] - {\ensuremath{\Hat{\boldsymbol{\Sigma}}}}^{-1}{_\textsf{r}}[l]\big)\Big) {\ensuremath{\boldsymbol{I}}} \nonumber\\ &\quad + \zeta\tau[l]\eta{_\textsf{r}}\Big\{ \kappa\operatorname{diag}\Big( {\ensuremath{\Hat{\boldsymbol{H}}}}{^{\textsf{H}}}{_\textsf{rr}}\big({\ensuremath{\boldsymbol{S}}}^{-1}{_\textsf{r}}[l] - {\ensuremath{\Hat{\boldsymbol{\Sigma}}}}^{-1}{_\textsf{r}}[l]\big) {\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\Big) + \beta\,{\ensuremath{\Hat{\boldsymbol{H}}}}{^{\textsf{H}}}{_\textsf{rr}}\operatorname{diag}\big({\ensuremath{\boldsymbol{S}}}^{-1}{_\textsf{r}}[l] - {\ensuremath{\Hat{\boldsymbol{\Sigma}}}}^{-1}{_\textsf{r}}[l]\big) {\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\Big\} , \label{eq:G}\end{aligned}$$ where $$\begin{aligned} {\ensuremath{\boldsymbol{S}}}{_\textsf{d}}[l] &{\triangleq}\rho{_\textsf{d}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rd}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l] {\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rd}}{^{\textsf{H}}}+ {\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{d}}[l] \label{eq:Sd} \\ {\ensuremath{\boldsymbol{S}}}{_\textsf{r}}[l] &{\triangleq}\rho{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[l] {\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{^{\textsf{H}}}+ {\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{r}}[l] \label{eq:Sr} .\end{aligned}$$ For ${\ensuremath{\boldsymbol{G}}}{_\textsf{s}}[l]$, a similar
expression can be derived. To compute the projection ${\ensuremath{\mathcal{P}}}_{{\ensuremath{\mathcal{X}}}}({\ensuremath{\boldsymbol{P}}}{_\textsf{r}}[1],{\ensuremath{\boldsymbol{P}}}{_\textsf{r}}[2])$, we first notice that, due to the Hermitian property of ${\ensuremath{\boldsymbol{P}}}{_\textsf{r}}[l]$, we can construct an eigenvalue decomposition ${\ensuremath{\boldsymbol{P}}}{_\textsf{r}}[l] = {\ensuremath{\boldsymbol{U}}}{_\textsf{r}}[l] {\ensuremath{\boldsymbol{\Lambda}}}{_\textsf{r}}[l] {\ensuremath{\boldsymbol{U}}}{_\textsf{r}}{^{\textsf{H}}}[l]$ with unitary ${\ensuremath{\boldsymbol{U}}}{_\textsf{r}}[l]$ and real-valued ${\ensuremath{\boldsymbol{\Lambda}}}{_\textsf{r}}[l] = \operatorname{Diag}(\lambda_{\textsf{r},1}[l], \lambda_{\textsf{r},2}[l], \ldots, \lambda_{\textsf{r},N}[l])$. The projection of $({\ensuremath{\boldsymbol{P}}}{_\textsf{r}}[1],{\ensuremath{\boldsymbol{P}}}{_\textsf{r}}[2])$ onto the constraint set [(\[eq:constraint\])]{} then equals ${\ensuremath{\Tilde{\boldsymbol{Q}}}}{_\textsf{r}}[l] = {\ensuremath{\boldsymbol{U}}}{_\textsf{r}}[l] ({\ensuremath{\boldsymbol{\Lambda}}}{_\textsf{r}}[l] - \mu {\ensuremath{\boldsymbol{I}}})^{+} {\ensuremath{\boldsymbol{U}}}{_\textsf{r}}{^{\textsf{H}}}[l]$, where $( {\ensuremath{\boldsymbol{B}}} )^{+} = \max({\ensuremath{\boldsymbol{B}}},{\ensuremath{\boldsymbol{0}}})$ elementwise, and where $\mu$ is chosen such that $\sum_{n=1}^{N} \sum_{l=1}^{2} \tau[l] \max( \lambda_{\textsf{r},n}[l] - \mu, 0 ) = 1$. In essence, ${\ensuremath{\mathcal{P}}}_{{\ensuremath{\mathcal{X}}}}(\cdot)$ performs water-filling. 
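A sketch of this water-filling projection is given below, with bisection used to locate the water level $\mu$; the eigenvalues and time-shares are hypothetical.

```python
def waterfill_mu(lams, taus, budget=1.0, iters=100):
    # Find mu >= 0 such that sum_l tau[l] * sum_n max(lam_n[l] - mu, 0) = budget.
    def power(mu):
        return sum(t * sum(max(lam - mu, 0.0) for lam in lam_l)
                   for t, lam_l in zip(taus, lams))
    if power(0.0) <= budget:
        return 0.0                      # power constraint inactive: no clipping
    lo, hi = 0.0, max(max(l) for l in lams)
    for _ in range(iters):              # power(mu) is non-increasing in mu
        mid = (lo + hi) / 2
        if power(mid) > budget:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical eigenvalues of (P_r[1], P_r[2]) and time-shares tau = (1/2, 1/2)
lams = [[2.0, 1.0, 0.3], [1.5, 0.5, 0.1]]
taus = [0.5, 0.5]
mu = waterfill_mu(lams, taus)
Q_eigs = [[max(lam - mu, 0.0) for lam in l] for l in lams]  # clipped spectra
```

The clipped spectra are then recombined with the eigenvectors ${\ensuremath{\boldsymbol{U}}}{_\textsf{r}}[l]$ to form ${\ensuremath{\Tilde{\boldsymbol{Q}}}}{_\textsf{r}}[l]$.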
To adjust the stepsize $\gamma{^{(k)}}$, we use the Armijo stepsize rule [@Bertsekas:Book:99], i.e., $\gamma{^{(k)}} = \nu^{m_k}$ where $m_k$ is the smallest nonnegative integer that satisfies $$\begin{aligned} \lefteqn{ \underline{I}_{\tau}( {{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}{^{(k+1)}},\zeta) - \underline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}{^{(k)}},\zeta) } \nonumber\\ &\geq \sigma \nu^{m_k} \sum_{l = 1}^{2} \operatorname{tr}\!\bigg( {\ensuremath{\boldsymbol{G}}}{_\textsf{s}}{^{{(k)}\textsf{H}}}[l] \Big( {\ensuremath{\Tilde{\boldsymbol{Q}}}}{_\textsf{s}}{^{(k)}}[l] - {\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}{^{(k)}}[l] \Big) \nonumber\\&\quad + {\ensuremath{\boldsymbol{G}}}{_\textsf{r}}{^{{(k)}\textsf{H}}}[l] \Big( {\ensuremath{\Tilde{\boldsymbol{Q}}}}{_\textsf{r}}{^{(k)}}[l] - {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{^{(k)}}[l] \Big) \bigg)\end{aligned}$$ for some constants $\sigma,\nu$ typically chosen so that $\sigma\in[10^{-5},10^{-1}]$ and $\nu\in[0.1,0.5]$. Above, we used the shorthand ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}{^{(k)}}{\triangleq}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}{^{(k)}}[1], {\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}{^{(k)}}[2], {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{^{(k)}}[1], {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{^{(k)}}[2])$. Achievable-Rate Approximation {#sec:approx} =============================== The complicated nature of the optimization problem [(\[eq:opt\])]{} motivates us to approximate its solution, i.e., the covariance-optimized achievable rate $\underline{I}_*= \max_{\tau\in[0,1]}\max_{{{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}\in{\mathbb{Q}_{\tau}}}\underline{I}_\tau({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$. 
In doing so, we focus on the case of $T\rightarrow\infty$, where channel estimation error is driven to zero so that $\underline{I}_\tau({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})=I_\tau({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})=\overline{I}_\tau({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$. In addition, for tractability, we restrict ourselves to the case ${N{_\textsf{s}}}={N{_\textsf{r}}}=N$ and ${M{_\textsf{r}}}={M{_\textsf{d}}}=M$ (i.e., $N$ transmit antennas and $M$ receive antennas at each node), the case $\eta{_\textsf{d}}=0$ (i.e., no direct source-to-destination link), and the case $\tau=\frac{1}{2}$ (i.e., equal time-sharing). Our approximation is built around the simplifying case that the channel matrices $\{{\ensuremath{\boldsymbol{H}}}{_\textsf{sr}},{\ensuremath{\boldsymbol{H}}}{_\textsf{rr}},{\ensuremath{\boldsymbol{H}}}{_\textsf{rd}}\}$ are each diagonal, although not necessarily square, and have $R{\triangleq}\min\{M,N\}$ identical diagonal entries equal to $\sqrt{MN/R}$. (The latter value is chosen so that $\operatorname{E}\{\operatorname{tr}({\ensuremath{\boldsymbol{H}}}{\ensuremath{\boldsymbol{H}}}{^{\textsf{H}}})\}=MN$ as assumed in [Section \[sec:chan\]]{}.) In this case, the mutual information [(\[eq:mutinfo\])]{} becomes [(\[eq:approx1\])]{}, at the top of the next page. $$\begin{aligned} I_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}) &\approx \min\Bigg\{ \sum_{l=1}^2 \tau[l] \log\det\bigg( {\ensuremath{\boldsymbol{I}}} + \tfrac{MN}{R}\rho{_\textsf{r}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[l] \Big( {\ensuremath{\boldsymbol{I}}} + \tfrac{MN}{R}(\kappa+\beta)\big( \rho{_\textsf{r}}\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}[l]) + \eta{_\textsf{r}}\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l]) \big) \Big)^{-1} \bigg), \nonumber\\ &\qquad\quad \sum_{l=1}^2 \tau[l] \log\det\bigg( {\ensuremath{\boldsymbol{I}}} + \tfrac{MN}{R}\rho{_\textsf{d}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l] \Big( {\ensuremath{\boldsymbol{I}}} + \tfrac{MN}{R}(\kappa+\beta) \rho{_\textsf{d}}\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l]) \Big)^{-1} \bigg) \Bigg\} . \label{eq:approx1}\end{aligned}$$ When $\eta{_\textsf{r}}\ll\rho{_\textsf{r}}$, the $\eta{_\textsf{r}}$-dependent terms in [(\[eq:approx1\])]{} can be ignored, after which it is straightforward to show that, under the constraint [(\[eq:constraint\])]{}, the optimal covariances are the “full duplex” ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}{_{\textsf{FD}}}{\triangleq}(\frac{1}{N}{\ensuremath{\boldsymbol{I}}},\frac{1}{N}{\ensuremath{\boldsymbol{I}}}, \frac{1}{N}{\ensuremath{\boldsymbol{I}}},\frac{1}{N}{\ensuremath{\boldsymbol{I}}})$, for which [(\[eq:approx1\])]{} gives $$\begin{aligned} \lefteqn{I({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}{_{\textsf{FD}}})}\nonumber\\ &\approx\, R \log \left( 1 + \min\Big\{ \textstyle \frac{\rho{_\textsf{r}}}{\frac{R}{M} + (\kappa+\beta)(\rho{_\textsf{r}}+\eta{_\textsf{r}})} , \frac{\rho{_\textsf{d}}}{\frac{R}{M} + (\kappa+\beta)\rho{_\textsf{d}}} \Big\} \right) \\ &= \begin{cases} R \log \Big( 1 + \frac{\rho{_\textsf{d}}}{\frac{R}{M} + (\kappa+\beta)\rho{_\textsf{d}}}\Big) & \text{if~} \frac{\rho{_\textsf{r}}}{\rho{_\textsf{d}}} \geq 1\!+\! \frac{(\kappa+\beta) \eta{_\textsf{r}}M}{R} \\ R \log \Big( 1 + \frac{\rho{_\textsf{r}}}{\frac{R}{M} + (\kappa+\beta)(\rho{_\textsf{r}}+\eta{_\textsf{r}})}\Big) & \text{else}. \end{cases} \label{eq:Ifd} \end{aligned}$$ When $\eta{_\textsf{r}}\gg\rho{_\textsf{r}}$, the $\eta{_\textsf{r}}$-dependent term in [(\[eq:approx1\])]{} dominates unless ${\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l]={\ensuremath{\boldsymbol{0}}}$.
In this case, the optimal covariances are the “half duplex” ones ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}{_{\textsf{HD}}}{\triangleq}(\frac{2}{N}{\ensuremath{\boldsymbol{I}}},{\ensuremath{\boldsymbol{0}}},{\ensuremath{\boldsymbol{0}}},\frac{2}{N}{\ensuremath{\boldsymbol{I}}})$, for which [(\[eq:approx1\])]{} gives $$\begin{aligned} I({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}{_{\textsf{HD}}}) &\approx \left\{ \begin{array}{@{}l@{~}l@{}} \frac{R}{2} \log \Big( 1 + \frac{\rho{_\textsf{d}}}{\frac{R}{2M} + (\kappa+\beta)\rho{_\textsf{d}}}\Big) & \text{if~} \frac{\rho{_\textsf{r}}}{\rho{_\textsf{d}}}\geq 1 \\ \frac{R}{2} \log \Big( 1 + \frac{\rho{_\textsf{r}}}{\frac{R}{2M} + (\kappa+\beta)\rho{_\textsf{r}}}\Big) & \text{else}. \end{array} \right. \label{eq:Ihd} \end{aligned}$$ Finally, given any triple $(\rho{_\textsf{r}},\eta{_\textsf{r}},\rho{_\textsf{d}})$, we approximate the achievable rate as follows: $I_*\approx \max\{I({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}{_{\textsf{FD}}}),I({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}{_{\textsf{HD}}})\}$. From [(\[eq:Ifd\])]{}-[(\[eq:Ihd\])]{}, using $\theta {\triangleq}\frac{R}{M(\kappa+\beta)}$, it is straightforward to show that the approximated system operates as follows. 1. Say $\frac{\rho{_\textsf{r}}}{\rho{_\textsf{d}}}\leq 1$. Then full-duplex is used iff $$\eta{_\textsf{r}}\leq \frac{1}{2}\sqrt{(\theta+2\rho{_\textsf{r}})^2+\frac{2\rho{_\textsf{r}}}{\kappa+\beta}(\theta+2\rho{_\textsf{r}})} -\frac{1}{2}\theta . \label{eq:bndry1}$$ For either half- or full-duplex, $I_*$ is invariant to $\rho{_\textsf{d}}$, i.e., the source-to-relay link is the limiting one. 2. Say $1\leq \frac{\rho{_\textsf{r}}}{\rho{_\textsf{d}}}\leq 1+\frac{(\kappa+\beta)\eta{_\textsf{r}}M}{R}$. 
Full-duplex is used iff $$\eta{_\textsf{r}}\leq \frac{\rho{_\textsf{r}}}{2\rho{_\textsf{d}}} \sqrt{(\theta+2\rho{_\textsf{d}})^2+\frac{2\rho{_\textsf{d}}}{\kappa+\beta}(\theta+2\rho{_\textsf{d}})} -\theta\Big(1-\frac{\rho{_\textsf{r}}}{2\rho{_\textsf{d}}}\Big) . \label{eq:bndry2}$$ 3. Say $1+\frac{(\kappa+\beta)\eta{_\textsf{r}}M}{R}\leq \frac{\rho{_\textsf{r}}}{\rho{_\textsf{d}}}$, or equivalently $\eta{_\textsf{r}}\leq \eta{_\textsf{crit}}{\triangleq}\big(\frac{\rho{_\textsf{r}}}{\rho{_\textsf{d}}}-1\big)\frac{R}{M(\kappa+\beta)}$. Then full-duplex is always used, and $I_*$ is invariant to $\rho{_\textsf{r}}$ and $\eta{_\textsf{r}}$, i.e., the rate is limited by the relay-to-destination link. [Figure \[fig:minrate\_approx\]]{} shows a contour plot of the proposed achievable-rate approximation as a function of INR $\eta{_\textsf{r}}$ and SNR $\rho{_\textsf{r}}$, for the case that $\rho{_\textsf{r}}/\rho{_\textsf{d}}=2$. We shall see in [Section \[sec:sims\]]{} that our approximation of the covariance-optimized achievable-rate is reasonably close to that found by solving [(\[eq:opt\])]{} using bisection/GP. ![Contour plot of the approximated achievable rate $I_*$ versus relay SNR $\rho{_\textsf{r}}$ and INR $\eta{_\textsf{r}}$, for $N=3$, $M=4$, $\beta=\kappa=-40$dB, and $\rho{_\textsf{r}}/\rho{_\textsf{d}}= 2$. The horizontal dashed line shows the INR $\eta{_\textsf{crit}}$, and the dark curve shows the boundary between full- and half-duplex regimes described in [(\[eq:bndry2\])]{}. []{data-label="fig:minrate_approx"}](figures/minrate_approx.eps "fig:"){width="\figsizein"} Numerical Results {#sec:sims} =================== In this section, we numerically investigate the behavior of the end-to-end rates achievable for full-duplex MIMO relaying under the proposed limited transmitter/receiver-DR and channel-estimation-error models.
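As a point of reference for these experiments, the closed-form approximation $I_*\approx \max\{I({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}{_{\textsf{FD}}}),I({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}{_{\textsf{HD}}})\}$ from [Section \[sec:approx\]]{} can be evaluated directly. The sketch below does so, and checks the full-/half-duplex boundary [(\[eq:bndry1\])]{} at a hypothetical operating point with $\rho{_\textsf{r}}=\rho{_\textsf{d}}$; all parameter values are illustrative.

```python
import math

def rate_fd(rho_r, rho_d, eta_r, R, M, c):
    # Full-duplex rate approximation (eq:Ifd); c = kappa + beta (linear units)
    if rho_r / rho_d >= 1 + c * eta_r * M / R:
        return R * math.log2(1 + rho_d / (R / M + c * rho_d))
    return R * math.log2(1 + rho_r / (R / M + c * (rho_r + eta_r)))

def rate_hd(rho_r, rho_d, R, M, c):
    # Half-duplex rate approximation (eq:Ihd): limited by the weaker hop
    snr = min(rho_r, rho_d)
    return R / 2 * math.log2(1 + snr / (R / (2 * M) + c * snr))

def rate_approx(rho_r, rho_d, eta_r, R, M, c):
    return max(rate_fd(rho_r, rho_d, eta_r, R, M, c),
               rate_hd(rho_r, rho_d, R, M, c))

# Hypothetical operating point: N = 3, M = 4 antennas, kappa = beta = -40 dB
R, M, c = 3, 4, 2e-4
rho = 10.0                     # rho_r = rho_d, so regime 1 applies
theta = R / (M * c)
# Full-/half-duplex boundary on the INR from (eq:bndry1)
eta_bndry = 0.5 * math.sqrt((theta + 2 * rho) ** 2
                            + 2 * rho / c * (theta + 2 * rho)) - theta / 2
```

Below the boundary INR, the full-duplex rate exceeds the half-duplex rate, and above it the ordering reverses, consistent with the regime description above.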
Recall that, in [Section \[sec:analysis\]]{}, it was shown that, for a fixed set of transmit covariance matrices ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}$ and time-sharing parameter $\tau$, the achievable rate $I_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$ can be lower-bounded using $\underline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$ from [(\[eq:mutinfo\])]{}, and upper-bounded using the perfect-CSI $\overline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$, where the bounds converge as training $T\rightarrow\infty$. Then, in [Section \[sec:approach\]]{}, a bisection/GP scheme was proposed to maximize $\underline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$ subject to the power-constraint ${{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}\in{\mathbb{Q}_{\tau}}$, which was subsequently maximized over $\tau\in[0,1]$. We now study the average behavior of the bisection/GP-optimized rate ${\underline{I}_*}=\max_\tau\max_{{{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}\in{\mathbb{Q}_{\tau}}}\underline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$ as a function of SNRs $\rho{_\textsf{r}}$ and $\rho{_\textsf{d}}$; INRs $\eta{_\textsf{r}}$ and $\eta{_\textsf{d}}$; dynamic range parameters $\kappa$ and $\beta$; number of antennas $N{_\textsf{s}}$, $N{_\textsf{r}}$, $M{_\textsf{r}}$, and $M{_\textsf{d}}$; and training length $T$. We also investigate the role of interference cancellation, the role of two distinct data periods, the role of $\tau$-optimization, and the relation to optimized half-duplex (OHD) signaling. In doing so, we find close agreement with the achievable-rate approximation proposed in [Section \[sec:approx\]]{} and illustrated in [Fig. \[fig:minrate\_approx\]]{}. 
For the numerical results below, the propagation channel model from [Section \[sec:chan\]]{} and the limited transmitter/receiver-DR models from [Section \[sec:lim\_tdr\]]{} and [Section \[sec:lim\_rdr\]]{} were employed, pilot-aided channel estimation was implemented as in [Section \[sec:chan\_est\]]{}, and the power constraint [(\[eq:constraint\])]{} was applied, implying the channel-estimation-error covariance [(\[eq:chan\_est\_err\])]{} and the aggregate-noise covariance [(\[eq:sigma\])]{}. Throughout, we used $N {\triangleq}N{_\textsf{s}}= N{_\textsf{r}}$ transmit antennas, $M {\triangleq}M{_\textsf{r}}= M{_\textsf{d}}$ receive antennas, the SNR ratio $\rho{_\textsf{r}}/\rho{_\textsf{d}}= 2$, the destination INR $\eta{_\textsf{d}}= 1$, training duration $T = 50$ (as justified below), Armijo parameters $\sigma = 0.01$ and $\nu = 0.2$, and GP stopping threshold $\epsilon = 0.01$. For each channel realization, the time-sharing coefficient $\tau$ was optimized over the grid $\tau \in \{0.1,0.2,0.3,\dots,0.9\}$, and all results were averaged over $100$ realizations unless specified otherwise. Below, we denote the full scheme proposed in [Section \[sec:approach\]]{} by “TCO-2-IC,” which indicates the use of interference cancellation (IC) and transmit covariance optimization (TCO) performed individually over the 2 data periods (i.e., ${\ensuremath{\mathcal{T}}}{_\textsf{data}}[1]$ and ${\ensuremath{\mathcal{T}}}{_\textsf{data}}[2]$). 
To test the impact of IC and of two data periods, we also implemented the proposed scheme but without IC, which we refer to as “TCO-2,” as well as the proposed scheme with only one data period (i.e., ${\ensuremath{\boldsymbol{Q}}}_i[1]={\ensuremath{\boldsymbol{Q}}}_i[2]~\forall i$), which we refer to as “TCO-1-IC.” To optimize[^10] half-duplex, we used GP to maximize the sum-rate $\underline{I}_{\tau}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}},\frac{1}{2})$ under the power constraint [(\[eq:constraint\])]{} and the half-duplex constraint ${\ensuremath{\boldsymbol{Q}}}_1[2]={\ensuremath{\boldsymbol{0}}}={\ensuremath{\boldsymbol{Q}}}_2[1]$; $\tau$-optimization was performed as described above. To mitigate GP’s sensitivity to initialization, we tried two initializations for each $\zeta$-weighted-sum-rate problem, OHD and “naive” full-duplex (NFD), and the one yielding the maximum min-rate was retained. OHD was calculated as explained above, whereas NFD employed non-zero OHD covariance matrices ${\ensuremath{\boldsymbol{Q}}}_1[1]$ and ${\ensuremath{\boldsymbol{Q}}}_2[2]$ over both data periods (which is indeed optimal when $\eta{_\textsf{r}}=0 =\eta{_\textsf{d}}$). Note that both OHD and NFD are invariant to $\zeta$, $\eta{_\textsf{r}}$, and $\eta{_\textsf{d}}$. In [Fig. \[fig:Training\_bw\]]{}, we investigate the role of channel-estimation training length $T$ on the achievable-rate lower bound $\underline{I}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$ of TCO-2-IC. There we see that the rate increases rapidly in $T$ for small values of $T$, but quickly saturates for larger values of $T$. This behavior can be understood from [(\[eq:sigma\])]{}-[(\[eq:Dhat\])]{}, which suggest that channel estimation error will have a negligible effect on the noise covariances ${\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{r}}[l]$ and ${\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{d}}[l]$ when $T N \gg 1$. 
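The $1/(2T)$ scaling behind this saturation can be made concrete with a small numerical sketch of the estimation-error covariance ${\ensuremath{\boldsymbol{D}}}$ derived in Appendix A; the values of $\alpha$, $\kappa$, $\beta$ and the channel draw below are illustrative, not the paper's.

```python
import numpy as np

# The estimation-error covariance D from Appendix A scales as 1/(2T):
# doubling T halves it, so beyond moderate T the residual error (and hence
# the gap between the rate bounds) becomes negligible.

def est_err_cov(H, T, alpha=1.0, kappa=1e-4, beta=1e-4):
    M, N = H.shape
    HHh = H @ H.conj().T
    return (alpha * 2 * kappa / N * HHh
            + np.eye(M)
            + beta * (alpha * 2 * (1 + kappa) / N * np.diag(np.diag(HHh))
                      + np.eye(M))) / (2 * T)

rng = np.random.default_rng(1)
H = (rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))) / np.sqrt(2)
traces = {T: np.trace(est_err_cov(H, T)).real for T in (5, 50, 500)}
```

The trace of ${\ensuremath{\boldsymbol{D}}}$ drops by exactly a factor of ten for each tenfold increase in $T$, which is why the rate curves flatten quickly.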
[Figure \[fig:Training\_bw\]]{} also shows the corresponding achievable-rate upper bounds $\overline{I}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$. These traces confirm that the nominal training length $T = 50$ ensures $\underline{I}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}) \approx \overline{I}({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}}) \approx I({{\ensuremath{\boldsymbol{{\ensuremath{\mathcal{Q}}}}}}})$. ![ Achievable-rate lower bound $\underline{I}_*$ for TCO-2-IC versus training length $T$. Here, $N = 3$, $M = 4$, $\beta = \kappa = -40{\text{dB}}$, $\rho{_\textsf{r}}= 15{\text{dB}}$, $\rho{_\textsf{r}}/\rho{_\textsf{d}}= 2$, and $\eta{_\textsf{d}}= 0{\text{dB}}$. Also shown, as dashed lines, are the corresponding upper bounds $\overline{I}_*$ for each value of $\eta{_\textsf{r}}$. []{data-label="fig:Training_bw"}](figures/Training_bw.eps){width="\figsizein"} In [Fig. \[fig:rate\_vs\_eta\_bw\]]{}, we examine achievable-rate performance versus INR $\eta{_\textsf{r}}$ for the TCO-2-IC, TCO-1-IC, TCO-2, and OHD schemes, using different dynamic-range parameters $\beta = \kappa$. For OHD, the rate is invariant to INR $\eta{_\textsf{r}}$, as expected. For the proposed TCO-2-IC, we observe full-duplex performance for low-to-mid values of $\eta{_\textsf{r}}$ and a transition to OHD performance at high values of $\eta{_\textsf{r}}$, just as predicted by the approximation in [Section \[sec:approx\]]{}. In fact, the rates in [Fig. \[fig:rate\_vs\_eta\_bw\]]{} are very close to the approximated values in [Fig. \[fig:minrate\_approx\]]{}. 
To see the importance of two distinct data-communication periods, we examine the TCO-1-IC trace, where we observe TCO-2-IC-like performance at low-to-midrange values of $\eta{_\textsf{r}}$, but performance that drops below OHD at high $\eta{_\textsf{r}}$. Essentially, TCO-1-IC forces full-duplex signaling at high INR $\eta{_\textsf{r}}$, where half-duplex signaling is optimal, whereas TCO-2-IC admits half-duplex signaling through the use of two distinct data-communication periods, similar to the MIMO-interference-channel scheme in [@Rong:TWC:08]. The effect of $\tau$-optimization can be seen by comparing the two OHD traces, one using the fixed value $\tau=0.5$ and the other using the optimized value $\tau=\tau_*$. The separation between these traces shows that $\tau$-optimization gives a small but noticeable rate gain. Finally, by examining the TCO-2 trace, we conclude that partial interference cancellation is very important for all but extremely low or high values of INR $\eta{_\textsf{r}}$. ![ Achievable-rate lower bound $\underline{I}_*$ for TCO-2-IC, TCO-2, TCO-1-IC, and OHD versus INR $\eta{_\textsf{r}}$. Here, $N = 3$, $M = 4$, $\rho{_\textsf{r}}= 15{\text{dB}}$, $\rho{_\textsf{r}}/\rho{_\textsf{d}}= 2$, $\eta{_\textsf{d}}= 0{\text{dB}}$, and $T = 50$. OHD is plotted for $\beta = \kappa = -40{\text{dB}}$, but was observed to give nearly identical rate for $\beta = \kappa=-80$dB. Both fixed-time-share ($\tau=0.5$) and optimized-time-share ($\tau=\tau_*$) versions of OHD are shown.[]{data-label="fig:rate_vs_eta_bw"}](figures/rate_vs_eta_bw.eps){width="\figsizein"} In [Fig. 
\[fig:rate\_vs\_rho\_bw\]]{}, we examine the rate of the proposed TCO-2-IC and OHD versus SNR $\rho{_\textsf{r}}$, using the dynamic-range parameters $\beta=\kappa=-40$dB, $\eta{_\textsf{d}}= 0$dB, and two fixed values of INR $\eta{_\textsf{r}}$. All of the behaviors in [Fig. \[fig:rate\_vs\_rho\_bw\]]{} are predicted by the rate approximation described in [Section \[sec:approx\]]{} and illustrated in [Fig. \[fig:minrate\_approx\]]{}. In particular, at the low INR of $\eta{_\textsf{r}}=20$dB, TCO-2-IC operates in the full-duplex regime for all values of SNR $\rho{_\textsf{r}}$. Meanwhile, at the high INR of $\eta{_\textsf{r}}=60$dB, TCO-2-IC operates in half-duplex mode at low values of SNR $\rho{_\textsf{r}}$, but switches to full-duplex mode once $\rho{_\textsf{r}}$ exceeds a threshold. ![ Achievable-rate lower bound $\underline{I}_*$ for TCO-2-IC and OHD versus SNR $\rho{_\textsf{r}}$. Here, $\rho{_\textsf{r}}/\rho{_\textsf{d}}= 2$, $\eta{_\textsf{d}}= 0{\text{dB}}$, $N = 3$, $M = 4$, $\beta=\kappa = -40{\text{dB}}$, and $T = 50$. OHD in this figure is optimized over $\tau$.[]{data-label="fig:rate_vs_rho_bw"}](figures/rate_vs_rho_bw.eps){width="\figsizein"} In [Fig. \[fig:minrate\]]{}, we plot the GP-optimized rate contours of the proposed TCO-2-IC versus both SNR $\rho{_\textsf{r}}$ and INR $\eta{_\textsf{r}}$, for comparison to the approximation in [Fig. \[fig:minrate\_approx\]]{}. The two plots show a relatively good match, confirming the accuracy of the approximation. 
The greatest discrepancy between the plots occurs when $\eta{_\textsf{r}}\approx\rho{_\textsf{r}}$ and both $\eta{_\textsf{r}}$ and $\rho{_\textsf{r}}$ are large, which makes sense because the approximation was derived under the assumptions $\eta{_\textsf{r}}\ll\rho{_\textsf{r}}$ and $\eta{_\textsf{r}}\gg\rho{_\textsf{r}}$. ![Contour plot of the achievable-rate lower bound $\underline{I}_*$ for TCO-2-IC versus INR $\eta{_\textsf{r}}$ and SNR $\rho{_\textsf{r}}$, for $\rho{_\textsf{d}}=\rho{_\textsf{r}}/2$, $\eta{_\textsf{d}}= 0{\text{dB}}$, $N=3$, $M=4$, and $\beta=\kappa=-40$dB. The dark curve (i.e., the approximate full/half-duplex boundary) and the dashed line (i.e., the critical INR $\eta{_\textsf{crit}}$) are the same as in [Fig. \[fig:minrate\_approx\]]{}, and are shown for reference. The results are averaged over 250 realizations.[]{data-label="fig:minrate"}](figures/minrate.eps){width="\figsizein"} Finally, in [Fig. \[fig:Increase\_NT\_bw\]]{}, we explore the achievable rate of TCO-2-IC and OHD versus the numbers of antennas $N$ and $M$, for fixed values of SNR $\rho{_\textsf{r}}=15{\text{dB}}$ and $\rho{_\textsf{r}}/\rho{_\textsf{d}}=2$, INR $\eta{_\textsf{r}}=30{\text{dB}}$ and $\eta{_\textsf{d}}=0{\text{dB}}$, and DR parameters $\beta=\kappa=-40{\text{dB}}$. We recall, from [Fig. \[fig:rate\_vs\_eta\_bw\]]{}, that these parameters correspond to the interesting regime where TCO-2-IC performs between half- and full-duplex. In [Fig. \[fig:Increase\_NT\_bw\]]{}, we see that the achievable rate increases with both $N$ and $M$, as expected. More interesting is the achievable-rate behavior when the total number of antennas per modem is fixed, e.g., at $N+M=7$, as illustrated by the triangles in [Fig. \[fig:Increase\_NT\_bw\]]{}. 
The figure indicates that the configurations $(N,M)=(3,4)$ and $(N,M)=(4,3)$ are best, which (it can be shown) is consistent with the approximation from [Section \[sec:approx\]]{}. Conclusion {#sec:conclusion} ========== We considered the problem of decode-and-forward-based full-duplex MIMO relaying between a source node and a destination node. In our analysis, we accounted for limited transmitter/receiver dynamic range, imperfect CSI, background AWGN, and very high levels of self-interference. Using explicit models for dynamic-range limitation and pilot-aided channel-estimation error, we derived upper and lower bounds on the end-to-end achievable rate that tighten as the number of pilots increases. Furthermore, we proposed a transmission scheme based on maximizing the achievable-rate lower bound. The latter requires the solution of a nonconvex optimization problem, for which we used bisection search and gradient projection, the latter of which implicitly performs water-filling. In addition, we derived an analytic approximation to the achievable rate that agrees closely with the results of the numerical optimization. Finally, we studied the achievable rate numerically, as a function of signal-to-noise ratio, interference-to-noise ratio, transmitter/receiver dynamic range, number of antennas, and number of pilots. In future work, we plan to investigate the effects of practical coding/decoding schemes, channel time-variation, and bidirectional relaying. Channel Estimation Details {#app:chan_est} ========================== In this appendix, we derive certain details of [Section \[sec:chan\_est\]]{}. 
Under limited transmitter-DR, the undistorted received space-time signal is $${\ensuremath{\boldsymbol{U}}} = \sqrt{\alpha}{\ensuremath{\boldsymbol{H}}}({\ensuremath{\boldsymbol{X}}}+{\ensuremath{\boldsymbol{C}}}) + {\ensuremath{\boldsymbol{N}}} ,$$ where the spatial correlation[^11] of the non-distorted pilot signal ${\ensuremath{\boldsymbol{X}}}$ equals $\frac{2}{N}{\ensuremath{\boldsymbol{I}}}$ and hence the spatial correlation of the transmitter distortion ${\ensuremath{\boldsymbol{C}}}$ equals $\frac{2\kappa}{N}{\ensuremath{\boldsymbol{I}}}$. Conditioned on ${\ensuremath{\boldsymbol{H}}}$, the spatial correlation of ${\ensuremath{\boldsymbol{U}}}$ is then $ {\ensuremath{\boldsymbol{\Phi}}} = \frac{2\alpha(1+\kappa)}{N}{\ensuremath{\boldsymbol{HH}}}{^{\textsf{H}}}+ {\ensuremath{\boldsymbol{I}}} $, and hence the ${\ensuremath{\boldsymbol{H}}}$-conditional spatial correlation of the receiver distortion ${\ensuremath{\boldsymbol{E}}}$ equals $$\beta\operatorname{diag}({\ensuremath{\boldsymbol{\Phi}}}) = \beta\bigg( \frac{2\alpha(1+\kappa)}{N}\operatorname{diag}\Big({\ensuremath{\boldsymbol{HH}}}{^{\textsf{H}}}\Big) + {\ensuremath{\boldsymbol{I}}}\bigg).$$ Given [(\[eq:Y\])]{}, the distorted received signal ${\ensuremath{\boldsymbol{Y}}}$ can be written as $${\ensuremath{\boldsymbol{Y}}} = \sqrt{\alpha}{\ensuremath{\boldsymbol{H}}}{\ensuremath{\boldsymbol{X}}} + {\ensuremath{\boldsymbol{W}}} ,$$ where ${\ensuremath{\boldsymbol{W}}} {\triangleq}\sqrt{\alpha}{\ensuremath{\boldsymbol{H}}}{\ensuremath{\boldsymbol{C}}} + {\ensuremath{\boldsymbol{N}}} + {\ensuremath{\boldsymbol{E}}}$ is aggregate complex Gaussian noise that is temporally white with ${\ensuremath{\boldsymbol{H}}}$-conditional spatial correlation $\frac{2\alpha\kappa}{N}{\ensuremath{\boldsymbol{HH}}}{^{\textsf{H}}}+ {\ensuremath{\boldsymbol{I}}} + \beta\big(\frac{2\alpha(1+\kappa)}{N}\operatorname{diag}({\ensuremath{\boldsymbol{HH}}}{^{\textsf{H}}}) + {\ensuremath{\boldsymbol{I}}}\big)$. 
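The closed-form correlation above can be sanity-checked by Monte Carlo. The sketch below draws i.i.d. columns of ${\ensuremath{\boldsymbol{C}}}$, ${\ensuremath{\boldsymbol{N}}}$, and ${\ensuremath{\boldsymbol{E}}}$ with the stated covariances and compares the sample covariance of ${\ensuremath{\boldsymbol{W}}}$ against the formula; all parameter values are arbitrary stand-ins.

```python
import numpy as np

# Illustrative Monte Carlo check of the H-conditional spatial correlation of
# the aggregate noise W = sqrt(alpha) H C + N + E derived above.

rng = np.random.default_rng(2)
M, N, alpha, kappa, beta, K = 3, 2, 0.8, 0.05, 0.03, 200_000
H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
HHh = H @ H.conj().T
Phi = 2 * alpha * (1 + kappa) / N * HHh + np.eye(M)

def cn_cols(var_per_row, K):
    """K i.i.d. columns with independent CN(0, var_per_row[m]) entries."""
    d = len(var_per_row)
    z = (rng.standard_normal((d, K)) + 1j * rng.standard_normal((d, K))) / np.sqrt(2)
    return np.sqrt(np.asarray(var_per_row))[:, None] * z

C = cn_cols([2 * kappa / N] * N, K)          # transmitter distortion
Nw = cn_cols([1.0] * M, K)                   # AWGN
E = cn_cols(beta * np.diag(Phi).real, K)     # receiver distortion
W = np.sqrt(alpha) * H @ C + Nw + E

cov_mc = W @ W.conj().T / K
cov_th = 2 * alpha * kappa / N * HHh + np.eye(M) + beta * np.diag(np.diag(Phi).real)
err = np.max(np.abs(cov_mc - cov_th))
```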
Due to the fact that $\frac{1}{2T}{\ensuremath{\boldsymbol{XX}}}{^{\textsf{H}}}= {\ensuremath{\boldsymbol{I}}}$, the channel estimate [(\[eq:Hhat\])]{} takes the form $$\sqrt{\alpha}{\ensuremath{\Hat{\boldsymbol{H}}}} = \frac{1}{2T} {\ensuremath{\boldsymbol{YX}}}{^{\textsf{H}}}= \sqrt{\alpha}{\ensuremath{\boldsymbol{H}}} + \frac{1}{2T}{\ensuremath{\boldsymbol{WX}}}{^{\textsf{H}}},$$ where $\frac{1}{2T}{\ensuremath{\boldsymbol{WX}}}{^{\textsf{H}}}$ is Gaussian channel estimation error. We now analyze the ${\ensuremath{\boldsymbol{H}}}$-conditional correlations among the elements of the channel estimation error matrix. We begin by noticing $$\begin{aligned} \lefteqn{ \operatorname{E}\bigg\{\bigg[\frac{1}{2T}{\ensuremath{\boldsymbol{WX}}}{^{\textsf{H}}}\bigg]_{m,p}\bigg[\frac{1}{2T}{\ensuremath{\boldsymbol{WX}}}{^{\textsf{H}}}\bigg]^*_{n,q} ~\bigg|~{\ensuremath{\boldsymbol{H}}}\bigg\} }\nonumber\\ &= \frac{1}{(2T)^2} \operatorname{E}\bigg\{\sum_{k}[{\ensuremath{\boldsymbol{W}}}]_{m,k}[{\ensuremath{\boldsymbol{X}}}]^*_{p,k} \sum_{l}[{\ensuremath{\boldsymbol{X}}}]_{q,l}[{\ensuremath{\boldsymbol{W}}}]^*_{n,l}\bigg|{\ensuremath{\boldsymbol{H}}}\bigg\} \\ &= \frac{1}{(2T)^2} \sum_{k,l} [{\ensuremath{\boldsymbol{X}}}]^*_{p,k} [{\ensuremath{\boldsymbol{X}}}]_{q,l} \operatorname{E}\big\{[{\ensuremath{\boldsymbol{W}}}]_{m,k} [{\ensuremath{\boldsymbol{W}}}]^*_{n,l} {\,\big|\,}{\ensuremath{\boldsymbol{H}}}\big\} . 
\end{aligned}$$ To find $\operatorname{E}\big\{[{\ensuremath{\boldsymbol{W}}}]_{m,k} [{\ensuremath{\boldsymbol{W}}}]^*_{n,l} {\,|\,}{\ensuremath{\boldsymbol{H}}}\big\}$, we recall that $$\begin{aligned} \operatorname{E}\big\{[{\ensuremath{\boldsymbol{N}}}]_{m,k} [{\ensuremath{\boldsymbol{N}}}]^*_{n,l} {\,\big|\,}{\ensuremath{\boldsymbol{H}}}\big\} &= \delta_{m-n}\delta_{k-l} \\ \operatorname{E}\big\{[{\ensuremath{\boldsymbol{C}}}]_{q,k} [{\ensuremath{\boldsymbol{C}}}]^*_{p,l} {\,\big|\,}{\ensuremath{\boldsymbol{H}}}\big\} &= \frac{2\kappa}{N}\, \delta_{q-p}\delta_{k-l} \\ \operatorname{E}\big\{[{\ensuremath{\boldsymbol{E}}}]_{m,k} [{\ensuremath{\boldsymbol{E}}}]^*_{n,l} {\,\big|\,}{\ensuremath{\boldsymbol{H}}}\big\} &= \beta [{\ensuremath{\boldsymbol{\Phi}}}]_{m,m} \delta_{m-n}\delta_{k-l} , \end{aligned}$$ implying that $$\begin{aligned} \lefteqn{ \operatorname{E}\big\{[{\ensuremath{\boldsymbol{W}}}]_{m,k} [{\ensuremath{\boldsymbol{W}}}]^*_{n,l} {\,\big|\,}{\ensuremath{\boldsymbol{H}}}\big\} } \nonumber \\ &= \alpha \sum_{q,p} [{\ensuremath{\boldsymbol{H}}}]_{m,q} [{\ensuremath{\boldsymbol{H}}}]^*_{n,p} \operatorname{E}\big\{[{\ensuremath{\boldsymbol{C}}}]_{q,k}[{\ensuremath{\boldsymbol{C}}}]^*_{p,l}{\,\big|\,}{\ensuremath{\boldsymbol{H}}}\big\} \nonumber\\&\quad + \operatorname{E}\big\{[{\ensuremath{\boldsymbol{N}}}]_{m,k} [{\ensuremath{\boldsymbol{N}}}]^*_{n,l} {\,|\,}{\ensuremath{\boldsymbol{H}}}\big\} + \operatorname{E}\big\{[{\ensuremath{\boldsymbol{E}}}]_{m,k} [{\ensuremath{\boldsymbol{E}}}]^*_{n,l} {\,|\,}{\ensuremath{\boldsymbol{H}}}\big\} \\ &= \delta_{k-l} \bigg( \alpha \frac{2\kappa}{N} \sum_p [{\ensuremath{\boldsymbol{H}}}]_{m,p} [{\ensuremath{\boldsymbol{H}}}]^*_{n,p} + (1+\beta [{\ensuremath{\boldsymbol{\Phi}}}]_{m,m}) \delta_{m-n} \bigg), \nonumber \end{aligned}$$ which implies that $$\begin{aligned} \lefteqn{ 
\operatorname{E}\bigg\{\bigg[\frac{1}{2T}{\ensuremath{\boldsymbol{WX}}}{^{\textsf{H}}}\bigg]_{m,p}\bigg[\frac{1}{2T}{\ensuremath{\boldsymbol{WX}}}{^{\textsf{H}}}\bigg]^*_{n,q} ~\bigg|~{\ensuremath{\boldsymbol{H}}}\bigg\} }\nonumber\\ &= \frac{1}{(2T)^2} \sum_{k} [{\ensuremath{\boldsymbol{X}}}]^*_{p,k} [{\ensuremath{\boldsymbol{X}}}]_{q,k} \bigg( \alpha \frac{2\kappa}{N} \sum_p [{\ensuremath{\boldsymbol{H}}}]_{m,p} [{\ensuremath{\boldsymbol{H}}}]^*_{n,p} \nonumber\\&\quad + (1+\beta [{\ensuremath{\boldsymbol{\Phi}}}]_{m,m}) \delta_{m-n} \bigg) \\ &= \delta_{p-q} \frac{1}{2T} \bigg( \alpha \frac{2\kappa}{N} \sum_p [{\ensuremath{\boldsymbol{H}}}]_{m,p} [{\ensuremath{\boldsymbol{H}}}]^*_{n,p} \nonumber\\ & \quad + (1+ \beta [{\ensuremath{\boldsymbol{\Phi}}}]_{m,m}) \delta_{m-n} \bigg), \label{eq:white} \end{aligned}$$ where the latter expression follows from the fact that $\sum_k[{\ensuremath{\boldsymbol{X}}}]^*_{p,k} [{\ensuremath{\boldsymbol{X}}}]_{q,k} = 2T\delta_{p-q}$, as implied by $\frac{1}{2T}{\ensuremath{\boldsymbol{XX}}}{^{\textsf{H}}}={\ensuremath{\boldsymbol{I}}}$. Equation [(\[eq:white\])]{} implies the estimation error is temporally white with ${\ensuremath{\boldsymbol{H}}}$-conditional spatial correlation $$\begin{aligned} {\ensuremath{\boldsymbol{D}}} &{\triangleq}\frac{1}{2T} \bigg( \alpha \frac{2\kappa}{N} {\ensuremath{\boldsymbol{HH}}}{^{\textsf{H}}}+ {\ensuremath{\boldsymbol{I}}} + \beta \operatorname{diag}({\ensuremath{\boldsymbol{\Phi}}}) \bigg) \\ &= \frac{1}{2T} \bigg( \alpha\frac{2\kappa}{N} {\ensuremath{\boldsymbol{HH}}}{^{\textsf{H}}}+ {\ensuremath{\boldsymbol{I}}} \nonumber\\&\quad+ \beta\Big(\alpha\frac{2(1+\kappa)}{N}\operatorname{diag}\Big({\ensuremath{\boldsymbol{HH}}}{^{\textsf{H}}}\Big) + {\ensuremath{\boldsymbol{I}}}\Big) \bigg) . 
\end{aligned}$$ Our final claim is that the channel estimation error $\frac{1}{2T}{\ensuremath{\boldsymbol{WX}}}{^{\textsf{H}}}$ is statistically equivalent to ${\ensuremath{\boldsymbol{D}}}^{\frac{1}{2}}{\ensuremath{\Tilde{\boldsymbol{H}}}}$, with ${\ensuremath{\Tilde{\boldsymbol{H}}}}\in{{\mathbb{C}}}^{M\times N}$ constructed from i.i.d ${\ensuremath{\mathcal{CN}}}(0,1)$ entries. This can be seen from the following: $$\begin{aligned} \lefteqn{ \operatorname{E}\Big\{[{\ensuremath{\boldsymbol{D}}}^\frac{1}{2}{\ensuremath{\Tilde{\boldsymbol{H}}}}]_{m,p} [{\ensuremath{\boldsymbol{D}}}^{\frac{1}{2}}{\ensuremath{\Tilde{\boldsymbol{H}}}}]^*_{n,q} \Big\} }\nonumber \\ &= \operatorname{E}\left\{\sum_{k}[{\ensuremath{\boldsymbol{D}}}^{\frac{1}{2}}]_{m,k}[{\ensuremath{\Tilde{\boldsymbol{H}}}}]_{k,p} \sum_{l}[{\ensuremath{\boldsymbol{D}}}^{\frac{1}{2}}]^*_{n,l}[{\ensuremath{\Tilde{\boldsymbol{H}}}}]^*_{l,q}\right\} \\ &= \sum_{k,l} [{\ensuremath{\boldsymbol{D}}}^\frac{1}{2}]_{m,k} [{\ensuremath{\boldsymbol{D}}}^\frac{1}{2}]^*_{n,l} \operatorname{E}\big\{[{\ensuremath{\Tilde{\boldsymbol{H}}}}]_{k,p} [{\ensuremath{\Tilde{\boldsymbol{H}}}}]^*_{l,q} \big\} \\ &= \delta_{p-q} \sum_{k} [{\ensuremath{\boldsymbol{D}}}^\frac{1}{2}]_{m,k} [{\ensuremath{\boldsymbol{D}}}^\frac{1}{2}]^*_{n,k} \\ &= \delta_{p-q} [{\ensuremath{\boldsymbol{D}}}]_{m,n} , \end{aligned}$$ where we used the fact that $\operatorname{E}\big\{[{\ensuremath{\Tilde{\boldsymbol{H}}}}]_{k,p} [{\ensuremath{\Tilde{\boldsymbol{H}}}}]^*_{l,q} \big\}=\delta_{k-l}\delta_{p-q}$. Interference Cancellation Details {#app:cancellation} ================================= In this appendix, we characterize the channel-estimate-conditioned covariance of the aggregate interference ${\ensuremath{\boldsymbol{v}}}{_\textsf{r}}$, whose expression was given in [(\[eq:v\])]{}. 
Recalling that ${\ensuremath{\Hat{\boldsymbol{D}}}}{\triangleq}\operatorname{E}\{{\ensuremath{\boldsymbol{D}}}{\,|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}\}$, we first establish that $\operatorname{Cov}\{{\ensuremath{\boldsymbol{D}}}^\frac{1}{2}{\ensuremath{\Tilde{\boldsymbol{H}}}}{\ensuremath{\boldsymbol{x}}}{\,|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}\} = {\ensuremath{\Hat{\boldsymbol{D}}}}\operatorname{tr}(\operatorname{Cov}({\ensuremath{\boldsymbol{x}}}))$, which will be useful in the sequel. To show this, we examine the $(m,n)^{th}$ element of the covariance matrix: $$\begin{aligned} \lefteqn{ [\operatorname{Cov}\{{\ensuremath{\boldsymbol{D}}}^\frac{1}{2}{\ensuremath{\Tilde{\boldsymbol{H}}}}{\ensuremath{\boldsymbol{x}}}{\,|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}\}]_{m,n} }\nonumber \\ &= \operatorname{E}\big\{ [{\ensuremath{\boldsymbol{D}}}^\frac{1}{2}{\ensuremath{\Tilde{\boldsymbol{H}}}}{\ensuremath{\boldsymbol{x}}}]_{m} [{\ensuremath{\boldsymbol{D}}}^\frac{1}{2}{\ensuremath{\Tilde{\boldsymbol{H}}}}{\ensuremath{\boldsymbol{x}}}]_{n}^* {\,\big|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}\big\} \\ &= \operatorname{E}\Big\{ \sum_{p,r}[{\ensuremath{\boldsymbol{D}}}^\frac{1}{2}]_{m,p}[{\ensuremath{\Tilde{\boldsymbol{H}}}}]_{p,r}[{\ensuremath{\boldsymbol{x}}}]_{r} \sum_{q,t}[{\ensuremath{\boldsymbol{D}}}^\frac{1}{2}]_{n,q}^*[{\ensuremath{\Tilde{\boldsymbol{H}}}}]_{q,t}^*[{\ensuremath{\boldsymbol{x}}}]_{t}^* {\,\Big|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}\Big\} \nonumber\\ &= \sum_{p,r,q,t} \operatorname{E}\big\{ [{\ensuremath{\boldsymbol{D}}}^\frac{1}{2}]_{m,p} [{\ensuremath{\boldsymbol{D}}}^\frac{1}{2}]^*_{n,q} {\,\big|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}\big\} \nonumber \\ &\quad \times \underbrace{ \operatorname{E}\big\{ [{\ensuremath{\Tilde{\boldsymbol{H}}}}]_{p,r} [{\ensuremath{\Tilde{\boldsymbol{H}}}}]^*_{q,t} \big\} }_{\delta_{p-q}\delta_{r-t}} \operatorname{E}\big\{ [{\ensuremath{\boldsymbol{x}}}]_{r} [{\ensuremath{\boldsymbol{x}}}]_{t}^* \big\} \\ &= 
[{\ensuremath{\Hat{\boldsymbol{D}}}}]_{m,n} \operatorname{tr}(\operatorname{Cov}\{{\ensuremath{\boldsymbol{x}}}\}) . \end{aligned}$$ Rewriting the previous equality in matrix form, we get the desired result. As a corollary, we note that $\operatorname{E}\{({\ensuremath{\boldsymbol{D}}}^\frac{1}{2}{\ensuremath{\Tilde{\boldsymbol{H}}}}) \operatorname{Cov}\{{\ensuremath{\boldsymbol{x}}}\} ({\ensuremath{\boldsymbol{D}}}^\frac{1}{2}{\ensuremath{\Tilde{\boldsymbol{H}}}}){^{\textsf{H}}}{\,|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}\} = {\ensuremath{\Hat{\boldsymbol{D}}}}\operatorname{tr}(\operatorname{Cov}\{{\ensuremath{\boldsymbol{x}}}\})$, which will also be useful in the sequel. Next we characterize the $({\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}},{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}})$-conditional covariance of the receiver distortion ${\ensuremath{\boldsymbol{e}}}{_\textsf{r}}$. Recalling that $\operatorname{Cov}\{{\ensuremath{\boldsymbol{e}}}{_\textsf{r}}\}=\beta\operatorname{diag}({\ensuremath{\boldsymbol{\Phi}}}{_\textsf{r}})$ where ${\ensuremath{\boldsymbol{\Phi}}}{_\textsf{r}}=\operatorname{Cov}\{{\ensuremath{\boldsymbol{u}}}{_\textsf{r}}\}$, we have $\operatorname{Cov}\{{\ensuremath{\boldsymbol{e}}}{_\textsf{r}}{\,|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}},{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\}=\beta\operatorname{diag}({\ensuremath{\Hat{\boldsymbol{\Phi}}}}{_\textsf{r}})$ where ${\ensuremath{\Hat{\boldsymbol{\Phi}}}}{_\textsf{r}}{\triangleq}\operatorname{Cov}\{{\ensuremath{\boldsymbol{u}}}{_\textsf{r}}{\,|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}},{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\}$. 
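The trace identity established just above, $\operatorname{Cov}\{{\ensuremath{\boldsymbol{D}}}^{\frac{1}{2}}{\ensuremath{\Tilde{\boldsymbol{H}}}}{\ensuremath{\boldsymbol{x}}}\}={\ensuremath{\Hat{\boldsymbol{D}}}}\operatorname{tr}(\operatorname{Cov}\{{\ensuremath{\boldsymbol{x}}}\})$, can be checked numerically; the sketch below takes ${\ensuremath{\boldsymbol{D}}}$ deterministic (so ${\ensuremath{\Hat{\boldsymbol{D}}}}={\ensuremath{\boldsymbol{D}}}$), with arbitrary dimensions and covariances.

```python
import numpy as np

# Monte Carlo check (illustrative) that Cov{ D^{1/2} Htilde x } = D * tr(Cov{x})
# when Htilde has i.i.d. CN(0,1) entries independent of x.

rng = np.random.default_rng(3)
M, N, K = 3, 2, 200_000
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
D = A @ A.conj().T / M + np.eye(M)   # an arbitrary Hermitian PD matrix
Dh = np.linalg.cholesky(D)           # any square root of D works

B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Q = B @ B.conj().T / N               # Cov{x}
Qh = np.linalg.cholesky(Q)

# sample y = D^{1/2} Htilde x over K independent (Htilde, x) pairs
Ht = (rng.standard_normal((K, M, N)) + 1j * rng.standard_normal((K, M, N))) / np.sqrt(2)
x = Qh @ ((rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2))
y = Dh @ np.einsum('kmn,nk->mk', Ht, x)

cov_mc = y @ y.conj().T / K
cov_th = D * np.trace(Q).real
err = np.max(np.abs(cov_mc - cov_th))
```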
Then, given that ${\ensuremath{\boldsymbol{u}}}{_\textsf{r}}={\ensuremath{\boldsymbol{y}}}{_\textsf{r}}-{\ensuremath{\boldsymbol{e}}}{_\textsf{r}}$ with ${\ensuremath{\boldsymbol{y}}}{_\textsf{r}}$ from [(\[eq:y\])]{}, and using the facts that $\operatorname{Cov}({\ensuremath{\boldsymbol{x}}}{_\textsf{s}}+{\ensuremath{\boldsymbol{c}}}{_\textsf{s}})={\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}+\kappa\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}})$ and $\operatorname{Cov}({\ensuremath{\boldsymbol{x}}}{_\textsf{r}}+{\ensuremath{\boldsymbol{c}}}{_\textsf{r}})={\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}+\kappa\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}})$, we get $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{\Phi}}}}{_\textsf{r}}&= \rho{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}\big({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}+\kappa\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}})\big) {\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{^{\textsf{H}}}\nonumber\\&\quad + \operatorname{E}\big\{ ({\ensuremath{\boldsymbol{D}}}{_\textsf{sr}}^{\frac{1}{2}}{\ensuremath{\Tilde{\boldsymbol{H}}}}{_\textsf{sr}}) \big({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}+\kappa\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}})\big) ({\ensuremath{\boldsymbol{D}}}{_\textsf{sr}}^{\frac{1}{2}}{\ensuremath{\Tilde{\boldsymbol{H}}}}{_\textsf{sr}}){^{\textsf{H}}}{\,\big|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}\big\} \nonumber\\&\quad +\eta{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\big({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}+\kappa\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}})\big) {\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{^{\textsf{H}}}\nonumber\\&\quad + \operatorname{E}\big\{ ({\ensuremath{\boldsymbol{D}}}{_\textsf{rr}}^{\frac{1}{2}}{\ensuremath{\Tilde{\boldsymbol{H}}}}{_\textsf{rr}}) 
\big({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}+\kappa\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}})\big) ({\ensuremath{\boldsymbol{D}}}{_\textsf{rr}}^{\frac{1}{2}}{\ensuremath{\Tilde{\boldsymbol{H}}}}{_\textsf{rr}}){^{\textsf{H}}}{\,\big|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\big\} \nonumber\\&\quad + {\ensuremath{\boldsymbol{I}}} \\ &= \rho{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}\big({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}+\kappa\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}})\big) {\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{^{\textsf{H}}}\nonumber\\ & \quad + {\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{sr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}+\kappa\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}})) \nonumber\\ & \quad +\eta{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\big({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}+\kappa\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}})\big) {\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{^{\textsf{H}}}\nonumber\\ & \quad + {\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{rr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}+\kappa\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}})) + {\ensuremath{\boldsymbol{I}}} . 
\end{aligned}$$ Then, $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{\Phi}}}}{_\textsf{r}}&= \rho{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}\big({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}+\kappa\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}})\big) {\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{^{\textsf{H}}}+ (1+\kappa){\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{sr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}) \nonumber\\&\quad +\eta{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\big({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}+\kappa\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}})\big) {\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{^{\textsf{H}}}\nonumber\\&\quad + (1+\kappa){\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{rr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}) + {\ensuremath{\boldsymbol{I}}} \\ &\approx \rho{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{^{\textsf{H}}}+ {\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{sr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}) \nonumber\\ & \quad + \eta{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{^{\textsf{H}}}+ {\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{rr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}) + {\ensuremath{\boldsymbol{I}}} , \end{aligned}$$ where, for the approximation, we assumed $\kappa\ll 1$. 
Thus, $$\begin{aligned} \lefteqn{ \operatorname{Cov}\{{\ensuremath{\boldsymbol{e}}}{_\textsf{r}}{\,|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}},{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\} } \nonumber\\ &\approx \beta\big( \rho{_\textsf{r}}\operatorname{diag}({\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{^{\textsf{H}}}) + {\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{sr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}) \nonumber\\ & \quad + \eta{_\textsf{r}}\operatorname{diag}({\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{^{\textsf{H}}}) + {\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{rr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}) + {\ensuremath{\boldsymbol{I}}} \big) . \quad \label{eq:Cove} \end{aligned}$$ Finally we are ready to characterize ${\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{r}}$, the $({\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}},{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}})$-conditional covariance of ${\ensuremath{\boldsymbol{v}}}{_\textsf{r}}$. 
From [(\[eq:v\])]{}, $$\begin{aligned} {\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{r}}&= \kappa\rho{_\textsf{r}}\operatorname{E}\big\{{\ensuremath{\boldsymbol{H}}}{_\textsf{sr}}\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}){\ensuremath{\boldsymbol{H}}}{_\textsf{sr}}{^{\textsf{H}}}{\,\big|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}\big\} + {\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{sr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}) \nonumber\\&\quad +\kappa\eta{_\textsf{r}}\operatorname{E}\big\{{\ensuremath{\boldsymbol{H}}}{_\textsf{rr}}\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}){\ensuremath{\boldsymbol{H}}}{_\textsf{rr}}{^{\textsf{H}}}{\,\big|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\big\} + {\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{rr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}) \nonumber\\&\quad +{\ensuremath{\boldsymbol{I}}} + \operatorname{Cov}\{{\ensuremath{\boldsymbol{e}}}{_\textsf{r}}{\,|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}},{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\} \\ &= \kappa\rho{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}){\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{^{\textsf{H}}}+{\ensuremath{\boldsymbol{I}}} + \operatorname{Cov}\{{\ensuremath{\boldsymbol{e}}}{_\textsf{r}}{\,|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}},{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\} \nonumber\\&\quad +\kappa\operatorname{E}\big\{({\ensuremath{\boldsymbol{D}}}{_\textsf{sr}}^\frac{1}{2}{\ensuremath{\Tilde{\boldsymbol{H}}}}{_\textsf{sr}}) \operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}) ({\ensuremath{\boldsymbol{D}}}{_\textsf{sr}}^\frac{1}{2}{\ensuremath{\Tilde{\boldsymbol{H}}}}{_\textsf{sr}}){^{\textsf{H}}}{\,\big|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}\big\} \nonumber\\&\quad + {\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{sr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}) + {\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{rr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}) +\kappa\eta{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}){\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{^{\textsf{H}}}\nonumber\\&\quad +\kappa\operatorname{E}\big\{({\ensuremath{\boldsymbol{D}}}{_\textsf{rr}}^\frac{1}{2}{\ensuremath{\Tilde{\boldsymbol{H}}}}{_\textsf{rr}}) \operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}) ({\ensuremath{\boldsymbol{D}}}{_\textsf{rr}}^\frac{1}{2}{\ensuremath{\Tilde{\boldsymbol{H}}}}{_\textsf{rr}}){^{\textsf{H}}}{\,\big|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\big\} \\ &= \kappa\rho{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}){\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{^{\textsf{H}}}+(1+\kappa){\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{sr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}) \nonumber\\&\quad +\kappa\eta{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}){\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{^{\textsf{H}}}+(1+\kappa){\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{rr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}) \nonumber\\&\quad +{\ensuremath{\boldsymbol{I}}} + \operatorname{Cov}\{{\ensuremath{\boldsymbol{e}}}{_\textsf{r}}{\,|\,}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}},{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\} \\ &\approx {\ensuremath{\boldsymbol{I}}} + 
\kappa\rho{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}){\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{^{\textsf{H}}}+{\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{sr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}) \nonumber\\&\quad +\kappa\eta{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}\operatorname{diag}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}){\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{^{\textsf{H}}}+{\ensuremath{\Hat{\boldsymbol{D}}}}{_\textsf{rr}}\operatorname{tr}({\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}) \nonumber\\&\quad +\beta\rho{_\textsf{r}}\operatorname{diag}({\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{s}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{sr}}{^{\textsf{H}}}) + \beta\eta{_\textsf{r}}\operatorname{diag}({\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}{\ensuremath{\Hat{\boldsymbol{H}}}}{_\textsf{rr}}{^{\textsf{H}}}) , \end{aligned}$$ where, for the approximation, we assumed $\kappa\ll 1$ and $\beta\ll 1$, and we leveraged [(\[eq:Cove\])]{}. 
Gradient Details {#app:grad_proj}
=================

In this appendix, we derive an expression for the gradient $\nabla_{\boldsymbol{Q}_{\textsf{r}}[l]} \underline{I}(\boldsymbol{\mathcal{Q}}, \zeta)$ by first deriving an expression for the derivative $\frac{\partial \underline{I}}{\partial \boldsymbol{Q}_{\textsf{r}}[l]}$ and then using the fact that $\nabla_{\boldsymbol{Q}_{\textsf{r}}[l]}\underline{I} = 2\big(\frac{\partial\underline{I}}{\partial \boldsymbol{Q}_{\textsf{r}}[l]}\big)^*$. Since $\underline{I}$ is a $\zeta$-weighted combination of log-determinant differences, the derivative decomposes as $$\begin{aligned} \frac{\partial \underline{I}}{\partial \boldsymbol{Q}_{\textsf{r}}[l]} &= (1-\zeta)\bigg( \frac{\partial \log\det(\boldsymbol{S}_{\textsf{d}}[l])}{\partial \boldsymbol{Q}_{\textsf{r}}[l]} - \frac{\partial \log\det(\Hat{\boldsymbol{\Sigma}}_{\textsf{d}}[l])}{\partial \boldsymbol{Q}_{\textsf{r}}[l]} \bigg) \nonumber\\&\quad + \zeta\bigg( \frac{\partial \log\det(\boldsymbol{S}_{\textsf{r}}[l])}{\partial \boldsymbol{Q}_{\textsf{r}}[l]} - \frac{\partial \log\det(\Hat{\boldsymbol{\Sigma}}_{\textsf{r}}[l])}{\partial \boldsymbol{Q}_{\textsf{r}}[l]} \bigg), \label{eq:Gradient}\end{aligned}$$ where each of the four matrices depends on $\boldsymbol{Q}_{\textsf{r}}[l]$ only through terms of the form $\Hat{\boldsymbol{H}}\boldsymbol{Q}_{\textsf{r}}[l]\Hat{\boldsymbol{H}}^{\textsf{H}}$, $\Hat{\boldsymbol{H}}\operatorname{diag}(\boldsymbol{Q}_{\textsf{r}}[l])\Hat{\boldsymbol{H}}^{\textsf{H}}$, $\operatorname{diag}(\Hat{\boldsymbol{H}}\boldsymbol{Q}_{\textsf{r}}[l]\Hat{\boldsymbol{H}}^{\textsf{H}})$, and $\Hat{\boldsymbol{D}}\operatorname{tr}(\boldsymbol{Q}_{\textsf{r}}[l])$, plus terms $\boldsymbol{Z}_i[l]$ whose derivatives vanish. After differentiation, each parenthesized difference reduces to combinations of $\Hat{\boldsymbol{H}}^{\textsf{H}}\big(\boldsymbol{S}^{-1}[l] - \Hat{\boldsymbol{\Sigma}}^{-1}[l]\big)\Hat{\boldsymbol{H}}$, its diagonal part, and $\operatorname{tr}\big(\Hat{\boldsymbol{D}}(\boldsymbol{S}^{-1}[l] - \Hat{\boldsymbol{\Sigma}}^{-1}[l])\big)\boldsymbol{I}$.
To do this, we first consider the related problem of computing the derivative $\partial\det({\ensuremath{\boldsymbol{Y}}})/\partial {\ensuremath{\boldsymbol{X}}}$, where $$\begin{aligned} {\ensuremath{\boldsymbol{Y}}} &{\triangleq}{\ensuremath{\boldsymbol{C}}}\operatorname{diag}({\ensuremath{\boldsymbol{X}}}){\ensuremath{\boldsymbol{D}}} +\operatorname{diag}( {\ensuremath{\boldsymbol{E}}}{\ensuremath{\boldsymbol{X}}}{\ensuremath{\boldsymbol{F}}} ) + {\ensuremath{\boldsymbol{G}}} \operatorname{tr}({\ensuremath{\boldsymbol{X}}}) + {\ensuremath{\boldsymbol{Z}}}, \label{eq:Ygrad}\end{aligned}$$ and where [(\[eq:Ygrad\])]{} can be written elementwise as $$\begin{aligned} [{\ensuremath{\boldsymbol{Y}}}]_{i,j} &= \sum\limits_{m,n} [{\ensuremath{\boldsymbol{C}}}]_{i,m} [{\ensuremath{\boldsymbol{X}}}]_{m,n} [{\ensuremath{\boldsymbol{D}}}]_{n,j} \delta_{m-n} + [{\ensuremath{\boldsymbol{Z}}}]_{i,j} \label{eq:Yij} \\&\quad{} + \sum\limits_{p,q} [{\ensuremath{\boldsymbol{E}}}]_{i,p} [{\ensuremath{\boldsymbol{X}}}]_{p,q} [{\ensuremath{\boldsymbol{F}}}]_{q,j} \delta_{i-j} + [{\ensuremath{\boldsymbol{G}}}]_{i,j} \sum\limits_{t} [{\ensuremath{\boldsymbol{X}}}]_{t,t} . 
\nonumber \end{aligned}$$ Notice that, for ${\ensuremath{\boldsymbol{V}}}_{r,s}$ defined as a zero-valued matrix except for a unity element at row $r$ and column $s$, we have $$\begin{aligned} \frac{\partial \det({\ensuremath{\boldsymbol{Y}}})}{\partial {\ensuremath{\boldsymbol{X}}}} &= \sum\limits_{r,s} {\ensuremath{\boldsymbol{V}}}_{r,s} \frac{\partial \det({\ensuremath{\boldsymbol{Y}}})}{\partial [{\ensuremath{\boldsymbol{X}}}]_{r,s}} \label{eq:deriv1} \\ &= \sum\limits_{r,s} {\ensuremath{\boldsymbol{V}}}_{r,s} \sum\limits_{i,j} \frac{\partial \det({\ensuremath{\boldsymbol{Y}}})}{\partial [{\ensuremath{\boldsymbol{Y}}}]_{i,j}} \frac{\partial [{\ensuremath{\boldsymbol{Y}}}]_{i,j}}{\partial [{\ensuremath{\boldsymbol{X}}}]_{r,s}}.\end{aligned}$$ Then, using [(\[eq:Yij\])]{}, we get $$\begin{aligned} \lefteqn{ \frac{\partial \det({\ensuremath{\boldsymbol{Y}}})}{\partial {\ensuremath{\boldsymbol{X}}}} }\nonumber\\ &= \sum\limits_{r,s} {\ensuremath{\boldsymbol{V}}}_{r,s} \sum\limits_{i,j} \frac{\partial \det({\ensuremath{\boldsymbol{Y}}})}{\partial [{\ensuremath{\boldsymbol{Y}}}]_{i,j}} \Big( [{\ensuremath{\boldsymbol{C}}}]_{i,r} [{\ensuremath{\boldsymbol{D}}}]_{s,j} \delta_{r-s} \nonumber\\&\quad + [{\ensuremath{\boldsymbol{E}}}]_{i,r} [{\ensuremath{\boldsymbol{F}}}]_{s,j} \delta_{i-j} + [{\ensuremath{\boldsymbol{G}}}]_{i,j} \delta_{r-s} \Big) \\ &= \operatorname{diag}\left( {\ensuremath{\boldsymbol{D}}}\left(\frac{\partial\det{\ensuremath{\boldsymbol{Y}}}}{\partial {\ensuremath{\boldsymbol{Y}}}}\right){^{\textsf{T}}}\! {\ensuremath{\boldsymbol{C}}} \right) + \left( {\ensuremath{\boldsymbol{F}}}\operatorname{diag}\left(\frac{\partial\det{\ensuremath{\boldsymbol{Y}}}}{\partial {\ensuremath{\boldsymbol{Y}}}}\right){^{\textsf{T}}}\! 
{\ensuremath{\boldsymbol{E}}} \right){^{\textsf{T}}}\nonumber\\&\quad + \operatorname{sum}\left( {\ensuremath{\boldsymbol{G}}} \odot \left(\frac{\partial\det{\ensuremath{\boldsymbol{Y}}}}{\partial {\ensuremath{\boldsymbol{Y}}}} \right) \right){\ensuremath{\boldsymbol{I}}} \\ &= \det({\ensuremath{\boldsymbol{Y}}}) \Big( \operatorname{diag}\big( {\ensuremath{\boldsymbol{D}}}{\ensuremath{\boldsymbol{Y}}}^{-1}{\ensuremath{\boldsymbol{C}}} \big) + \big( {\ensuremath{\boldsymbol{F}}} \operatorname{diag}( {\ensuremath{\boldsymbol{Y}}}^{-1} ) {\ensuremath{\boldsymbol{E}}} \big){^{\textsf{T}}}\nonumber\\ & \quad + \operatorname{sum}\big( {\ensuremath{\boldsymbol{G}}} \odot ({\ensuremath{\boldsymbol{Y}}}^{-1}){^{\textsf{T}}}\big) {\ensuremath{\boldsymbol{I}}} \Big) \label{eq:deriv_end} ,\end{aligned}$$ where, for the last step, we used the fact that $\frac{\partial\det({\ensuremath{\boldsymbol{Y}}})}{\partial {\ensuremath{\boldsymbol{Y}}}} = \det ({\ensuremath{\boldsymbol{Y}}}) ( {\ensuremath{\boldsymbol{Y}}}^{-1} ){^{\textsf{T}}}\! $. Applying [(\[eq:deriv\_end\])]{} to [(\[eq:mutinfo\])]{}, we can obtain an expression for $\frac{\partial\underline{I}}{\partial{\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l]}$. To do so, we think of ${\ensuremath{\boldsymbol{Z}}}$ in [(\[eq:Ygrad\])]{} as representing the terms in $\underline{I}$ that have zero derivative with respect to ${\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l]$. Using ${\ensuremath{\boldsymbol{S}}}{_\textsf{d}}[l]$ and ${\ensuremath{\boldsymbol{S}}}{_\textsf{r}}[l]$ defined in [(\[eq:Sd\])]{}-[(\[eq:Sr\])]{}, and recalling the expression for ${\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{d}}[l]$ in [(\[eq:sigma\])]{}, the result is given in [(\[eq:Gradient\])]{}, at the top of the page. 
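The identity [(\[eq:deriv\_end\])]{} is straightforward to verify numerically. Below is a minimal sketch (our own, real-valued for simplicity; the matrix names $C,D,E,F,G,Z,X$ follow the definition of $\boldsymbol{Y}$ above) that compares the analytic expression against central finite differences of $\det(\boldsymbol{Y})$:

```python
# Numerical check of the determinant-derivative identity:
#   Y = C diag(X) D + diag(E X F) + G tr(X) + Z
#   dDet(Y)/dX = det(Y) [ diag(D Y^-1 C) + (F diag(Y^-1) E)^T + sum(G o (Y^-1)^T) I ]
import numpy as np

rng = np.random.default_rng(0)
n = 4
C, D, E, F, G, Z, X = (rng.standard_normal((n, n)) for _ in range(7))
Z = Z + 5 * np.eye(n)  # shift to keep Y well-conditioned

def Y_of(X):
    return (C @ np.diag(np.diag(X)) @ D
            + np.diag(np.diag(E @ X @ F))
            + G * np.trace(X) + Z)

def analytic_grad(X):
    Yi = np.linalg.inv(Y_of(X))
    term1 = np.diag(np.diag(D @ Yi @ C))          # diag(D Y^-1 C)
    term2 = (F @ np.diag(np.diag(Yi)) @ E).T       # (F diag(Y^-1) E)^T
    term3 = np.sum(G * Yi.T) * np.eye(n)           # sum(G o (Y^-1)^T) I
    return np.linalg.det(Y_of(X)) * (term1 + term2 + term3)

def numeric_grad(X, h=1e-6):
    grad = np.zeros((n, n))
    for r in range(n):
        for s in range(n):
            dX = np.zeros((n, n)); dX[r, s] = h
            grad[r, s] = (np.linalg.det(Y_of(X + dX))
                          - np.linalg.det(Y_of(X - dX))) / (2 * h)
    return grad

assert np.allclose(analytic_grad(X), numeric_grad(X), rtol=1e-4, atol=1e-6)
```

The complex-valued case used in the text follows the same pattern, with the conjugate-gradient convention handled separately as described below.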
Finally, using ${\ensuremath{\boldsymbol{G}}}{_\textsf{r}}[l] = 2 \big( \frac{\partial\underline{I}}{\partial {\ensuremath{\boldsymbol{Q}}}{_\textsf{r}}[l]} \big)^\ast$, and leveraging the fact that ${\ensuremath{\boldsymbol{S}}}{_\textsf{d}}[l]$, ${\ensuremath{\boldsymbol{S}}}{_\textsf{r}}[l]$, ${\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{d}}[l]$, and ${\ensuremath{\Hat{\boldsymbol{\Sigma}}}}{_\textsf{r}}[l]$ are Hermitian matrices, we get the expression for ${\ensuremath{\boldsymbol{G}}}{_\textsf{r}}[l]$ in [(\[eq:G\])]{}. A similar expression results for ${\ensuremath{\boldsymbol{G}}}{_\textsf{s}}[l]$. [Brian P. Day]{} received the B.S. in Electrical and Computer Engineering from The Ohio State University in 2010. Since 2010, he has been working toward the Ph.D degree in Electrical and Computer Engineering at The Ohio State University. His primary research interests are full-duplex communication, signal processing, and optimization. [Adam R. Margetts]{} received a dual B.S. degree in Electrical Engineering and Mathematics from Utah State University, Logan, UT in 2000; and the M.S. and Ph.D. degrees in Electrical Engineering from The Ohio State University, Columbus, OH in 2002 and 2005, respectively. Dr. Margetts has been with MIT Lincoln Laboratory, Lexington, MA since 2005 and holds two patents in the area of signal processing for communications. His current research interests include distributed transmit beamforming, cooperative communications, full-duplex relay systems, space-time coding, and wireless networking. [Daniel W. Bliss]{} is a senior member of the technical staff at MIT Lincoln Laboratory in the Advanced Sensor Techniques group. Since 1997 he has been employed by MIT Lincoln Laboratory, where he focuses on adaptive signal processing, parameter estimation bounds, and information theoretic performance bounds for multisensor systems. 
His current research topics include multiple-input multiple-output (MIMO) wireless communications, MIMO radar, cognitive radios, radio network performance bounds, geolocation techniques, channel phenomenology, and signal processing and machine learning for anticipatory medical monitoring. Dan received his Ph.D. and M.S. in Physics from the University of California at San Diego (1997 and 1995), and his BSEE in Electrical Engineering from Arizona State University (1989). Employed by General Dynamics (1989-1991), he designed avionics for the Atlas-Centaur launch vehicle, and performed research and development of fault-tolerant avionics. As a member of the superconducting magnet group at General Dynamics (1991-1993), he performed magnetic field calculations and optimization for high-energy particle-accelerator superconducting magnets. His doctoral work (1993-1997) was in the area of high-energy particle physics, searching for bound states of gluons, studying the two-photon production of hadronic final states, and investigating innovative techniques for lattice-gauge-theory calculations. [Philip Schniter]{} received the B.S. and M.S. degrees in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 1992 and 1993, respectively. From 1993 to 1996 he was employed by Tektronix Inc. in Beaverton, OR as a systems engineer, and in 2000, he received the Ph.D. degree in Electrical Engineering from Cornell University in Ithaca, NY. Subsequently, he joined the Department of Electrical and Computer Engineering at The Ohio State University in Columbus, OH, where he is now an Associate Professor and a member of the Information Processing Systems (IPS) Lab. In 2003, he received the National Science Foundation CAREER Award, and in 2008-2009 he was a visiting professor at Eurecom (Sophia Antipolis, France) and Sup[é]{}lec (Gif-sur-Yvette, France). Dr. 
Schniter’s areas of interest include statistical signal processing, wireless communications and networks, and machine learning. [^1]: Brian Day and Philip Schniter are with the Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH. [^2]: Daniel Bliss and Adam Margetts are with the Advanced Sensor Techniques Group, MIT Lincoln Laboratory, Lexington, MA. [^3]: Please direct all correspondence to Prof. Philip Schniter, Dept. ECE, The Ohio State University, 2015 Neil Ave., Columbus OH 43210, e-mail: [email protected], phone 614.247.6488, fax 614.292.7596. [^4]: Manuscript received August 25, 2011; revised May 14, 2012. [^5]: This work was sponsored by the Defense Advanced Research Projects Agency under Air Force contract FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Government. [^6]: Successful full-duplex communication has been recently demonstrated in the non-relay setting [@Jain:MOBICOM:11] and in the non-MIMO relay setting [@Everett:ASIL:11]. [^7]: In our transmission protocol, a single training epoch is followed by a large number of data epochs, and so the relative training overhead becomes negligible as the number of data epochs grows large. [^8]: Throughout the paper, we take “$\log$” to be base-2. [^9]: Because [(\[eq:opt\])]{} is generally non-convex, finding the global maximum can be difficult. Although GP is guaranteed only to find a local, and not global, maximum, our experience with different initializations suggests that GP is indeed finding the global maximum in our problem. [^10]: We note that *both* half-duplex and the proposed TCO-2-IC scheme could potentially benefit from allowing the relay to change the partitioning of antennas from transmission to reception across the data period $l\in\{1,2\}$. 
In half duplex mode, for example, it would be advantageous for the relay to use $(N{_\textsf{r}}[1],M{_\textsf{r}}[1])=(0,7)$ and $(N{_\textsf{r}}[2],M{_\textsf{r}}[2])=(7,0)$ as opposed to $(N{_\textsf{r}}[l],M{_\textsf{r}}[l])=(3,4)~\forall l$. We do not consider such antenna-swapping in this work, however. [^11]: The spatial correlation of ${\ensuremath{\boldsymbol{X}}}=[{\ensuremath{\boldsymbol{x}}}(1),\dots,{\ensuremath{\boldsymbol{x}}}(TN)]$ is $\operatorname{E}\{{\ensuremath{\boldsymbol{x}}}(t){\ensuremath{\boldsymbol{x}}}(t){^{\textsf{H}}}\} =\operatorname{E}\{\frac{1}{TN}\sum_{t=1}^{TN} {\ensuremath{\boldsymbol{x}}}(t){\ensuremath{\boldsymbol{x}}}(t){^{\textsf{H}}}\} = \operatorname{E}\{\frac{1}{T N}{\ensuremath{\boldsymbol{XX}}}{^{\textsf{H}}}\}$.
--- abstract: 'We discuss the magnetic phases of the Hubbard model for the honeycomb lattice both in two and three spatial dimensions. A ground state phase diagram is obtained depending on the interaction strength $U$ and electronic density $n$. We find a first order phase transition between ferromagnetic regions where the spin is maximally polarized (Nagaoka ferromagnetism) and regions with smaller magnetization (weak ferromagnetism). When taking into account the possibility of spiral states, we find that the lowest critical $U$ is obtained for an ordering momentum different from zero. The evolution of the ordering momentum with doping is discussed. The magnetic excitations (spin waves) in the antiferromagnetic insulating phase are calculated from the random-phase-approximation for the spin susceptibility. We also compute the spin fluctuation correction to the mean field magnetization by virtual emission/absorption of spin waves. In the large $U$ limit, the renormalized magnetization agrees qualitatively with the Holstein-Primakoff theory of the Heisenberg antiferromagnet, although the latter approach produces a larger renormalization.' author: - 'N. M. R. Peres$^{1,2}$, M. A. N. Araújo$^{2,3}$ and Daniel Bozi$^{1,2}$' title: Phase diagram and magnetic collective excitations of the Hubbard model in graphene sheets and layers --- Introduction ============ The interest in strongly correlated systems in frustrated lattices has increased recently because of the possible realization of exotic magnetic states [@anderson], spin and charge separation in two dimensions [@matthew], and the discovery of superconductivity in Na$_x$CoO$_2$.$y$H$_2$O [@tanaka]. Many researchers have discussed superconductivity in non-Bravais lattices, mainly using self-consistent spin fluctuation approaches to the problem [@kuroki; @onari; @moriya].
The honeycomb lattice, which is made of two inter-penetrating triangular lattices, has received special attention after the discovery of superconductivity in MgB$_2$ [@nagamatsu2001]. Additionally, the honeycomb lattice has been shown to host many different types of exotic physical behavior in magnetism, and growing experimental evidence of non-Fermi-liquid behavior in graphite has led to the study of electron-electron correlations and quasi-particle lifetimes in graphite [@gonzalez]. Around a decade ago, Sorella and Tosatti [@sorella] found that the Hubbard model in the half-filled honeycomb lattice would exhibit a Mott-Hubbard transition at finite $U$. Their Monte Carlo results were confirmed by variational approaches and reproduced by other authors [@martelo; @furukawa]. As important as the existence of the Mott-Hubbard transition in strongly correlated electron systems is the possible realization of Nagaoka ferromagnetism. The triangular, the honeycomb and the Kagomé lattices were studied, but a strong tendency for a Nagaoka type ground state was found only in the non-bipartite lattices (triangular and Kagomé) [@hanisch]. On the other hand, the effect of [*long range*]{} interactions in half-filled sheets of graphite was considered from a mean field point of view, using an extended Hubbard model. A large region of the phase diagram having a charge density wave ground state was found [@tchougreeff]. More recently, the existence of a new magnetic excitation in paramagnetic graphite has been claimed [@baskaran], but this claim was reanalyzed by two of the present authors [@peresI]. In this work the magnetic phases of the Hubbard model in the honeycomb lattice are studied. In addition to the two-dimensional problem we also address the three-dimensional system composed of stacked layers. The critical lines associated with instabilities of the paramagnetic phase are obtained in the $U,n$ plane (interaction versus particle density).
Spiral spin phases are also considered. A ground state phase diagram containing ferro and antiferromagnetic order is obtained. Interestingly, we find ferromagnetic regions with fully polarized spin in the vicinity of regions with smaller magnetization. The transitions from one to the other are discontinuous. We also address the calculation of the magnetic excitations (spin waves) in the half-filled antiferromagnetic honeycomb layer within the random-phase-approximation (RPA). It is known that the Hartree-Fock-RPA theory of the half-filled Hubbard model is correct in both the weakly and strongly interacting limits: at strong coupling, the spin wave dispersion obtained in RPA agrees with the Holstein-Primakoff theory for the Heisenberg model; at intermediate interactions ($U/t\sim 6$), the RPA dispersion shows excellent agreement with experiment [@la2cuo4; @poznan]. The Hartree-Fock-RPA theory should, therefore, be considered as a useful starting point to study the intermediate coupling regime. Starting from the spin wave spectrum obtained in RPA theory, we calculate the quantum fluctuation correction to the ground state magnetization arising from virtual emission/reabsorption of spin waves. In the strong coupling limit, we find a ground state magnetization which is about $67\%$ of full polarization. This is a smaller reduction than that predicted by the Holstein-Primakoff theory of the Heisenberg model, which yields about $48\%$. Our paper is organized as follows: in section \[hamilt\] we introduce the Hamiltonian and its mean field treatment. In section \[collective\], we discuss the possibility of a well defined magnetic excitation in the paramagnetic phase. In the ordered phase at half filling, the spin wave spectrum is computed and the effect of different hopping terms on the spin wave spectrum is discussed. In section \[instabilities\], the magnetic instability lines are obtained and the possibility of spiral spin phases for $n<1$ is discussed.
The corresponding lowest critical $U$ is determined as a function of the ordering wave-vector $\bm q$. Section \[phasediag\] is devoted to the phase diagram of the system, where two different types of ferromagnetism are found. The first order critical lines separating the three ordered phases are determined. Section \[fluctu\] contains a study of the renormalization of the electron’s spectral function and magnetization by the spin wave excitations. Model Hamiltonian {#hamilt} ================= The magnetic properties of the honeycomb lattice are discussed in the context of the Hubbard model, which is defined as $$\begin{aligned} \hat H=-\sum_{i,j,\sigma}t_{i,j}\hat c^{\dag}_{i,\sigma}\hat c_{j,\sigma}+ U\sum_i \hat c^{\dag}_{i,{\uparrow}}\hat c_{i,{\uparrow}}\hat c^{\dag}_{i,{\downarrow}}\hat c_{i,{\downarrow}}- \mu\sum_{i,\sigma}\hat c^{\dag}_{i,\sigma}\hat c_{i,\sigma}\,, \label{hubbard}\end{aligned}$$ where $t_{i,j}$ are hopping integrals, $U$ is the onsite repulsion and $\mu$ denotes the chemical potential. The honeycomb lattice is not a Bravais lattice since there are two atoms per unit cell. Therefore, it is convenient to define two sublattices, $A$ and $B$, as shown in Figure \[honey\]. The expressions for the lattice vectors are $${\mathbf{a_1}}= \frac a 2 (3,\sqrt 3,0)\,, \hspace{1cm} {\mathbf{a_2}} = \frac a 2 (3,-\sqrt 3,0)\,, \hspace{1cm} {\mathbf{a_3}} = c (0,0,1)\,,$$ where $a$ is the length of the hexagon side and $c$ is the interlayer distance.
The reciprocal lattice vectors are given by $${\mathbf{b_1}} = \frac {2\pi}{3a} (1,\sqrt 3,0)\,, \hspace{1cm} {\mathbf{b_2}} = \frac {2\pi}{3a} (1,-\sqrt 3,0)\,, \hspace{1cm} {\mathbf{b_3}} = \frac {2\pi}{c} (0,0,1)\,.$$ The nearest neighbors of an atom belonging to the $A$ sublattice are: $${\mathbf{\delta_1}} = \frac a 2 (1,\sqrt 3,0) \hspace{0.85cm} {\mathbf{\delta_2}} = \frac a 2 (1,-\sqrt 3,0) \hspace{0.85cm} {\mathbf{\delta_3}} = - a \hat {\bf x} \hspace{0.85cm} {\mathbf{\delta''}} = \pm c \hat {\bf z}$$ while the second nearest neighbors (in the plane) are: ${\bf \delta'_1}=\pm {\bf a_1}, {\bf \delta'_2}=\pm {\bf a_2}, {\bf \delta'_3}=\pm ({\bf a_2}-{\bf a_1})$. In a broken symmetry state, antiferromagnetic (AF) order is described by the average lattice site occupation: $$<\hat n_{j,\sigma}>=\frac{n}{2} \pm \frac{m}{2} \sigma \cos(cQ_z) \left\{\begin{array}{c} +,j\in A\\ -,j\in B \end{array}\right. \label{ocupacoes}$$ where the $z-$axis ordering vector ${\mathbf{Q}}=(0,0,Q_z)$ will be used when studying multi-layers, $n$ denotes the electron density, $m$ is the staggered magnetization, and $\sigma=\pm1$. We introduce field operators for each sublattice satisfying the usual Fourier transformations: $$\hat a^\dag_{i\in A,\sigma}=\frac {1}{\sqrt {N}} \sum_{\bm k}e^{i\bm k\cdot \bm {R_i}} \hat a^\dag_{\bm k\sigma}\,,\hspace{1cm} \hat b^\dag_{i\in B,\sigma}=\frac {1}{\sqrt {N}} \sum_{\bm k}e^{i\bm k\cdot \bm {R_i}} \hat b^\dag_{\bm k\sigma}\,$$ (where $N$ denotes the number of unit cells). 
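As a quick sanity check, the direct and reciprocal vectors quoted above satisfy the duality relation $\mathbf{a}_i\cdot\mathbf{b}_j=2\pi\delta_{ij}$; a minimal numerical sketch (our own, with $a=c=1$):

```python
import numpy as np

a, c = 1.0, 1.0  # hexagon side and interlayer distance (arbitrary units)
# rows: a1, a2, a3 as given in the text
A = np.array([[1.5 * a,  np.sqrt(3) * a / 2, 0.0],
              [1.5 * a, -np.sqrt(3) * a / 2, 0.0],
              [0.0, 0.0, c]])
# rows: b1, b2, b3 as given in the text
B = np.array([[2 * np.pi / (3 * a),  2 * np.pi / (np.sqrt(3) * a), 0.0],
              [2 * np.pi / (3 * a), -2 * np.pi / (np.sqrt(3) * a), 0.0],
              [0.0, 0.0, 2 * np.pi / c]])

# duality relation a_i . b_j = 2*pi*delta_ij
assert np.allclose(A @ B.T, 2 * np.pi * np.eye(3))
```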
Within a Hartree-Fock decoupling of the Hubbard interaction in (\[hubbard\]) we obtain an effective Hamiltonian matrix $$\hat H=\sum_{\bm k\sigma} [\hat a^\dag_{\bm k\sigma} \hat b^\dag_{\bm k\sigma}]\left[\begin{array}{cc} H_{11} & H_{12} \\ H_{21} & H_{22} \\\end{array}\right] \left[\begin{array}{c} \hat a_{\bm k\sigma} \\ \hat b_{\bm k\sigma} \\\end{array}\right], \label{hamiltHF}$$ with matrix elements given by $$H_{11}=D(\bm k)+U\frac {n-\sigma m} 2 , \hspace{0.5cm} H_{12}= \phi_{\bm k}=H^\ast_{21} , \hspace{0.5cm} H_{22}=D(\bm k)+U\frac {n+\sigma m} 2$$ where $$\phi_{\bm k}=-t\sum_{\bm\delta}e^{i\bm k\cdot \bm {\delta}} , \hspace{0.55cm} D(\bm k) =\phi'_{\bm k}-2t''\cos(ck_z)-\mu , \hspace{0.55cm} \phi'_{\bm k}= -t'\sum_{\bm{\delta'}}e^{i\bm k \cdot \bm {\delta'}} \,.$$ In the above equations $t$ and $t'$ are the first and second neighbor hopping integrals, respectively, while $t''$ describes interlayer hopping. The dispersion relation for the case where $t'=t''=0$ is $$\vert \phi_{\bm k} \vert=t\sqrt{3+2\cos(\sqrt 3 ak_y) + 4 \cos(3ak_x /2)\cos(\sqrt 3 ak_y /2)}.$$ Diagonalization of the effective Hamiltonian yields a two band spectrum. 
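The quoted closed form for $\vert\phi_{\bm k}\vert$ can be checked directly against the sum over nearest neighbors; it vanishes at the Brillouin-zone corner $K=\big(\tfrac{2\pi}{3a},\tfrac{2\pi}{3\sqrt 3 a}\big)$ and grows linearly around it. A minimal sketch (our own, with $t=a=1$):

```python
import numpy as np

t, a = 1.0, 1.0
# in-plane nearest-neighbor vectors delta_1, delta_2, delta_3 from the text
deltas = np.array([[a / 2,  np.sqrt(3) * a / 2],
                   [a / 2, -np.sqrt(3) * a / 2],
                   [-a, 0.0]])

def phi(kx, ky):
    """phi_k = -t * sum over nearest neighbors of exp(i k.delta)."""
    return -t * np.exp(1j * (deltas @ [kx, ky])).sum()

# |phi_k|^2 agrees with the closed form at random momenta
rng = np.random.default_rng(1)
for kx, ky in rng.uniform(-np.pi, np.pi, size=(100, 2)):
    closed = t**2 * (3 + 2 * np.cos(np.sqrt(3) * a * ky)
                     + 4 * np.cos(1.5 * a * kx) * np.cos(np.sqrt(3) * a * ky / 2))
    assert np.isclose(abs(phi(kx, ky))**2, closed)

# the dispersion vanishes at K and rises with slope 3*t*a/2 nearby
K = np.array([2 * np.pi / (3 * a), 2 * np.pi / (3 * np.sqrt(3) * a)])
assert abs(phi(*K)) < 1e-12
dk = np.array([0.6e-5, 0.8e-5])
assert np.isclose(abs(phi(*(K + dk))), 1.5 * t * a * np.linalg.norm(dk), rtol=1e-3)
```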
The band energies are: $$E_{\pm}(\bm k)=D(\bm k)+\frac U 2 n \pm \sqrt{\Big(\frac {Um} 2\Big)^2 +\vert \phi_{\bm k}\vert^2}.$$ Because there are two sublattices, the Matsubara Green’s function is a $2\times 2$ matrix whose elements are given by: $$\begin{aligned} {\cal G}_\sigma^{aa}(i\omega,{\bm k})&=& \sum_{j=\pm} \frac{|A_{\sigma,j}|^2}{i\omega - E_j({\bm k})} \label{gaa}\\ {\cal G}_\sigma^{ab}(i\omega,{\bm k})&=& \sum_{j=\pm} \frac{A_{\sigma,j}B_{\sigma,j}^*} {i\omega - E_j({\bm k})}\\ {\cal G}_\sigma^{ba}(i\omega,{\bm k})&=& \sum_{j=\pm} \frac{A_{\sigma,j}^*B_{\sigma,j}} {i\omega - E_j({\bm k})}\\ {\cal G}_\sigma^{bb}(i\omega,{\bm k})&=& \sum_{j=\pm} \frac{|B_{\sigma,j}|^2}{i\omega - E_j({\bm k})} \label{gbb}\end{aligned}$$ where the coherence factors are: $$\begin{aligned} |A_{\sigma,\pm}({\bf k})|^2 = \frac{1}{2} \Big[ 1-\frac{Um\sigma}{2E_\pm({\bf k})}\Big] &\qquad& |B_{\sigma,\pm}({\bf k})|^2 = \frac{1}{2} \Big[ 1+\frac{Um\sigma}{2E_\pm({\bf k})}\Big]\\ A_{\sigma,\pm}({\bf k})B_{\sigma,\pm}^*({\bf k}) &=&-\frac{\phi({\bf k})}{2E_\pm({\bf k})} \label{coerenc}\end{aligned}$$ In the ferromagnetic (F) phase, the site occupation is the same for both sublattices: $$<\hat n_{j,\sigma}>=\frac{n}{2} + \frac{m}{2} \sigma \qquad j\in A,B . \label{ferroocup}$$ In this case the quasiparticle energy bands are given by $$E^{\sigma}_{\pm}(\bm k)=D(\bm k)+\frac U 2 (n-\sigma m)\pm |\phi_{\bm k}|.$$ In the paramagnetic phase of the system the energies and propagators are simply obtained by setting $m=0$ in the equations above. The density of states of single electrons is shown in Figure \[dos2d\] against particle density and energy. In the two upper panels we have included a second-neighbor hopping while in the two lower panels only nearest neighbor coupling is considered. An important feature is that $\rho(\epsilon)$ vanishes linearly with $\epsilon$ as we approach the half filled limit, both for $t'=0$ and $t'\ne 0$. 
This is related to the $K$-points of the Brillouin Zone (see Figure \[honey\]), where the electron dispersion becomes linear: $$E({\bf k}) \approx \pm t \frac{3a}{2}|d{\bf k}|$$ ($d{\bf k}$ denotes the deviation from the $K$-point). This dispersion is called the “Dirac cone”. ![ Single particle density of states, $\rho(\epsilon)$, for independent electrons in a honeycomb lattice. The left and right panels show $\rho(\epsilon)$ as a function of energy and electron density, respectively. The solid line refers to $t'=-0.2$ and the dashed line to $t'=0$.[]{data-label="dos2d"}](fig_2_danny.eps){width="7.5cm"} Collective excitations at half filling {#collective} ====================================== The magnetic excitations are obtained from the poles of the transverse spin susceptibility tensor, $\chi$, which is defined, in Matsubara form, as $$\chi^{i,j}_{+-}(\bm q,i\omega_{n})= \int_0^{1/T} d\tau e^{i\omega_{n}\tau}{\langle}T_{\tau} \hat S^{+}_i(\bm q,\tau)\hat S^{-}_j(-\bm q,0) {\rangle}$$ where $i,j=a,b$ label the two sublattices (not lattice points) and $S^{+}_i(\bm q),\ S^{-}_j(\bm q)$ denote the spin-raising and lowering operators for each sublattice. In the paramagnetic, F, or AF phases, the zero order susceptibility is just a simple bubble diagram with the Green’s functions given in equations (\[gaa\])-(\[gbb\]): $$\chi_{+-}^{(0)i,j}(\bm q,i\omega_n)= -\frac T N \sum_{\bm k ,\omega_m} {\cal G}_{{\uparrow}}^{ji}(\bm k,i\omega_{m}) {\cal G}_{{\downarrow}}^{ij}(\bm {k-q},i\omega_m - i\omega_n) \label{chi0}$$ Going beyond mean-field, the random-phase-approximation (RPA) result for the susceptibility tensor is obtained from the Dyson equation $$\chi = \chi^0 + U \chi^0 \chi\ \Rightarrow \ \chi = \Big[ \hat I - U\chi^0 \Big]^{-1} \chi^0$$ where $\hat I$ denotes the $2\times 2$ identity matrix. The poles of the susceptibility tensor, corresponding to the magnetic excitations, are then obtained from the condition: $${\rm Det} \Big[ \hat I - U\chi^0 \Big] = 0.
\label{det}$$ We note that the tensorial nature of the spin susceptibility is a consequence of there being two sites per unit cell and is not related to the magnetic order in the system. Magnetic excitations in a single paramagnetic layer --------------------------------------------------- Here we discuss the possible existence of magnetic excitations in a single paramagnetic honeycomb layer. Our interest in this problem stems from a recent claim by Baskaran and Jafari [@baskaran], who proposed the existence of a neutral spin collective mode in graphene sheets. In the calculations of Ref. [@baskaran] a half-filled Hubbard model in the honeycomb lattice (with $t'=t''=0$) was considered but the tensorial character of the susceptibility was neglected [@peresI]. Since inelastic neutron scattering can be used to study this spin collective mode in graphite, we decided to re-examine this problem taking into account the tensorial nature of the transverse spin susceptibility. Collective magnetic modes with frequency $\omega$ and momentum ${\bm q}$ are determined from the condition (\[det\]) after performing the analytic continuation $i\omega \rightarrow \omega + i0^+$. The determinant is given by $$D_{+-}(\bm q,\omega)=1-2U \chi_{+-}^{(0)aa}+U^2 \Big[ (\chi_{+-}^{(0)aa})^2-\chi_{+-}^{(0)ab}\chi_{+-}^{(0)ba}\Big], \label{pole}$$ where we have taken into account that in a paramagnetic system $\chi_{+-}^{(0)aa}=\chi_{+-}^{(0)bb}$. Below the particle-hole continuum of excitations, the spectral (delta-function contributions) part in $\chi^{(0)ij}_{+-}(\bm q,\omega +i0^+)$ vanishes and there is the additional relation $\chi_{+-}^{(0)ba}=(\chi_{+-}^{(0)ab})^\ast$. Collective modes are only well defined outside the particle-hole continuum (inside the continuum they become Landau damped). We searched [@peresI] for well defined magnetic modes, $\omega(\bm q)$, below the continuum of particle-hole excitations, and found no solutions for any value of the interaction $U$.
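Expression (\[pole\]) is simply the expanded $2\times 2$ determinant of $\hat I - U\chi^0$ under $\chi^{(0)aa}_{+-}=\chi^{(0)bb}_{+-}$. The algebra can be spot-checked numerically (our own sketch; the $\chi^{(0)}$ entries are replaced by random scalars, with $\chi^{(0)ba}=(\chi^{(0)ab})^\ast$ as holds below the continuum, since the identity is pointwise in $\bm q$ and $\omega$):

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(100):
    U = rng.uniform(0.1, 10.0)
    x_aa = rng.uniform(-1, 1)                       # chi^(0)aa = chi^(0)bb
    x_ab = rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1)
    x_ba = np.conj(x_ab)                            # below the continuum
    M = np.array([[1 - U * x_aa, -U * x_ab],
                  [-U * x_ba, 1 - U * x_aa]])       # I - U chi^0
    D = 1 - 2 * U * x_aa + U**2 * (x_aa**2 - x_ab * x_ba)
    assert np.isclose(np.linalg.det(M), D)
```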
In Figure 1 of Ref. [@peresI] we plot $D_{+-}(\bm q,\omega)$ for eight different $\bm q$-vectors and $\omega$ ranging from zero to the point where the particle-hole continuum begins. Our analysis reveals that the full tensorial structure of the Hubbard model’s RPA susceptibility in the honeycomb lattice does not predict a collective magnetic mode. Spin waves in the antiferromagnetic layer ----------------------------------------- The spin wave dispersion $\omega(\bm q)$ for the AF layer with one electron per site can be obtained from equations (\[chi0\]) and (\[det\]) using expressions (\[gaa\])-(\[gbb\]) for the propagators. Spin wave spectra, for different values of second-neighbor hopping, $t'$, are plotted in Figures \[afsw\] and \[afswtp\]. In the large $U$ limit, spin wave energies agree with those obtained from the Holstein-Primakoff theory of the Heisenberg model. We give an analytical derivation of this limit in Appendix \[appendsuslarge\]. The Holstein-Primakoff result for the Heisenberg model in the honeycomb lattice, which is derived in Appendix \[appendHP\], can be written as $$\omega_{HP}(\bm q)= JS\sqrt{z^2-\vert\phi(\bm q)\vert^2}\,.$$ This result can be mapped onto the Hubbard model provided that $J=4t^2/U$ and $S=1/2$. Figure \[afsw\] shows the spin wave energies for the 2D lattice ($t'=0$) along a closed path in the Brillouin Zone. Energies in Figure \[afsw\] are normalized by the Holstein-Primakoff result at the $K$-point, $\omega_{HP}(K)$ (see Figure \[honey\]). It can be seen that the results for $U=8$ are very close to the asymptotic behavior of the RPA, whereas, for smaller $U$, the spin wave energy is reduced. The effect of $t'$ on $\omega(\bm q)$ is depicted in Figure \[afswtp\]. Of particular interest is the fact that the dispersion along the $X-K$ direction is almost absent for $U\ge 4$. The presence of $t'$ does not change this effect.
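At the zone center the Holstein-Primakoff branch is gapless (the Goldstone mode), while its maximum, $zJS$, occurs at the $K$-point, consistent with the normalization by $\omega_{HP}(K)$ used above. A minimal evaluation (our own sketch; $t=a=1$ so that $\vert\phi(\bm q)\vert$ reduces to the bare nearest-neighbor sum, with $z=3$, $J=4t^2/U$, $S=1/2$, and the sample value $U=8$):

```python
import numpy as np

t, a, U = 1.0, 1.0, 8.0
J, S, z = 4 * t**2 / U, 0.5, 3   # large-U exchange, spin, coordination number
deltas = np.array([[a / 2,  np.sqrt(3) * a / 2],
                   [a / 2, -np.sqrt(3) * a / 2],
                   [-a, 0.0]])

def omega_hp(qx, qy):
    """Holstein-Primakoff magnon energy J*S*sqrt(z^2 - |phi_q|^2) (t = 1)."""
    phi_q = abs(np.exp(1j * (deltas @ [qx, qy])).sum())
    return J * S * np.sqrt(max(z**2 - phi_q**2, 0.0))

K = (2 * np.pi / (3 * a), 2 * np.pi / (3 * np.sqrt(3) * a))
assert np.isclose(omega_hp(0.0, 0.0), 0.0)   # Goldstone mode at the zone center
assert np.isclose(omega_hp(*K), z * J * S)   # maximum at the Dirac point K
```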
Magnetic instabilities {#instabilities} ====================== The magnetic instabilities in the paramagnetic phase can be obtained from the divergence of the RPA susceptibilities at critical values of the interaction, $U_c$, driving the system towards a magnetically ordered phase. At a given electron density $n$ we always find two instability solutions, one ferromagnetic and one antiferromagnetic; the one with the lower free energy is realized. Since $U_c$ is determined from $D_{+-}(\bm q,0)=0$, taking into account that $\chi_{+-}^{(0)aa}=\chi_{+-}^{(0)bb}$ and $\chi_{+-}^{(0)ab}=(\chi_{+-}^{(0)ba})^{\ast}$ in the paramagnetic phase, we obtain: $$U_c = \frac 1 {\chi_{+-}^{(0)aa}\pm \vert \chi_{+-}^{(0)ab} \vert}. \label{Ucrit}$$ Figure \[2Dcrit\] shows $U_c$, obtained from the static uniform susceptibilities ($\bm q =\bm 0$ and $\omega=0$), as a function of the electron density for various values of $t'$. Detailed equations for the instability lines are given in Appendix \[appendsusuc\]. The left panel of Figure \[2Dcrit\] refers to the 2D case, corresponding to a single honeycomb layer, whereas the right panel refers to the 3D system with a constant interlayer hopping $t''=0.1$. The van Hove singularity (associated with the $X$ point) plays an important role at density $n=0.75$ in the 2D case, independently of $t'$. As we have already mentioned, the two solutions of Eq. (\[Ucrit\]) correspond to two different magnetic transitions, one between a paramagnetic phase and a ferromagnetic phase and another between a paramagnetic phase and an antiferromagnetic phase. That this is so can easily be confirmed by solving the self-consistent equations for the ferromagnetic and the antiferromagnetic magnetizations, respectively, derived from the HF Hamiltonian (\[hamiltHF\]).
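At half filling the antiferromagnetic solution of Eq. (\[Ucrit\]) can be evaluated in closed form. A minimal numerical sketch (our illustration, not the paper's code), using the static-limit expressions derived in Appendix \[appendsusuc\] with $t'=t''=0$ (so $D(\bm k)=0$) and an assumed standard set of nearest-neighbor vectors, in units of $t=1$:

```python
import numpy as np

# Sketch: AF critical coupling at half filling for a single layer, where
# the appendix result reduces to  U_c^- = 2 / < 1/|phi(k)| >_BZ .
deltas = np.array([[0.5, np.sqrt(3)/2], [0.5, -np.sqrt(3)/2], [-1.0, 0.0]])
b1 = 2*np.pi*np.array([1/3, 1/np.sqrt(3)])   # reciprocal lattice vectors
b2 = 2*np.pi*np.array([1/3, -1/np.sqrt(3)])

N = 400                                  # linear grid size
frac = (np.arange(N) + 0.5) / N          # half-step shift dodges |phi| = 0
u, v = np.meshgrid(frac, frac, indexing="ij")
kx = u*b1[0] + v*b2[0]
ky = u*b1[1] + v*b2[1]
phi = sum(np.exp(1j*(kx*d[0] + ky*d[1])) for d in deltas)

Uc_af = 2.0 / np.mean(1.0/np.abs(phi))
print(Uc_af)    # mean-field estimates in the literature put this near 2.2 t
```

The $1/|\phi_{\bm k}|$ integrand diverges only at the Dirac points, where it is integrable, so the Brillouin-zone average converges with grid refinement.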
By minimizing the free energy with respect to the magnetization, one finds the following expressions for the ferromagnetic and antiferromagnetic magnetizations $$m_F=\frac 1 {2N} \sum_{\bm k \sigma} \sigma (f(E^{\sigma}_+) +f(E^{\sigma}_-))\,, \hspace{1.0cm} m_{AF} = \frac 1 N \sum_{\bm k} \frac {\vert \zeta_{\bm k} \vert} {\sqrt{1+\zeta^{2}_{\bm k}}} (f(E_-)-f(E_+))\,, \label{selfconsmagn}$$ where $f(x)$ is the Fermi function and $\zeta_{\bm k}=Um_{AF}/(2 \vert \phi_{\bm k} \vert)$. Letting both $m_F$ and $m_{AF}$ approach zero, one obtains the same lines as those in Figure \[2Dcrit\]. Generally speaking, for electron densities lower than $0.85$, the value of $U_c$ that separates the paramagnetic region from the ferromagnetic region is lower than the corresponding value of $U_c$ separating the paramagnetic region from the antiferromagnetic region. The critical $U$ associated with the ferromagnetic instability increases with $t'$. The size of the paramagnetic region in Figure \[2Dcrit\] increases with $t'$. On the other hand, for $t'=0.2$, we see that the critical line for the ferromagnetic region is very close to the critical line of the antiferromagnetic region. Therefore, the ferromagnetic region progressively shrinks with increasing $t'$. If we now turn to densities larger than $0.85$, we find that the antiferromagnetic critical line is the one with the lowest $U_c$. However, in contrast to lower densities, the antiferromagnetic critical line hardly changes when $t'$ is varied. This description applies equally well to the single honeycomb layer and to weakly coupled layers, even though the quantitative functional dependence of $U_c$ on $n$ is different in the two cases, the main difference coming from the van Hove singularity present in the 2D case. At finite temperature the van Hove singularity is rounded off and the 2D phase diagram becomes much more similar to the 3D one.
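At half filling and $T=0$ the antiferromagnetic gap equation above closes on itself, since $f(E_-)-f(E_+)=1$. A small self-consistency sketch (ours, with an assumed standard set of nearest-neighbor vectors, $t=1$):

```python
import numpy as np

# Sketch: T = 0, half-filled AF gap equation from Eq. (selfconsmagn),
#   m = (1/N) sum_k (U m/2) / sqrt((U m/2)^2 + |phi_k|^2),
# solved by plain fixed-point iteration.
deltas = np.array([[0.5, np.sqrt(3)/2], [0.5, -np.sqrt(3)/2], [-1.0, 0.0]])
b1 = 2*np.pi*np.array([1/3, 1/np.sqrt(3)])
b2 = 2*np.pi*np.array([1/3, -1/np.sqrt(3)])
N = 300
frac = (np.arange(N) + 0.5) / N
u, v = np.meshgrid(frac, frac, indexing="ij")
kx, ky = u*b1[0] + v*b2[0], u*b1[1] + v*b2[1]
abs_phi2 = np.abs(sum(np.exp(1j*(kx*d[0] + ky*d[1])) for d in deltas))**2

def m_af(U, iters=200):
    m = 0.5                          # initial guess; map is a contraction here
    for _ in range(iters):
        x = U * m / 2.0
        m = np.mean(x / np.sqrt(x**2 + abs_phi2))
    return m

# Consistency check with the large-U expansion of Appendix [appendsuslarge],
# Eq. (1menos1m): 1 - 1/m ~ -6/(U^2 m^3) gives m ~ 0.9 for U = 8.
print(m_af(8.0))
```

The iteration converges quickly because the right-hand side is bounded by one and has slope well below unity at the fixed point.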
We therefore consider that a weak 3D inter-layer coupling does not qualitatively modify the conclusions valid for the 2D case. Besides collinear spin phases, the system may also present non-collinear – spiral – spin phases in some regions of the phase diagram. We now study how the critical $U$ values determining the instability of the paramagnetic phase change if we allow for non-collinear ground states, since it is well known that the Hubbard model on bipartite and non-bipartite lattices can have the lowest $U_c$ for spiral spin phases [@hanisch; @krishnamurthy; @kampf] for some electronic densities. In a spiral state, the spin expectation value at site $i$, belonging to sublattice $\nu=a,b$, is given by [@subir] $${\langle}\bm {S^{\nu}_i} {\rangle}= \frac {m_{\nu}} 2 (\cos(\bm q \cdot \bm {R^{\nu}_i}), \sin(\bm q \cdot \bm {R^{\nu}_i})).$$ If $\bm q \ne \bm 0$, the ferromagnetic and antiferromagnetic spin configurations become twisted. We shall refer to the twisted $\bm q \ne \bm 0$ configurations as ’$F_q$’ whenever $m_A=m_B$, and ’$AF_q$’ whenever $m_A=-m_B$. The criterion for choosing the $\bm q$-vectors is taken directly from the geometry of the lattice, by requiring a constant angle between spins on neighboring sites, i.e. $\bm q \cdot \bm {\delta_{1}}=\bm q \cdot \bm {\delta_{2}}=-\bm q \cdot \bm {\delta_{3}}$. Unfortunately, however, this cannot be achieved in the honeycomb lattice with only one $\bm q$-vector. The closest one can get to a ’true’ spiraling state is by letting $\bm q \cdot \bm {\delta_{1}} =-\bm q \cdot \bm {\delta_{3}}$ (or, equivalently, $\bm q \cdot \bm {\delta_{2}} =-\bm q \cdot \bm {\delta_{3}}$), which implies that $\bm q=(q_x,q_y) =q_x(1,\frac 1 {\sqrt{3}})$ ($\bm q =q_x(1,\frac {-1} {\sqrt{3}})$). For the moment we let $q_z$ be zero, which means that we consider identical layers.
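The dot products behind this choice of $\bm q$ can be checked directly. A small sketch (ours), with an assumed standard set of nearest-neighbor vectors $\bm\delta_i$ that reproduces the relation $\bm q=q_x(1,1/\sqrt 3)$ quoted above:

```python
import numpy as np

# Geometry check: for q = q_x (1, 1/sqrt(3)) the relaxed constraint
# q.delta_1 = -q.delta_3 holds, and q.delta_2 = 0, i.e. the spin angle
# does not advance along delta_2 bonds, as described in the text.
d1 = np.array([0.5, np.sqrt(3)/2])
d2 = np.array([0.5, -np.sqrt(3)/2])
d3 = np.array([-1.0, 0.0])

qx = np.pi / 6                            # one of the discrete values used later
q = qx * np.array([1.0, 1.0/np.sqrt(3)])

print(q @ d1, q @ d2, q @ d3)             # q_x, 0, -q_x
```

Thus the spin angle advances by $q_x$ per bond along $\bm\delta_1$ and $-\bm\delta_3$, and is constant along $\bm\delta_2$, which is exactly the partial spiral discussed next.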
The condition $\bm q \cdot \bm {\delta_{1}}=-\bm q \cdot \bm {\delta_{3}}$ means that the increase in spin angle between two lattice sites in the $-\bm {\delta_{3}}$ direction is the same as the increase in spin angle between two lattice sites in the $\bm {\delta_{1}}$ direction. There is no increase in the spin angle in the $\bm {\delta_{2}}$ direction. Examples of the spin configurations obtained in this way are shown in Figures \[example\_1\_q=pi/6\] and \[example\_2\_q=2pi/3\]. Several notes are in order at this stage. First, although we do not have a ’true’ spiraling state over the whole lattice, we do have a spiraling configuration in the $-\bm {\delta_{3}}$ and $\bm {\delta_{1}}$ directions, as can be seen from Figures \[example\_1\_q=pi/6\] and \[example\_2\_q=2pi/3\], going from the lower left to the upper right. Secondly, when travelling along the $\bm {\delta_{2}}$ direction, the spin angles do not increase. Instead, neighboring spins in this direction are always aligned ferromagnetically when $m_A=m_B$, and antiferromagnetically when $m_A=-m_B$. However, two successive $\bm {\delta_{2}}$ bonds (’sliding down’ the lattice from left to right) have the same increase in spin angle as any two neighbors connected by $-\bm {\delta_{3}}$ or $\bm {\delta_{1}}$. The $\bm q$-vector (i.e. the spin configuration) that a system with a given density would prefer is the one with the lowest value of $U_c(\bm q)$. In Figure \[minangle\] we present a curve showing the $\bm q$ vectors that minimize $U_c (\bm q)$, as a function of the particle density $n$. We consider discrete values $q_x=i \frac {\pi} {12}$ with $i=0,1,...,12$. The dependence on $t'$ is overall the same as that discussed for $\bm q = \bm 0$ (for example, the shrinking effect with increasing $t'$ is also seen here). There is no reason to restrict $\bm q$ to integer multiples of $\frac {\pi} {12}$ other than a purely computational one.
By performing the same calculation with more $\bm q$-vectors, the step-function-like appearance of the lower graphs of Figure \[minangle\] can be smoothed out. Our analysis is sufficient, however, to gain insight into how the $\bm q$ vectors (which minimize $U_c$) vary with $n$. The solid line limiting the paramagnetic region is shown in the lower graphs of Figure \[minangle\]. We see that the behavior of $q_x$, as a function of $n$, is almost the same for the 2D and 3D cases. As the system approaches half filling, the preferred spin configuration approaches that with $\bm q= \bm 0$. In a doped system, however, minimization of $U_c(\bm q)$ is attained for a non-zero $\bm q$. It is also seen that the dependence of $\bm q$ on $n$ is not monotonic. In both 2D and 3D, $q_x$ goes all the way from $0$ (at $n=1$) to $\pi$, displaying two local maxima (and a local minimum in between) as $n$ ranges from $1$ towards $0$. The value of $q_x$ reaches a local minimum at $q_x=\frac {7\pi} {12}$, at $n=0.37$ (in 2D) or at $n=0.45$ (in 3D). For even lower densities, $q_x$ attains another maximum at $q_x=\pi$, which means that the spins of any two nearest neighbors, in the $-\bm {\delta_{3}}$ and $\bm {\delta_{1}}$ directions, point in exactly opposite directions. The same type of behavior is seen also for the critical line separating magnetically ordered phases (dashed line). Again, the 2D and the 3D cases are very similar to each other. For densities around $0.30-0.35$ (2D) and $0.35-0.40$ (3D), $q_x=\frac {8\pi} {12}$ yields the lowest $U_c$. Moreover, the solid and the dashed lines coincide, illustrating the previously mentioned ferromagnetic ’shrinking out’ effect. In other words, for $q_x=\frac {8\pi} {12}$, the two solutions of $U_c(\bm q)$ almost coincide for all $n$, leaving only a thin strip of ferromagnetism between the paramagnetic and the antiferromagnetic regions.
Although this is true for all $n$, it is only for $n=0.37-0.40$ (2D case) and $n=0.35-0.37$ (3D case) that $U_c(q_x=\frac {8\pi} {12})$ is minimum. So far, our analysis has been restricted to $\bm q$-vectors lying in the $x-y$ spin plane. This means that two inter-layer neighbors have the same spin. If we now consider neighboring layers with opposite spins, we put $q_z=\pi$. At half filling, the lowest $U_c(0,0,\pi)=2.04$ limiting the paramagnetic region is lower than the corresponding $U_c(0,0,0)=2.35$, independently of $t'$. Moreover, for $n=1$, $U_c(q_x,\frac {q_x} {\sqrt{3}},\pi)$ is always lower than $U_c(q_x,\frac {q_x} {\sqrt{3}},0)$ for any $q_x$, showing that, at half filling, we should expect antiferromagnetic ordering along the $z$-direction. The study above focused on the second-order instability lines for both collinear and spiral spin phases; it is clear that spiral states have a lower critical $U$ value over a large range of electronic densities. It is instructive to compare our results with those of Ref. [@hanisch]. Looking at Fig. 2 of Ref. [@hanisch] we see that for the triangular lattice there are some finite regions where the more stable ground states correspond to spiral states. These regions are located at electronic densities smaller than 0.5 and larger than 0.8. Since the honeycomb lattice consists of two inter-penetrating triangular lattices, we expect the same type of behavior, at least at the qualitative level. That is, we do expect to have finite regions of the phase diagram where spiral phases have the lowest energy. Also, the authors of Ref. [@hanisch] do not discuss the full phase diagram of the Hubbard model in the honeycomb lattice, as we do in the next section. They are primarily interested in the stability of the Nagaoka state. Their study is done using three different approaches: (i) the Hartree single-flip ansatz; (ii) the SKA Gutzwiller ansatz; (iii) the Basile-Elser ansatz.
A comparison can be made between the Hartree single-flip ansatz, which, roughly speaking, produces a straight line for all densities at an on-site Coulomb interaction $U\sim 5$, and our self-consistent Hartree-Fock study. If we forget, for a moment, the van Hove singularity, both results are qualitatively the same for $n$ up to 0.8. Above this value our Hartree-Fock analysis, ignoring the existence of the antiferromagnetic phase, predicts a very strong increase of the critical $U$ value (not shown in Fig. \[2Dcrit\], since the AF phase presents the lowest critical $U$-value), in agreement with the SKA ansatz. This behavior is not captured by the Hartree single-flip ansatz. It seems that our study interpolates between the Hartree single-flip ansatz for low densities and the SKA ansatz for densities above 0.8. Quantitatively there are differences between the two studies, which are understandable on the basis of the different types of proposed ground states. Phase diagram {#phasediag} ============= As we mentioned in the previous section, the study of Ref. [@hanisch] is mainly concerned with the stability of the Nagaoka state, whereas in the previous section we studied the values of the Hubbard interaction associated with instabilities of the paramagnetic system. The transition from the paramagnetic to a magnetically ordered state is determined by the lowest $U_c$. Since we have found at least two possible types of ground states (ferro- and antiferromagnetic), in the case where the interaction is stronger than both critical values we need to address the problem of competition between the two ordered phases. The phase with the lowest free energy is the one preferred by the system. In this section we restrict ourselves to the study of a single layer, but we shall consider different band structures.
Spiral states will not be considered, since we are most interested in a weak ferromagnetic phase showing up in a region of the phase diagram where the studies of Ref. [@hanisch] suggest that the collinear ferromagnetic (fully polarized) phase should be the most stable one. In the ferromagnetic phase we distinguished two types of ferromagnetic ground states: the Nagaoka ground state, with a maximally polarized spin ($m_F=n$), and a weak ferromagnetic state with $m_F<n$. The order parameter and free energies were obtained from the mean field Hamiltonian (\[hamiltHF\]). Figure \[phase\] shows the ground-state $(n,U)$-phase-diagram of the model. The effect of $t'$ on the phase diagram can be seen in the right panel of Fig. \[phase\]. In Figure \[phase\] the dashed lines represent first-order phase transitions, where the order parameter does not vanish smoothly, while continuous lines represent second-order transitions, where the order parameter vanishes smoothly but its first derivative is discontinuous. In both cases ($t'=0$ and $t'\ne 0$) we find a finite region of weak ferromagnetism. In general the Nagaoka phase is more stable for large $U$. The weak ferromagnetic phase is separated from the Nagaoka phase by first- or second-order transition lines, depending on the path followed in the $(U,n)$ diagram. The second-order transition manifests itself through a discontinuity of the derivative of the magnetization with respect to $U$. At $n=0.75$ the instability line towards the ferromagnetic phase shows a dip (pronounced if $t'=0$), which is due to the logarithmic van Hove singularity at $n=0.75$. A negative $t'$ produces two effects on the phase diagram: (i) the instability line towards the $F$ phase moves downwards; (ii) the point where the instability lines towards $F$ and $AF$ meet moves to larger $n$. Similarly to what was found in the previous section, the overall effect of $t'$ is to modify the ferromagnetic region of the phase diagram.
Further, for negative $t'$ we expect collinear ferromagnetism to exist over a larger region of the phase diagram relative to the case $t'\ge 0$, since it is well known that a negative $t'$ stabilizes the ferromagnetic phase. On the other hand, we do not expect the phase diagram presented in this section to be fully accurate for low densities, where the findings of Ref. 12 should apply. As far as the total magnetization is concerned, the first-order critical lines separate two different ferromagnetic regions (or a ferromagnetic from an antiferromagnetic region). In view of the results published in Ref. [@burgy], where a first-order transition between two competing phases is transformed by disorder into two second-order phase transitions, we expect the same behavior to apply here; that is, disorder may change the order of the transition, since the arguments put forward in Ref. [@burgy] are of a very general nature. It would be very interesting to study whether the introduction of disorder in the system could change the nature of the first-order transitions. Quantum fluctuations {#fluctu} ==================== This section is devoted to the calculation of quantum fluctuation corrections to the magnetization. An analogous calculation for the Hubbard model in the square lattice in the $t/U\rightarrow 0$ limit was sketched by Singh and Tešanović.[@singh] The computation of the renormalized staggered magnetization requires the evaluation of the Feynman diagram shown in Figure (\[trovao\]), which shows the second order (in the interaction $U$) contribution to the self-energy. The diagram describes the emission and subsequent absorption of a spin wave by an up-spin electron. The emission and absorption processes are accompanied by electron spin reversal. This effect, consisting of virtual spin flips, renormalizes the staggered magnetization.
The spin-${\uparrow}$ electron Green’s function is $${\cal G}_{\uparrow}({\bf p} , i\omega) = {\cal G}_{\uparrow}^0({\bf p} , i\omega) + {\cal G}^0_{\uparrow}({\bf p} , i\omega) \Sigma_{\uparrow}({\bf p} , i\omega) {\cal G}_{\uparrow}({\bf p} , i\omega)\,,$$ hence ${\cal G}^{-1}=[{\cal G}^{(0)}]^{-1} - \Sigma$. Here, ${\cal G}^0$ denotes the Hartree-Fock Green’s functions matrix appearing in equations (\[gaa\])-(\[gbb\]). The self-energy matrix is given by $$\Sigma_{{\uparrow}}^{ij}({\bf p} , i\omega) = U^2\frac{T}{N}\sum_{i\Omega,{\bf q}} {\cal G}_{\downarrow}^{(0)ij}({\bf p - q} , i\omega - i\Omega) \chi_{-+}^{(RPA)ij} ({\bf q} ,i\Omega)\,, \label{selfenergy}$$ where $i,j$ are sublattice indices. The self-energy for a ${\downarrow}$-spin electron would be similar to that in equation (\[selfenergy\]), with the ${\cal G}^{(0)ij}$ spin reversed and $\chi_{-+}$ replaced with $\chi_{+-}$. The renormalized staggered magnetization at $T=0$ is given by $$\bar m =- \frac 1 {N}\sum_{\bm k \sigma}\int_{-\infty}^0\frac{d {\omega}}{2\pi} \sigma [Im\,G^{aa}_{\sigma ,{\rm Ret}}(\bm k, {\omega})- Im\,G^{bb}_{\sigma ,{\rm Ret}}(\bm k,{\omega})]\,,$$ where $Im\,G^{ij}_{\sigma ,{\rm Ret}}(\bm k, {\omega})$ stands for the imaginary part of the retarded Green’s function for a spin $\sigma$ electron. The RPA susceptibility has poles corresponding to the spin waves calculated in section \[collective\], with energy $\approx \vert \phi({\bf k})\vert^2/U$, but it also has poles describing a particle-hole continuum of excitations at higher energies (of order $U$). In what follows we ignore this particle-hole continuum and take into account only the contribution from the spin wave poles to the self-energy. Physically, this means that we shall calculate the magnetization renormalized by the spin waves.
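The matrix Dyson equation above resums, in sublattice space, to ${\cal G}=\big([{\cal G}^{0}]^{-1}-\Sigma\big)^{-1}$. A toy numerical check of this resummation (the $2\times 2$ matrices below are arbitrary illustrative numbers, not the actual Hartree-Fock propagator or self-energy):

```python
import numpy as np

# Sketch: verifies the 2x2 (sublattice-space) Dyson resummation
#   G = G0 + G0 @ Sigma @ G   <=>   G = inv(inv(G0) - Sigma).
G0 = np.array([[0.8 + 0.1j, 0.2], [0.1, 0.9 - 0.2j]])     # illustrative only
Sigma = np.array([[0.3, 0.05 + 0.02j], [0.05, 0.25]])     # illustrative only

G = np.linalg.inv(np.linalg.inv(G0) - Sigma)

# The closed form satisfies the original recursive equation:
assert np.allclose(G, G0 + G0 @ Sigma @ G)
```

Note in particular that it is the self-energy itself, not its inverse, that is subtracted from $[{\cal G}^{0}]^{-1}$.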
To this end, we start by replacing the susceptibility in equation (\[selfenergy\]) by the expression $$\chi^{(RPA)ij} ({\bf q} , i\omega)= \frac{R^{ij}[\omega({\bf q})]} {i\omega - \omega({\bf q})} + \frac{R^{ij}[-\omega({\bf q})]}{i\omega + \omega({\bf q})}\,, \label{aldrabice}$$ where $R^{ij}[\pm\omega({\bf q})]$ denotes the residue of $\chi_{-+}^{(RPA)ij}$ at the spin wave pole with dispersion $\omega({\bf q})$. Equation (\[aldrabice\]) describes an effective spin wave propagator. After performing the Matsubara frequency summation in equation (\[selfenergy\]) we obtain: $$\begin{aligned} \Sigma_{{\uparrow}}^{ij}({\bf p} , i\omega) = \frac{U^2}{N}\sum_{\bf q} &\Big[&\ \frac{{\rm num}\{ {\cal G}_{{\downarrow},-}^{(0)ij}({\bf p - q})\} R^{ij}[-\omega({\bf q})]} {i\omega+\omega({\bf q}) + E_+({\bf p - q})} - \frac{{\rm num}\{ {\cal G}_{{\downarrow},+}^{(0)ij}({\bf p - q})\} R^{ij}[\omega({\bf q})]} {i\omega-\omega({\bf q}) - E_+({\bf p - q})}\Big] \label{selfaprox}\end{aligned}$$ where we have introduced the notation ${\rm num}\{ {\cal G}_{\sigma,b}^{(0)ij}\}$ for the numerators of the Green’s functions, as expressed in equations (\[gaa\]) through (\[coerenc\]). In Figure \[magnetizations\] we show the renormalized magnetization versus $U$. The Hartree-Fock magnetization is also shown in Figure \[magnetizations\] for comparison. The calculation was performed for three different lattice sizes. It can be seen that convergence does not require a very large number of ${\bf k}$ points in the Brillouin Zone. This is not surprising because the Hartree-Fock magnetization itself already converges to the correct value in a 40$\times$40 lattice. We have also checked that the RPA propagators return the original electron density $n=1$, meaning that no spectral weight was lost in the approximation used for the self-energy. In the large $U$ limit, the renormalized magnetization saturates at about $67\%$ of the (fully polarized) mean field value.
This is in qualitative agreement with the Holstein-Primakoff result for the $S=1/2$ Heisenberg model in the honeycomb lattice, which predicts a ground state magnetization of $48\%$. We should remark, however, that the spin wave spectrum calculated within RPA theory has shown much better agreement with experimental results for Mott-Hubbard antiferromagnetic insulators than the Holstein-Primakoff theory [@la2cuo4; @poznan]. In Figure \[ImG\] we show the imaginary part of the electron’s Green’s function at negative frequencies, on both sublattices, for two different values of $U$. It is clear that, for strong couplings, part of the Hartree-Fock spectral weight is shifted to the bottom of the (negative) energy band. This shifting of the spectral weight is responsible for the renormalization of the staggered magnetization. It is interesting to see that for low $U$ the spectral weight is most significant at high energy, in the interval $[-2,0)$, with a much smaller weight in the interval $(-4,-2)$. At a stronger Hubbard interaction most of the high energy spectral weight (previously in the interval $[-2,0)$) has been displaced to lower energies and become localized around well defined energies, whereas the spectral weight at intermediate energy (in the interval $(-4,-2)$) remains essentially unchanged. Therefore, increasing the Hubbard coupling has the effect of displacing the distribution of spectral weight from the top to the bottom of the energy band. Finally, a comment regarding approximation (\[aldrabice\]).
The commutation relation between the spin raising and lowering operators, $$\sum_{{\bf p},{\bf p'}} [ \ {\hat a}_{{\bf p},{\downarrow}}^\dagger {\hat a}_{{\bf p}+{\bf q},{\uparrow}}, {\hat a}_{{\bf p'}+{\bf q},{\uparrow}}^\dagger {\hat a}_{{\bf p'},{\downarrow}}] = \sum_{\bf p} \Big({\hat a}_{{\bf p},{\downarrow}}^\dagger {\hat a}_{{\bf p},{\downarrow}} - {\hat a}_{{\bf p},{\uparrow}}^\dagger {\hat a}_{{\bf p},{\uparrow}}\Big) \,,$$ is equivalent to the following relation between the Hartree-Fock magnetization, $m$, and the transverse susceptibilities: $$\begin{aligned} \chi_{-+}^{aa} ({\bf q},\tau=0^+)-\chi_{-+}^{aa} ({\bf q},\tau=0^-) &=& \oint_{-i\infty}^{+i\infty} \frac{-idz}{2\pi} \chi_{-+}^{aa}(z)e^{-z 0^+} - \oint_{-i\infty}^{+i\infty} \frac{-idz}{2\pi} \chi_{-+}^{aa}(z)e^{z 0^+} \nonumber \\ &=& -m \,, \label{relation}\end{aligned}$$ at $T=0$. The integration of the term $e^{-z 0^+}$ ($e^{z 0^+}$) is performed along the semi-circular contour on the right (left) half of the complex plane. Approximation (\[aldrabice\]) would predict $$R^{bb}[\omega(\vec q)] - R^{aa}[\omega(\vec q)] = m\,. \label{somaregra}$$ Indeed, we have checked that our numerical calculation of the residues satisfies (\[somaregra\]) to an accuracy of $1.3\%$. Final Remarks {#remarks} ============= In this paper we have studied the magnetic properties of the Hubbard model in honeycomb layers. Our study focused on the instabilities of the paramagnetic phase, on the magnetic phase diagram, and on the collective excitations of the half-filled phase. Of particular interest is the fact that it is not possible to describe a true spiraling state in the honeycomb lattice, as opposed to the usual cubic case. As a consequence, the magnetic spiral order follows a kind of one-dimensional path over the 2D lattice. This kind of ordering, here studied at the mean field level, may have important consequences for the study of spin-charge separation in 2D lattices.
Also interesting was the identification of two types of ferromagnetic order, which had eluded previous studies. For moderate values of $U$ and electron densities not far from the half-filled case, a region of weak ferromagnetism was found to have lower energy than the more usual Nagaoka ferromagnetic phase. The renormalization effect of the spin wave excitations on the Hartree-Fock magnetization was also studied. However, our calculation does not take into account the renormalization of the mean field critical $U$. It is well known that quantum fluctuations should induce an increase in the value of $U_c$. Our calculation cannot capture this effect, since it only takes into account the effect of well defined spin waves. We believe, however, that the calculation can be extended to include the effect of high-energy damped particle-hole processes leading to a renormalization of $U_c$, but this would require a modification of our numerical calculations and a significant increase in computational time. Useful expressions for the $U_c$ critical lines at $\bm q=0$ {#appendsusuc} ============================================================ In this appendix, we derive the equations for the critical lines from the static susceptibilities ($\bm q=\bm 0$ and $\omega=0$). Our starting point is the zero order spin-spin susceptibility in equation (\[chi0\]). The Green’s functions in the paramagnetic region are obtained from equations (\[gaa\])-(\[gbb\]) after setting the magnetization to zero.
Performing the Matsubara summations in (\[chi0\]), carrying out the analytical continuation, and taking the zero-frequency limit, we obtain $$\begin{aligned} \chi^{(0)aa}_{+-,0}(\bm q,0)&=&\frac 1 4 \sum_{\bm k} ( M_{++}(\bm k,\bm q)+M_{+-}(\bm k,\bm q)+ M_{-+}(\bm k,\bm q)+M_{--}(\bm k,\bm q) ) \\ \chi^{(0)ab}_{+-,0}(\bm q,0)&=&\frac 1 4 \sum_{\bm k} e^{i(\psi_{\bm k-\bm q}-\psi_{\bm k})} ( M_{++}(\bm k,\bm q)-M_{+-}(\bm k,\bm q)-M_{-+}(\bm k,\bm q)+M_{--}(\bm k,\bm q) ) \\ M_{\alpha,\beta}(\bm k,\bm q) &=&\frac {\theta(E_\alpha (\bm k))-\theta(E_\beta (\bm k -\bm q))} {E_\alpha (\bm k)-E_\beta (\bm k -\bm q)}\,,\end{aligned}$$ where $\psi_{\bm k}=\arg (\phi_{\bm k})$. The critical interaction strength, $U_c$, is given by $U_c/ N = [\chi^{(0)aa}_{+-,0} \pm \vert \chi^{(0)ab}_{+-,0} \vert\ ]^{-1}$, in the limit $\bm q \rightarrow \bm 0$. Expanding all $\bm q$-dependent quantities around the point $\bm q = \bm 0$ up to first order, we obtain $$\begin{aligned} \chi^{(0)aa}_{+-,0}(\bm q,0)=\frac 1 4 \sum_{\bm k} \delta(E_{+}(\bm k))+\delta(E_{-}(\bm k)) +\frac {\theta( \vert \phi_{\bm k} \vert - \vert D(\bm k) \vert)} {\vert \phi_{\bm k} \vert}+\bm q \cdot ( ... )+... \\ \chi^{(0)ab}_{+-,0}(\bm q,0)=\frac 1 4 \sum_{\bm k} \delta(E_{+}(\bm k))+\delta(E_{-}(\bm k)) -\frac {\theta( \vert \phi_{\bm k} \vert - \vert D(\bm k) \vert)} {\vert \phi_{\bm k} \vert}+\bm q \cdot ( ... )+...
\end{aligned}$$ Inserting this result in the expression for $U_c$ gives (for $\bm q=0$): $$\begin{aligned} \left(\frac {U_c} N\right)^{-1}_+&=&\frac 1 2 \sum_{\bm k} \{\delta[E_{+}(\bm k)]+\delta[E_{-}(\bm k)]\} \label{plus}\\ \left(\frac {U_c} N\right)^{-1}_-&=&\frac 1 2 \sum_{\bm k} \frac {\theta( \vert \phi_{\bm k} \vert - \vert D(\bm k) \vert)} {\vert \phi_{\bm k} \vert}\label{minus}.\end{aligned}$$ We recognize the density of states, $\rho(\epsilon)=\frac 1 N \sum_{\bm k} \{ \delta(E_{+}(\bm k)+\mu-\epsilon)+\delta(E_{-}(\bm k)+\mu-\epsilon) \}$, appearing in equation (\[plus\]), which is just the Stoner criterion. The critical interaction strengths are given by $$\begin{aligned} U_{c,+}&=&\frac 2 {\rho(\mu)} \\ U_{c,-}&=&\frac 2 {\frac 1 N \sum_{\bm k} \frac {\theta( \vert \phi_{\bm k} \vert - \vert D(\bm k) \vert)} {\vert \phi_{\bm k} \vert}}\,.\end{aligned}$$ Note that all $t'$ and $t''$ dependence is contained in $D(\bm k)$. Of course, these equations could also have been obtained by taking the limit $m_F, m_{AF} \rightarrow 0$ in equation (\[selfconsmagn\]). Large $U$ results for the susceptibilities and spin waves {#appendsuslarge} ========================================================= We give asymptotic expressions for the susceptibilities $\chi_{+-}^0(z,{\bf q})$ and the spin wave dispersion for a half-filled honeycomb antiferromagnetic layer with nearest neighbor hopping. In this case, the chemical potential $\mu=0$ and the two energy bands are given by $E({\bf k})_{\pm} = \pm \sqrt{\Big(\frac {Um} 2\Big)^2 +\vert \phi_{\bm k}\vert^2}$.
The expressions for the coherence factors appearing in the single electron propagators, expanded up to second order in $t/U$, are: $$\begin{aligned} |A_{{\uparrow},+}({\bf k})|^2 &=& |A_{{\downarrow},-}({\bf k})|^2 =|B_{{\downarrow},+}({\bf k})|^2 =|B_{{\uparrow},-}({\bf k})|^2 \approx \frac{|\phi({\bf k})|^2}{U^2m^2}\\ |A_{{\uparrow},-}({\bf k})|^2 &=& |A_{{\downarrow},+}({\bf k})|^2 =|B_{{\downarrow},-}({\bf k})|^2 =|B_{{\uparrow},+}({\bf k})|^2 \approx 1-\frac{|\phi({\bf k})|^2}{U^2m^2}\\ A_{{\downarrow},-}^*({\bf k})B_{{\downarrow},-}({\bf k})&=&-A_{{\uparrow},+}^*({\bf k})B_{{\uparrow},+}({\bf k}) \nonumber \\ &=& A_{{\uparrow},-}^*({\bf k})B_{{\uparrow},-}({\bf k})=-A_{{\downarrow},+}^*({\bf k})B_{{\downarrow},+}({\bf k}) \approx \frac{\phi^*({\bf k})}{Um}\end{aligned}$$ We therefore may use the approximate expressions for the $\chi_{+-}^0$ susceptibilities: $$\begin{aligned} \chi^{(0)aa}(z,{\bf q})&\approx& -\frac{1}{N} \sum_{\bf k} \frac{1}{z- E({\bf k}) - E({\bf k}+{\bf q})}\Big(1- \frac{|\phi({\bf k})|^2+|\phi({\bf k}+{\bf q})|^2}{U^2 m^2}\Big)\label{chaa}\\ \chi^{(0)bb}(z,{\bf q})&\approx& \frac{1}{N} \sum_{\bf k} \frac{1}{z+ E({\bf k}) + E({\bf k}+{\bf q})}\Big(1- \frac{|\phi({\bf k})|^2+|\phi({\bf k}+{\bf q})|^2}{U^2 m^2}\Big)\\ \chi^{(0)ba}(z,{\bf q})&\approx& \frac{1}{N} \sum_{\bf k}\Big( \frac{1}{z- E({\bf k}) - E({\bf k}+{\bf q})}-\frac{1}{z+ E({\bf k}) + E({\bf k}+{\bf q})}\Big) \frac{\phi({\bf k})\ \phi^*({\bf k}+{\bf q})}{U^2 m^2}\\ \chi^{(0)ab}(z,{\bf q})&\approx& \frac{1}{N} \sum_{\bf k}\Big( \frac{1}{z- E({\bf k}) - E({\bf k}+{\bf q})}-\frac{1}{z+ E({\bf k}) + E({\bf k}+{\bf q})}\Big) \frac{\phi^*({\bf k})\ \phi({\bf k}+{\bf q})}{U^2 m^2}\label{chab}\,.\end{aligned}$$ We anticipate that the spin wave energies are of order $z\approx t^2/U$ so that we may use the expansion $$\frac{1}{z+ E({\bf k}) + E({\bf k}+{\bf q})} \approx \frac{1}{Um} \Big[1-\frac{z}{Um}-\frac{|\phi({\bf k})|^2 +|\phi({\bf k}+{\bf q})|^2}{U^2m^2} + ...\Big]$$ in equations
(\[chaa\])-(\[chab\]). The condition (\[det\]) for the spin wave dispersion now takes the form: $$\frac{z^2}{U^2m^4} = \Big[ 1-\frac{1}{m} +\frac{4}{U^2m^3N}\Big(\sum_{\bf p} |\phi({\bf p})|^2\Big) \Big]^2 -\frac{4}{U^2m^6} \Big|\frac{1}{N} \sum_{\bf p}\phi^*({\bf p})\phi({\bf p}+{\bf q})\Big|^2\,. \label{abc}$$ But we must take into account that the self-consistent equation for the Hartree-Fock magnetization, expanded to second order in $t/U$, is $$1-\frac{1}{m} \approx - \frac{2}{U^2m^3N}\Big(\sum_{\bf p}|\phi({\bf p})|^2\Big)\,. \label{1menos1m}$$ Introducing (\[1menos1m\]) in (\[abc\]) we finally obtain the spin wave dispersion: $$z=\omega({\bf q})\approx \frac{2}{Um} \sqrt{\Big(\frac{1}{N} \sum_{\bf p}|\phi({\bf p})|^2\Big)^2 - \Big|\frac{1}{N} \sum_{\bf p}\phi^*({\bf p})\phi({\bf p}+{\bf q})\Big|^2}\,, \label{omegaprox}$$ which agrees with the result predicted by the Holstein-Primakoff theory. Holstein-Primakoff analysis of the Heisenberg model {#appendHP} =================================================== The Heisenberg Hamiltonian in the honeycomb lattice is given by $$H=\frac J 2 \sum_{i\in A,\bm \delta}[S^z_iS^z_{i+\bm\delta}+\frac 12 (S^+_iS^-_{i+\bm \delta}+ S^-_iS^+_{i+\bm \delta})]+ \frac J 2 \sum_{i\in B,\bm \delta}[\tilde S^z_i\tilde S^z_{i+\bm\delta} +\frac 12 (\tilde S^+_i\tilde S^-_{i+\bm \delta}+ \tilde S^-_i\tilde S^+_{i+\bm \delta})]\,.$$ We introduce two sets of operators $$S^z_i=-a^\dag_i a_i+S\,,\hspace{0.5cm}S^+_i=\sqrt{2S-a^\dag_i a_i}\,a_i\,, \hspace{0.5cm}S^-_i=a_i^\dag\sqrt{2S-a^\dag_i a_i}\,,$$ and $$\tilde S^z_i=-b^\dag_i b_i+S\,,\hspace{0.5cm} \tilde S^+_i=\sqrt{2S-b^\dag_i b_i}\,b_i\,, \hspace{0.5cm}\tilde S^-_i=b_i^\dag\sqrt{2S-b^\dag_i b_i}\,.$$ Making the usual linear expansion and introducing the momentum representation for the bosonic operators, the Hamiltonian can be written as $$H=-JN_AzS^2+JzS\sum_{\bm k}(a^\dag_{\bm k}a_{\bm k}+b^\dag_{\bm k}b_{\bm k}) +JS\sum_{\bm k}(\phi(\bm k)a_{\bm k}b_{-\bm k}+\phi^\ast(\bm k)
b^\dag_{-\bm k}a^\dag_{\bm k})\,.$$ Next we introduce a set of quasiparticle operators defined by $$a^\dag_{\bm k}=u_{\bm k}\gamma^\dag_{1,\bm k}-v_{\bm k}^\ast\gamma_{2,\bm k}\,, \hspace{1cm} b^\dag_{-\bm k}=u_{\bm k}\gamma^\dag_{2,\bm k}-v_{\bm k}^\ast \gamma_{1,\bm k}\,,$$ where the coherence factors obey $\vert u_{\bm k}\vert ^2- \vert v_{\bm k}\vert ^2=1$. After introducing the above transformations in the Hamiltonian we find $$\begin{aligned} H&=&-JN_AzS^2+\sum_{\bm k} (2JzS \vert v_{\bm k}\vert ^2 - JS\phi(\bm k) v_{\bm k}u_{\bm k}^\ast- JS\phi^\ast(\bm k) v_{\bm k}^\ast u_{\bm k})\nonumber\\ &+& \sum_{\bm k;i=1,2}[JzS(\vert u_{\bm k}\vert ^2+ \vert v_{\bm k}\vert ^2) - JS\phi(\bm k) v_{\bm k}u_{\bm k}^\ast- JS\phi^\ast(\bm k) v_{\bm k}^\ast u_{\bm k}]\gamma^\dag_{i,\bm k} \gamma_{i,\bm k}\nonumber\\ &+& \sum_{\bm k}[(-2JzS v_{\bm k} u_{\bm k}+JS\phi(\bm k)v_{\bm k} v_{\bm k} + JS\phi^\ast(\bm k)u_{\bm k} u_{\bm k})\gamma^\dag_{1,\bm k} \gamma^\dag_{2,\bm k}+H.c.]\,,\end{aligned}$$ which implies the conditions $$\begin{aligned} JzS(\vert u_{\bm k}\vert ^2+ \vert v_{\bm k}\vert ^2) - JS\phi(\bm k) v_{\bm k}u_{\bm k}^\ast- JS\phi^\ast(\bm k) v_{\bm k}^\ast u_{\bm k}&=&\omega(\bm k)\,,\nonumber\\ -2JzS v_{\bm k} u_{\bm k}+JS\phi(\bm k)v_{\bm k} v_{\bm k} + JS\phi^\ast(\bm k)u_{\bm k} u_{\bm k}&=&0\,.\end{aligned}$$ The second condition reveals that we can choose $u_{\bm k}$ to be real and $v_{\bm k}^\ast=\phi(\bm k)\alpha(\bm k)$, with $\alpha(\bm k)$ real. 
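Before completing the diagonalization, note that the Hubbard-model dispersion (\[omegaprox\]) derived above is easy to check numerically. The sketch below is our own illustration, not part of the original calculation: it assumes the honeycomb structure factor $\phi({\bf k}) = 1 + e^{i{\bf k}\cdot{\bf a}_1} + e^{i{\bf k}\cdot{\bf a}_2}$ with the hopping amplitude set to 1, uses arbitrary illustrative values of $U$ and $m$, and replaces the Brillouin-zone sums by grid means. It verifies that $\omega({\bf q})$ has a Goldstone mode at ${\bf q}=0$ and is finite elsewhere in the zone.

```python
import numpy as np

# Brillouin-zone grid in reduced coordinates theta_i = k . a_i
N = 200
th = np.arange(N) * 2.0 * np.pi / N
T1, T2 = np.meshgrid(th, th)

def phi(t1, t2):
    # honeycomb nearest-neighbour structure factor (hopping amplitude set to 1)
    return 1.0 + np.exp(1j * t1) + np.exp(1j * t2)

U, m = 10.0, 0.9  # illustrative parameter values, not fitted to anything
P = phi(T1, T2)

def omega(q1, q2):
    # spin-wave energy of Eq. (omegaprox), with BZ averages taken as grid means
    a = np.mean(np.abs(P) ** 2)
    b = np.mean(np.conj(P) * phi(T1 + q1, T2 + q2))
    return (2.0 / (U * m)) * np.sqrt(max(a * a - abs(b) ** 2, 0.0))

print(omega(0.0, 0.0))      # Goldstone mode: vanishes at q = 0
print(omega(np.pi, np.pi))  # finite elsewhere in the zone
```

At ${\bf q}=0$ the two averages coincide, so the square root vanishes identically; this is the Goldstone mode expected from the broken spin-rotation symmetry.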
After some straightforward manipulations we find $$\omega(\bm k) = JS\sqrt{z^2 -\vert \phi_{\bm k}\vert ^2}\,, \hspace{1cm} \alpha^2(\bm k)=-\frac 1 {2 \vert \phi_{\bm k}\vert ^2} +\frac z{2 \vert \phi_{\bm k}\vert ^2}\frac {JS}{\omega(\bm k)}\,.$$ The staggered magnetization is given by $$m=S-\frac 1 {2N_A}\sum_{\bm k}{\langle}a^\dag_{\bm k}a_{\bm k} +b^\dag_{\bm k}b_{\bm k}{\rangle}=S-\frac 1 {N_A}\sum_{\bm k} \left(-\frac 1 2 +\frac 1 2 \frac {z } {\sqrt{z^2-\vert \phi_{\bm k}\vert ^2}}\right) -\frac 1 {N_A}\sum_{\bm k} \frac {z n_B[\omega(\bm k)]} {\sqrt{z^2-\vert \phi_{\bm k}\vert ^2}}\,,$$ and at zero temperature we set $n_B[\omega(\bm k)]=0$. Computing the integral gives a magnetization value of $0.24$, which is about 50% of the Néel value $\frac 1 2$.

The calculations presented in this table were performed at the GCEP cluster, in the Center of Physics of the University of Minho.
--- abstract: 'Let $\UT_n(q)$ denote the unitriangular group of unipotent $n\times n$ upper triangular matrices over a finite field with cardinality $q$ and prime characteristic $p$. It has been known for some time that when $p$ is fixed and $n$ is sufficiently large, $\UT_n(q)$ has “exotic” irreducible characters taking values outside the cyclotomic field ${\mathbb{Q}}(\zeta_p)$. However, all proofs of this fact to date have been both non-constructive and computer dependent. In the preliminary work [@supp0], we defined a family of orthogonal characters decomposing the supercharacters of an arbitrary algebra group. By applying this construction to the unitriangular group, we are able to derive by hand an explicit description of a family of characters of $\UT_n(q)$ taking values in arbitrarily large cyclotomic fields. In particular, we prove that if $r$ is a positive integer power of $p$ and $n>6r$, then $\UT_n(q)$ has an irreducible character of degree $q^{5r^2-2r}$ which takes values outside ${\mathbb{Q}}(\zeta_{pr})$. By the same techniques, we are also able to construct explicit Kirillov functions which fail to be characters of $\UT_n(q)$ when $n>12$ and $q$ is arbitrary.' author: - | Eric Marberg[^1]\ Department of Mathematics\ Massachusetts Institute of Technology\ title: Exotic characters of unitriangular matrix groups --- Introduction ============ Let ${\mathbb{F}}_q$ be a finite field with $q$ elements and write $\UT_n(q)$ to denote the unitriangular group of $n\times n$ upper triangular matrices over ${\mathbb{F}}_q$ with all diagonal entries equal to 1. This is a Sylow $p$-subgroup of the general linear group $\GL(n,{\mathbb{F}}_q)$, where $p>0$ is the characteristic of ${\mathbb{F}}_q$. Researchers have known for some time that for large values of $n$, there exist “exotic” irreducible characters of $\UT_n(q)$ which have values outside the cyclotomic field ${\mathbb{Q}}(\zeta_p)$, where $\zeta_p$ is a primitive $p$th root of unity. 
However, proofs of this fact to date have largely been both nonconstructive and computer dependent. For example, Isaacs and Karaguezian showed indirectly that $\UT_n(2)$ has a nonreal character for $n>12$ by writing down a matrix in $\UT_{13}(2)$ not conjugate to its inverse [@IK1; @IK2]. The same authors later gave a different computational proof by implementing algorithms to compute the character degrees and involutions of $\UT_n(2)$ [@IK05]. More generally, Vera-López and Arregi have shown, through detailed calculations carried out with the help of a computer algebra system, that for $q$ prime, $\UT_n(q)$ has an element not conjugate to its $(q+1)$th power for all $n>6q$ [@VeraLopez2004]. By standard results in character theory (see [@Isaacs Chapter 6]), these conjugacy properties imply the existence of our exotic characters, but they do not shed much light on any of their attributes. One obstacle to providing more constructive proofs of these facts comes from our incomplete understanding of the representations of $\UT_n(q)$ when the characteristic of ${\mathbb{F}}_q$ is small compared to $n$. Combating this problem, we describe in [@supp0] a generic method of constructing characters of algebra groups such as the unitriangular group, building on combined work of André [@Andre1], Yan [@Yan], and Diaconis and Isaacs [@DI]. The primary intent of this work is to use this new construction to identify certain irreducible characters of $\UT_n(q)$ and then to prove by hand that these characters take values outside various cyclotomic fields. Our techniques shed light on how “exotic” characters of $\UT_n(q)$ arise, and provide a computer-independent proof of the following theorem. Suppose ${\mathbb{F}}_q$ has characteristic $p$ and let $r=p^e$ for an integer $e>0$. If $n > 6r$, then $\UT_n(q)$ has an irreducible character of degree $q^{5r^2-2r}$ whose set of values is contained in ${\mathbb{Q}}(\zeta_{pr})$ but not ${\mathbb{Q}}(\zeta_r)$. 
These methods have another application concerning the Kirillov functions of $\UT_n(q)$. Let $\fkt_n(q)$ denote the algebra of $n\times n$ upper triangular matrices over ${\mathbb{F}}_q$ with all diagonal entries equal to 0. There is a coadjoint action of $\UT_n(q)$ on the irreducible characters of $\fkt_n(q)$ viewed as an abelian group, given by $g : \vartheta \mapsto \vartheta \circ \Ad(g)^{-1}$ where $\Ad(g)(X) = gXg^{-1}$ for $g \in \UT_n(q)$ and $X \in \fkt_n(q)$. If $\Omega$ is a coadjoint orbit, then the corresponding *Kirillov function* $\psi : \UT_n(q) \to {\mathbb{Q}}(\zeta_p)$ is the complex-valued function $$\psi(g) =|\Omega|^{-1/2} \sum_{\vartheta \in \Omega} \vartheta(g-1),\qquad\text{for }g\in \UT_n(q).$$ Kirillov [@K] conjectured that these functions comprise all the irreducible characters of $\UT_n(q)$, and we observed in [@supp0] that a recent calculation of Evseev [@E] shows non-constructively that this conjecture holds if and only if $n\leq 12$. Here we will be able to give a constructive proof of the “only if” direction of this result; in particular we shall identify a Kirillov function of degree $q^{16}$ which is not a character of $\UT_n(q)$ when $n> 12$. Our methods also shed some light on a different type of Kirillov function. If $\exp : \fkt_n(q) \to \UT_n(q)$ denotes the truncated exponential map $\exp(X) = 1+X + \frac{1}{2}X^2 + \dots + \frac{1}{(p-1)!} X^{p-1}$, then an *exponential Kirillov function* $\psi^\exp : \UT_n(q) \to {\mathbb{Q}}(\zeta_p)$ is a function defined by $$\psi^\exp\( \exp(X) \) = \psi(1+X),\qquad\text{for }X \in \fkt_n(q)$$ for some Kirillov function $\psi$. Sangroniz [@Sangroniz] has shown that every irreducible character of $\UT_n(q)$ is an exponential Kirillov function if $n < 2p$, and an indirect consequence of Vera-López and Arregi’s work [@VeraLopez2004] is that there exist exponential Kirillov functions which are not characters when $q$ is prime and $n>6q$. 
We extend this result to arbitrary finite fields by identifying an exponential Kirillov function of degree $q^{5p^2-p}$ which is not a character of $\UT_n(q)$ when $n>6p$. It seems conceivable that our construction of this function is the simplest possible, so we conjecture the following. If $p>0$ is the characteristic of ${\mathbb{F}}_q$, then the irreducible characters and exponential Kirillov functions of $\UT_n(q)$ coincide if and only if $n\leq 6p$. While one can verify this statement using computers for $p=2$ (see [@E]), examining even the case $p=3$ exits the realm of currently feasible calculations. Also, there is no proof to date that the characters of $\UT_n(q)$ are necessarily ${\mathbb{Q}}(\zeta_p)$-valued if $n\leq 6p$. Nevertheless, it seems possible, just from the experience of proving the results herein, that $n=6p$ is the “breaking point” after which the algebra groups $\UT_n(q)$ can manifest irreducible characters which are not exponential Kirillov functions. We mention that computer experiments suggest that one might be able to use a clever combinatorial argument$-$combining Lemmas 9, 10, and 11 in [@Sangroniz] with Theorem \[structural\] and Lemma \[monomial\] below$-$to show that the “if” direction of this conjecture holds for $n\leq 3p$. Improving this bound to $n\leq 6p$ will probably require more robust techniques, however. Preliminaries {#prelim-sect} ============= Here we briefly establish our notational conventions, then describe several different functions on $\UT_n(q)$ which will be of interest in our later computations. We present this material from the more general standpoint of algebra groups, although our applications shall primarily concern $\UT_n(q)$. 
Conventions and notation ------------------------ Given a finite group $G$, we let $\langle\cdot,\cdot\rangle_G$ denote the standard inner product on the complex vector space of functions $G \to {\mathbb{C}}$ defined by $ \langle f,g\rangle_G = \frac{1}{|G|} \sum_{x \in G} f(x) \overline{g(x)}$, and write $\Irr(G)$ to denote the set of complex irreducible characters of $G$, or equivalently the set of characters $\chi$ of $G$ with $\langle \chi,\chi \rangle_G = 1$. A function $ G\to {\mathbb{C}}$ is then a character if and only if it is a nonzero sum of irreducible characters with nonnegative integer coefficients. If $f: S \to T$ is a map and $S'\subset S$, then we write $f\downarrow S'$ to denote the restricted map $S'\to T$. For functions on groups, we may also write $\Res_H^G(\chi) = \chi\downarrow H$ to denote the restriction of $\chi : G\to {\mathbb{C}}$ to a subgroup $H$. If $\chi$ is any complex-valued function whose domain includes the subgroup $H\subset G$, then we define the induced function $\Ind_{H}^G(\chi):G \to {\mathbb{C}}$ by the formula $$\label{frob} \Ind_H^G(\chi)(g) = \frac{1}{|H|} \sum_{\substack{x \in G \\ xgx^{-1} \in H}} \chi(xgx^{-1}),\qquad\text{for }g\in G.$$ We recall that restriction takes characters of $G$ to characters of $H$ and induction takes characters of $H$ to characters of $G$. Throughout, $q>1$ is some fixed prime power and ${\mathbb{F}}_q$ is a finite field with $q$ elements. We write ${\mathbb{F}}_q^+$ to denote the additive group of the field and ${\mathbb{F}}_q^\times$ to denote the multiplicative group of nonzero elements. For any positive integer $r$ we let $\zeta_r$ denote the primitive $r$th root of unity $\zeta_r = e^{2\pi i / r}$, and write ${\mathbb{Q}}(\zeta_r)$ to denote the $r$th cyclotomic field, given by adjoining $\zeta_r$ to ${\mathbb{Q}}$. 
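The induction formula (\[frob\]) can be checked concretely on a small case. The following sketch is our own toy computation, not an example from the text: it takes $G = \UT_3(2)$ (elements $1 + a e_{12} + b e_{23} + c e_{13}$ stored as bit triples), $H$ the center of $G$, and $\theta$ the nontrivial linear character of $H$, then evaluates $\Ind_H^G(\theta)$ by brute force.

```python
import itertools

# G = UT_3(2): the element 1 + a e12 + b e23 + c e13 is stored as (a, b, c)
def mul(g, h):
    # (1 + X)(1 + Y) = 1 + X + Y + XY; here XY only feeds the (1,3) entry
    a1, b1, c1 = g
    a2, b2, c2 = h
    return ((a1 + a2) % 2, (b1 + b2) % 2, (c1 + c2 + a1 * b2) % 2)

def inv(g):
    # (1 + X)^{-1} = 1 - X + X^2; over F_2 this is (a, b, c + ab)
    a, b, c = g
    return (a, b, (c + a * b) % 2)

G = list(itertools.product((0, 1), repeat=3))
H = [(0, 0, 0), (0, 0, 1)]             # the center, an algebra subgroup
theta = {(0, 0, 0): 1, (0, 0, 1): -1}  # nontrivial linear character of H

def induced(g):
    # formula (frob): (1/|H|) * sum over x in G with x g x^{-1} in H
    s = 0
    for x in G:
        y = mul(mul(x, g), inv(x))
        if y in theta:
            s += theta[y]
    return s // len(H)

print([induced(g) for g in G])  # -> [4, -4, 0, 0, 0, 0, 0, 0]
```

The induced function has degree $|G|/|H| = 4$ and $\langle \Ind_H^G(\theta), \Ind_H^G(\theta)\rangle_G = 4$, consistent with it being twice the unique two-dimensional irreducible character of $\UT_3(2)$.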
For integers $m,n$ we let $$[m,n] = \{ t \in \ZZ : m\leq t \leq n\}\qquad\text{and}\qquad [n] = [1,n] = \{ 1,2,\dots,n\}.$$ Given integers $1\leq i<j \leq n$ we let $$\ba e_{ij} & = \text{the matrix in $\fkt_n(q)$ with 1 in position $(i,j)$ and zeros elsewhere,}\\ e_{ij}^* &=\text{the ${\mathbb{F}}_q$-linear map $\fkt_n(q)\to {\mathbb{F}}_q$ given by $e_{ij}^*(X) = X_{ij}$.}\ea$$ These matrices and maps are then dual bases of $\fkt_n(q)$ and its dual space $\fkt_n(q)^*$. Algebra groups {#alg} -------------- Let ${\mathfrak{n}}$ be a (finite-dimensional, associative) nilpotent ${\mathbb{F}}_q$-algebra, and ${\mathfrak{n}}^*$ its dual space of ${\mathbb{F}}_q$-linear maps ${\mathfrak{n}}\to {\mathbb{F}}_q$. Write $G = 1+{\mathfrak{n}}$ to denote the corresponding *algebra group*; this is the set of formal sums $1+X $ with $X \in {\mathfrak{n}}$, made a group via the multiplication $$(1+X)(1+Y) = 1+X+Y+XY.$$ As prototypical examples, we take ${\mathfrak{n}}$ to be the algebra $\fkt_n(q)$ of strictly upper triangular $n\times n$ matrices over ${\mathbb{F}}_q$ and $G$ to be the *unitriangular group* $\UT_n(q) = 1 + \fkt_n(q)$. A considerable literature exists on algebra groups and their representations, of which the reader might take [@I95] as a starting point. We call a subgroup of $G=1+{\mathfrak{n}}$ of the form $H = 1+{\mathfrak{h}}$ where ${\mathfrak{h}}\subset {\mathfrak{n}}$ is a subalgebra an *algebra subgroup*. If ${\mathfrak{h}}\subset {\mathfrak{n}}$ is a two-sided ideal then $H$ is a normal algebra subgroup of $G$, and the map $gH \mapsto 1+(X+{\mathfrak{h}})$ for $g=1+X \in G$ gives an isomorphism $G/H \cong 1 + {\mathfrak{n}}/{\mathfrak{h}}$. In practice we shall usually identify the quotient $G/H$ with the algebra group $1 + {\mathfrak{n}}/{\mathfrak{h}}$ by way of this canonical map. 
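The group operations in an algebra group are easy to make concrete. The sketch below is our own illustration, under the assumption ${\mathfrak{n}} = \fkt_4(3)$ (so $G = \UT_4(3)$ and $X^4 = 0$): it checks on random elements that $(1+X)^{-1} = 1 - X + X^2 - X^3$, the finite geometric series that exists precisely because ${\mathfrak{n}}$ is nilpotent.

```python
import random

n, q = 4, 3  # G = UT_4(3); X^4 = 0 for strictly upper triangular X

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % q for j in range(n)]
            for i in range(n)]

def matadd(A, B):
    return [[(A[i][j] + B[i][j]) % q for j in range(n)] for i in range(n)]

I = [[int(i == j) for j in range(n)] for i in range(n)]

def group_inv(X):
    # (1 + X)^{-1} = 1 - X + X^2 - X^3: a finite geometric series, since X^4 = 0
    term, acc = I, I
    for k in range(1, n):
        term = matmul(term, X)
        signed = [[((-1) ** k * e) % q for e in row] for row in term]
        acc = matadd(acc, signed)
    return acc

random.seed(0)

def random_strict():
    X = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            X[i][j] = random.randrange(q)
    return X

ok = all(matmul(matadd(I, X), group_inv(X)) == I
         for X in (random_strict() for _ in range(50)))
print(ok)  # -> True
```

The same telescoping identity $(1+X)(1-X+X^2-\cdots) = 1 \pm X^n = 1$ shows that every formal sum $1+X$ is invertible in any nilpotent algebra, which is why $1+{\mathfrak{n}}$ is a group at all.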
Kirillov functions $\psi_\lambda$ and $\logpsi_\lambda$ ------------------------------------------------------- For the duration of this work, $\theta : {\mathbb{F}}_q^+\to {\mathbb{C}}^\times$ denotes a fixed, nontrivial homomorphism from the additive group of ${\mathbb{F}}_q$ to the multiplicative group of nonzero complex numbers. Observe that $\theta$ takes values in the cyclotomic field ${\mathbb{Q}}(\zeta_p)$, where $p>0$ is the characteristic of ${\mathbb{F}}_q$. For each $\lambda \in {\mathfrak{n}}^*$, we define $\theta_\lambda : G \to {\mathbb{Q}}(\zeta_p)$ as the function with $$\theta_\lambda(g) = \theta\circ \lambda(g-1),\qquad\text{for }g \in G.$$ The maps $\theta\circ \lambda : {\mathfrak{n}}\to {\mathbb{C}}$ are the distinct irreducible characters of the abelian group ${\mathfrak{n}}$, and from this it follows that the functions $\theta_\lambda : G\to {\mathbb{C}}$ are an orthonormal basis (with respect to $\langle\cdot,\cdot\rangle_G$) for all functions on the group. The most generic methods we have at our disposal for constructing characters of algebra groups involve summing the functions $\theta_\lambda$ over orbits in ${\mathfrak{n}}^*$ under an appropriate action of $G$. Kirillov functions provide perhaps the most natural example of such a construction. Their definition relies on the *coadjoint* action of $G$ on ${\mathfrak{n}}^*$, by which we mean the right action $(\lambda,g) \mapsto \lambda^{g}$ where we define $$ \lambda^g(X) = \lambda(g X g^{-1}),\qquad\text{for } \lambda \in {\mathfrak{n}}^*,\ g\in G,\ X \in {\mathfrak{n}}.$$ Denote the coadjoint orbit of $\lambda \in {\mathfrak{n}}^*$ by $\lambda^G$. The *Kirillov function* $\psi_\lambda$ is then the map $G \to {\mathbb{Q}}(\zeta_p)$ defined by $$\label{kirillov-def} \psi_\lambda = \frac{1}{\sqrt{|\lambda^G|}} \sum_{\mu \in \lambda^G} \theta_\mu.$$ The size of $\lambda^G$ is an even power of $q$ [@DI Lemma 4.4] and so $\psi_\lambda(1) = \sqrt{|\lambda^G|}$ is a nonnegative integer power of $q$. 
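To make the definition concrete, the following sketch (our own toy example, not taken from the text) enumerates the coadjoint orbits of $\UT_3(2)$ on $\fkt_3(2)^*$ and assembles the Kirillov functions of (\[kirillov-def\]), with $\theta$ the sign character of ${\mathbb{F}}_2^+$. There are five orbits, giving functions of degrees $1,1,1,1,2$.

```python
import itertools

# n = t_3(2): X = x12 e12 + x23 e23 + x13 e13 stored as (x12, x23, x13);
# the group element 1 + X is stored as the same tuple.
N3 = list(itertools.product((0, 1), repeat=3))
G = N3  # the underlying sets of n and G = 1 + n coincide

def ad(g, X):
    # g X g^{-1} for g = 1 + a e12 + b e23 + c e13: only the (1,3) entry moves
    a, b, c = g
    x12, x23, x13 = X
    return (x12, x23, (x13 + a * x23 + b * x12) % 2)

def lam_eval(lam, X):
    return sum(u * v for u, v in zip(lam, X)) % 2

def coad(lam, g):
    # lambda^g(X) = lambda(g X g^{-1}), read off on the basis e12, e23, e13
    basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    return tuple(lam_eval(lam, ad(g, e)) for e in basis)

orbits = {frozenset(coad(lam, g) for g in G) for lam in N3}

def kirillov(orbit):
    # psi_lambda(g) = |orbit|^{-1/2} * sum over the orbit of (-1)^{lam(g-1)};
    # in our encoding, g - 1 is the tuple representing g itself
    d = len(orbit) ** 0.5
    return {g: sum((-1) ** lam_eval(lam, g) for lam in orbit) / d for g in G}

psis = [kirillov(O) for O in orbits]
print(sorted(ps[(0, 0, 0)] for ps in psis))  # degrees of the Kirillov functions
```

The test below confirms that the five functions are orthonormal and that the squares of their degrees sum to $|G| = 8$; since $n = 3 \leq 12$, these are in fact the irreducible characters of $\UT_3(2)$.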
We have $\psi_\lambda = \psi_\mu$ if and only if $\mu \in \lambda^G$, and the distinct Kirillov functions on $G$ form an orthonormal basis (with respect to $\langle\cdot,\cdot\rangle_G$) for the class functions on the group. Kirillov functions may fail to be characters, however, and one purpose of this work is to demonstrate how one can use the results in [@supp0] to prove this failure directly. Our definition of a Kirillov function attempts to attach a coadjoint orbit in ${\mathfrak{n}}^*$ to an irreducible character of $G$ by way of the bijection ${\mathfrak{n}}\to G$ given by $X \mapsto 1+X$. This is Kirillov’s orbit method in the context of finite groups as described in [@K]. In practice, one is more successful in developing a correspondence between coadjoint orbits and irreducible characters if a different bijection ${\mathfrak{n}}\to G$ is used. Specifically, let $\exp : {\mathfrak{n}}\to G$ be the truncated exponential map $$\exp (X) = 1 + X + \frac{1}{2}X^2 + \frac{1}{6}X^3 + \dots + \frac{1}{(p-1)!}X^{p-1}.$$ This map is always a bijection (as is any polynomial map $X \mapsto 1 + X + a_2X^2 + \cdots$ on a nilpotent algebra), so we may define the *exponential Kirillov function* $\logpsi_\lambda : G\to {\mathbb{Q}}(\zeta_p)$ by $$\logpsi_\lambda\(\exp(X)\) = \psi_\lambda(1+X),\qquad\text{for }X \in {\mathfrak{n}}.$$ Exponential Kirillov functions have all the same properties as ordinary Kirillov functions; in particular, they also form an orthonormal basis for the class functions on $G$ and coincide with ordinary Kirillov functions in characteristic two. They are more often irreducible, however. In particular, if $p$ is the characteristic of ${\mathbb{F}}_q$ then 1. $\Irr(G) =\left \{ \logpsi_\lambda : \lambda \in {\mathfrak{n}}^*\right\}$ if ${\mathfrak{n}}^p=0$ [@Sangroniz Corollary 3]. 2. More strongly, $\Irr(\UT_n(q)) = \left\{ \logpsi_\lambda : \lambda \in \fkt_n(q)^*\right\}$ if $n < 2p$ [@Sangroniz Corollary 12]. 
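The truncated exponential is easy to experiment with directly. The sketch below is our own toy check, for the assumed case $p = 3$ and ${\mathfrak{n}} = \fkt_3(3)$: it verifies that $\exp$ is a bijection ${\mathfrak{n}}\to G$ and that $\exp(X)\exp(-X) = 1$, which must hold here because ${\mathfrak{n}}^3 = 0$ makes the truncated series an exact exponential of commuting nilpotent matrices.

```python
import itertools

p = 3
n = 3  # matrices are 3x3, so X^3 = 0 for strictly upper triangular X

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) % p
                       for j in range(n)) for i in range(n))

def matadd(A, B):
    return tuple(tuple((A[i][j] + B[i][j]) % p for j in range(n)) for i in range(n))

ONE = tuple(tuple(int(i == j) for j in range(n)) for i in range(n))
inv2 = pow(2, -1, p)  # 1/(p-1)! = 1/2 in F_3

def texp(X):
    # truncated exponential: 1 + X + X^2/2 (the series stops since X^3 = 0)
    X2 = matmul(X, X)
    half_X2 = tuple(tuple(inv2 * e % p for e in row) for row in X2)
    return matadd(matadd(ONE, X), half_X2)

def strict(a, b, c):
    # a e12 + b e23 + c e13 in t_3(3)
    return ((0, a, c), (0, 0, b), (0, 0, 0))

elts = [strict(a, b, c) for a, b, c in itertools.product(range(p), repeat=3)]
images = {texp(X) for X in elts}
print(len(images))  # -> 27: texp is injective, hence a bijection onto UT_3(3)
```

Injectivity plus the matching cardinalities $|{\mathfrak{n}}| = |G| = 27$ give the bijection claimed above.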
By contrast, $\Irr(\UT_n(q)) =\left \{ \psi_\lambda : \lambda \in \fkt_n(q)^*\right\}$ if and only if $n \leq 12$ [@supp0 Theorem 4.1]. As mentioned in the introduction, the bound $n<2p$ in (2) seems unlikely to be optimal. In Section \[constructions\] we will be able to identify exponential Kirillov functions which are not characters when $n>6p$; our construction seems “morally” like the simplest possible, and this motivates the following conjecture. $\Irr(\UT_n(q)) = \left\{ \logpsi_\lambda : \lambda \in \fkt_n(q)^*\right\}$ if and only if $n \leq 6p$. Supercharacters $\chi_\lambda$ ------------------------------ While Kirillov functions provide an accessible orthonormal basis for the class functions of an algebra group, supercharacters alternatively provide an accessible family of orthogonal characters. André [@Andre1] first defined these characters in the special case $G=\UT_n(q)$ as a practical substitute for the group’s unknown irreducible characters. Several years later, Yan [@Yan] showed how one could replace André’s definition with a more elementary construction, which Diaconis and Isaacs [@DI] subsequently generalized to algebra groups. We define the supercharacters of $G=1+{\mathfrak{n}}$ in a way analogous to Kirillov functions, but using left and right actions of $G$ on ${\mathfrak{n}}^*$ in place of the coadjoint action. In detail, the group $G$ acts on the left and right on ${\mathfrak{n}}$ by multiplication, and on ${\mathfrak{n}}^*$ by $(g , \lambda) \mapsto g\lambda$ and $( \lambda,g) \mapsto \lambda g$ where we define $$g\lambda(X) = \lambda(g^{-1}X)\qquad\text{and}\qquad \lambda g(X) = \lambda(Xg^{-1}),\qquad\text{for }\lambda \in {\mathfrak{n}}^*,\ g \in G,\ X \in {\mathfrak{n}}.$$ These actions commute, in the sense that $(g\lambda) h = g(\lambda h)$ for $g,h \in G$, so there is no ambiguity in removing all parentheses and writing expressions like $g\lambda h$. 
We denote the left, right, and two-sided orbits of $\lambda \in {\mathfrak{n}}^*$ by $G\lambda$, $\lambda G$, and $G\lambda G$. Notably, $G\lambda$ and $\lambda G$ have the same cardinality and $|G\lambda G| = \frac{|G\lambda||\lambda G|}{|G\lambda \cap \lambda G|}$ [@DI Lemmas 3.1 and 4.2]. The *supercharacter* $\chi_\lambda$ is then the function $G \to {\mathbb{Q}}(\zeta_p)$ defined by $$\label{superchar-def} \chi_\lambda = \frac{|G\lambda|}{|G\lambda G|} \sum_{\mu \in G\lambda G} \theta_\mu.$$ Supercharacters are always characters but are often reducible. We have $\chi_\lambda = \chi_\mu$ if and only if $\mu \in G\lambda G$, and every irreducible character of $G$ appears as a constituent of a unique supercharacter. The orthogonality of the functions $\theta_\mu$ implies that $$\langle \chi_\lambda,\chi_\mu\rangle_G = \left\{\barr{ll} |G\lambda \cap \lambda G|, &\text{if }\mu \in G \lambda G , \\ 0,&\text{otherwise,}\earr\right. \qquad\text{for }\lambda,\mu \in {\mathfrak{n}}^*.$$ Furthermore, $\frac{|G\lambda|}{|G\lambda \cap \lambda G|} \chi_\lambda$ is the character of a two-sided ideal in ${\mathbb{C}}G$. The supercharacters of an algebra group rarely give us all irreducible characters. For example, every irreducible character of $\UT_n(q)$ is a supercharacter only when $n\leq 3$. Supercharacters nevertheless provide a useful starting point for constructing $\Irr(G)$. In addition, there are many interesting connections between the supercharacters of $\UT_n(q)$ and the combinatorics of set partitions, which we will touch upon briefly in Section \[pattern\]. Supercharacter constituents $\xi_\lambda$ {#xi} ----------------------------------------- Here we describe the character construction given in [@supp0]. This is a process for decomposing supercharacters into smaller constituents $\xi_\lambda$, obtained by inducing linear characters of certain algebra subgroups. The characters $\xi_\lambda$ will not give our “exotic” characters of $\UT_n(q)$ directly, but will possess in special cases a transparent decomposition into such characters. 
As with Kirillov functions and supercharacters, the characters of present interest are indexed by elements of the dual space ${\mathfrak{n}}^*$. The relevant definition is somewhat more involved, however, and goes as follows. For each $\lambda \in {\mathfrak{n}}^*$, define two sequences of subspaces $\fk l_\lambda^i, \fk s_\lambda^i \subset {\mathfrak{n}}$ for $i\geq 0$ by the inductive formulas $$\ba \fk l_\lambda ^0 &= 0, \\ \fk s_\lambda^0 &= {\mathfrak{n}}, \ea \qquad \text{and}\qquad \ba \fk l_\lambda^{i+1} & = \left\{ X \in \fk s_\lambda^i : \lambda(XY) = 0 \text{ for all }Y \in \fk s_\lambda^i \right\}, \\ \fk s_\lambda^{i+1} & =\left \{ X \in \fk s_\lambda^i : \lambda(XY) = 0 \text{ for all }Y \in \fk l_\lambda^{i+1}\right \}. \ea$$ If $B_\lambda : {\mathfrak{n}}\times {\mathfrak{n}}\to {\mathbb{F}}_q$ denotes the bilinear form $(X,Y) \mapsto \lambda(XY)$, then we may alternatively define the subspaces $\fk l_\lambda^i$, $\fk s_\lambda^i$ for $i>0$ by $$\ba \fk l_\lambda^i & =\text{the left kernel of the restriction of $B_\lambda$ to $\fk s_\lambda^{i-1}\times \fk s_{\lambda}^{i-1}$,} \\ \fk s_\lambda^i &=\text{the left kernel of the restriction of $B_\lambda$ to $\fk s_\lambda^{i-1} \times \fk l_\lambda^i$.}\ea$$ The degree of the supercharacter $\chi_\lambda$ is $$\chi_\lambda(1) = |G\lambda| =|\lambda G|= |{\mathfrak{n}}| / |\fk l _\lambda^1|\qquad\text{and in addition}\qquad \langle \chi_\lambda,\chi_\lambda\rangle_G = |G\lambda \cap \lambda G| = |\fk s_\lambda^1| / |\fk l_\lambda^1|.$$ We then have an ascending and descending chain of subspaces $$\label{chain} 0 = \fk l_\lambda^0 \subset \fk l_\lambda^1 \subset \fk l_\lambda^2 \subset \cdots \subset \fk s_\lambda^2 \subset \fk s_\lambda^1 \subset \fk s_\lambda^0 = {\mathfrak{n}}$$ with the following properties: 1. Each $\fk s_\lambda^{i+1}$ is a subalgebra of $\fk s_\lambda^i$. 2. Each $\fk l_\lambda^{i+1}$ is a right ideal of $\fk s_\lambda^i$ and a two-sided ideal of $\fk s_\lambda^{i+1}$. 3. 
If $\fk s_\lambda^{d-1} = \fk s_\lambda^{d}$ for some $d\geq 1$ then $\fk l_\lambda^{d+i} = \fk l_\lambda^{d}$ and $\fk s_\lambda^{d+i} = \fk s_\lambda^d$ for all $i\geq 0$. Let $d\geq 1$ be an integer such that condition 3 holds$-$by dimensional considerations, some such $d$ exists$-$and define the subalgebras $\olfkl_\lambda,\olfks_\lambda\subset{\mathfrak{n}}$ and algebra subgroups $\oll_\lambda,\ols_\lambda\subset G$ by $$\olfkl_\lambda = \fk l_\lambda^{d}, \qquad \olfks_\lambda = \fk s_\lambda^{d}\qquad\text{and} \qquad \oll_\lambda = 1+\olfkl_\lambda, \qquad \ols_\lambda = 1+\olfks_\lambda.$$ Observe that $\olfkl_\lambda$ and $\olfks_\lambda$ are just the terminal elements in the ascending and descending chains (\[chain\]). The function $\theta_\lambda: G \to {\mathbb{C}}$ restricts to a linear character of $\oll_\lambda$ and we define the character $\xi_\lambda$ of $G$ by $$\xi_\lambda = \Ind_{\oll_\lambda}^G(\theta_\lambda).$$ This character is a possibly reducible constituent, with degree $|G|/|\oll_\lambda|$, of the supercharacter $\chi_\lambda$ of $G$. Notably, if $\ols_\lambda =G$ then $\xi_\lambda = \chi_\lambda$. To give these functions a formula, define $$\Xi_\lambda =\left\{ g\lambda sg^{-1} : g \in G,\ s \in \ols_\lambda\right\} \subset {\mathfrak{n}}^*.$$ One can show that $\Xi_\mu = \Xi_\lambda$ if $\mu \in \Xi_\lambda$, so the sets $\Xi_\lambda$ partition ${\mathfrak{n}}^*$ into unions of coadjoint orbits. The following result combines Corollary 3.1, Proposition 3.1, Theorem 3.3, Corollary 4.1, and Proposition 4.2 in [@supp0] and enumerates the properties of the characters $\xi_\lambda$ which will be of use in our applications. \[structural\] Let ${\mathfrak{n}}$ be a finite-dimensional nilpotent ${\mathbb{F}}_q$-algebra and write $G=1+{\mathfrak{n}}$. If $\lambda,\mu \in {\mathfrak{n}}^*$, then 1. 
$\displaystyle \xi_\lambda =\frac{|\ols_\lambda|}{\left|G\right|} \sum_{\nu \in \Xi_\lambda} \theta_\nu $ and $\displaystyle |\Xi_\lambda| = \frac{|G|^2}{| \oll_\lambda|| \ols_\lambda|}. $ 2. $\xi_\lambda = \xi_{\mu}$ if and only if $\mu \in \Xi_\lambda$, and if $\mu \notin \Xi_\lambda$ then $\xi_\lambda$ and $\xi_{\mu}$ share no irreducible constituents. In particular, $$\left\langle \xi_\lambda, \xi_{\mu} \right\rangle_G = \left\{\ba & |\ols_\lambda| / |\oll_\lambda|,&&\text{if }\mu \in \Xi_\lambda, \\& 0,&&\text{otherwise},\ea\right.$$ and $\xi_\lambda$ is irreducible if and only if $\oll_\lambda =\ols_\lambda$, in which case $\xi_\lambda = \psi_\lambda = \logpsi_\lambda$. 3. The irreducible constituents of the characters $\{ \xi_\lambda : \lambda \in {\mathfrak{n}}^*\}$ partition $\Irr(G)$, and the map $$\barr{ccc} \Irr\(\ols_\lambda,\Ind_{\oll_\lambda}^{\ols_\lambda}(\theta_\lambda)\) & \to & \Irr(G,\xi_\lambda) \\ \psi & \mapsto & \Ind_{\ol S_\lambda}^{G}(\psi) \earr$$ is a bijection. Furthermore, if $\mu \in ( \olfks_\lambda )^*$ is given by restricting $\lambda$ to $\olfks_\lambda$, then the Kirillov function $\psi_\mu$ (respectively, $\logpsi_\mu$) is a character of $\ols_\lambda$ if and only if $\psi_\lambda$ (respectively, $\logpsi_\lambda$) is a character of $G$. 4. If $(\olfks_\lambda)^p \subset \olfkl_\lambda \cap \ker \lambda$ where $p>0$ is the characteristic of ${\mathbb{F}}_q$, then $\logpsi_\lambda \in \Irr(G)$. Let $p>0$ be the characteristic of ${\mathbb{F}}_q$ and suppose $r$ is a power of $p$ for which ${\mathbb{Q}}(\zeta_r)$ is a splitting field for $G$. If $\alpha \in \Gal\({\mathbb{Q}}(\zeta_r) / {\mathbb{Q}}(\zeta_p)\)$, then clearly $\alpha\circ \xi_\lambda = \xi_\lambda$. Therefore, if $\chi $ is an irreducible constituent of $\xi_\lambda$ then $\alpha \circ \chi$ is as well. 
In light of part (3) of the preceding statement, it follows that $$\label{rmk} \alpha \circ \psi = \psi\quad\text{if and only if}\quad \alpha \circ \Ind_{\ols_\lambda}^G(\psi) = \Ind_{\ols_\lambda}^G(\psi),\qquad\text{for }\psi \in \Irr\(\ols_\lambda,\Ind_{\oll_\lambda}^{\ols_\lambda}(\theta_\lambda)\) .$$ Now, for each $1\leq i \leq \log_p(r)$ there exists $\alpha \in \Gal\({\mathbb{Q}}(\zeta_r) / {\mathbb{Q}}(\zeta_p)\)$ whose fixed field is ${\mathbb{Q}}(\zeta_{p^i})$; namely, one can take $\alpha$ to be the unique automorphism with $\alpha(\zeta_r) = \zeta_r^{1+p^i}$. Consequently, if $\psi$ takes values outside the field ${\mathbb{Q}}(\zeta_{p^i})$, so that $\alpha \circ\psi \neq \psi$, then the same is true of the irreducible constituent $\Ind_{\ols_\lambda}^G(\psi)$ of $\xi_\lambda$. Here is an easy example of how one can use the constructions in this section to directly decompose a supercharacter. Consider the supercharacter $\chi_\lambda$ of $\UT_6(q)$ indexed by $$\lambda = e_{13}^* + e_{24}^* + e_{35}^* + e_{36}^* \in \fkt_6(q)^*.$$ It is an instructive exercise to compute $$\ba \fk l_\lambda^1 &= \{ X \in \fkt_6(q) : X_{12} = X_{23} = X_{34} = X_{45} =0\}, \\ \fk l^2_\lambda & = \{ X \in \fkt_6(q) : X_{12} = X_{23} = X_{45} = 0\}, \\ \fk l_\lambda^3 &= \fk s_\lambda^2\ea \qquad \ba \fk s_\lambda^1 &= \{ X \in \fkt_6(q) : X_{45} = 0\}, \\ \fk s^2_\lambda & = \{ X \in \fkt_6(q): X_{23} = X_{45} = 0\}, \\ \fk s^3_\lambda&= \fk s^2_\lambda, \ea$$ so $\chi_\lambda$ has degree $q^4$ and $\xi_\lambda$ is irreducible with degree $q^2$ since $\oll_\lambda = \ols_\lambda$. The elements in the two-sided $\UT_6(q)$-orbit of $\lambda$ are those of the form $\mu = \lambda + \sum_{i} a_i e_{i,i+1}^*$, so $\mu(XY) = \lambda(XY)$ for all $X,Y \in \fkt_6(q)$. This means by definition that $\oll_\lambda = \oll_\mu = \ols_\lambda = \ols_\mu$ and hence that each $\xi_\mu$ is irreducible with degree $q^2$. 
As $\chi_\lambda$ is a linear combination of such characters $\xi_\mu$, it follows that every irreducible constituent of $\chi_\lambda$ has degree $q^2$. The irreducible constituents of a supercharacter which have the same degree also have the same multiplicity, since a constant times $\chi_\lambda$ is the character of a two-sided ideal in the group algebra of $\UT_n(q)$. The constant in general is $|{\mathfrak{n}}|/|\fk s_\lambda^1|$ and is in this case $q$, so since $\chi_\lambda(1) = q^4$ it follows that $\chi_\lambda$ decomposes as a sum of $q$ distinct irreducible characters of degree $q^2$, each appearing with multiplicity $q$. Inflation from quotients by algebra subgroups {#infl-sec} --------------------------------------------- We will make use in the next sections of the following result concerning the effect of inflation on the functions $\psi_\lambda$, $\logpsi_\lambda$, $\chi_\lambda$, $\xi_\lambda$. Suppose ${\mathfrak{n}}$ is a nilpotent ${\mathbb{F}}_q$-algebra with a two-sided ideal ${\mathfrak{h}}$. Let $\q={\mathfrak{n}}/{\mathfrak{h}}$ be the quotient algebra, and write $G = 1+{\mathfrak{n}}$ and $Q = 1+\q$. Also, let $\wt \pi : {\mathfrak{n}}\to \q$ be the quotient map, and define $\pi :G\to Q$ by $\pi(1+X) = 1+\wt\pi(X)$; both of these are surjective homomorphisms, of algebras and groups, respectively. The following is proved as Observation 4.1 in [@supp0]. \[inflation\] If $\lambda \in {\mathfrak{n}}^*$ has $\ker \lambda \supset {\mathfrak{h}}$, then there exists a unique $\mu \in \q^*$ with $\lambda = \mu\circ \wt\pi$, and $$\psi_\lambda = \psi_{\mu} \circ \pi, \qquad \logpsi_\lambda = \logpsi_{\mu} \circ \pi, \qquad \chi_\lambda = \chi_{\mu } \circ\pi, \qquad\text{and} \qquad \xi_\lambda = \xi_{\mu} \circ \pi.$$ Furthermore, $\oll_\lambda = \pi^{-1}(\oll_{\mu})$ and $\ols_\lambda = \pi^{-1}(\ols_\mu) $. 
Since $\chi \mapsto \chi\circ \pi$ defines an injection $\Irr(Q)\to \Irr(G)$, by [@Isaacs Lemma 2.22] for example, if $\psi_\mu$ or $\logpsi_\mu$ are characters in this setup then the same is true of $\psi_\lambda$ or $\logpsi_\lambda$, respectively. We will appeal to this observation most often in the special case when ${\mathfrak{n}}$ has a vector space decomposition $$\label{above} {\mathfrak{n}} = {\mathfrak{a}} \oplus {\mathfrak{h}},\qquad\text{where }\left\{\ba &\text{${\mathfrak{a}}$ is a subalgebra of ${\mathfrak{n}}$,} \\ &\text{${\mathfrak{h}}$ is a two-sided ideal of ${\mathfrak{n}}$.}\ea\right.$$ Write $G = 1+{\mathfrak{n}}$, $A = 1+{\mathfrak{a}}$, and $H = 1+{\mathfrak{h}}$. In this situation, we may identify $Q = 1+{\mathfrak{n}}/{\mathfrak{h}}$ with the algebra subgroup $A$; Observation \[inflation\] then holds where $\wt \pi :{\mathfrak{n}}\to {\mathfrak{a}}$ and $\pi :G\to A$ are the projection maps $$\wt \pi(a+h) = a\qquad\text{and}\qquad \pi(1+a+h)=1+a,\qquad\text{for }a\in {\mathfrak{a}},\ h\in {\mathfrak{h}}.$$ Also, if $\lambda \in {\mathfrak{n}}^*$ then the unique $\mu \in {\mathfrak{a}}^*$ with $\lambda = \mu\circ \wt \pi$ is given by the restriction $\mu = \lambda\downarrow {\mathfrak{a}}$. Applications to the unitriangular group $\UT_n(q)$ {#appl} ================================================== In this section we construct characters of the unitriangular group which take values in ${\mathbb{Q}}(\zeta_{p^i})$ for any $i\geq 0$. Approaching this goal, we first establish some general results concerning pattern groups, an accessible family of algebra groups, and then work to identify characters with large-field values in a specific algebra group, which will turn out to be a quotient of $\olfks_\lambda$ for a certain $\lambda \in \fkt_n(q)^*$. Following this preliminary work, we identify our characters of interest as irreducible constituents of one of the characters $\xi_\lambda$ of $\UT_n(q)$. Pattern groups {#pattern} -------------- A *pattern algebra* is any subalgebra of $\fkt_n(q)$ spanned over ${\mathbb{F}}_q$ by a set of elementary matrices $e_{ij}$, and a *pattern group* is an algebra group corresponding to a pattern algebra.
Pattern groups provide the most accessible examples of algebra groups, and much can be said about their supercharacters and class functions; see, for example, [@DT; @I; @MT; @M2; @TV]. Given a subset of positions in an upper triangular matrix ${\mathcal{P}}\subset \{ (i,j) : 1\leq i< j \leq n\}$, define $$\label{form} \fkt_{n,{\mathcal{P}}}(q) = \left\{ X \in \fkt_n(q) : X_{ij} = 0\text{ if }(i,j)\notin {\mathcal{P}}\right\}\qquad\text{and}\qquad \UT_{n,{\mathcal{P}}}(q) = 1+\fkt_{n,{\mathcal{P}}}(q).$$ It is not difficult to show that $\fkt_{n,{\mathcal{P}}}(q)$ is a subalgebra if and only if ${\mathcal{P}}$ is *closed*, by which we mean that $(i,j),(j,k) \in {\mathcal{P}}$ implies $(i,k) \in {\mathcal{P}}$. Every pattern algebra and pattern group is thus of the form (\[form\]) for a closed set of positions ${\mathcal{P}}$. Such closed sets ${\mathcal{P}}$ are naturally in bijection with partial orderings of $[n]$ which are subordinate to the standard linear ordering $1<2<\dots<n$. Specifically, ${\mathcal{P}}$ corresponds to the ordering $\prec$ defined by $i\prec j$ if and only if $(i,j) \in{\mathcal{P}}$; the closed condition on ${\mathcal{P}}$ corresponds to the transitivity condition on $\prec$. The group $\UT_n(q)$ is the pattern group corresponding to the set of positions ${\mathcal{P}}= \{ (i,j) : 1\leq i<j\leq n\}$ and the standard linear ordering of $[n]$. Recall that a matrix is monomial if it has exactly one nonzero entry in each row and column. Following [@Sangroniz], we define a matrix to be *quasi-monomial* if it has at most one nonzero entry in each row and column. If ${\mathfrak{n}}= \fkt_{n,{\mathcal{P}}}(q)$ is a pattern algebra, then there is a natural isomorphism ${\mathfrak{n}}\cong {\mathfrak{n}}^*$ given by associating $X \in {\mathfrak{n}}$ to the map $Y \mapsto \tr(X^TY) = \sum_{(i,j) \in {\mathcal{P}}} X_{ij} Y_{ij}$ in ${\mathfrak{n}}^*$. We say that $\lambda \in {\mathfrak{n}}^*$ is *quasi-monomial* if $\lambda$ corresponds to a quasi-monomial matrix in ${\mathfrak{n}}$ under this isomorphism.
Equivalently, if we define $$\lambda_{ij} = \left\{\ba & \lambda(e_{ij}),&&\text{if }(i,j) \in {\mathcal{P}}, \\ & 0,&&\text{if }(i,j)\notin{\mathcal{P}},\ea\right.$$ then $\lambda$ is quasi-monomial if $\lambda_{ij}\neq 0$ for at most one position $(i,j) \in {\mathcal{P}}$ in each row and column. The following easy lemma will be of great use in the calculations we undertake in Section \[constructions\]. \[monomial\] Suppose ${\mathfrak{n}}= \fkt_{n,{\mathcal{P}}}(q)$ is a pattern algebra and $\lambda \in {\mathfrak{n}}^*$ is quasi-monomial. If we define $$\ba \cL_\lambda &=\left \{ (i,j) \in {\mathcal{P}}:\exists k\in [n]\text{ with }\lambda_{ik}\neq 0\text{ and }(j,k) \in {\mathcal{P}}\right \}, \\ {\mathcal{S}}_\lambda &= \left \{ (i,j) \in{\mathcal{P}}: \exists k\in [n]\text{ with }\lambda_{ik}\neq 0\text{ and }(j,k) \in{\mathcal{P}}\text{ and } (j,k) \notin \cL_\lambda\right\}, \ea$$ then $\lambda G = \lambda + {\mathbb{F}}_q\spanning\left\{ e_{ij}^* : (i,j) \in \cL_\lambda\right\}$ and for all $\mu \in \lambda G$ we have $$\fk l_\mu^1 = \left\{ X \in \fkt_{n,{\mathcal{P}}}(q) : X_{ij} =0\text{ if }(i,j) \in \cL_\lambda\right\}\quad\text{and}\quad \fk s_\mu^1 = \left\{ X \in \fkt_{n,{\mathcal{P}}}(q) : X_{ij} =0\text{ if }(i,j) \in {\mathcal{S}}_\lambda\right\}.$$ Observe that if ${\mathfrak{n}}= \fkt_n(q)$, so that ${\mathcal{P}}= \{ (i,j) : i,j \in [n],\ i<j\}$, then $\cL_\lambda$ consists of all upper triangular positions strictly to the left of positions $(i,j)$ with $\lambda_{ij}\neq 0$, and $$\label{monomial-note} \cL_\lambda\setminus {\mathcal{S}}_\lambda = \left\{ (i,j): \exists\, k,l\text{ with }1\leq i<j<k<l\leq n\text{ and }\lambda_{ik},\lambda_{jl}\neq 0\right\}.$$ We have $\fk l^1_\mu = \fk l^1_\lambda$ and $\fk s_\mu^1 = \fk s_\lambda^1$ by [@supp0 Lemma 3.1]. Let $Y \in \fkt_{n,{\mathcal{P}}}(q)$ and $(i,j) \in {\mathcal{P}}$, so that $\lambda(e_{ij}Y)=\sum_{k \in [n]} \lambda_{ik} Y_{jk}$.
Since $\lambda$ is quasi-monomial, this is nonzero only if there exists $k \in [n]$ such that $\lambda_{ik} \neq 0$ and $Y_{jk} \neq 0$, in which case $(i,j) \in \cL_\lambda$. It follows that $$\label{monomial-eq} \fk l_\lambda^1 \supset \left\{ X \in \fkt_{n,{\mathcal{P}}}(q) : X_{ij} =0\text{ if }(i,j) \in \cL_\lambda\right\}.$$ On the other hand, if $X \in {\mathbb{F}}_q\spanning \{ e_{ij} : (i,j) \in \cL_\lambda\}$ then either $X= 0$ or $X_{ij} \neq 0$ for some $(i,j) \in \cL_\lambda$, for which there exists $k \in [n]$ with $\lambda_{ik} \neq 0$ and $(j,k) \in {\mathcal{P}}$. In this case $\lambda(XY)=\lambda_{ik}X_{ij} \neq 0$ for $Y= e_{jk} \in \fkt_{n,{\mathcal{P}}}(q)$ since $\lambda$ is quasi-monomial and $XY$ has nonzero entries only in one column; hence no nonzero element of ${\mathbb{F}}_q\spanning \{ e_{ij} : (i,j) \in \cL_\lambda\}$ lies in $\fk l_\lambda^1$. It follows that (\[monomial-eq\]) must be an equality. The proof of our characterization of $\fk s_\mu^1 = \fk s_\lambda^1$ proceeds by an almost identical argument, and our description of $\lambda G$ is immediate from [@DI Lemma 4.2(c)]. Our next result is a slight generalization of Theorem 7.2 in [@AndreAdjoint]. It is noteworthy mostly for the almost trivial proof we can give using the preceding lemma. If ${\mathfrak{n}}= \fkt_{n,{\mathcal{P}}}(q)$ is a pattern algebra and $\lambda \in {\mathfrak{n}}^*$ is quasi-monomial then $\xi_\lambda = \psi_\lambda$; i.e., the Kirillov function $\psi_\lambda$ is a well-induced, irreducible character. We note that if ${\mathfrak{n}}=\fkt_{n,{\mathcal{P}}}(q)$ is a pattern algebra and $\lambda \in {\mathfrak{n}}^*$ is quasi-monomial, then $\fk l_\lambda^1$, $\fk s_\lambda^1$ are also pattern algebras and $\lambda$ restricts to a quasi-monomial map $\fk s_\lambda^1\to {\mathbb{F}}_q$. Noting the chain (\[chain\]), it suffices therefore to show that if $\lambda \in {\mathfrak{n}}^*$ is quasi-monomial, then either $\fk l_\lambda^1 = \fk s_\lambda^1 $ or $\fk s_\lambda^1 \subsetneq {\mathfrak{n}}$.
This will force $\olfkl_\lambda = \olfks_\lambda$ by dimensional considerations, and thus $\xi_\lambda = \psi_\lambda$ by Theorem \[structural\]. To this end, suppose $\fk s_\lambda^1 = {\mathfrak{n}}$ so that ${\mathcal{S}}_\lambda = \varnothing$. If $(i,j) \in \cL_\lambda$ so that there exists $k\in [n]$ with $\lambda_{ik}\neq0$ and $(j,k) \in {\mathcal{P}}$, then $(j,k) \in \cL_\lambda$ since otherwise $(i,j) \in {\mathcal{S}}_\lambda$. Choosing $(i,j) \in \cL_\lambda$ with $j$ maximal and applying this argument thus gives a contradiction. Therefore $\cL_\lambda=\varnothing$ so $\fk l_\lambda^1 = \fk s_\lambda^1$. Before closing this section, we discuss the following important fact due originally to André [@Andre1] and Yan [@Yan]: the quasi-monomial maps $\lambda \in \fkt_n(q)^*$ index the distinct supercharacters of $\UT_n(q)$; i.e., the map $$\barr{ccc} \bigl\{{ \text{Quasi-monomial maps } \lambda \in \fkt_n(q)^*}\bigr \} &\to&\bigl\{{\text{Supercharacters of $\UT_n(q)$}}\bigr\} \\ \lambda&\mapsto& \chi_\lambda\earr$$ is a bijection. Furthermore, each quasi-monomial $\lambda$ naturally corresponds to a set partition of $[n]$. This lends an interesting combinatorial interpretation to many of the representation theoretic properties of the supercharacters $\chi_\lambda$. In more detail, we recall that a *set partition* of $[n]$ is a set $\Lambda = \{\Lambda_1,\dots,\Lambda_k\}$ of disjoint nonempty sets $\Lambda_i$ whose union is $[n]$. We call the sets $\Lambda_i$ the *parts* of $\Lambda$ and write $\Lambda \vdash[n]$ to indicate that $[n]$ is the union of the parts of $\Lambda$. We define the *(unlabeled) shape* of a quasi-monomial $\lambda \in \fkt_n(q)^*$ as the finest set partition of $[n]$ in which $i,j$ belong to the same part whenever $\lambda_{ij} \neq 0$.
Alternatively, the shape of $\lambda$ is the set partition whose parts are the vertex sets of the weakly connected components of the (weighted, directed) graph whose adjacency matrix is $\(\lambda_{ij}\)$. For example, if $a,b,c \in {\mathbb{F}}_q^\times$ then $$\lambda= ae_{1,3}^* +b e_{2,4}^* +ce_{3,5}^* \in \fkt_6(q)^* \qquad\text{has shape}\qquad \{\{1,3,5\},\{2,4\}, \{6\}\}\vdash[6].$$ The shape of a supercharacter $\chi$ of $\UT_n(q)$ is by definition the shape of the unique quasi-monomial $\lambda \in \fkt_n(q)^*$ with $\chi = \chi_\lambda$. We introduce this terminology largely so that we can succinctly refer to the supercharacters of $\UT_n(q)$ which house our exotic irreducible characters as constituents. The shape of a supercharacter nevertheless encapsulates a good deal of less than obvious information about its irreducible constituents, a theme which we explore in greater detail in [@supp2]. Complex characters of algebra groups {#cmplx-chars} ------------------------------------ Our strategy to construct exotic characters of $\UT_n(q)$ is to construct them instead for the smaller algebra group $\ols_\lambda$ for some $\lambda\in \fkt_n(q)^*$. To accomplish this, we must of course have some examples of algebra groups whose characters have values in large cyclotomic fields. We provide a sort of quintessential construction here. Fix a positive integer $n>1$ and define ${\mathfrak{a}}_n(q)$ as the nilpotent ${\mathbb{F}}_q$-algebra $$\label{fka-1} {\mathfrak{a}}_n(q) = \left\{ X \in \fkt_n(q) : X_{i+1,j+1} = X_{i,j}\text{ for all }1\leq i<j < n\right\},$$ so that ${\mathfrak{a}}_n(q)$ consists of all $n\times n$-matrices over ${\mathbb{F}}_q$ of the form $$X = \(\barr{cccccc} 0 & a_2 & a_3 & \cdots & a_n \\ & 0& a_2 & \ddots & \vdots \\ & & 0 & \ddots & a_3 \\ && & \ddots & a_2 \\ & & & & 0 \earr\),\qquad\text{with $a_i \in {\mathbb{F}}_q$ and zeros below the diagonal}.$$ Let $\mA=1+{\mathfrak{a}}_n(q)$ be the corresponding algebra group.
If $X$ is any such matrix with $a_2\neq 0$ then the elements $X,X^2,\dots,X^{n-1}$ form a basis for ${\mathfrak{a}}_n(q)$ over ${\mathbb{F}}_q$, and it follows that ${\mathfrak{a}}_n(q)$ is commutative and $\mA$ is abelian. Let $\kappa \in {\mathfrak{a}}_n(q)^*$ be the linear map defined by $$\label{kappa} \kappa(X) = X_{1,n},\qquad\text{for }X \in {\mathfrak{a}}_n(q).$$ It is not difficult to see directly from the definitions in Section \[xi\] that $ \olfkl_\kappa = {\mathbb{F}}_q\spanning\{e_{1,n}\} $ and $\olfks_\kappa = \fk a_n(q)$. Thus $\ols_\kappa = \mA$ so $\xi_\kappa$ coincides with the supercharacter $\chi_\kappa$, and we have $$\label{formula} \chi_\kappa(g) = \Ind_{\oll_\kappa}^{\mA}(\theta_\kappa)(g) = \left\{\ba & q^{n-2}\theta(g_{1,n}),&&\text{if }g \in \oll_\kappa, \\ & 0,&&\text{otherwise}, \ea\right.\qquad\text{for }g \in \mA.$$ Since $\langle \chi_\kappa,\chi_\kappa\rangle_{\mA} = \chi_\kappa(1) = q^{n-2}$, the supercharacter $\chi_\kappa$ decomposes as a sum of $q^{n-2}$ distinct linear characters. We can say more about these constituents: \[cmplx-constits\] Fix an integer $n>1$ and define $\kappa \in {\mathfrak{a}}_n(q)^*$ by (\[kappa\]) as above. Let $p>0$ be the characteristic of ${\mathbb{F}}_q$ and suppose $p^i$ is the largest power of $p$ less than $n$. The following then hold: 1. The values of the irreducible constituents of $\chi_\kappa$ are contained in the cyclotomic field ${\mathbb{Q}}(\zeta_{p^{i+1}})$, and some irreducible constituent of $\chi_\kappa$ takes every $p^{i+1}$th root of unity as a value. 2. The Kirillov function $\psi_\kappa$ is a character of $\mA$ if and only if $n= 2$. 3. The exponential Kirillov function $\logpsi_\kappa$ is a character of $\mA$ if and only if $n\leq p$. The normalization of $\kappa \in {\mathfrak{a}}_n(q)^*$ is not important, as everything above still holds if one replaces $\kappa$ with $t\cdot \kappa$ for any nonzero $t \in {\mathbb{F}}_q$; this just corresponds to a different choice of $\theta : {\mathbb{F}}_q^+ \to {\mathbb{C}}^\times$. Also, regarding (3) we note that $\logpsi_{\nu}$ is a character for every $\nu\in {\mathfrak{a}}_n(q)^*$ if $n\leq p$ by [@Sangroniz Corollary 3].
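Before turning to the proof, the elementary structural claims in play — commutativity of ${\mathfrak{a}}_n(q)$, the nilpotency $X^i = 0$ exactly for $i \geq n$, and the order $pr$ of $x = 1+X$ — can be spot-checked numerically. A minimal sketch over ${\mathbb{F}}_p$ with the illustrative (not prescribed) choices $p = 2$ and $n = 5$, so that $r = 4$:

```python
p, n = 2, 5  # illustrative: a_5(2)

def toeplitz(entries):
    # strictly upper triangular Toeplitz matrix built from (a_2, ..., a_n)
    return [[entries[j - i - 1] % p if j > i else 0 for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def matpow(A, e):
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(e):
        R = matmul(R, A)
    return R

I = [[int(i == j) for j in range(n)] for i in range(n)]
X = toeplitz([1, 0, 0, 0])  # the regular nilpotent element of a_n(q)

# X^i = 0 if and only if i >= n
assert all(any(v for row in matpow(X, i) for v in row) for i in range(1, n))
assert matpow(X, n) == [[0] * n for _ in range(n)]

# a_n(q) is commutative (it is spanned by the powers of X)
A, B = toeplitz([1, 0, 1, 1]), toeplitz([1, 1, 0, 1])
assert matmul(A, B) == matmul(B, A)

# x = 1 + X has multiplicative order p*r, r the largest power of p below n
r = 1
while r * p < n:
    r *= p
x = [[(I[i][j] + X[i][j]) % p for j in range(n)] for i in range(n)]
order = next(e for e in range(1, p * r + 1) if matpow(x, e) == I)
assert order == p * r
```

In this instance the element $x = 1+X$ has order $8$, which is why the abelian group $\mA$ here admits linear characters with values among the $8$th roots of unity, in line with part (1).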
Write ${\mathfrak{a}}= {\mathfrak{a}}_n(q)$ and $G = \mA$, and let $X \in {\mathfrak{a}}$ be the matrix with $X_{1,2}= X_{2,3}=\dots=X_{n-1,n}=1$ and all other entries zero. One checks that $X^i=0$ if and only if $i\geq n$, and hence that $Y^n=0$ for all $Y \in {\mathfrak{a}}$ since powers of $X$ span ${\mathfrak{a}}$. Let $r$ denote the largest power of $p$ less than $n$. Then $pr\geq n$ so $y^{pr}=1+(y-1)^{pr}=1$ for all $y \in G$ and as $G$ is an abelian $p$-group this implies the first half of (1). Observe that $x = 1+X \in G$ has order $pr$ since $x^{p^j} = 1+X^{p^j} \neq 1$ for $1\leq j \leq \log_p (r)$. Let $H = \langle x \rangle$, and choose $\vartheta \in \Irr(H)$ such that $\vartheta(x)$ is a primitive $pr^{\mathrm{th}}$ root of unity with $\vartheta(x)^r = \theta(1)$. Working from the definitions, it is straightforward to show that $\left\langle \chi_\kappa,\Ind_H^{G}(\vartheta) \right\rangle_{G}>0$ so at least one irreducible constituent of $\Ind_H^G(\vartheta) $ appears in $\chi_\kappa$. Since $G$ is abelian we have $\Ind_H^G(\vartheta)(x) = \frac{|G|}{|H|} \vartheta(x)$, and it follows that every irreducible constituent $\psi$ of $\Ind_H^G(\vartheta)$ has $\psi(x) = \vartheta(x)$, completing the proof of (1). A function $G \to {\mathbb{C}}$ taking the value $1$ at the identity is a character if and only if it defines a homomorphism $G\to {\mathbb{C}}^\times$. Since ${\mathfrak{a}}$ is commutative we have $\kappa^G = \{\kappa\}$ so $\psi_\kappa = \theta_\kappa$. This is a homomorphism if and only if $n= 2$, as one can check by considering its values at powers of $x=1+X$. If $n\leq p$ then $\logpsi_\kappa$ is a character by [@Sangroniz Corollary 3]. Suppose $n>p$ and let $Z = X^{n-1}$. Since $Z$ annihilates ${\mathfrak{a}}$, we have $\exp(Y+Z) = \exp(Y)+Z$ for all $Y \in {\mathfrak{a}}$.
If $n=p+1$ then for $x = \exp(X)$ and $z=1+Z = \exp(Z)$ we have $x^{n-1} =z$, but $\logpsi_\kappa(x)^{n-1} =\logpsi_\kappa(x)=1\neq \logpsi_\kappa(z) = \theta(1)$; thus $\logpsi_\kappa$ is not a homomorphism. If $n\geq p+2$, one checks that if $a = \exp(X^{n-p})$ and $b = \exp(X)$ then $$\logpsi_\kappa(a) = \logpsi_\kappa(b) = 1\qquad\text{but}\qquad \logpsi_\kappa(ab) = \psi_\kappa(1+X^{n-p} +X-Z) = \theta(-1)\neq 1,$$ so $\logpsi_\kappa$ is again not a homomorphism, proving (3). Exotic characters of $\UT_n(q)$ {#constructions} ------------------------------- The goal of this section is to prove the following theorem promised in the introduction. \[main\] Let $p>0$ be the characteristic of ${\mathbb{F}}_q$ and let $r=p^e$ for any integer $e>0$. If $n > 6r$, then $\UT_n(q)$ has an irreducible character of degree $q^{5r^2-2r}$ whose set of values is contained in ${\mathbb{Q}}(\zeta_{pr})$ but not ${\mathbb{Q}}(\zeta_r)$. Such a character in fact occurs as an irreducible constituent of each supercharacter of $\UT_n(q)$ whose shape is the set partition of $[n]$ whose parts are the $n-4r-1$ sets $$\ba \{ 1,2r+1,3r+1,4r+1,6r+1\}& \\ \{ i,2r+i,3r+i,5r+i \}&\text{ for }1< i \leq r\\ \{ i, 3r+1+i\} &\text{ for }r+1\leq i \leq 2r \\ \{ i\}&\text{ for }6r+1<i\leq n. \ea$$ Setting $r=2$ and $n=13$ in this statement proves Conjecture 4.1 in [@IK05]. Figure \[fig0\] below illustrates the set partition in the theorem when $r=2,4,8,16$ and $n=6r+1$.
$$\barr{c} \setlength{\unitlength}{4.5cm} \begin{picture}(1.00, 1.00) \put(0.3803,0.9855){\line(-29,-77){0.2918}} \put(0.3803,0.9855){\circle*{0.015}} \put(0.1684,0.8743){\line(10,-82){0.0992}} \put(0.1684,0.8743){\circle*{0.015}} \put(0.0325,0.6773){\line(96,-24){0.9639}} \put(0.0325,0.6773){\circle*{0.015}} \put(0.0036,0.4397){\line(96,24){0.9639}} \put(0.0036,0.4397){\circle*{0.015}} \put(0.0885,0.2160){\line(41,-22){0.4115}} \put(0.0885,0.2160){\circle*{0.015}} \put(0.2676,0.0573){\line(46,0){0.4647}} \put(0.2676,0.0573){\circle*{0.015}} \put(0.5000,0.0000){\line(41,22){0.4115}} \put(0.5000,0.0000){\circle*{0.015}} \put(0.7324,0.0573){\line(10,82){0.0992}} \put(0.7324,0.0573){\circle*{0.015}} \put(0.9115,0.2160){\line(-29,77){0.2918}} \put(0.9115,0.2160){\circle*{0.015}} \put(0.9964,0.4397){\circle*{0.015}} \put(0.9675,0.6773){\circle*{0.015}} \put(0.8316,0.8743){\circle*{0.015}} \put(0.6197,0.9855){\circle*{0.015}} \end{picture} \qquad\qquad \setlength{\unitlength}{4.5cm} \begin{picture}(1.00, 1.00) \put(0.4373,0.9961){\line(-36,-76){0.3595}} \put(0.4373,0.9961){\circle*{0.015}} \put(0.3159,0.9649){\line(-16,-83){0.1582}} \put(0.3159,0.9649){\circle*{0.015}} \put(0.2061,0.9045){\line(5,-84){0.051}} \put(0.2061,0.9045){\circle*{0.015}} \put(0.1147,0.8187){\line(26,-80){0.2609}} \put(0.1147,0.8187){\circle*{0.015}} \put(0.0476,0.7129){\line(93,-37){0.9279}} \put(0.0476,0.7129){\circle*{0.015}} \put(0.0089,0.5937){\line(99,-13){0.9902}} \put(0.0089,0.5937){\circle*{0.015}} \put(0.0010,0.4686){\line(99,13){0.9902}} \put(0.0010,0.4686){\circle*{0.015}} \put(0.0245,0.3455){\line(93,37){0.9279}} \put(0.0245,0.3455){\circle*{0.015}} \put(0.0778,0.2321){\line(42,-23){0.4222}} \put(0.0778,0.2321){\circle*{0.015}} \put(0.1577,0.1355){\line(47,-12){0.4666}} \put(0.1577,0.1355){\circle*{0.015}} \put(0.2591,0.0618){\line(48,0){0.4818}} \put(0.2591,0.0618){\circle*{0.015}} \put(0.3757,0.0157){\line(47,12){0.4666}} \put(0.3757,0.0157){\circle*{0.015}} 
\put(0.5000,0.0000){\line(42,23){0.4222}} \put(0.5000,0.0000){\circle*{0.015}} \put(0.6243,0.0157){\line(26,80){0.2609}} \put(0.6243,0.0157){\circle*{0.015}} \put(0.7409,0.0618){\line(5,84){0.051}} \put(0.7409,0.0618){\circle*{0.015}} \put(0.8423,0.1355){\line(-16,83){0.1582}} \put(0.8423,0.1355){\circle*{0.015}} \put(0.9222,0.2321){\line(-36,76){0.3595}} \put(0.9222,0.2321){\circle*{0.015}} \put(0.9755,0.3455){\circle*{0.015}} \put(0.9990,0.4686){\circle*{0.015}} \put(0.9911,0.5937){\circle*{0.015}} \put(0.9524,0.7129){\circle*{0.015}} \put(0.8853,0.8187){\circle*{0.015}} \put(0.7939,0.9045){\circle*{0.015}} \put(0.6841,0.9649){\circle*{0.015}} \put(0.5627,0.9961){\circle*{0.015}} \end{picture} \\ \\ \setlength{\unitlength}{4.5cm} \begin{picture}(1.00, 1.00) \put(0.483809,0.999738){\line(-41,-75){0.414098}} \put(0.4838,0.9997){\circle*{0.015}} \put(0.451495,0.997642){\line(-36,-78){0.364397}} \put(0.4515,0.9976){\circle*{0.015}} \put(0.419385,0.993458){\line(-31,-80){0.313168}} \put(0.4194,0.9935){\circle*{0.015}} \put(0.387612,0.987205){\line(-26,-82){0.260626}} \put(0.3876,0.9872){\circle*{0.015}} \put(0.356311,0.978909){\line(-21,-84){0.206990}} \put(0.3563,0.9789){\circle*{0.015}} \put(0.325613,0.968603){\line(-15,-85){0.15}} \put(0.3256,0.9686){\circle*{0.015}} \put(0.295646,0.956333){\line(-10,-86){0.099}} \put(0.2956,0.9563){\circle*{0.015}} \put(0.266536,0.942148){\line(-4,-86){0.040}} \put(0.2665,0.9421){\circle*{0.015}} \put(0.238406,0.928108){\line(1,-86){0.0104}} \put(0.2384,0.9261){\circle*{0.015}} \put(0.211372,0.908282){\line(7,-86){0.069604}} \put(0.2114,0.9083){\circle*{0.015}} \put(0.185550,0.888743){\line(12,-85){0.12}} \put(0.1855,0.8887){\circle*{0.015}} \put(0.161046,0.867573){\line(18,-84){0.179833}} \put(0.1610,0.8676){\circle*{0.015}} \put(0.137964,0.844862){\line(23,-83){0.23}} \put(0.1380,0.8449){\circle*{0.015}} \put(0.116400,0.820704){\line(29,-81){0.287048}} \put(0.1164,0.8207){\circle*{0.015}} 
\put(0.096445,0.795201){\line(34,-79){0.338960}} \put(0.0964,0.7952){\circle*{0.015}} \put(0.078183,0.768460){\line(39,-77){0.389452}} \put(0.0782,0.7685){\circle*{0.015}} \put(0.061691,0.740593){\line(88,-47){0.884180}} \put(0.0617,0.7406){\circle*{0.015}} \put(0.047036,0.711717){\line(91,-41){0.912545}} \put(0.0470,0.7117){\circle*{0.015}} \put(0.034282,0.681952){\line(94,-35){0.937084}} \put(0.0343,0.6820){\circle*{0.015}} \put(0.023481,0.651425){\line(96,-29){0.957692}} \put(0.0235,0.6514){\circle*{0.015}} \put(0.014679,0.620262){\line(97,-22){0.974283}} \put(0.0147,0.6203){\circle*{0.015}} \put(0.007912,0.588595){\line(99,-16){0.986787}} \put(0.0079,0.5886){\circle*{0.015}} \put(0.003209,0.556557){\line(100,-10){0.995153}} \put(0.0032,0.5566){\circle*{0.015}} \put(0.000590,0.524281){\line(100,-3){0.999345}} \put(0.0006,0.5243){\circle*{0.015}} \put(0.000066,0.491903){\line(100,3){0.999345}} \put(0.0001,0.4919){\circle*{0.015}} \put(0.001638,0.459560){\line(100,10){0.995153}} \put(0.0016,0.4596){\circle*{0.015}} \put(0.005301,0.427386){\line(99,16){0.986787}} \put(0.0053,0.4274){\circle*{0.015}} \put(0.011039,0.395516){\line(97,22){0.974283}} \put(0.0110,0.3955){\circle*{0.015}} \put(0.018827,0.364085){\line(96,29){0.957692}} \put(0.0188,0.3641){\circle*{0.015}} \put(0.028634,0.333224){\line(94,35){0.937084}} \put(0.0286,0.3332){\circle*{0.015}} \put(0.040418,0.303062){\line(91,41){0.912545}} \put(0.0404,0.3031){\circle*{0.015}} \put(0.054130,0.273726){\line(88,47){0.884180}} \put(0.0541,0.2737){\circle*{0.015}} \put(0.069711,0.245340){\line(43,-25){0.430289}} \put(0.0697,0.2453){\circle*{0.015}} \put(0.087098,0.218021){\line(45,-22){0.445267}} \put(0.0871,0.2180){\circle*{0.015}} \put(0.106216,0.191886){\line(46,-19){0.458378}} \put(0.1062,0.1919){\circle*{0.015}} \put(0.126986,0.167042){\line(47,-16){0.469566}} \put(0.1270,0.1670){\circle*{0.015}} \put(0.149321,0.143596){\line(48,-13){0.478785}} \put(0.1493,0.1436){\circle*{0.015}} 
\put(0.173126,0.121644){\line(49,-10){0.485995}} \put(0.1731,0.1216){\circle*{0.015}} \put(0.198303,0.101279){\line(49,-6){0.491167}} \put(0.1983,0.1013){\circle*{0.015}} \put(0.224745,0.082586){\line(49,-3){0.494279}} \put(0.2247,0.0826){\circle*{0.015}} \put(0.252341,0.065644){\line(50,0){0.495318}} \put(0.2523,0.0656){\circle*{0.015}} \put(0.280976,0.050524){\line(49,3){0.494279}} \put(0.2810,0.0505){\circle*{0.015}} \put(0.310530,0.037289){\line(49,6){0.491167}} \put(0.3105,0.0373){\circle*{0.015}} \put(0.340879,0.025995){\line(49,10){0.485995}} \put(0.3409,0.0260){\circle*{0.015}} \put(0.371894,0.016690){\line(48,13){0.478785}} \put(0.3719,0.0167){\circle*{0.015}} \put(0.403448,0.009411){\line(47,16){0.469566}} \put(0.4034,0.0094){\circle*{0.015}} \put(0.435406,0.004190){\line(46,19){0.458378}} \put(0.4354,0.0042){\circle*{0.015}} \put(0.467635,0.001049){\line(45,22){0.445267}} \put(0.4676,0.0010){\circle*{0.015}} \put(0.500000,0.000000){\line(43,25){0.430289}} \put(0.5000,0.0000){\circle*{0.015}} \put(0.532365,0.001049){\line(39,77){0.389452}} \put(0.5324,0.0010){\circle*{0.015}} \put(0.564594,0.004190){\line(34,79){0.338960}} \put(0.5646,0.0042){\circle*{0.015}} \put(0.596552,0.009411){\line(29,81){0.287048}} \put(0.5966,0.0094){\circle*{0.015}} \put(0.628106,0.016690){\line(23,83){0.23}} \put(0.6281,0.0167){\circle*{0.015}} \put(0.659121,0.025995){\line(18,84){0.179833}} \put(0.6591,0.0260){\circle*{0.015}} \put(0.689470,0.037289){\line(12,85){0.12}} \put(0.6895,0.0373){\circle*{0.015}} \put(0.719024,0.050524){\line(7,86){0.069604}} \put(0.7190,0.0505){\circle*{0.015}} \put(0.757659,0.928644){\line(-1,-86){0.0102}} \put(0.7477,0.0656){\circle*{0.015}} \put(0.775255,0.082586){\line(-4,86){0.040}} \put(0.7753,0.0826){\circle*{0.015}} \put(0.801697,0.101279){\line(-10,86){0.10}} \put(0.8017,0.1013){\circle*{0.015}} \put(0.826874,0.121644){\line(-15,85){0.15}} \put(0.8269,0.1216){\circle*{0.015}} \put(0.850679,0.143596){\line(-21,84){0.206990}} 
\put(0.8507,0.1436){\circle*{0.015}} \put(0.873014,0.167042){\line(-26,82){0.260626}} \put(0.8730,0.1670){\circle*{0.015}} \put(0.893784,0.191886){\line(-31,80){0.313168}} \put(0.8938,0.1919){\circle*{0.015}} \put(0.912902,0.218021){\line(-36,78){0.364397}} \put(0.9129,0.2180){\circle*{0.015}} \put(0.930289,0.245340){\line(-41,75){0.414098}} \put(0.9303,0.2453){\circle*{0.015}} \put(0.9459,0.2737){\circle*{0.015}} \put(0.9596,0.3031){\circle*{0.015}} \put(0.9714,0.3332){\circle*{0.015}} \put(0.9812,0.3641){\circle*{0.015}} \put(0.9890,0.3955){\circle*{0.015}} \put(0.9947,0.4274){\circle*{0.015}} \put(0.9984,0.4596){\circle*{0.015}} \put(0.9999,0.4919){\circle*{0.015}} \put(0.9994,0.5243){\circle*{0.015}} \put(0.9968,0.5566){\circle*{0.015}} \put(0.9921,0.5886){\circle*{0.015}} \put(0.9853,0.6203){\circle*{0.015}} \put(0.9765,0.6514){\circle*{0.015}} \put(0.9657,0.6820){\circle*{0.015}} \put(0.9530,0.7117){\circle*{0.015}} \put(0.9383,0.7406){\circle*{0.015}} \put(0.9218,0.7685){\circle*{0.015}} \put(0.9036,0.7952){\circle*{0.015}} \put(0.8836,0.8207){\circle*{0.015}} \put(0.8620,0.8449){\circle*{0.015}} \put(0.8390,0.8676){\circle*{0.015}} \put(0.8145,0.8887){\circle*{0.015}} \put(0.7886,0.9083){\circle*{0.015}} \put(0.7616,0.9261){\circle*{0.015}} \put(0.7335,0.9421){\circle*{0.015}} \put(0.7044,0.9563){\circle*{0.015}} \put(0.6744,0.9686){\circle*{0.015}} \put(0.6437,0.9789){\circle*{0.015}} \put(0.6124,0.9872){\circle*{0.015}} \put(0.5806,0.9935){\circle*{0.015}} \put(0.5485,0.9976){\circle*{0.015}} \put(0.5162,0.9997){\circle*{0.015}} \end{picture} \qquad\qquad \setlength{\unitlength}{4.5cm} \begin{picture}(1.00, 1.00) \put(0.467965,0.998973){\line(-40,-76){0.395536}} \put(0.4680,0.9990){\circle*{0.015}} \put(0.404421,0.990780){\line(-30,-80){0.30}} \put(0.4044,0.9908){\circle*{0.015}} \put(0.342446,0.974528){\line(-19,-83){0.190287}} \put(0.3424,0.9745){\circle*{0.015}} \put(0.283058,0.950484){\line(-8,-85){0.081}} \put(0.2831,0.9505){\circle*{0.015}} 
\put(0.227233,0.919044){\line(3,-85){0.0305}} \put(0.2272,0.9190){\circle*{0.015}} \put(0.175886,0.880723){\line(14,-84){0.14}} \put(0.1759,0.8807){\circle*{0.015}} \put(0.129861,0.836150){\line(24,-82){0.243312}} \put(0.1299,0.8362){\circle*{0.015}} \put(0.089914,0.786058){\line(35,-78){0.346148}} \put(0.0899,0.7861){\circle*{0.015}} \put(0.056700,0.731269){\line(90,-43){0.900506}} \put(0.0567,0.7313){\circle*{0.015}} \put(0.030766,0.672683){\line(95,-31){0.948568}} \put(0.0308,0.6727){\circle*{0.015}} \put(0.012536,0.611260){\line(98,-19){0.981055}} \put(0.0125,0.6113){\circle*{0.015}} \put(0.002310,0.548012){\line(100,-6){0.997433}} \put(0.0023,0.5480){\circle*{0.015}} \put(0.000257,0.483974){\line(100,6){0.997433}} \put(0.0003,0.4840){\circle*{0.015}} \put(0.006409,0.420200){\line(98,19){0.981055}} \put(0.0064,0.4202){\circle*{0.015}} \put(0.020666,0.357736){\line(95,31){0.948568}} \put(0.0207,0.3577){\circle*{0.015}} \put(0.042794,0.297608){\line(90,43){0.900506}} \put(0.0428,0.2976){\circle*{0.015}} \put(0.072429,0.240804){\line(43,-24){0.427571}} \put(0.0724,0.2408){\circle*{0.015}} \put(0.109084,0.188255){\line(45,-18){0.454854}} \put(0.1091,0.1883){\circle*{0.015}} \put(0.152159,0.140825){\line(47,-12){0.474669}} \put(0.1522,0.1408){\circle*{0.015}} \put(0.200945,0.099293){\line(49,-6){0.486689}} \put(0.2009,0.0993){\circle*{0.015}} \put(0.254641,0.064341){\line(49,0){0.490718}} \put(0.2546,0.0643){\circle*{0.015}} \put(0.312366,0.036542){\line(49,6){0.486689}} \put(0.3124,0.0365){\circle*{0.015}} \put(0.373173,0.016353){\line(47,12){0.474669}} \put(0.3732,0.0164){\circle*{0.015}} \put(0.436061,0.004105){\line(45,18){0.454854}} \put(0.4361,0.0041){\circle*{0.015}} \put(0.500000,0.000000){\line(43,24){0.427571}} \put(0.5000,0.0000){\circle*{0.015}} \put(0.563939,0.004105){\line(35,78){0.346148}} \put(0.5639,0.0041){\circle*{0.015}} \put(0.626827,0.016353){\line(24,82){0.243312}} \put(0.6268,0.0164){\circle*{0.015}} 
\put(0.687634,0.036542){\line(14,84){0.14}} \put(0.6876,0.0365){\circle*{0.015}} \put(0.745359,0.064341){\line(3,85){0.0305}} \put(0.7454,0.0643){\circle*{0.015}} \put(0.799055,0.099293){\line(-8,85){0.081}} \put(0.7991,0.0993){\circle*{0.015}} \put(0.847841,0.140825){\line(-19,83){0.190287}} \put(0.8478,0.1408){\circle*{0.015}} \put(0.890916,0.188255){\line(-30,80){0.30}} \put(0.8909,0.1883){\circle*{0.015}} \put(0.927571,0.240804){\line(-40,76){0.395536}} \put(0.9276,0.2408){\circle*{0.015}} \put(0.9572,0.2976){\circle*{0.015}} \put(0.9793,0.3577){\circle*{0.015}} \put(0.9936,0.4202){\circle*{0.015}} \put(0.9997,0.4840){\circle*{0.015}} \put(0.9977,0.5480){\circle*{0.015}} \put(0.9875,0.6113){\circle*{0.015}} \put(0.9692,0.6727){\circle*{0.015}} \put(0.9433,0.7313){\circle*{0.015}} \put(0.9101,0.7861){\circle*{0.015}} \put(0.8701,0.8362){\circle*{0.015}} \put(0.8241,0.8807){\circle*{0.015}} \put(0.7728,0.9190){\circle*{0.015}} \put(0.7169,0.9505){\circle*{0.015}} \put(0.6576,0.9745){\circle*{0.015}} \put(0.5956,0.9908){\circle*{0.015}} \put(0.5320,0.9990){\circle*{0.015}} \end{picture} \earr$$ For the rest of this section, we fix an integer $r>1$ (not necessarily a prime power) and let $\Gtmp = \UT_{6r+1}(q)$ and $\fkntmp = \fkt_{6r+1}(q)$. We shall prove the theorem by the following steps: 1. First, we will compute $\olfks_\lambda$ for a certain map $\lambda \in \fkntmp^*$. 2. We will then identify a quotient of $\ols_\lambda$ isomorphic to the group $\mathrm{A}_{r+1}(q)$ defined in Section \[cmplx-chars\]. 3. We will then demonstrate that the supercharacter of $\ols_\lambda$ indexed by the restriction $\mu= \lambda \downarrow \olfks_\lambda$ is equal to the product of a linear supercharacter and a character obtained by inflating the supercharacter of $\mathrm{A}_{r+1}(q)$ indexed by the map $\kappa \in {\mathfrak{a}}_{r+1}(q)^*$ defined by (\[kappa\]). 
This will show that we can view the characters in the set $\Irr(\ols_\lambda,\chi_\mu)$ as products of a linear supercharacter with the $q^{r-1}$ linear constituents of $\chi_\kappa$ whose values are discussed in Proposition \[cmplx-constits\]. These characters become the irreducible constituents of $\xi_\lambda$ on induction to $\Gtmp$ by Theorem \[structural\], and the remarks following that corollary together with Proposition \[cmplx-constits\] imply that some of the induced characters have values which lie in ${\mathbb{Q}}(\zeta_{pr})$ but not ${\mathbb{Q}}(\zeta_r)$. To begin this program, let us define the map $\lambda \in \fkntmp^*$ of interest. If we view $\lambda$ as the matrix whose $(i,j)$th entry is $\lambda_{ij}\overset{\mathrm{def}}=\lambda(e_{ij})$, then $\lambda$ informally corresponds to the picture in Figure \[fig1\]. This diagram is meant to illustrate a $(6r+1)\times (6r+1)$ upper triangular matrix; the dark diagonal lines mark the positions $(i,j)$ where $\lambda_{ij} \neq 0$. To achieve our result these nonzero entries can be arbitrary, but to make our computations neater we will set them all to be $\pm 1$. 
$$\barr{c} \\ {\setlength{\unitlength}{8cm} \begin{picture}(1.01, 1.01) \color{hellgrau} \put(0.08,1.015){$_r$} \put(0.24,1.015){$_r$} \put(0.33,1.015){$_1$} \put(0.43,1.015){$_r$} \put(0.59,1.015){$_r$} \put(0.75,1.015){$_r$} \put(0.91,1.015){$_r$} \put(1.015,0.085){$_r$} \put(1.015,0.245){$_r$} \put(1.015,0.405){$_r$} \put(1.015,0.565){$_r$} \put(1.015,0.655){$_1$} \put(1.015,0.755){$_r$} \put(1.015,0.915){$_r$} \thinlines \put(1, 0){\line(-1, 1){1}} \put(0, 1){\line(1, 0){1}} \put(1, 0){\line(0, 1){1}} \put(0.16,0.84){\line(0,1){0.16}} \put(0.16,0.84){\line(1,0){0.84}} \put(0.32,0.68){\line(0,1){0.32}} \put(0.32,0.68){\line(1,0){0.68}} \put(0.36,0.64){\line(1,0){0.64}} \put(0.36,0.64){\line(0,1){0.36}} \put(0.52,0.48){\line(0,1){0.52}} \put(0.52,0.48){\line(1,0){0.48}} \put(0.68,0.32){\line(0,1){0.68}} \put(0.68,0.32){\line(1,0){0.32}} \put(0.84,0.16){\line(0,1){0.84}} \put(0.84,0.16){\line(1,0){0.16}} \color{black} \thicklines \put(0.68,.84){\line(1,-1){0.16}} \put(0.68,.84){\line(1,0){0.004}} \put(0.68,.84){\line(0,-1){0.004}} \put(0.684,.84){\line(1,-1){0.156}} \put(0.68,.836){\line(1,-1){0.156}} \put(0.84,.68){\line(-1,0){0.004}} \put(0.84,.68){\line(0,1){0.004}} \put(0.52,.84){\line(1,-1){0.16}} \put(0.52,.84){\line(1,0){0.004}} \put(0.52,.84){\line(0,-1){0.004}} \put(0.524,.84){\line(1,-1){0.156}} \put(0.52,.836){\line(1,-1){0.156}} \put(0.68,.68){\line(-1,0){0.004}} \put(0.68,.68){\line(0,1){0.004}} \put(0.32,1){\line(1,-1){0.16}} \put(0.32,1){\line(1,0){0.004}} \put(0.32,1){\line(0,-1){0.004}} \put(0.324,1){\line(1,-1){0.156}} \put(0.32,.996){\line(1,-1){0.156}} \put(0.48,.84){\line(-1,0){0.004}} \put(0.48,.84){\line(0,1){0.004}} \put(0.16,1){\line(1,-1){0.16}} \put(0.16,1){\line(1,0){0.004}} \put(0.16,1){\line(0,-1){0.004}} \put(0.164,1){\line(1,-1){0.156}} \put(0.16,.996){\line(1,-1){0.156}} \put(0.32,.84){\line(-1,0){0.004}} \put(0.32,.84){\line(0,1){0.004}} \put(0.48,.68){\line(1,-1){0.20}} \put(0.48,.68){\line(1,0){0.004}} 
\put(0.48,.68){\line(0,-1){0.004}} \put(0.484,.68){\line(1,-1){0.196}} \put(0.48,.676){\line(1,-1){0.196}} \put(0.68,.48){\line(-1,0){0.004}} \put(0.68,.48){\line(0,1){0.004}} \put(0.84,.48){\line(1,-1){0.16}} \put(0.84,.48){\line(1,0){0.004}} \put(0.84,.48){\line(0,-1){0.004}} \put(0.844,.48){\line(1,-1){0.156}} \put(0.84,.476){\line(1,-1){0.156}} \put(1,.32){\line(-1,0){0.004}} \put(1,.32){\line(0,1){0.004}} \end{picture} } \earr$$ To give a more precise definition, we briefly use the notation $$\sigma(n;i,j) \overset{\mathrm{def}}= \sum_{k=1}^n e_{i+k,j+k}^* \in \fkntmp^*.$$ For the duration of this section, we define $\lambda \in \fkntmp^*$ by $ \lambda =\lambda' - \lambda''$ where $$\ba \lambda ' &= \sigma(r;0,2r) + \sigma(r;r,4r+1) + \sigma(r;3r+1,5r+1)+ \sigma(r+1;2r,3r) , \\ \lambda'' &= \sigma(r;0,r) + \sigma(r;r,3r+1) . \ea$$ Note that $\lambda'$ is quasi-monomial with shape given by the set partition in Theorem \[main\], and that $\lambda \in \lambda'\Gtmp$ by Lemma \[monomial\]. Comparing this definition to Figure \[fig1\], one observes that $\lambda$ has six “pieces” corresponding to subsets of positions on various diagonals; exactly one such subset has $r+1$ positions and the rest have $r$ positions. It is not difficult to check that this definition may be recast as the piecewise formula \[lambda\] $$\lambda_{jk}=\lambda(e_{jk}) = \left\{\barr{rll} -1,&\text{if }j=k-r&\text{ and }j \in [r], \\ 1,&\text{if }j=k-2r&\text{ and }j \in [r], \\ -1,&\text{if }j=k-2r-1&\text{ and }j \in r+[r], \\ 1,&\text{if }j=k-3r-1&\text{ and }j \in r+[r], \\ 1,&\text{if }j=k-r&\text{ and }j \in 2r+[r+1], \\ 1,&\text{if }j=k-2r&\text{ and }j \in 3r+1+[r], \\ 0,&\text{otherwise.}&\earr\right.$$ This last identity is what we will use to actually compute $\lambda(X)$ for $X \in \fkntmp$; our arguments, however, rely much more on the intuitive visual representation of $\lambda$ given in Figure \[fig1\]. To compute and describe the subalgebras $\olfkl_\lambda$ and $\olfks_\lambda$ we require several technical definitions referring to subsets of positions in an upper triangular matrix. 
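As a quick computational sanity check, $\lambda$ can be assembled directly from the definition $\lambda = \lambda' - \lambda''$; the sketch below (helper names are ours) confirms that the six diagonal pieces do not overlap, so that $\lambda$ is strictly upper triangular with exactly $6r+1$ nonzero entries, all equal to $\pm 1$.

```python
# Sketch (function names ours): build lambda = lambda' - lambda'' from the
# sigma pieces defined in the text and check its basic shape.
def sigma(n, i, j):
    """Support of sigma(n;i,j) = sum_{k=1}^n e*_{i+k,j+k}."""
    return [(i + k, j + k) for k in range(1, n + 1)]

def build_lambda(r):
    """Return {(j,k): lambda_{jk}} for the functional lambda of the text."""
    lam = {}
    plus = (sigma(r, 0, 2*r) + sigma(r, r, 4*r + 1)
            + sigma(r, 3*r + 1, 5*r + 1) + sigma(r + 1, 2*r, 3*r))
    minus = sigma(r, 0, r) + sigma(r, r, 3*r + 1)
    for pos in plus:
        lam[pos] = lam.get(pos, 0) + 1
    for pos in minus:
        lam[pos] = lam.get(pos, 0) - 1
    return {pos: v for pos, v in lam.items() if v != 0}

for r in (2, 3, 5):
    lam = build_lambda(r)
    assert len(lam) == 6*r + 1                         # 6r+1 nonzero entries
    assert all(v in (1, -1) for v in lam.values())     # all entries are +-1
    assert all(1 <= j < k <= 6*r + 1 for j, k in lam)  # strictly upper triangular
print("lambda checks pass")
```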
We begin this lexicon by defining $\J$ as the set of all such positions: $$\J = \{ (i,j) : i,j \in [6r+1] : i<j\}.$$ Our task is now to define the eleven subsets $\A,\B,\C,\D,\Z_1,\dots,\Z_7\subset \J$ corresponding to regions in a $(6r+1)\times(6r+1)$ upper triangular matrix highlighted in Figure \[fig2\] below. $$ \barr{c}\\ {\setlength{\unitlength}{8cm} \begin{picture}(1.01, 1.01) \color{hellgrau} \put(0.08,1.015){$_r$} \put(0.24,1.015){$_r$} \put(0.33,1.015){$_1$} \put(0.43,1.015){$_r$} \put(0.59,1.015){$_r$} \put(0.75,1.015){$_r$} \put(0.91,1.015){$_r$} \put(1.015,0.085){$_r$} \put(1.015,0.245){$_r$} \put(1.015,0.405){$_r$} \put(1.015,0.565){$_r$} \put(1.015,0.655){$_1$} \put(1.015,0.755){$_r$} \put(1.015,0.915){$_r$} \thinlines \put(1, 0){\line(-1, 1){1}} \put(0, 1){\line(1, 0){1}} \put(1, 0){\line(0, 1){1}} \put(0.16,0.84){\line(0,1){0.16}} \put(0.16,0.84){\line(1,0){0.84}} \put(0.32,0.68){\line(0,1){0.32}} \put(0.32,0.68){\line(1,0){0.68}} \put(0.36,0.64){\line(1,0){0.64}} \put(0.36,0.64){\line(0,1){0.36}} \put(0.52,0.48){\line(0,1){0.52}} \put(0.52,0.48){\line(1,0){0.48}} \put(0.68,0.32){\line(0,1){0.68}} \put(0.68,0.32){\line(1,0){0.32}} \put(0.84,0.16){\line(0,1){0.84}} \put(0.84,0.16){\line(1,0){0.16}} \color{black} \thicklines \put(0.40, 0.75){$\Z_1$} \put(0.32,.84){\line(1,0){0.20}} \put(0.32,.84){\line(0,-1){0.16}} \put(0.32,0.68){\line(1,0){0.20}} \put(0.52,.68){\line(0,1){0.16}} \put(0.74, 0.39){$\Z_3$} \put(0.68,.48){\line(1,0){0.16}} \put(0.68,.32){\line(0,1){0.16}} \put(0.68,.32){\line(1,0){0.16}} \put(0.84,.32){\line(0,1){0.16}} \put(0.705,.72){$\Z_2$} \put(0.68,.84){\line(1,-1){0.16}} \put(0.68,.68){\line(1,0){0.16}} \put(0.865,.36){$\Z_4$} \put(0.84,.48){\line(0,-1){0.16}} \put(0.84,.48){\line(1,-1){0.16}} \put(0.84,.32){\line(1,0){0.16}} \put(0.615,.42){$\Z_7$} \put(0.68,.48){\line(0,-1){0.16}} \put(0.52,.48){\line(1,-1){0.16}} \put(0.52,.48){\line(1,0){0.16}} \put(0.615,.78){$\Z_6$} \put(0.68,.84){\line(0,-1){0.16}} 
\put(0.52,.84){\line(1,-1){0.16}} \put(0.52,.84){\line(1,0){0.16}} \put(0.255,.94){$\Z_5$} \put(0.32,1){\line(0,-1){0.16}} \put(0.16,1){\line(1,-1){0.16}} \put(0.16,1){\line(1,0){0.16}} \put(0.545,.72){$\A$} \put(0.52,.84){\line(1,-1){0.16}} \put(0.52,.68){\line(0,1){0.16}} \put(0.52,.68){\line(1,0){0.16}} \put(0.255,.78){$\B$} \put(0.32,.84){\line(0,-1){0.16}} \put(0.16,.84){\line(1,-1){0.16}} \put(0.16,.84){\line(1,0){0.16}} \put(0.185,.88){$\C$} \put(0.16,1){\line(1,-1){0.16}} \put(0.16,.84){\line(0,1){0.16}} \put(0.16,0.84){\line(1,0){0.16}} \put(0.095,.94){$\D$} \put(0.16,1){\line(0,-1){0.16}} \put(0.0,1){\line(1,-1){0.16}} \put(0.0,1){\line(1,0){0.16}} \put(0.328,0.90){$_\D$} \put(0.36,.96){\line(-1,1){0.04}} \put(0.32,.96){\line(0,-1){0.12}} \put(0.36,.96){\line(0,-1){0.12}} \put(0.32,.84){\line(1,0){0.04}} \put(0.545,.52){$\A'$} \put(0.52,.64){\line(1,-1){0.16}} \put(0.52,.48){\line(1,0){0.16}} \put(0.455,.58){$\B'$} \put(0.52,.64){\line(0,-1){0.16}} \put(0.36,.64){\line(1,-1){0.16}} \put(0.36,.64){\line(1,0){0.16}} \put(0.385,.86){$\C'$} \put(0.36,.96){\line(1,-1){0.12}} \put(0.36,.84){\line(0,1){0.12}} \put(0.36,0.84){\line(1,0){0.12}} \put(0.415,.66){$_{\C'}$} \put(0.36,0.64){\line(0,1){0.04}} \put(0.48,0.68){\line(1,-1){0.04}} \put(0.36,0.68){\line(1,0){0.12}} \put(0.36,0.64){\line(1,0){0.12}} \end{picture} } \earr$$ As indicated by our picture, these subsets for the most part correspond to blocks of adjacent positions which lie inside triangles or rectangles. This diagram is somewhat imprecise; among other deficiencies, it does not clearly indicate how the sets in question include positions on various diagonals. However, this picture will serve as a valuable heuristic in what follows. 
To state our definitions precisely, we adopt the following notation: let $$\ba \block(n;x,y) &= \{ (x+i,y+j) : i,j \in [n] \}, \\ \lt(n;x,y) &= \{ (x+j,y+i) : i,j \in [n],\ i<j \}, \\ \ut(n;x,y) &= \{ (x+i,y+j) : i,j \in [n],\ i<j\},\ea\qquad\text{for nonnegative integers $n,x,y$}.$$ Thus $\block(n;x,y)$ is the $n$-by-$n$ square of positions whose opposite corners are $(x+1,y+1)$ and $(x+n,y+n)$, and $\lt(n;x,y)$ and $\ut(n;x,y)$ are the subsets of $\block(n;x,y)$ consisting of the positions strictly below the diagonal and strictly above the diagonal, respectively. Using these notations, we define $$\ba &\ba \A&= \lt(r;r,3r+1), \\ \B&= \ut(r;r,r), \\ \C&=\lt(r;0,r), \\ \ea\qquad\ba \A'&=\lt(r;2r+1,3r+1),\\ \B'&=\ut(r;2r+1,2r+1),\\ \C' &= \lt(r-1;1,2r+1) \cup \{ (2r+1,2r+i): i \in [2,r]\},\ea \\ \\[-10pt] &\ba \D&=\ut(r;0,0) \cup \{ (i,2r+1) : i \in [2,r]\}, \ea \ea$$ and $$\ba & \ba \Z_1 & = \block(r;r,2r) \cup \{ (r+i,3r+1) : i \in [r]\}, \\ \Z_2 & = \lt(r;r,4r+1),\\ \Z_3 &= \block(r;3r+1,4r+1), \ea\qquad\ba \Z_5 &=\block(r;0,r) \setminus \C, \\ \Z_6 & = \block(r;r,3r+1) \setminus \A, \\ \Z_7 &= \ut(r;3r+1,3r+1).\ea \\ &\ba \Z_4 &= \lt(r;3r+1,5r+1), \\ \ea\ea$$ These formulas at first glance appear forbiddingly technical, but the definitions are easily interpreted with the aid of our picture above. In particular, one observes that the sets are all disjoint and correspond to the regions in Figure \[fig2\]. Let $$\Z = \Z_1 \cup \Z_2 \cup \Z_3 \cup \Z_4\qquad\text{and}\qquad \Z' = \Z_5\cup \Z_6 \cup \Z_7.$$ It is apparent from Figures 1 and 2 that the sets $\cL_{\lambda'}$ and ${\mathcal{S}}_{\lambda'}$ defined in Lemma \[monomial\] are given by \[monomial-app\] $$\ba \cL_{\lambda'} &= \A \cup \A' \cup \B \cup \B' \cup \C \cup \C' \cup \D \cup \Z \cup \Z', \\ {\mathcal{S}}_{\lambda'} &= \Z. \ea$$ Thus we may immediately compute $\fk l_\lambda^1$ and $\fk s_\lambda^1$ from Lemma \[monomial\]. 
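These region definitions are easy to check mechanically. The sketch below (helper names are ours) enumerates the regions and verifies that they are pairwise disjoint subsets of $\J$, with $|\A|=|\B|=|\C|=\tfrac{1}{2}(r^2-r)$, $|\D|=\tfrac{1}{2}(r^2+r)-1$ and $|\Z|=3r^2$, the cardinalities used later in the degree computation.

```python
# Sketch (names ours): enumerate the regions of Figure [fig2] and check
# disjointness and the cardinalities used later in the degree computation.
def block(n, x, y):
    return {(x+i, y+j) for i in range(1, n+1) for j in range(1, n+1)}

def lt(n, x, y):   # strictly below the diagonal of block(n;x,y)
    return {(x+j, y+i) for i in range(1, n+1) for j in range(1, n+1) if i < j}

def ut(n, x, y):   # strictly above the diagonal of block(n;x,y)
    return {(x+i, y+j) for i in range(1, n+1) for j in range(1, n+1) if i < j}

def regions(r):
    return {
        "A":  lt(r, r, 3*r+1),
        "B":  ut(r, r, r),
        "C":  lt(r, 0, r),
        "A'": lt(r, 2*r+1, 3*r+1),
        "B'": ut(r, 2*r+1, 2*r+1),
        "C'": lt(r-1, 1, 2*r+1) | {(2*r+1, 2*r+i) for i in range(2, r+1)},
        "D":  ut(r, 0, 0) | {(i, 2*r+1) for i in range(2, r+1)},
        # Z1 is an r-by-(r+1) rectangle (cf. the r=2 example matrix in the lemma)
        "Z1": block(r, r, 2*r) | {(r+i, 3*r+1) for i in range(1, r+1)},
        "Z2": lt(r, r, 4*r+1),
        "Z3": block(r, 3*r+1, 4*r+1),
        "Z4": lt(r, 3*r+1, 5*r+1),
        "Z5": block(r, 0, r) - lt(r, 0, r),
        "Z6": block(r, r, 3*r+1) - lt(r, r, 3*r+1),
        "Z7": ut(r, 3*r+1, 3*r+1),
    }

for r in (2, 3, 4):
    R = regions(r)
    J = {(i, j) for i in range(1, 6*r+2) for j in range(i+1, 6*r+2)}
    names = list(R)
    for a in range(len(names)):            # pairwise disjoint subsets of J
        assert R[names[a]] <= J
        for b in range(a+1, len(names)):
            assert not (R[names[a]] & R[names[b]])
    assert len(R["A"]) == len(R["B"]) == len(R["C"]) == (r*r - r)//2
    assert len(R["D"]) == (r*r + r)//2 - 1
    assert len(R["Z1"] | R["Z2"] | R["Z3"] | R["Z4"]) == 3*r*r
print("region checks pass")
```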
The subalgebras $\fk l_\lambda^i,\fk s_\lambda^i$ defined in Section \[xi\] will consist of elements $X \in \fkntmp$ such that $X_\alpha =0$ for certain positions $\alpha \in \J$ and such that $X_{\alpha_1} = \dots = X_{\alpha_k}$ for certain positions $\alpha_1,\dots,\alpha_k \in \J$. A succinct way of stating conditions of the second type is to define a map $\imap$ on a subset of $\J$ and then stipulate that $X_\alpha = X_{\imap(\alpha)}$ for all $\alpha$ in that subset. This motivates our next and last definition: let $\imap : \A\cup \B \cup \C \cup \D \to \A'\cup\B'\cup\C'\cup \D$ be the map given by $$\ba \imap(i,j) &= (i+r+1,j),&&\qquad\text{for }(i,j) \in \A, \\ \imap(i,j) &= (i+r+1,j+r+1),&&\qquad\text{for }(i,j) \in \B, \\ \imap(i,j) &= \left\{\ba & (i+1,j+r+1),&&\quad\text{if }i<r, \\ & (2r+1,j+r+1),&&\quad\text{if }i=r, \ea\right. & & \qquad\text{for }(i,j) \in \C, \\ \imap(i,j) &= \left\{\ba & (i+1,j+1),&&\quad\text{if }j<r, \\ & (i+1,2r+1),&&\quad\text{if }j=r, \\ & (1,r+2-i),&&\quad\text{if }j=2r+1, \ea\right. & & \qquad\text{for }(i,j) \in \D. \ea$$ Comparing this formula with Figure \[fig2\] makes the definition much easier to visualize. We note the following in particular. 1. Observe that $\imap$ is injective with $\imap(\A) =\A'$, $\imap(\B)=\B'$, $\imap(\C)=\C'$, and $\imap(\D) = \D$. 2. Note further that $\imap$ is “orientation-preserving” on $\A\cup \B\cup\C$, in the sense that the restriction $\tau = \imap \downarrow \X :\X \to \X'$ for $\X = \A,\B,\C$ is the unique bijection which preserves the relative locations of any two positions (so that if $(i,j) \in \A$ is to the left of $(k,\ell)\in\A$ then $\tau(i,j)\in\A'$ is to the left of $\tau(k,\ell)\in\A'$, for example). 3. If $X \in \fkntmp$ has $X_{\alpha} = X_{\imap(\alpha)}$ for all $\alpha \in {\mathcal{D}}$ then $X_{i,2r+1} = X_{i-1,r} =X_{i-2,r-1}= \dots = X_{1,r-i+2}$ for all $i\in[2,r]$. 
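The stated properties of $\imap$ can likewise be verified by direct enumeration; the sketch below (helper names are ours) checks injectivity, the images $\imap(\A)=\A'$, $\imap(\B)=\B'$, $\imap(\C)=\C'$, $\imap(\D)=\D$, and the chain property for positions in $\D$.

```python
# Sketch (helper names ours): check the stated properties of the map iota.
def lt(n, x, y):
    return {(x+j, y+i) for i in range(1, n+1) for j in range(1, n+1) if i < j}

def ut(n, x, y):
    return {(x+i, y+j) for i in range(1, n+1) for j in range(1, n+1) if i < j}

def check_iota(r):
    A, B, C = lt(r, r, 3*r+1), ut(r, r, r), lt(r, 0, r)
    Ap, Bp = lt(r, 2*r+1, 3*r+1), ut(r, 2*r+1, 2*r+1)
    Cp = lt(r-1, 1, 2*r+1) | {(2*r+1, 2*r+i) for i in range(2, r+1)}
    D = ut(r, 0, 0) | {(i, 2*r+1) for i in range(2, r+1)}

    def iota(a):  # the piecewise definition from the text
        i, j = a
        if a in A: return (i + r + 1, j)
        if a in B: return (i + r + 1, j + r + 1)
        if a in C: return (i + 1, j + r + 1) if i < r else (2*r + 1, j + r + 1)
        if j < r:  return (i + 1, j + 1)      # remaining cases: a in D
        if j == r: return (i + 1, 2*r + 1)
        return (1, r + 2 - i)                  # j == 2r+1

    dom = A | B | C | D
    images = [iota(a) for a in dom]
    assert len(set(images)) == len(dom)        # iota is injective
    assert {iota(a) for a in A} == Ap          # iota(A) = A'
    assert {iota(a) for a in B} == Bp          # iota(B) = B'
    assert {iota(a) for a in C} == Cp          # iota(C) = C'
    assert {iota(a) for a in D} == D           # iota(D) = D
    for i in range(2, r + 1):                  # the chain property noted above
        assert iota((i - 1, r)) == (i, 2*r + 1)
        assert iota((i, 2*r + 1)) == (1, r + 2 - i)

for r in (2, 3, 4):
    check_iota(r)
print("iota checks pass")
```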
Thus, if ${\mathfrak{a}}\subset {\mathfrak{n}}$ is the subspace \[fka\] $$\mathfrak{a} = \left\{ X \in {\mathfrak{n}}: X_\alpha = X_{\imap(\alpha)}\text{ if }\alpha \in \D\text{ and }X_\alpha=0\text{ if }\alpha\notin\D\right\} \oplus {\mathbb{F}}_q\spanning\{ e_{1,2r+1}\}$$ then ${\mathfrak{a}}$ is a subalgebra naturally isomorphic to the algebra ${\mathfrak{a}}_{r+1}(q)$ defined in Section \[cmplx-chars\]. The map ${\mathfrak{a}}\to {\mathfrak{a}}_{r+1}(q)$ defined by $X \mapsto Y$ where $$Y_{ij} =\left\{\barr{ll} X_{ij},&\text{if $j \leq r,$} \\ X_{i,2r+1},&\text{if }j=r+1,\earr\right.\qquad\text{for $i,j \in [r+1]$}$$ gives an isomorphism. With these definitions and remarks, we may now state the following technical lemma. The proof is tedious but, with proper organization, not as difficult as it might appear. Notably our proof does not require a computer, and this apparently makes the results in this section the first statements concerning exotic character values of $\UT_n(q)$ (see, for example, [@E; @IK1; @IK2; @IK05; @VeraLopez2004]) which can be derived entirely by hand. \[technical\] Fix an integer $r>1$ and let ${\mathfrak{n}}= \fkt_{6r+1}(q)$. 
If $\lambda \in \fkntmp^*$ is defined by (\[lambda\]), then $$\ba \fk l_\lambda^1 &= \{ X \in {\mathfrak{n}}: X_{\alpha} = 0\text{ if } \alpha \in \A\cup\A'\cup \B\cup \B' \cup \C\cup\C' \cup \D \cup \Z \cup \Z' \}, \\ \fk l_\lambda^2 &= \{ X \in {\mathfrak{n}}: X_\alpha = X_{\imap(\alpha)} \text{ if }\alpha \in \A \text{ and }X_{\alpha} = 0\text{ if } \alpha \in \B\cup\B' \cup \C \cup\C'\cup \D \cup \Z \}, \\ \olfkl_\lambda=\fk l_\lambda^3 &= \{ X \in {\mathfrak{n}}: X_\alpha = X_{\imap(\alpha)} \text{ if }\alpha \in \A\cup\B\cup\C \text{ and }X_{\alpha} = 0\text{ if } \alpha \in \D \cup \Z \}, \\[-10pt] \\ \fk s_\lambda^1 &= \{ X \in {\mathfrak{n}}: X_\alpha = 0\text{ if }\alpha \in \Z \}, \\ \fk s_\lambda^2 &= \{ X \in {\mathfrak{n}}: X_{\alpha} = X_{\imap(\alpha)}\text{ if }\alpha \in \A\cup \B \text{ and } X_\alpha = 0\text{ if }\alpha \in \Z \}, \\ \olfks_\lambda=\fk s_\lambda^3 &= \{ X \in {\mathfrak{n}}: X_\alpha = X_{\imap(\alpha)} \text{ if }\alpha \in \A\cup \B\cup\C\cup\D \text{ and } X_{\alpha} = 0\text{ if } \alpha \in \Z \}. \ea$$ Also, $(\lambda -e_{1,2r+1}^*)(XY) =0$ for all $X,Y \in \olfks_\lambda$. It may be helpful to observe that one could also write $\olfkl_\lambda = \{ X \in \olfks_\lambda : X_\alpha =0\text{ for }\alpha \in \D\}$, and that for $r=2$ and $n=13$ we are claiming $$\olfks_\lambda ={\small \left\{ \(\barr{ccccccccccccc} 0 & d & * & * & * & * & * & * & * & * & * & * & *\\ & 0 & c & * & d & * & * & * & * & * & * & * & *\\ & & 0 & b & 0 & 0 & 0 & * & * & * & * & * & *\\ & & & 0 & 0 & 0 & 0 & a & * & 0 & * & * & *\\ & & & & 0 & c & * & * & * & * & * & * & *\\ & & & & & 0 & b & * & * & * & * & * & *\\ & & & & & & 0 & a & * & * & * & * & *\\ & & & & & & & 0 & * & 0 & 0 & * & *\\ & & & & & & & & 0 & 0 & 0 & 0 & *\\ & & & & & & & & & 0 & *& * & *\\ & & & & & & & & & &0 & * & *\\ & & & & & & & & & & & 0 & *\\ & & & & & & & & & & & & 0 \earr\)\right\},}$$ where the parameters $a,b,c,d$ correspond to the regions of the same letter. 
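The lemma lends itself to machine verification in a small case. The sketch below (all helper names are ours) takes $r=2$ and $q=p=3$, builds each subspace from its description in the lemma, and checks that these are exactly the successive left kernels of the form $B_\lambda:(X,Y)\mapsto\lambda(XY)$ described in the proof, together with the lemma's final claim and the dimension counts behind $\xi_\lambda(1)=q^{5r^2-r-1}$ and $\langle\xi_\lambda,\xi_\lambda\rangle=q^{r-1}$.

```python
# Sketch (names ours): verify Lemma [technical] numerically for r = 2 over F_3.
r, p = 2, 3
N = 6*r + 1
J = [(i, j) for i in range(1, N + 1) for j in range(i + 1, N + 1)]

def blk(n, x, y): return {(x+i, y+j) for i in range(1, n+1) for j in range(1, n+1)}
def lt(n, x, y):  return {(x+j, y+i) for i in range(1, n+1) for j in range(1, n+1) if i < j}
def ut(n, x, y):  return {(x+i, y+j) for i in range(1, n+1) for j in range(1, n+1) if i < j}

A, B, C = lt(r, r, 3*r+1), ut(r, r, r), lt(r, 0, r)
Bp = ut(r, 2*r+1, 2*r+1)
Cp = lt(r-1, 1, 2*r+1) | {(2*r+1, 2*r+i) for i in range(2, r+1)}
D = ut(r, 0, 0) | {(i, 2*r+1) for i in range(2, r+1)}
Z = (blk(r, r, 2*r) | {(r+i, 3*r+1) for i in range(1, r+1)} | lt(r, r, 4*r+1)
     | blk(r, 3*r+1, 4*r+1) | lt(r, 3*r+1, 5*r+1))

def iota(a):
    i, j = a
    if a in A: return (i + r + 1, j)
    if a in B: return (i + r + 1, j + r + 1)
    if a in C: return (i + 1, j + r + 1) if i < r else (2*r + 1, j + r + 1)
    if j < r:  return (i + 1, j + 1)          # remaining cases: a in D
    if j == r: return (i + 1, 2*r + 1)
    return (1, r + 2 - i)                      # j == 2r+1

lam = {}                                       # lambda = lambda' - lambda''
for s, (n, x, y) in [(1, (r, 0, 2*r)), (1, (r, r, 4*r+1)), (1, (r, 3*r+1, 5*r+1)),
                     (1, (r+1, 2*r, 3*r)), (-1, (r, 0, r)), (-1, (r, r, 3*r+1))]:
    for k in range(1, n + 1):
        lam[(x+k, y+k)] = lam.get((x+k, y+k), 0) + s

def form(X, Y, f=lam):
    """f(XY) mod p for X, Y given as dicts position -> coefficient."""
    return sum(xv * yv * f.get((a, d), 0)
               for (a, b), xv in X.items() for (c, d), yv in Y.items() if b == c) % p

def basis(zero, pairs):
    """Basis of {X : X_a = 0 on `zero`, X_a = X_{iota(a)} for a in `pairs`}."""
    parent = {a: a for a in J}
    def find(a):
        while parent[a] != a: a = parent[a]
        return a
    for a in pairs:
        parent[find(a)] = find(iota(a))
    classes = {}
    for a in J: classes.setdefault(find(a), []).append(a)
    return [{a: 1 for a in cls} for cls in classes.values()
            if not any(a in zero for a in cls)]

def rank(M):
    """Rank of a matrix over F_p (entries already reduced mod p)."""
    M, rk = [row[:] for row in M], 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(rk, len(M)) if M[i][c]), None)
        if piv is None: continue
        M[rk], M[piv] = M[piv], M[rk]
        inv = pow(M[rk][c], p - 2, p)
        for i in range(len(M)):
            if i != rk and M[i][c]:
                f = M[i][c] * inv % p
                M[i] = [(v - f*w) % p for v, w in zip(M[i], M[rk])]
        rk += 1
    return rk

s1 = basis(Z, set())
l2 = basis(B | Bp | C | Cp | D | Z, A)
s2 = basis(Z, A | B)
l3 = basis(D | Z, A | B | C)
s3 = basis(Z, A | B | C | D)
dims = [len(v) for v in (s1, l2, s2, l3, s3)]
assert dims == [66, 59, 64, 61, 62]
assert len(J) - len(l3) == 5*r*r - r - 1        # so xi_lambda(1) = q^{5r^2-r-1}
assert len(s3) - len(l3) == r - 1               # so <xi,xi> = q^{r-1}

# l_i / s_i are exactly the successive left kernels of (X,Y) -> lambda(XY):
for sub, amb, right in [(l2, s1, s1), (s2, s1, l2), (l3, s2, s2), (s3, s2, l3)]:
    assert all(form(X, Y) == 0 for X in sub for Y in right)   # containment
    G = [[form(X, Y) for Y in right] for X in amb]
    assert len(amb) - rank(G) == len(sub)                     # dimensions agree

nu = dict(lam); nu[(1, 2*r+1)] = nu.get((1, 2*r+1), 0) - 1   # nu = lambda - e*_{1,2r+1}
assert all(form(X, Y, nu) == 0 for X in s3 for Y in s3)      # final claim of the lemma
print("Lemma [technical] verified for r = 2, p = 3")
```

The same script runs for other small $r$ and $p$ by changing the first line.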
Call the right-hand sets above $\fk l_i$ and $\fk s_i$ for $i\leq 3$ and define $\fk l_4 = \fk l_3$ and $\fk s_4= \fk s_3$. Recalling the discussion in Section \[xi\], the lemma then becomes the claim that $\fk l_i = \fk l^i_\lambda$ and $\fk s_i=\fk s^i_\lambda$ for $i\leq 4$. It is immediate from Lemma \[monomial\] and the observation (\[monomial-app\]) that $\fk l_1 = \fk l_\lambda^1$ and $\fk s_1 = \fk s_\lambda^1$. For the other cases, define the following subspaces in $\fkntmp = \fkt_n(q)$: $$\ba \fk l_2^c &= {\mathbb{F}}_q\spanning \left\{ e_\alpha : \alpha \in \A\cup \B \cup \B' \cup \C \cup \C' \cup \D\right\}, \\ \fk l_3^c &={\mathbb{F}}_q\spanning \left\{ e_\alpha : \alpha \in \C \cup \D \right\}, \\ \fk l_4^c &= \left\{X \in {\mathfrak{n}}: X_{\alpha} = X_{\imap(\alpha)}\text{ if }\alpha \in \D \text{ and }X_\alpha =0\text{ if }\alpha\notin\D\right\}, \\[-10pt] \\ \fk s_2^c &= {\mathbb{F}}_q\spanning \left\{ e_\alpha : \alpha \in \A \cup \B\right\}, \\ \fk s_3^c &={\mathbb{F}}_q\spanning\left\{ e_\alpha : \alpha \in \C \cup \ut(r;0,0) \right\}, \\ \fk s_4^c & = 0, \\[-10pt] \\ \fk l_2' &={\mathbb{F}}_q\spanning \left\{ e_\alpha+e_{\imap(\alpha)} : \alpha \in \A\right\} \oplus {\mathbb{F}}_q\spanning\left\{ e_\alpha : \alpha \in \Z'\right \}, \\ \fk l_3' &= {\mathbb{F}}_q\spanning\left\{ e_\alpha + e_{\imap(\alpha)} : \alpha \in \B \cup \C\right\}, \\ \fk l_4' &= 0. \ea$$ One checks that $ \fk s_{i-1} = \fk l_{i} \oplus \fk l_i^c = \fk s_i \oplus \fk s_i^c $ and $\fk l_i = \fk l_i' \oplus \fk l_{i-1}$ for $i \in \{2,3,4\}$. To prove the lemma, it thus suffices to show that if $i \in \{2,3,4\}$ then (a) for each $X \in \fk l_i'$ we have $\lambda(XY) = 0$ for all $Y \in \fk s_{i-1}$; (b) for each nonzero $X \in \fk l_i^c$ we have $\lambda(XY) \neq 0$ for some $Y \in \fk s_{i-1}$; (c) for each $X \in \fk s_i$ we have $\lambda(XY) = 0$ for all $Y \in \fk l_i'$; and (d) for each nonzero $X \in \fk s_i^c$ we have $\lambda(XY) \neq 0$ for some $Y \in \fk l_i'$. 
In particular, (a) and (b) together imply that $\fk l_i$ is the left kernel of the bilinear form $B_\lambda : (X,Y)\mapsto \lambda(XY)$ restricted to $\fk s_{i-1} \times \fk s_{i-1}$, and (c) and (d) together imply that $\fk s_i$ is the left kernel of $B_\lambda$ restricted to $\fk s_{i-1}\times \fk l_i$, and these statements mean that $\fk l_i = \fk l_\lambda^i$ and $\fk s_i = \fk s_\lambda^i$ as required. We have three cases ($i=2,3,4$) which we treat in turn; the first is by far the most technical, but all are handled by straightforward, elementary considerations. For $i=2$: (a) Let $(j,k) \in \J-\Z$ and set $Y = e_{jk} \in \fk s_1$. Elements of this form span $\fk s_1$, so to show that (a) holds we need only prove that $\lambda(XY) =0$ for all elements $X$ in a basis for $\fk l_2'$. For this, we have two cases: (i) Suppose $X = e_\alpha +e_{\imap(\alpha)} \in \fk l_2'$ for some $\alpha \in \A$. Then $XY= 0$ unless $\alpha = (i,j)$ for some $1\leq i < j$, and in this case we have by (\[lambda\]) that \[A-case\] $$\lambda(XY) =\lambda_{ik} + \lambda_{i+r+1,k} = \left\{\barr{rl} 1-0=1,&\text{if }k=i+3r+1, \\ -1+1=0,&\text{if }k=i+2r+1, \\ 0-0=0,&\text{otherwise}.\earr\right.$$ The case $k=i+3r+1$ does not occur, since then $(i,j) \in \A$ would imply that $j \in 3r+1+[r]$ and $k \in 4r+1+[r]$, whence $(j,k) \in \Z_3 \subset \Z$, a contradiction. (ii) Suppose $X= e_\alpha \in \fk l_2'$ for some $\alpha \in \Z'$. It follows from (\[lambda\]) that if $\lambda_{ik} \neq 0$ then either $(i,j) \notin \Z'$ or $(j,k) \in \Z$, and this observation suffices to show that $\lambda(XY)=0$. The elements $X$ in (i) and (ii) span $\fk l_2'$, so (a) holds. (b) Suppose $X \in \fk l_2^c$ is nonzero, so that $X_\alpha \neq 0$ for some $\alpha = (i,j) \in \A \cup \B \cup \B' \cup \C \cup \C' \cup \D$. 
Let $Y = e_\beta$ where \[beta\] $$\beta = \left\{\barr{ll} (j,\,i+2r+1),&\text{if }\alpha \in \A\cup\B, \\ (j,\,i+r),&\text{if }\alpha \in \B'\cup\C\text{, or }\alpha\in\C'\text{ and }i=2r+1\text{, or }\alpha \in \D\text{ and }j \neq 2r+1, \\ (j,\,i+2r),&\text{if }\alpha\in\C'\text{ and }i\leq r\text{, or }\alpha \in \D\text{ and }j=2r+1. \earr\right.$$ One checks that $\beta \in \Z_7 \cup \A$ in the first case; $\beta \in \C' \cup \B \cup \B' \cup \C$ in the second case; and $\beta \in \B' \cup \C'$ in the third case. Hence $Y \in \fk s_1$. Since $X$ has no nonzero entries in the positions $\Z_1 \cup \A'$, it follows from (\[lambda\]) that $\lambda(XY) = \pm X_\alpha \neq 0$, as required. (c) To show that (c) holds, we have again two cases: (i) Suppose $Y = e_\alpha +e_{\imap(\alpha)} \in \fk l_2'$ for some $\alpha \in \A$. If $(i,j) \in \J$ then it follows from (\[lambda\]) that $$\lambda(e_{ij}Y) = \left\{\barr{rl} -1,&\text{if }\alpha = (j,i+2r+1), \\ 1,&\text{if }\alpha = (j-r-1,i+2r+1), \\ 1,&\text{if }\alpha = (j-r-1,i+r), \\ 0,&\text{otherwise}.\earr\right.$$ The first case occurs only if $(i,j) \in \B$; the second case occurs only if $(i,j) \in \Z_1$; and the third case occurs only if $(i,j) \in \B' $. Furthermore, noting the definition of $\imap$ on $\B$, one finds using this identity that if $\beta \in \B$ then $\lambda(e_\beta Y) =-1$ if and only if $\lambda(e_{\imap(\beta)}Y)=1$. Since every $X \in \fk s_2$ has $X_\beta = 0$ if $\beta \in \Z$ and $X_\beta = X_{\imap(\beta)}$ if $\beta \in \B$, we have $\lambda(XY) = 0$ for all $X \in \fk s_2$. (ii) If $Y = e_\alpha \in \fk l_2'$ for some $\alpha \in \Z_5 \cup \Z_6 $, then it is easy to see that $\lambda(XY)=0$ for all $X \in {\mathfrak{n}}$. If $Y = e_\alpha \in \fk l_2'$ for some $\alpha \in \Z_7$ and $(i,j) \in \J$ then $$\lambda(e_{ij}Y) = \left\{ \barr{rl} -1,&\text{if }\alpha= (j,i+2r+1), \\ 1,&\text{if }\alpha = (j,i+r), \\ 0,&\text{otherwise}.\earr\right.$$ The first case occurs only if $(i,j) \in \A$ and the second case occurs only if $(i,j) \in \A' $. Furthermore, it follows from this identity that if $\beta \in \A$ then $\lambda(e_\beta Y) =-1$ if and only if $\lambda(e_{\imap(\beta)}Y) = 1$. 
Since every $X \in \fk s_2$ has $X_\beta = X_{\imap(\beta)}$ for $\beta \in \A$, we again have $\lambda(XY) = 0$ for all $X \in \fk s_2$. The elements $Y$ in (i) and (ii) span $\fk l_2'$, so this suffices to prove (c). (d) Suppose $X \in \fk s_2^c$ is nonzero, so that $X_\alpha \neq 0$ for some $\alpha =(i,j) \in \A \cup \B$. Let $\beta =(j,i+2r+1)\in \J$, and set $Y = e_\beta$ if $\alpha \in \A$ and $Y = e_\beta + e_{\imap(\beta)}$ if $\alpha \in \B$. If $\alpha \in \A$ then $\beta \in \Z_7$, and if $\alpha \in \B$ then $\beta \in \A$, so in either case $Y \in \fk l_2'$. Since $X$ has nonzero entries only in $\A \cup \B$, it follows from (\[lambda\]) that $\lambda(XY) = \pm X_\alpha \neq 0$, as required. For $i=3$: (a) It suffices to show that if $X = e_\alpha +e_{\imap(\alpha)} \in \fk l_3'$ for some $\alpha \in \A$, then $\lambda(XY) = 0$ for all $Y \in \fk s_2$. Since $Y_{\beta} = 0$ for all $\beta \in \Z$ if $Y \in \fk s_2$, this follows directly from (\[A-case\]), which shows that $\lambda(X e_{\beta}) = 0$ if $\beta \notin \Z_3\subset \Z$. (b) Suppose $X \in \fk l_3^c$ is nonzero, so that $X_\alpha \neq 0$ for some $\alpha=(i,j) \in \C\cup \D$. Define $$Y = \left\{\barr{lll} e_\beta,&\text{ for }\beta = (j,i+r) \in \C,&\text{if }\alpha \in \D\text{ and }j\neq 2r+1, \\ e_\beta + e_{\imap(\beta)},&\text{ for }\beta = (j,i+r) \in \B,&\text{if }\alpha \in \C, \\ e_\beta,&\text{ for }\beta=(2r+1,2r+i) \in \C' ,&\text{if }\alpha \in \D\text{ and }j = 2r+1.\earr\right.$$ In each case we have $Y \in \fk s_2$ and one checks that $\lambda(XY) = \pm X_\alpha \neq 0$, as required. (c) To see that (c) holds we have two cases: (i) Suppose $Y = e_\beta +e_{ \imap(\beta)} \in \fk l_3'$ for some $\beta=(j,k) \in \B$. Let $\gamma = (k-r,j)$ and observe that $\beta \in \B$ implies $\gamma \in \C$. If $X \in {\mathfrak{n}}$ then $$\lambda(XY) = \left\{\barr{rl} -X_{k-r,j} + X_{k-r+1,j+r+1},&\text{if }k\in r + [r-1], \\ -X_{r,j} + X_{2r+1,j+r+1},&\text{if }k=2r.\earr\right.$$ It follows from the definition of $\imap$ that $ \lambda(XY) = X_{\imap(\gamma)}-X_{\gamma}$. 
If $X \in \fk s_3$ then $X_{\gamma} =X_{\imap(\gamma)}$, which implies that $\lambda(XY) = 0$. (ii) Alternatively suppose $Y = e_\gamma + e_{\imap(\gamma)} \in \fk l_3'$ for some $\gamma=(j,k) \in \C$. Let $\delta = (k-r,j)$ and observe that $\gamma \in \C$ implies $\delta \in \D$. If $X \in \fkntmp$ then $$\lambda(XY) = \left\{\barr{rl} -X_{k-r,j} + X_{k-r+1,j+1},&\text{if }j\in[ r-1] , \\ -X_{k-r,r} + X_{k-r+1,2r+1},&\text{if }j=r.\earr\right.$$ It again follows from the definition of $\imap$ that $ \lambda(XY) = X_{\imap(\delta)}-X_{\delta}$. If $X \in \fk s_3$ then by definition $X_{\delta} =X_{\imap(\delta)}$, which implies that $\lambda(XY) = 0$. The elements $Y$ in (i) and (ii) span $\fk l_3'$, so (c) holds. (d) Suppose $X \in \fk s_3^c$ is nonzero, so that $X_\alpha \neq 0$ for some $\alpha=(i,j) \in \C\cup \ut(r;0,0)$. We may assume without loss of generality that $X$ has no nonzero entries to the right of column $j$. Let $Y=e_\beta + e_{\imap(\beta)}$ where $\beta = (j,i+r)$. If $\alpha \in \C$ then $\beta \in \B$, and if $\alpha \in \ut(r;0,0)$ then $\beta \in \C$. Thus in either case $Y \in \fk l_3'$ and $\imap(\beta)$ occurs in a row strictly below the row of $\beta$. Since we assume that $X$ has no nonzero entries in any column to the right of $\alpha$, it follows that $XY = Xe_\beta$, so $\lambda(XY) =-X_\alpha \neq 0$ by (\[lambda\]), as required. For $i=4$: (a) Since $\fk l_4'=0$, (a) holds trivially. (b) If $X \in \fk l_4^c$ is nonzero then $X_{1i}\neq 0$ for some $i\in [2,r]$. If we take $Y \in \fk s_3$ to be the element $$Y = e_{i,2r+1} + e_{i-1,r} + e_{i-2,r-1} + \dots + e_{1,r-i+2}$$ then $\lambda(XY) = -X_{1i} \neq 0.$ (c) Since $\fk l_4'=0$, (c) also holds trivially. (d) Since $\fk s_4^c=0$, (d) holds vacuously. This analysis suffices to conclude all but the lemma’s final claim, that $(\lambda - e_{1,2r+1}^*)(XY) = 0$ for all $X,Y \in \fk s_3$. For this, recall that $\fk s_3 = \fk l_4 \oplus \fk l_4^c$. 
By definition $\lambda(XY) =0$ for all $X \in \fk l_4$ and $Y \in \fk s_3$, and since $\fk s_3 = \fk s_4$, we also have by definition that $\lambda(XY)=0$ for all $X \in \fk l_4^c \subset \fk s_4$ and $Y \in \fk l_4$. It is not difficult to see that the preceding sentence remains true if $\lambda$ is replaced by $e_{1,2r+1}^*$. Thus if $X_1,Y_1 \in \fk l_4$ and $X_2,Y_2 \in \fk l_4^c$ and $X=X_1+X_2$ and $Y=Y_1+Y_2$ then $$(\lambda-e_{1,2r+1}^*)(XY) = (\lambda-e_{1,2r+1}^*)(X_2Y_2).$$ Let ${\mathfrak{a}}= \fk l_4^c\oplus {\mathbb{F}}_q \spanning\{e_{1,2r+1}\}$ as in (\[fka\]); this is a subalgebra so $X_2Y_2 \in {\mathfrak{a}}$, and the final part of the lemma follows by noting that $\lambda \downarrow {\mathfrak{a}}= e_{1,2r+1}^* \downarrow {\mathfrak{a}}$. We are now prepared to discuss the irreducible constituents of the character $\xi_\lambda$ in detail. The following proposition proves almost all of Theorem \[main\] by example. \[main-prop\] Choose an integer $r>1$ and let $n>6r$. If ${\mathfrak{n}}= \fkt_{n}(q)$ and $\lambda \in {\mathfrak{n}}^*$ is defined by (\[lambda\]), then the following hold: 1. The character $\xi_\lambda$ of $\UT_{n}(q)$ is the sum of $q^{r-1}$ distinct irreducible characters of degree $q^{5r^2-2r}$, each of which is induced from a linear character of $\ols_\lambda$. Furthermore, $$\ba \olfkl_\lambda &=\left \{ X \in {\mathfrak{n}}: X_\alpha = X_{\imap(\alpha)} \text{ if }\alpha \in \A\cup \B\cup \C \text{ and }X_{\alpha} = 0\text{ if } \alpha \in \D \cup \Z \right\}, \\ \olfks_\lambda &= \left\{ X \in {\mathfrak{n}}: X_\alpha = X_{\imap(\alpha)} \text{ if }\alpha \in \A\cup\B\cup\C\cup\D \text{ and } X_{\alpha} = 0\text{ if } \alpha \in \Z \right\}, \ea$$ and $\xi_\lambda(1) =q^{5r^2-r-1}$ and $ \langle \xi_\lambda, \xi_\lambda \rangle_{\UT_n(q)} = q^{r-1}$. 2. 
If $p>0$ is the characteristic of ${\mathbb{F}}_q$ and $p^i$ is the largest power of $p$ less than or equal to $r$, then all irreducible constituents of $\xi_\lambda$ take values in ${\mathbb{Q}}(\zeta_{p^{i+1}})$, but some irreducible constituents have values which are not in ${\mathbb{Q}}(\zeta_{p^i})$. 3. The Kirillov functions $\psi_\lambda$ and $\logpsi_\lambda$ have degree $q^{5r^2-2r}$, and $\psi_\lambda$ is never a character of $\UT_n(q)$ while the exponential Kirillov function $\logpsi_\lambda$ is a character if and only if $r<p$. A character of an algebra group is *well-induced* if it is induced from a linear supercharacter of an algebra subgroup; see Section 4 in [@supp0]. One can adapt our arguments to prove that the Kirillov function $\psi_\mu$ is not a character for every $\mu\in \Xi_\lambda$, and given this, Proposition 4.1 in [@supp0] implies that none of the $q^{r-1}$ irreducible constituents of $\xi_\lambda$ are well-induced. Moreover, one can show that all of our statements, except the descriptions of $\olfkl_\lambda$ and $\olfks_\lambda$ which change somewhat, hold verbatim if the $6r+1$ nonzero values of $\lambda_{ij}$ are replaced by arbitrary elements of ${\mathbb{F}}_q^\times$, and that each of these $(q-1)^{6r+1}$ choices of $\lambda$ yields a distinct character $\xi_\lambda$. For $r=2$ this gives rise to the $q(q-1)^{13}$ irreducible characters of $\UT_{13}(q)$ identified by Evseev in [@E Theorem 2.7] which are not well-induced. By Observation \[inflation\] we may assume without loss of generality that $n=6r+1$, since if $n>6r+1$ then we have a vector space decomposition ${\mathfrak{n}}= \fkt_{6r+1}(q)\oplus {\mathfrak{h}}$ where ${\mathfrak{h}}$ is the two-sided ideal of matrices in ${\mathfrak{n}}$ with zeros in the first $6r+1$ columns. Let $\fk l = \olfkl_\lambda$ and $\fk s = \olfks_\lambda$ and $L = \oll_\lambda$ and $S = \ols_\lambda$ and $G = \UT_{n}(q)$. 
Our descriptions of $\fk l$ and $\fk s$ are immediate from Lemma \[technical\]. Let $\mu = e_{1,2r+1}^* \downarrow \fk s$ and $\nu = (\lambda - e_{1,2r+1}^*) \downarrow \fk s$, so that $\lambda \downarrow \fk s = \mu + \nu$. By definition, $\xi_\lambda = \Ind_L^G( \theta_\lambda ) = \Ind_S^G (\chi_{\mu+\nu})$, and by Theorem \[structural\] the irreducible constituents of $\xi_\lambda$ are in bijection with the irreducible constituents of the fully ramified supercharacter $\chi_{\mu+\nu}$ of $S$. Also by Theorem \[structural\], we have $\langle \xi_\lambda,\xi_\lambda\rangle_G = |L|/|S| =|\fk s / \fk l|= q^{r-1}$ and $ \xi_\lambda(1) = |G|/|L|=|{\mathfrak{n}}/\fk l| =q^{|\A| + |\B| +|\C| + |\D| +|\Z|} = q^{5r^2-r-1}$ since $$\barr{c} |\A| = |\B| = |\C| = \frac{1}{2}r^2 - \frac{1}{2} r,\qquad |\D| =\frac{1}{2} r^2 +\frac{1}{2}r - 1,\qquad |\Z| = 3r^2.\earr$$ The last part of Lemma \[technical\] states that $\nu(XY) = 0$ for all $X,Y \in \fk s$, and this implies that $g\nu h = \nu$ and $g(\mu+\nu)h = g \mu h + \nu$ for all $g,h \in S$. Because of this property, it follows from the definition (\[superchar-def\]) that $\chi_{\mu+\nu} = \chi_\mu \otimes \chi_\nu$ and $\psi_{\mu+\nu} = \psi_\mu \otimes \psi_{\nu}$, and that $\chi_\nu=\psi_\nu = \logpsi_\nu \in \Irr(S)$ is the linear character with the formula $\chi_\nu(g) = \theta\circ\nu(g-1)$ for $g \in S$. To prove the rest of the proposition, we decompose the supercharacter $\chi_\mu$ of $S$, using Observation \[inflation\] and the results of Section \[cmplx-chars\]. To this end, we observe that as a vector space $\fk s = \fk a \oplus {\mathfrak{h}}$ where $$\ba {\mathfrak{a}}&= \{ X \in \fk s : X_\alpha =0 \text{ if }\alpha \notin \D\text{ or }\alpha \neq (1,2r+1)\}, \\ {\mathfrak{h}}&= \{ X \in \fk s : X_\alpha =0 \text{ if }\alpha \in \D\text{ or }\alpha = (1,2r+1)\}=\{ X \in \fk l : X_{1,2r+1} = 0 \}. 
\ea$$ We know that $\fk l$ is a two-sided ideal in $\fk s$, and it is easy to see that if $X \in \fk l$ and $Y \in \fk s$ then $(XY)_{1,2r+1} = (YX)_{1,2r+1} = 0$, since $Y_{i,2r+1} = 0$ for all $i>1$ and since $X_{i,2r+1} =0 $ whenever $Y_{1,i}\neq 0$. Therefore ${\mathfrak{h}}$ is also a two-sided ideal in $\fk s$. Furthermore, it is clear that $\ker(\mu) \supset {\mathfrak{h}}$. Now, as observed in Remark (iii) above, the vector space ${\mathfrak{a}}$ is a subalgebra naturally isomorphic to the algebra ${\mathfrak{a}}_{r+1}(q)$; under this isomorphism $\mu \downarrow {\mathfrak{a}}$ becomes identified with the functional $\kappa \in {\mathfrak{a}}_{r+1}(q)^*$ defined in Section \[cmplx-chars\]. Consequently, by Observation \[inflation\] the irreducible constituents of $\chi_\mu$ are in bijection with those of the supercharacter $\chi_\kappa$ of $\mathrm{A}_{r+1}(q)$, via a map of the form $\psi \mapsto \psi \circ \pi$ where $\pi : S \to \mathrm{A}_{r+1}(q)$ is some surjective homomorphism. In particular, the characters on each side of this bijection have the same sets of values. Since $\chi_\nu = \psi_\nu = \logpsi_\nu$ is linear with values in ${\mathbb{Q}}(\zeta_p)$, our assertions in parts (2) and (3) thus follow by a combination of Observation \[inflation\], Proposition \[cmplx-constits\], and the remarks following Theorem \[structural\]. As noted when defining $\lambda$, the character $\xi_\lambda$ is a constituent of the supercharacter of $\UT_n(q)$ whose shape is the set partition described in Theorem \[main\]. It remains to show that any supercharacter with the same shape has a constituent with the same properties as $\xi_\lambda$. This is immediate from the following observation, which proves Theorem \[main\] in its entirety. \[diagonal\] The group of automorphisms of $\UT_n(q)$ of the form $g \mapsto D g D^{-1}$, where $D$ is a diagonal matrix in $\GL(n,{\mathbb{F}}_q)$, acts transitively on the set of supercharacters of $\UT_n(q)$ with a given shape. 
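The count $(q-1)^{n-\ell}$ asserted here can be sanity-checked by brute force in a small case; the quasi-monomial $\mu$ below and its shape are our own choice.

```python
# Sketch: brute-force check of the orbit count (q-1)^{n-ell} in a small
# case (the quasi-monomial mu and its shape are our own choice).
from itertools import product

p, n = 3, 4                       # q = p = 3, so the units of F_q are {1, 2}
mu = {(1, 3): 1, (3, 4): 1}       # quasi-monomial; shape {{1,3,4},{2}}
ell = 2                           # number of parts of the shape
orbit = set()
for d in product((1, 2), repeat=n):
    # nu_{ij} = (d_i / d_j) mu_{ij}, computed mod p via Fermat inverse
    nu = tuple(sorted(((i, j), d[i-1] * pow(d[j-1], p - 2, p) * v % p)
                      for (i, j), v in mu.items()))
    orbit.add(nu)
stab = (p - 1)**n // len(orbit)           # orbit-stabilizer theorem
assert len(orbit) == (p - 1)**(n - ell)   # = 4
assert stab == (p - 1)**ell               # diagonal matrices constant on parts
print("orbit size:", len(orbit))
```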
Fix a diagonal matrix $D \in \GL(n,{\mathbb{F}}_q)$ and let $\varphi_D$ be the conjugation map $g\mapsto DgD^{-1}$. From (\[superchar-def\]) one sees that if $\mu \in \fkt_n(q)^*$ then $\chi_\mu \circ \varphi_D = \chi_{\nu}$, where $\nu_{ij} = \frac{D_{ii}}{D_{jj}} \mu_{ij}$ for all $i,j \in [n]$. We may assume that $\mu$ is quasi-monomial (see the discussion in Section \[pattern\]), and it is obvious that $\chi_\mu$ and $\chi_\nu$ have the same shape $\Lambda$. Furthermore, it is not difficult to see that $\chi_\mu = \chi_\nu$ (which occurs if and only if $\mu=\nu$) if and only if $D_{ii} = D_{jj}$ whenever $i,j\in [n]$ belong to the same part of $\Lambda$. By the orbit-stabilizer theorem, the orbit of $\chi_\mu$ under the action of the diagonal matrices in $\GL(n,{\mathbb{F}}_q)$ thus has cardinality $(q-1)^{n-\ell}$ where $\ell$ is the number of parts of $\Lambda$. As this is precisely the number of quasi-monomial $\nu \in \fkt_n(q)^*$ with shape $\Lambda$, our statement follows. [99]{} C. A. M. André, “Basic characters of the unitriangular group,” *J. Algebra* **175** (1995), 287–319. C. A. M. André, “Hecke algebras for the basic characters of the unitriangular groups,” *Proc. Amer. Math. Soc.* **130** (2002), 1943–1954. C. A. M. André; A. Nicolás, “Supercharacters of the adjoint group of a finite radical ring,” *J. Group Theory* **11** (2008), 709–746. P. Diaconis; I. M. Isaacs, “Supercharacters and superclasses for algebra groups,” *Trans. Amer. Math. Soc.* **360** (2008), 2359–2392. P. Diaconis; N. Thiem, “Supercharacter formulas for pattern groups,” *Trans. Amer. Math. Soc.* **361** (2009), 3501–3533. A. Evseev, “Reduction for characters of finite algebra groups,” *J. Algebra* (2010), in press. Z. Halasi, “On the characters and commutators of finite algebra groups,” *J. Algebra* **275** (2004), 481–487. I. M. Isaacs, *Character theory of finite groups,* Dover, New York, 1994. I. M. 
Isaacs, “Characters of groups associated with finite algebras,” *J. Algebra* **177** (1995), 708–730. I. M. Isaacs; D. Karagueuzian, “Conjugacy in groups of upper triangular matrices,” *J. Algebra* **202** (1998), 704–711. I. M. Isaacs; D. Karagueuzian, “Erratum: Conjugacy in groups of upper triangular matrices,” *J. Algebra* **208** (1998), 722. I. M. Isaacs; D. Karagueuzian, “Involutions and characters of upper triangular matrix groups,” *Mathematics of Computation* **252** (2005), 2027–2033. I. M. Isaacs, “Counting characters of upper triangular groups,” *J. Algebra* **315** (2007), 698–719. A. A. Kirillov, “Variations on the triangular theme,” *Lie groups and Lie algebras: E. B. Dynkin's Seminar*, 43–73, Amer. Math. Soc. Transl. Ser. 2, **169**, Providence, RI, 1995. E. Marberg; N. Thiem, “Superinduction for pattern groups,” *J. Algebra* **321** (2009), 3681–3703. E. Marberg, “Superclasses and supercharacters of normal pattern subgroups of the unipotent upper triangular matrix group,” 2010 preprint. E. Marberg, “Iterative character constructions for algebra groups,” 2010 preprint. E. Marberg, “Combinatorial methods of character enumeration for the unitriangular group,” 2010 preprint. J. Sangroniz, “Characters of algebra groups and unitriangular groups,” *Finite groups 2003*, 335–349, Walter de Gruyter, Berlin, 2004. N. Thiem, “Branching rules in the ring of superclass functions of unipotent upper-triangular matrices,” *J. Algebr. Comb.* **31** (2009), 267–298. N. Thiem; V. Venkateswaran, “Restricting supercharacters of the finite group of unipotent uppertriangular matrices,” *Electron. J. Combin.* **16**(1) Research Paper 23 (2009). A. Vera-Lopez; J. M. Arregi, “Computing in unitriangular matrices over finite fields,” *Linear Algebra Appl.* **387** (2004), 193–219. N. Yan, “Representation theory of the finite unipotent linear groups,” PhD thesis, Department of Mathematics, University of Pennsylvania, 2001. 
[^1]: This research was conducted with government support under the Department of Defense, Air Force Office of Scientific Research, National Defense Science and Engineering Graduate (NDSEG) Fellowship, 32 CFR 168a.
--- abstract: 'This paper is devoted to model selection in logistic regression. We extend the model selection principle introduced by Birgé and Massart [-@birge2001gaussian] to the logistic regression model. This selection is done using penalized maximum likelihood criteria. We propose in this context a completely data-driven criterion based on the slope heuristics. We prove non-asymptotic oracle inequalities for the selected estimators. Theoretical results are illustrated through simulation studies.' address: - | (1) Laboratoire de Mathématiques et de Modélisation d’Evry\ Université d’Évry Val d’Essonne\ UMR CNRS 8071- USC INRA\ 23 Boulevard de France\ 91037 Évry - | (2) INRA, UR 341 MIA-Jouy,\ Domaine de Vilvert,\ F78352 Jouy-en-Josas, France author: - 'Marius Kwemou$^{(1)}$, Marie-Luce Taupin$^{(1)}$$^{(2)}$, Anne-Sophie Tocquet$^{(1)}$' bibliography: - 'biblioM.bib' title: Model selection in logistic regression --- [[**Keywords**]{}: logistic regression, model selection, projection.]{} [**AMS 2000 MSC**]{}: Primary 62J02, 62F12, Secondary 62G05, 62G20. Introduction ============ Consider the following generalization of the logistic regression model: let $(Y_1,x_1),\cdots,(Y_n,x_n)$ be a sample of size $n$ such that $(Y_i, x_i)\in \{0,1\}\times\mathcal{X}$ and $$\begin{aligned} \mathbb{E}_{f_0}(Y_i)=\pi_{f_0}(x_i)=\frac{\exp{f_{0}(x_i)}}{1+\exp{f_{0}(x_i)}},\end{aligned}$$ where $f_{0}$ is an unknown function to be estimated and the design points $x_1,...,x_n$ are deterministic. This model can be viewed as a nonparametric version of the “classical” logistic model, which relies on the assumption that $x_i\in \mathbb{R}^d$ and that there exists $\beta_{0}\in \mathbb{R}^d$ such that $f_0(x_i)=\beta_0^\top x_i.$ Logistic regression is a widely used model for predicting the outcome of a binary dependent variable. For example, the logistic model can be used in medical studies to predict the probability that a patient has a given disease (e.g.
cancer), using observed characteristics (explanatory variables) of the patient such as weight, age, gender, *etc.* However, in the presence of numerous explanatory variables with potential influence, one would like to use only a small number of variables, for the sake of interpretability or to avoid overfitting. But it is not always obvious how to choose the adequate variables. This is the well-known problem of variable selection, or model selection. In this paper, the unknown function $f_0$ is not specified and not necessarily linear. Our aim is to estimate $f_0$ by a linear combination of given functions, often called a dictionary. The dictionary can be a basis of functions, for instance a spline or polynomial basis. A nonparametric version of the classical logistic model has already been considered by Hastie [-@non_par_logist], where a nonparametric estimator of $f_0$ is proposed using local maximum likelihood. The problem of nonparametric estimation in the additive regression model is well known and deeply studied, but it has received less attention in the logistic regression model. One can cite for instance Lu [-@Lu2006], Vexler [-@Vexler2006], Fan *et al.* [-@Fanetal1998], Farmen [-@Farmen1996], Raghavan [-@Raghavan1993], and Cox [-@Cox1990]. Recently, a few papers have dealt with model selection or nonparametric estimation in logistic regression using $\ell_1$ penalized contrasts: Bunea [-@bunea2008honest], Bach [-@bach10], van de Geer [-@van2008], Kwemou [-@kwemou]. Among them, some establish non-asymptotic oracle inequalities that hold even in a high dimensional setting. When the dimension of $\mathcal{X}$ is high, that is, greater than a dozen, such $\ell_1$ penalized contrast estimators are known to provide reasonably good results. When the dimension of $\mathcal{X}$ is small, it is often better to choose different penalty functions. One classical penalty function is what we call $\ell_0$ penalization.
Such penalty functions, built as increasing functions of the dimension of $\mathcal{X}$, are usually referred to as model selection penalties. The last decades have witnessed a growing interest in the model selection problem since the seminal works of Akaike [-@akaike1973] and Schwarz [-@schwarz1978]. In additive regression one can cite, among others, Baraud [-@baraud2000model], Birgé and Massart [-@birge2001gaussian], Yang [-@yang1999]; in density estimation, Birgé [-@birge2014model], Castellan [-@castellan2003density]; and in segmentation problems, Lebarbier [-@lebarbier], Durot *et al.* [-@DurotLebarbierTocquet], and Braun *et al.* [-@braun]. All the previously cited papers use $\ell_{0}$ penalized contrasts to perform model selection. But model selection procedures based on penalized maximum likelihood estimators in logistic regression are less studied in the literature. In this paper we focus on model selection using an $\ell_0$ penalized contrast for the logistic regression model, and in this context we state non-asymptotic oracle inequalities. More precisely, given some collection of functions, we consider estimators of $f_0$ built as linear combinations of these functions. The point is that the true function is not assumed to be a linear combination of these functions, but we expect that the spaces of their linear combinations provide suitable approximation spaces. Thus, to this collection of functions, we associate a collection of estimators of $f_0$. Our aim is to propose a data-driven procedure, based on a penalized criterion, which is able to choose the “best” estimator among the collection of estimators, using $\ell_{0}$ penalty functions. The collection of estimators is built by minimizing the negative log-likelihood. The properties of the estimators are described in terms of the Kullback-Leibler divergence and the empirical $L_2$ norm. Our results can be split into two parts.
First, in a general model selection framework, with a general collection of functions, we provide a completely data-driven procedure that automatically selects the best model among the collection. We state non-asymptotic oracle inequalities for the Kullback-Leibler divergence and the empirical $L_2$ norm between the selected estimator and the true function $f_0$. The estimation procedure relies on the construction of a suitable penalty function, suitable in the sense that it yields the best risk bounds and in the sense that it does not depend on the unknown smoothness parameters of the true function $f_0$. However, the penalty function depends on a bound related to the target function $f_0$. This can be seen as the price to pay for the generality. It comes from the links needed between the Kullback-Leibler divergence and the empirical $L_2$ norm. Second, we consider the specific case of collections of piecewise constant functions, which provide estimators of regressogram type. In this case, we exhibit a completely data-driven penalty, free from $f_0$. The model selection procedure based on this penalty provides an adaptive estimator, and we state a non-asymptotic oracle inequality for the Hellinger distance and the empirical $L_2$ norm between the selected estimator and the true function $f_0$. In the case of a piecewise constant basis, the connection between the Kullback-Leibler divergence and the empirical $L_2$ norm is obtained without any bound on the true function $f_0$. This last result is of great interest, for example, in segmentation studies, where the target function is piecewise constant or can be well approximated by piecewise constant functions. These theoretical results are illustrated through simulation studies. In particular we show that our model selection procedure (with the suitable penalty) has good non-asymptotic properties as compared to usual criteria such as AIC and BIC. Great attention has been paid to the practical calibration of the penalty function.
This practical calibration is mainly based on the ideas of what is usually referred to as the slope heuristics, as proposed in Birgé and Massart [-@birge2007] and developed in Arlot and Massart [-@arlot2009]. The paper is organized as follows. In Section \[S1\] we set our framework and describe our estimation procedure. In Section \[S2\] we define the model selection procedure and state the oracle inequalities in the general framework. Section \[S3\] is devoted to regressogram selection; in this section, we establish a bound on the Hellinger risk between the selected model and the target function. The simulation study is reported in Section \[S4\]. The proofs of the results are postponed to Sections \[S5\] and \[S6\]. Model and framework {#S1} =================== Let $(Y_1,x_1),\cdots,(Y_n,x_n)$ be a sample of size $n$ such that $(Y_i, x_i)\in \{0,1\}\times\mathcal{X}$. Throughout the paper, we consider a fixed design setting, *i.e.* $x_{1},\dots,x_{n}$ are considered as deterministic. In this setting, consider the extension of the “classical” logistic regression model [(\[model\])]{} where we aim at estimating the unknown function $f_{0}$ in $$\begin{aligned} \label{model} \mathbb{E}_{f_0}(Y_i)=\pi_{f_0}(x_i)=\frac{\exp{f_{0}(x_i)}}{1+\exp{f_{0}(x_i)}}.\end{aligned}$$ We propose to estimate the unknown function $f_0$ by model selection. This model selection is performed using penalized maximum likelihood estimators. In the following we denote by $\mathbb{P}_{f_0}(x_1)$ the distribution of $Y_1$ and by $\mathbb{P}^{(n)}_{f_0}(x_1,\cdots,x_n)$ the distribution of $(Y_1,\dots,Y_n)$ under Model (\[model\]).
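As an illustration, data from Model (\[model\]) can be simulated by drawing $Y_i\sim\mathrm{Bernoulli}(\pi_{f_0}(x_i))$ at fixed design points. The sketch below is for illustration only; the regular design $x_i=i/n$ and the function names are our own choices, not part of the paper's notation.

```python
import math
import random

def pi_f(f, x):
    """Success probability pi_f(x) = exp(f(x)) / (1 + exp(f(x)))."""
    z = f(x)
    return math.exp(z) / (1.0 + math.exp(z))

def simulate(f0, n, seed=0):
    """Draw Y_i ~ Bernoulli(pi_{f0}(x_i)) at the fixed design x_i = i/n."""
    rng = random.Random(seed)
    xs = [(i + 1) / n for i in range(n)]
    ys = [1 if rng.random() < pi_f(f0, x) else 0 for x in xs]
    return xs, ys

# Example with f0(x) = sqrt(x), as in "Mod4" of the simulation study below.
xs, ys = simulate(math.sqrt, n=200)
```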
Since the variables $Y_i$’s are independent random variables, $$\mathbb{P}^{(n)}_{f_0}(x_1,\cdots,x_n)=\prod_{i=1}^n \mathbb{P}_{f_0}(x_i) =\prod_{i=1}^n \pi_{f_0}(x_i)^{Y_i}(1-\pi_{f_0}(x_i))^{1-Y_i}.$$ It follows that for a function $f$ mapping $\mathcal{X}$ into $\mathbb{R}$, the likelihood is defined as $$\begin{aligned} L_n(f) = \mathbb{P}^{(n)}_{f}(x_1,\cdots,x_n) =\prod_{i=1}^n \pi_f(x_i)^{Y_i}(1-\pi_f(x_i))^{1-Y_i},\end{aligned}$$ where $$\begin{aligned} \label{pif} \pi_{f}(x_i)=\frac{\exp{(f(x_i))}}{1+\exp({f}(x_i))}.\end{aligned}$$ We choose the opposite of the log-likelihood as the estimation criterion, that is $$\begin{aligned} \label{gamma_n} \gamma_n(f)=-\frac{1}{n}\log(L_n(f))=\frac{1}{n}\sum_{i=1}^{n}\Big\{\log(1+e^{f(x_i)})-Y_if(x_i)\Big\}.\end{aligned}$$ Associated to this estimation criterion we consider the Kullback-Leibler information divergence $\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f}^{(n)})$ defined as $$\begin{aligned} \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f} ^{(n)})=\frac{1}{n}\int \log\left( \frac{\mathbb{P}_{f_0}^{(n)}}{\mathbb{P}_{f}^{(n)}}\right) d\mathbb{P}_{f_0}^{(n)}.\end{aligned}$$ The loss function is the excess risk, defined as $$\begin{aligned} \label{gamma}\mathcal{E}(f):=\gamma(f)-\gamma(f_0) \mbox{ where, for any }f, \quad \gamma(f)=\mathbb{E}_{f_0}[\gamma_n(f)].\end{aligned}$$ Easy calculations show that the excess risk is linked to the Kullback-Leibler information divergence through the relation $$\mathcal{E}(f)=\gamma(f)-\gamma(f_0)=\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f}^{(n)}).$$ It follows that $f_0$ minimizes the excess risk, that is $$f_0= \arg\min_{f} \gamma (f).$$ As usual, one cannot estimate $f_0$ by the minimizer of $\gamma_n(f)$ over the space of all functions, since this space is infinite-dimensional.
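For concreteness, the contrast $\gamma_n$ in (\[gamma\_n\]) can be evaluated directly from the values $f(x_i)$; this is a minimal sketch, and the function name is ours.

```python
import math

def gamma_n(f_vals, ys):
    """Empirical contrast gamma_n(f) = (1/n) sum_i [log(1 + e^{f(x_i)}) - Y_i f(x_i)],
    given f_vals = [f(x_1), ..., f(x_n)] and ys = [Y_1, ..., Y_n]."""
    n = len(ys)
    return sum(math.log(1.0 + math.exp(fi)) - yi * fi
               for fi, yi in zip(f_vals, ys)) / n
```

Note that for $f\equiv 0$ (so $\pi_f\equiv 1/2$) the contrast equals $\log 2$ whatever the observations, as the formula shows.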
The usual way is to minimize $\gamma_n(f)$ over a collection of finite-dimensional models, associated to a finite dictionary of functions $\phi_j : \mathcal{X} \rightarrow \mathbb{R}$, $$\mathcal{D}=\{\phi_1,\dots,\phi_M\}.$$ For the sake of simplicity we will suppose that $\mathcal{D}$ is an orthonormal basis of functions. Indeed, if $\mathcal{D}$ is not an orthonormal basis of functions, we can always find an orthonormal basis of functions $\mathcal{D^{\prime}}=\{\psi_1,\dots,\psi_{M^{\prime}}\}$ such that $$\langle \phi_1,\dots,\phi_M \rangle=\langle \psi_1,\dots,\psi_{M^{\prime}} \rangle.$$ Let $\mathcal{M}$ be the set of all subsets $m \subset \{1,\dots,M\}$. For every $m\in \mathcal{M}$, we call $\mathcal{S}_m$ the model $$\label{model_col_gen} \mathcal{S}_m:=\Big\{f_{\beta}=\sum_{j\in m}\beta_j\phi_j\Big \}$$ and $D_m$ the dimension of the span of $\{\phi_j, j \in m\}$. Given the countable collection of models $\{\mathcal{S}_{m}\}_{m\in \mathcal{M}}$, we define $\{\hat f_m\}_{m\in \mathcal{M}}$ the corresponding estimators, *i.e.* the estimators obtained by minimizing $\gamma_n$ over each model $\mathcal{S}_m$. For each $m\in \mathcal{M}$, $\hat f_m$ is defined by $$\label{fchapD1} \hat f_m = \arg\min_{t\in \mathcal{S}_m} \gamma_n(t).$$ Our aim is to choose the “best” estimator among this collection of estimators, in the sense that it minimizes the risk. In many cases, it is not easy to choose the “best” model. Indeed, a model with small dimension tends to be efficient from the estimation point of view, whereas it could be far from the “true” model. On the other hand, a more complex model easily fits the data, but the estimates have poor predictive performance (overfitting).
We thus expect that this best estimator mimics what is usually called the oracle, defined as $$\label{m*} m^{*}=\arg\min_{m\in \mathcal{M}} \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat f_m}^{(n)}).$$ Unfortunately, both minimizing the risk and minimizing the Kullback-Leibler divergence require the knowledge of the true (unknown) function $f_0$ to be estimated. Our goal is to develop a data-driven strategy that automatically selects the best estimator among the collection, this best estimator having a risk as close as possible to the oracle risk, that is, the risk of $\hat f_{m^{*}}$. In this context, our strategy follows the lines of model selection as developed by Birgé and Massart [-@birge2001gaussian]. We also refer to the book of Massart [-@massart2007] for further details on model selection. We use penalized maximum likelihood estimation for choosing some data-dependent $\hat m$ nearly as good as the ideal choice $m^{*}$. More precisely, the idea is to select $\hat{m}$ as a minimizer of the penalized criterion $$\label{crit0} \hat{m}=\arg \min_{m\in \mathcal{M}}\left\{ \gamma_n(\hat f_m)+\mbox{pen}(m)\right\},$$ where $\mbox{pen} : \mathcal{M} \longrightarrow \mathbb{R}^{+}$ is a data-driven penalty function. The estimation properties of $\hat f_m$ are evaluated by non-asymptotic bounds on a risk associated to a suitably chosen loss function. The great challenge is to choose the penalty function such that the selected model $\hat m$ is nearly as good as the oracle $m^{*}$.
This penalty term is classically based on the idea that $$m^{*}=\arg\min_{m\in \mathcal{M}} \mathbb{E}_{f_0}\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat f_m}^{(n)})=\arg\min_{m \in \mathcal{M}} \left[ \mathbb{E}_{f_0}\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)}) + \mathbb{E}_{f_0}\mathcal{K}(\mathbb{P}_{f_m}^{(n)},\mathbb{P}_{\hat f_m}^{(n)}) \right]$$ where $f_m$ is defined as $$f_m = \arg\min_{t \in S_m} \gamma(t).$$ Our goal is to build a penalty function such that the selected model $\hat m$ fulfills an oracle inequality: $$\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat f_{\hat m}}^{(n)})\leq C_n\inf_{m\in \mathcal{M}}\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat f_m}^{(n)}) + R_n.$$ This inequality is expected to hold either in expectation or with high probability, where $C_n$ is as close to 1 as possible and $R_n$ is a remainder term negligible compared to $\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat f_{m^{*}}}^{(n)})$. In the following we consider two separate cases. First, we consider a general collection of models under a boundedness assumption. Second, we consider the specific case of a regressogram collection. Oracle inequality for a general collection of models under a boundedness assumption {#S2} ============================================================================ Consider model (\[model\]) and $(\mathcal{S}_m)_{m\in\mathcal{M}}$ a collection of models defined by (\[model\_col\_gen\]). Let $C_0 >0$ and $\mathbb{L}_\infty(C_0)=\Big\{f : \mathcal{X}\to \mathbb{R}, ~~ \max_{1\leqslant i \leqslant n}|f(x_{i})|\leqslant C_0\Big\}$.
For $m\in\mathcal{M}$, with $\gamma_n$ given in (\[gamma\_n\]) and $\gamma$ given by (\[gamma\]), we define $$\label{fchapD1c} \hat f_m = \arg\min_{t\in \mathcal{S}_m\cap \mathbb{L}_\infty(C_0)} \gamma_n(t) \mbox{ and } f_m = \arg\min_{t \in S_m\cap \mathbb{L}_\infty(C_0)} \gamma(t).$$ The first step consists in studying the estimation properties of $\hat f_m$ for each $m$, as stated in the following proposition. \[borne1\] Let $C_0 >0$ and $\mathcal{U}_0=e^{C_0}/(1+e^{C_0})^2$. For $m\in\mathcal{M}$, let $ \hat f_m$ and $f_m$ be as in (\[fchapD1c\]). We have $$\begin{aligned} \mathbb{E}_{f_0} [\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_m}^{(n)})]\leqslant \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\frac{D_m}{2 n \mathcal{U}_0^{2}}. \end{aligned}$$ This proposition says that the “best” estimator among the collection $\{\hat f_m\}_{m\in\mathcal{M}}$, in the sense of the Kullback-Leibler risk, is the one which makes a balance between the bias and the complexity of the model. In the ideal situation where $f_0$ belongs to $\mathcal{S}_m$, we have that $$\begin{aligned} \mathbb{E}_{f_0} [\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_m}^{(n)})]\leqslant \frac{1}{\mathcal{U}_0^{2}}\frac{D_m}{2n}. \end{aligned}$$ To derive the model selection procedure we need the following assumption: $$\begin{aligned} & {\stepcounter{hypc}\tag{$\mathbf{A_{\thehypc}}$} \label{A1}} \mbox{There exists a constant}~ 0 < c_{1} < \infty~ \mbox{such that } ~\max_{1\leqslant i \leqslant n}|f_{0}(x_{i})|\leqslant c_{1}. \end{aligned}$$ In the following theorem we propose a choice for the penalty function and we state non-asymptotic risk bounds. \[theo1\] Given $C_0 >0$, for $m\in\mathcal{M}$, let $ \hat f_m$ and $f_m$ be defined as in (\[fchapD1c\]). Let us denote $\parallel f\parallel_n^2=n^{-1}\sum_{i=1}^n f^2(x_i)$.
Let $\{L_m\}_{m\in \mathcal{M}}$ be some positive numbers satisfying $$\Sigma=\sum_{m\in \mathcal{M}}\exp(-L_mD_m)< \infty.$$ We define $\mbox{pen}: \mathcal{M} \rightarrow \mathbb{R}_{+}$ such that, for $m\in \mathcal{M}$, $$\mbox{pen}(m)\geqslant \lambda\frac{D_m}{n}\left(\frac{1}{2}+\sqrt{5L_m}\right)^{2},$$ where $\lambda$ is a positive constant depending on $c_1$. Under Assumption (\[A1\]) we have $$\mathbb{E}_{f_0}[\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat f_{\hat m}}^{(n)})]\leqslant C\inf_{m\in \mathcal{M}}\left\{\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mbox{pen}(m)\right\}+C_1\frac{\Sigma}{n}$$ and $$\mathbb{E}_{f_0}\parallel \hat f_{\hat{m}}-f_{0}\parallel_{n}^{2}\leqslant C^{\prime}\inf_{m\in\mathcal{M}}\left\{ \parallel f_{0}-f_m\parallel_{n}^{2}+\mbox{pen}(m)\right\}+C_{1}^{\prime}\frac{\Sigma}{n},$$ where $C,C^{\prime}, C_{1},C_{1}^{\prime}$ are constants depending on $c_{1}$ and $C_{0}$. This theorem provides oracle inequalities for the $L_{2}$-norm and for the K-L divergence between the selected model and the true function. Provided that the penalty has been properly chosen, one can bound the $L_{2}$-norm and the K-L divergence between the selected model and the true function. The inequalities in Theorem \[theo1\] are non-asymptotic in the sense that the result is obtained for a fixed $n$. This theorem is very general and does not make specific assumptions on the dictionary. However, the penalty function depends on some unknown constant $\lambda$ which depends on the bound of the true function $f_0$ through Condition [(\[cond\_lambda\])]{}. In practice this constant can be calibrated using the “slope heuristics” proposed in Birgé and Massart [-@birge2007]. In the following we will show how to obtain a similar result with a penalty function not connected to the bound of the true unknown function $f_0$, in the regressogram case.
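Once the per-model contrasts $\gamma_n(\hat f_m)$ have been computed, the selection rule (\[crit0\]) with the penalty of Theorem \[theo1\] reduces to a finite minimization. The sketch below illustrates this; the function name is ours, and the constant $\lambda$ is unknown in practice and must be calibrated, e.g. by the slope heuristics discussed later.

```python
import math

def select_model(gamma_hat, dims, n, lam, L):
    """Select m_hat = argmin_m { gamma_n(hat f_m) + pen(m) } with the penalty
    pen(m) = lam * (D_m / n) * (1/2 + sqrt(5 * L_m))**2 of Theorem 1.
    gamma_hat[m]: value of gamma_n at the minimizer hat f_m over model m;
    dims[m]: dimension D_m; L[m]: weight L_m; lam: the constant lambda."""
    def pen(m):
        return lam * dims[m] / n * (0.5 + math.sqrt(5.0 * L[m])) ** 2
    return min(gamma_hat, key=lambda m: gamma_hat[m] + pen(m))
```

For instance, with three nested models whose contrasts barely improve past dimension 2, the rule picks the dimension-2 model rather than the most complex one.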
Regressogram functions {#S3} ====================== Collection of models {#models} --------------------- In this section we suppose (without loss of generality) that $f_0 : [0,1]\to \mathbb{R}$. For the sake of simplicity, we use the notation $f_0(x_i)=f_0(i)$ for every $i=1,\dots,n$. Hence $f_0$ is defined from $\{1,\dots,n\}$ to $\mathbb{R}$. Let $\mathcal{M}$ be a collection of partitions of intervals of $\mathcal{X}=\{1,\dots,n\}$. For any $m\in\mathcal{M}$ and $J\in m$, let ${{{{1}}\hspace{-1,1mm}{\mathrm I}}}_J$ denote the indicator function of $J$ and $S_m$ be the linear span of $\{{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_J,J\in m\}$. When all intervals have the same length, the partition is said to be regular, and irregular otherwise. Collection of estimators: regressogram -------------------------------------- For a fixed $m$, the minimizer $\hat f_m$ of the empirical contrast function $\gamma_n$ over $S_m$ is called the *regressogram*. That is, $f_0$ is estimated by $\hat{f}_m$ given by $$\begin{aligned} \label{fchapD} \hat{f}_m =\arg\min_{f\in S_m}\gamma_n(f),\end{aligned}$$ where $\gamma_n$ is given by (\[gamma\_n\]). Associated to $S_m$ we have $$\begin{aligned} \label{fD} f_m=\arg\min_{f\in S_m}\gamma(f)-\gamma(f_0)=\arg\min_{f\in S_m}\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f}^{(n)}).\end{aligned}$$ In the specific case where $S_m$ is the set of piecewise constant functions on some partition $m$, $\hat{f}_m$ and ${f}_m$ are given by the following lemma. \[Projhisto\] For $m\in \mathcal{M}$, let $f_m$ and $\hat f_m$ be defined by [(\[fD\])]{} and [(\[fchapD\])]{} respectively.
Then, $f_m=\sum_{J\in m}\overline{f}_m^{(J)}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{J}$ and $\hat f_m=\sum_{J\in m}\hat{f}_m^{(J)}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{J}$ with $$\overline{f}_m^{(J)}=\log \left( \frac{ \sum_{i\in J}\pi_{f_0}(x_i) }{\vert J \vert (1-\sum_{i\in J}\pi_{f_0}(x_i) / \vert J \vert ) }\right) \mbox{ and } \hat{f}_m^{(J)}=\log \left( \frac{ \sum_{i\in J}Y_i }{\vert J \vert (1-\sum_{i\in J}Y_i/ \vert J \vert ) }\right).$$ Moreover, $\pi_{f_m}=\sum_{J\in m}\pi_{f_m}^{(J)}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{J}$ and $\pi_{\hat f_m}=\sum_{J\in m}\pi_{\hat f_m}^{(J)}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{J}$ with $$\pi_{f_m}^{(J)}=\frac{1}{\vert J\vert}\sum_{i\in J}\pi_{f_0}(x_i), \mbox{ and } \pi_{\hat f_m}^{(J)}=\frac{1}{\vert J\vert}\sum_{i\in J}Y_i.$$ Consequently, $\pi_{f_m}=\arg\min_{\pi \in S_m}\parallel \pi-\pi_{f_0}\parallel^2_n$ is the usual projection of $\pi_{f_0}$ onto $S_m$. First bounds on $\hat{f}_m$ --------------------------- Consider the following assumption: $$\begin{aligned} &{\stepcounter{hypc}\tag{$\mathbf{A_{\thehypc}}$} \label{H0}} \mbox{ There exists a constant }\rho>0 \mbox{ such that } \min_{i=1,\cdots,n}\pi_{f_0}(x_i)\geq \rho~~ \mbox{and}~~~ \min_{i=1,\cdots,n} [1-\pi_{f_0}(x_i)]\geq \rho .\end{aligned}$$ \[borne2\] Consider Model [(\[model\])]{} and let $\hat{f}_m$ be defined by [(\[fchapD\])]{} with $m$ such that for all $J\in m$, $\vert J\vert \geqslant \Gamma [\log(n)]^2$ for a positive constant $\Gamma$.
Under Assumption (\[H0\]), for all $\delta>0$ and $a>1$, we have $$\begin{aligned} \mathbb{E}_{f_0}[\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat f_{m}}^{(n)})] &\leqslant& \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_{m}}^{(n)})+\frac{(1+\delta)D_m}{(1-\delta)^2n} +\frac{\kappa(\Gamma,\rho,\delta)}{n^a}.\end{aligned}$$ Adaptive estimation and oracle inequality {#modsel} ----------------------------------------- The following result provides an adaptive estimator of $f_0$ and a risk bound for the selected model. \[def\_mf\] Let $\mathcal{M}$ be a collection of partitions of $\mathcal{X}=\{1,\dots,n\}$ constructed on the partition $m_f$, *i.e.* $m_f$ is a refinement of every $m \in \mathcal{M}.$ In other words, a partition $m$ belongs to $\mathcal{M}$ if any element of $m$ is the union of some elements of $m_f$. Thus $S_{m_f}$ contains every model of the collection $\{S_m\}_{m\in \mathcal{M}}$. \[theo2\] Consider Model [(\[model\])]{} under Assumption (\[H0\]). Let $\{S_m, m\in \mathcal{M}\}$ be a collection of models defined in Section \[models\], where $\mathcal{M}$ is a set of partitions constructed on the partition $m_{f}$ such that $$\label{def_Gam} \mbox{for all} ~~J\in m_{f}, \vert J\vert \geq \Gamma \log^{2}(n),$$ where $\Gamma$ is a positive constant.
Let $(L_m)_{m\in \mathcal{M}}$ be some family of positive weights satisfying $$\label{sigma} \Sigma=\sum_{m\in \mathcal{M}}\exp(-L_m D_m) < +\infty.$$ Let $\mbox{pen}: \mathcal{M} \rightarrow \mathbb{R}_{+}$ satisfy, for $m\in \mathcal{M}$ and for $\mu > 1,$ $$\mbox{pen}(m)\geqslant \mu\frac{D_m }{n}\left(1+6L_m+8\sqrt{L_m}\right).$$ Let $\tilde{f}=\hat{f}_{\hat m}$ where $$\hat m =\arg\min_{m \in \mathcal{M}}\left\{\gamma_n(\hat f_m)+\mbox{pen}(m)\right\},$$ then, for $C_{\mu}= 2\mu^{1/3}/(\mu^{1/3}-1)$, we have $$\label{oracle} \mathbb{E}_{f_0}[h^2(\mathbb{P}^{(n)}_{f_0},\mathbb{P}^{(n)}_{\tilde f})]\leqslant C_{\mu}\inf_{m\in \mathcal{M}}\left\{\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mbox{pen}(m)\right\}+\frac{C(\rho,\mu, \Gamma,\Sigma)}{n}.$$ This theorem provides a non-asymptotic bound for the Hellinger risk between the selected model and the true one. In contrast to Theorem \[theo1\], the penalty function does not depend on the bound of the true function. The selection procedure, based only on the data, offers the advantage of freeing the estimator from any prior knowledge about the smoothness of the function to estimate. The estimator is therefore adaptive. As we bound the Hellinger risk in [(\[oracle\])]{} by the Kullback-Leibler risk, one would prefer to have the Hellinger risk on the right hand side instead of the Kullback-Leibler risk. Such a bound is possible if we assume that $\log(\Vert \pi_{f_0}/\rho\Vert_{\infty})$ is bounded. Indeed, if we assume that there exists $T$ such that $\log(\Vert \pi_{f_0}/\rho\Vert_{\infty})\leq T$, this implies that $\log(\Vert \pi_{f_0}/\pi_{f_m}\Vert_{\infty})\leq T$ uniformly for all partitions $m\in \mathcal{M}.$ Now using Inequality (7.6) p.
362 in Birgé and Massart [-@birge1998], we have that $\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})\leq (4+2\log(M))h^2(\mathbb{P}_{f_0},\mathbb{P}_{f_m})$, which implies $$\mathbb{E}_{f_0}[h^2(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\tilde f}^{(n)})]\leqslant C_{\mu}\, C(T)\inf_{m\in \mathcal{M}}\left\{h^2(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mbox{pen}(m)\right\}+\frac{C(\rho,\mu, \Gamma,\Sigma)}{n}.$$ ### Choice of the weights $\{L_m, m\in \mathcal{M}\}$ {#choice-of-the-weights-l_m-min-mathcalm .unnumbered} According to Theorem \[theo2\], the penalty function depends on the collection $\mathcal{M}$ through the choice of the weights $L_m$ satisfying [(\[sigma\])]{}, *i.e.* $$\label{Sigma1} \Sigma=\sum_{m\in \mathcal{M}}\exp(-L_m D_m) =\sum_{D\geq 1}e^{-L_DD} Card\{m\in \mathcal{M}, \vert m \vert= D\}< \infty.$$ Hence the number of models having the same dimension $D$ plays an important role in the risk bound. If there is only one model of each dimension $D$, a simple way of choosing the $L_m$ is to take them constant, *i.e.* $L_m=L$ for all $m\in \mathcal{M}$, and thus we have from [(\[Sigma1\])]{} $$\Sigma=\sum_{D\geq 1} e^{-LD}< \infty.$$ This is the case when $\mathcal{M}$ is a family of regular partitions. Consequently, the choice $L_m=L$ for all $m\in \mathcal{M}$ leads to a penalty proportional to the dimension $D_m$: for every $D_m\geq1$, $$\label{penlin} \mbox{pen}(m) = \mu \Big(1+6L+8\sqrt{L}\Big)\frac{D_m}{n}=c\times\frac{D_m}{n}.$$ In the more general context, that is, in the case of irregular partitions, the number of models having the same dimension $D$ is exponential and satisfies $$Card\Big\{ m\in \mathcal{M}, \vert m \vert=D\Big\}={n-1\choose D-1} \leq {n\choose D}.$$ In that case we choose $L_m$ depending on the dimension $D_m$.
With $L_D$ depending on $D$, $\Sigma$ in [(\[sigma\])]{} satisfies $$\begin{aligned} \Sigma&=&\sum_{D\geq 1}e^{-L_{D}D} Card\{m\in \mathcal{M}, \vert m \vert=D\}\\ &\leq& \sum_{D\geq 1}e^{-L_{D}D} {n\choose D}\\ &\leq& \sum_{D\geq 1}e^{-L_{D}D}\Big(\frac{en}{D}\Big)^{D}\\ &\leq&\sum_{D\geq 1}e^{-D\Big(L_{D}-1-\log{(\frac{n}{D})}\Big)}.\end{aligned}$$ So taking $L_{D}=2+\log{(\frac{n}{D})}$ leads to $\Sigma <\infty$ and the penalty becomes $$\label{pen} \mbox{pen}(m)=\mu\times \mbox{pen}_{\mbox{shape}}(m),$$ where $$\mbox{pen}_{\mbox{shape}}(m)=\frac{D_m}{n}\Big[ 13+ 6\log{\Big(\frac{n}{D_m}\Big)}+8\sqrt{2+\log{\Big(\frac{n}{D_m}\Big)}}\Big].$$ The constant $\mu$ can be calibrated using the slope heuristics of Birgé and Massart [-@birge2007] (see Section \[slope\]). In Theorem \[theo2\], we do not assume that the target function $f_0$ is piecewise constant. However, in many contexts, for instance in segmentation, we might want to consider that $f_0$ is piecewise constant or can be well approximated by piecewise constant functions. That means there exists a partition of $\mathcal{X}$ within which the observations follow the same distribution and between which the observations have different distributions. Simulations {#S4} =========== In this section we present numerical simulations to study the non-asymptotic properties of the model selection procedure introduced in Section \[modsel\]. More precisely, the numerical properties of the estimators built by model selection with our criteria are compared with those of the estimators resulting from model selection using the well-known criteria AIC and BIC. Simulation framework ---------------------- We consider the model defined in [(\[model\])]{} with $f_0 : [0,1] \rightarrow \mathbb{R}$. The aim is to estimate $f_0$.
We consider the collection of models $(S_m)_{m \in\mathcal{M}}$, where $$S_m=\mbox{Vect}\{{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{[\frac{k-1}{D_m},\frac{k}{D_m}[} ~\mbox{such that}~1\leq k \leq D_m\},$$ and $\mathcal{M}$ is the collection of regular partitions $$m=\left\{\Big [\frac{k-1}{D_m},\frac{k}{D_m}\Big[, \mbox{ such that } 1\leq k\leq D_m \right\},$$ with $$D_m \leq \frac{n}{\log n}.$$ The collection of estimators is defined in Lemma \[Projhisto\]. Let us thus consider four penalties: - the AIC criterion defined by $$\mbox{pen}_{\mbox{AIC}}=\frac{D_m}{n};$$ - the BIC criterion defined by $$\mbox{pen}_{\mbox{BIC}}=\frac{\log n }{2n}D_m;$$ - the penalty proportional to the dimension as in [(\[penlin\])]{} defined by $$\mbox{pen}_{\mbox{lin}}=c\times\frac{D_m}{n};$$ - and the penalty defined in [(\[pen\])]{} by $$\mbox{pen}= \mu\times \mbox{pen}_{\mbox{shape}}(m).$$ $\mbox{pen}_{\mbox{lin}}$ and $\mbox{pen}$ are penalties depending on some unknown multiplicative constant ($c$ and $\mu$ respectively) to be calibrated. As previously said, we will use the “slope heuristics” introduced in Birgé and Massart [-@birge2007] to calibrate the multiplicative constant.
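On regular partitions, the regressogram of Lemma \[Projhisto\] and the penalties above take a particularly simple form. Below is a minimal sketch; the function names are ours, and the constants $c$ and $\mu$ of the last two penalties still have to be calibrated.

```python
import math

def regressogram(ys, D):
    """Estimated probabilities pi_hat^{(J)} on the regular partition with D
    pieces: on each interval J, pi_hat^{(J)} is the mean of the Y_i with
    x_i in J (Lemma [Projhisto])."""
    n = len(ys)
    pi_hat = []
    for k in range(D):
        block = ys[k * n // D:(k + 1) * n // D]
        pi_hat.append(sum(block) / len(block))
    return pi_hat

def pen_aic(D, n):
    """AIC penalty D_m / n."""
    return D / n

def pen_bic(D, n):
    """BIC penalty (log n / 2n) D_m."""
    return math.log(n) * D / (2 * n)

def pen_shape(D, n):
    """pen_shape(m) = (D_m/n)[13 + 6 log(n/D_m) + 8 sqrt(2 + log(n/D_m))]."""
    ln = math.log(n / D)
    return D / n * (13 + 6 * ln + 8 * math.sqrt(2 + ln))
```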
We have distinguished two cases: - The first case, where there exists $m_o\in \mathcal{M}$ such that the true function belongs to $S_{m_{o}}$, *i.e.* where $f_0$ is piecewise constant: $$\begin{aligned} \notag \mbox{Mod1:}~~ f_0&=&0.5{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{[0,1/3)}+{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{[1/3,0.5)}+2{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{[0.5,2/3)}+0.25{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{[2/3,1]}\\ \notag \mbox{Mod2:}~~f_0&=&0.75{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{[0,1/4]} +0.5{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{[1/4,0.5)}+0.2{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{[0.5,3/4)}+0.3{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{[3/4,1]}.\end{aligned}$$ - The second case, where $f_0$ does not belong to any $S_m$, $m\in \mathcal{M}$, and is chosen in the following way: $$\begin{aligned} \notag \mbox{Mod3:} ~~f_0(x)&=&\sin{(\pi x)} \\ \notag \mbox{Mod4:}~~ f_0(x)&=&\sqrt{x}.\end{aligned}$$ In each case, the $x_i$’s are simulated according to the uniform distribution on $[0,1].$ The Kullback-Leibler divergence is definitely not suitable to evaluate the quality of an estimator. Indeed, given a model $S_m$, there is a positive probability that on one of the intervals $I\in m$ we have $\pi_{\hat f_{ m}}^{(I)}=0$ or $\pi_{\hat f_{ m}}^{(I)}=1$, which implies that $\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat f_{m}}^{(n)})=+\infty$. So we will use the Hellinger distance to evaluate the quality of an estimator. Even if an oracle inequality seems of no practical use, it can serve as a benchmark to evaluate the performance of any data-driven selection procedure. Thus the model selection performance of each procedure is evaluated by the following benchmark: $$C^{*}:=\frac{\mathbb{E}\Big[h^{2}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat f_{\hat m}}^{(n)})\Big]}{\mathbb{E}\Big[\inf_{m\in \mathcal{M}}h^{2}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat f_{m}}^{(n)})\Big]}.$$ $C^{*}$ evaluates how far the selected estimator is from the oracle.
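For product Bernoulli distributions, the squared Hellinger distance $h^2$ used in this benchmark can be computed coordinatewise, and it remains finite even when some $\pi_{\hat f_m}^{(I)}\in\{0,1\}$, unlike the Kullback-Leibler divergence. The sketch below assumes the same $1/n$ normalization as the Kullback-Leibler divergence defined earlier (the paper does not spell out this convention), and the function name is ours.

```python
import math

def hellinger2(pis, qis):
    """Averaged squared Hellinger distance between the product Bernoulli
    distributions with success probabilities pis and qis:
    (1/n) sum_i [1 - (sqrt(p_i q_i) + sqrt((1-p_i)(1-q_i)))]."""
    n = len(pis)
    return sum(1.0 - (math.sqrt(p * q) + math.sqrt((1 - p) * (1 - q)))
               for p, q in zip(pis, qis)) / n
```

The distance is 0 when the probabilities coincide and stays bounded by 1 even for degenerate estimates.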
The values of $C^{*}$ evaluated for each procedure with different sample sizes $n\in \{100, 200,\dots,1000\}$ are reported in Figures \[fig1\], \[fig2\], \[fig1\_1\] and \[fig2\_1\]. For each sample size $n\in \{100, 200,\dots,1000\}$, the expectation was estimated by averaging over 1000 simulated datasets. Slope heuristics {#slope} ----------------- The aim of this section is to show how the penalty in Theorem \[theo2\] can be calibrated in practice using the main ideas of the data-driven penalized model selection criterion proposed by Birgé and Massart [-@birge2007]. We calibrate the penalty using the “slope heuristics”, first introduced and theoretically validated by Birgé and Massart [-@birge2007] in a Gaussian homoscedastic setting. Recently it has also been theoretically validated in the heteroscedastic random-design case by Arlot and Massart [-@arlot2009] and for least squares density estimation by Lerasle [-@lerasle]. Several encouraging applications of this method have been developed in many other frameworks (see for instance Bontemps and Toussile [-@bontemps] for clustering and variable selection for categorical multivariate data, Maugis and Michel [-@maugis2011] for variable selection and clustering via Gaussian mixtures, and Lebarbier [-@lebarbier] for multiple change point detection). An overview and implementations of the slope heuristics can be found in Baudry *et al.* [-@baudry]. We now describe the main idea of these heuristics, starting from the main goal of model selection, which is to choose the best estimator of $f_0$ among a collection of estimators $\{\hat f_m\}_{m \in \mathcal{M}}$. Moreover, we expect this best estimator to mimic the so-called oracle defined in (\[m\*\]). To this aim, the great challenge is to build a penalty function such that the selected model $\hat m$ is nearly as good as the oracle. In the following we call ideal penalty the penalty that leads to the choice of $m*$.
Using that $$\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat f_m}^{(n)})=\gamma(\hat f_m)-\gamma(f_0),$$ we see that, by definition, $m*$ defined in [(\[m\*\])]{} satisfies $$m*=\arg\min_{m\in \mathcal{M}}[\gamma(\hat f_m)-\gamma(f_0)]=\arg\min_{m\in \mathcal{M}}\gamma(\hat f_m).$$ The ideal penalty, leading to the choice of the oracle $m*$, is thus $[\gamma(\hat f_m)-\gamma_n(\hat f_m)]$, for $ m\in \mathcal{M}.$ As a matter of fact, by replacing $\mbox{pen}_{id}(\hat f_m)$ by its value, we obtain $$\begin{aligned} \arg\min_{m \in \mathcal{M}}[ \gamma_n(\hat f_m)+\mbox{pen}_{id}(\hat f_m)]&=&\arg\min_{m \in \mathcal{M}}[ \gamma_n(\hat f_m)+ \gamma(\hat f_m)-\gamma_n(\hat f_m)]\\&=&\arg\min_{m\in \mathcal{M}}[\gamma(\hat f_m)]\\ &=&m*.\end{aligned}$$ Of course this ideal penalty always selects the oracle model, but it depends on the unknown function $f_0$ through the sample distribution, since $\gamma(t)=\mathbb{E}_{f_0}[\gamma_n(t)].$ A natural idea is to choose $\mbox{pen}(m)$ as close as possible to $\mbox{pen}_{id}(m)$ for every $m\in \mathcal{M}$. Now, we use that this ideal penalty can be decomposed into $$\begin{aligned} \mbox{pen}_{id}(m)&=&\gamma(\hat f_m) -\gamma_n(\hat f_m) = v_m+\hat v_m+ e_m,\end{aligned}$$ where $$v_m= \gamma(\hat f_m)-\gamma(f_m), \quad \hat v_m=\gamma_n(f_m)-\gamma_n(\hat f_m), \mbox{ and } \quad e_m=\gamma(f_m)-\gamma_n(f_m).$$ The slope heuristics rely on two points: - The existence of a minimal penalty $\mbox{pen}_{\mbox{min}}(m)=\hat v_m$ such that when the penalty is smaller than $\mbox{pen}_{\mbox{min}}$, the selected model is one of the most complex models, whereas penalties larger than $\mbox{pen}_{\mbox{min}}$ lead to the selection of models with “reasonable” complexity. - Using concentration arguments, it is reasonable to consider that, uniformly over $\mathcal{M}$, $\gamma_n(f_m)$ is close to its expectation, which implies that $e_m\approx 0$.
In the same way, since $\hat v_m$ is an empirical version of $v_m$, it is also reasonable to consider that $v_m\approx \hat v_m$. The ideal penalty is thus approximately given by $2 \hat v_m$, so that $$\begin{aligned} \mbox{pen}_{id}(m)&\approx &2 \mbox{pen}_{min}(m).\end{aligned}$$ In practice, $\hat v_m$ can be estimated from the data provided that the ideal penalty $\mbox{pen}_{id}(.)=\kappa_{id}\mbox{pen}_{shape}(.)$ is known up to a multiplicative factor. A major point of the slope heuristics is that $$\frac{\kappa_{id}}{2}\mbox{pen}_{shape}(.)$$ is a good estimator of $\hat v_m$, and this provides the minimal penalty. Since $\mbox{pen}=\kappa\times \mbox{pen}_{shape}$ is known up to a multiplicative constant $\kappa$ that has to be calibrated, we combine the previous heuristics with the method usually known as the dimension jump method. In practice, we consider a grid $\kappa_1,\dots,\kappa_M$, where each $\kappa_j$ leads to a selected model $\hat m_{\kappa_j}$ with dimension $D_{\hat m_{\kappa_j}}$. The constant $\kappa_{min}$, which corresponds to the value such that $\mbox{pen}_{min}=\kappa_{min}\times \mbox{pen}_{shape}$, is estimated using the first point of the “slope heuristics”. If $D_{\hat m_{\kappa_j}}$ is plotted as a function of $\kappa_j$, $\kappa_{min}$ is such that $D_{\hat m_{\kappa_j}}$ is “huge” for $\kappa< \kappa_{min}$ and “reasonably small” for $\kappa>\kappa_{min}$. So $\kappa_{min}$ is the value at the position of the biggest jump. For more details about this method we refer the reader to Baudry *et al.* [-@baudry] and Arlot and Massart [-@arlot2009]. Figures \[fig1\] and \[fig1\_1\] correspond to the cases where the true function is piecewise constant. Figures \[fig2\] and \[fig2\_1\] correspond to situations where the true function does not belong to any model in the given collection. The performance of the criteria depends on the sample size $n$.
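The dimension jump step can be sketched as follows; the grid and selected dimensions below are invented illustrative data, and locating $\kappa_{min}$ at the largest drop in $D_{\hat m_{\kappa_j}}$ is one common implementation choice among several:

```python
def kappa_from_dimension_jump(kappas, dims):
    # kappas: increasing grid kappa_1 < ... < kappa_M
    # dims[j]: dimension of the model selected with penalty kappa_j * pen_shape
    # kappa_min is located at the biggest drop in the selected dimension
    drops = [dims[j] - dims[j + 1] for j in range(len(dims) - 1)]
    j_star = max(range(len(drops)), key=lambda j: drops[j])
    return kappas[j_star + 1]  # first kappa past the biggest jump

# illustrative grid: the selected dimension collapses between 0.4 and 0.5
kappas = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
dims = [90, 88, 85, 80, 7, 6]
kappa_min = kappa_from_dimension_jump(kappas, dims)
kappa_hat = 2 * kappa_min  # pen_id ~ 2 pen_min, hence pen = 2 kappa_min * pen_shape
```

On this toy grid the biggest jump occurs between $\kappa=0.4$ and $\kappa=0.5$, so the calibrated penalty is $2\times 0.5\times\mbox{pen}_{shape}$.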
In these two situations we observe that our two model selection procedures are comparable and that their performance increases with $n$, while the performance of the model selected by BIC decreases with $n$. Our criteria outperform AIC for all $n$. The BIC criterion is better than our criteria for $n\leq 200$. For $200< n\leq 400$, the performance of the model selected by BIC is roughly the same as the performance of the models selected by our criteria. Finally, for $n>400$ our criteria outperform BIC. Theoretical results and simulations raise the following question: why are our criteria better than BIC only for rather large values of $n$, although the theoretical results are non-asymptotic? A possible answer is that, in the simulations, our penalties were calibrated using the “slope heuristics”, and those heuristics are based on asymptotic arguments (see Section \[slope\]). ![Different functions $f_0$ to be estimated](f_mod.pdf){height="20cm" width="15cm"} ![\[fig1\] Model selection performance ($C^{*}$) as a function of sample size $n$, with each penalty, Mod1.](pen_f1_1000.pdf){height="11cm" width="13cm"} ![\[fig1\_1\] Model selection performance ($C^{*}$) as a function of sample size $n$, with each penalty, Mod2.](pen_f10_1000.pdf){height="11cm" width="13cm"} ![\[fig2\] Model selection performance ($C^{*}$) as a function of sample size $n$, with each penalty, Mod3.](pen_sinpix_1000.pdf){height="11cm" width="13cm"} ![\[fig2\_1\] Model selection performance ($C^{*}$) as a function of sample size $n$, with each penalty, Mod4.](pen_sqrt_1000.pdf){height="11cm" width="13cm"} Proofs {#S5} ====== Notations and technical tools ----------------------------- Subsequently we will use the following notation. Denote by $\parallel f\parallel_n$ and $\langle f,g \rangle_n $ the empirical Euclidean norm and inner product $$\parallel f\parallel_n^2=\frac{1}{n}\sum_{i=1}^n f^2(x_i), \mbox{ and }\langle f,g\rangle_n=\frac{1}{n}\sum_{i=1}^n f(x_i)g(x_i).$$ Note that $\parallel .
\parallel_n$ is a seminorm on the space $\mathcal{F}$ of functions $g : \mathcal{X} \longrightarrow \mathbb{R}$, but is a norm on the quotient space $\mathcal{F} \slash \mathcal{R} $ associated with the equivalence relation $\mathcal{R}$: $g ~\mathcal{R}~ h$ if and only if $g(x_i)=h(x_i)$ for all $i\in \{1,\dots,n\}$. It follows from [(\[gamma\_n\])]{} that $\gamma$ defined in [(\[gamma\])]{} can be expressed as the sum of a centered empirical process and of the estimation criterion $\gamma_n$. More precisely, denoting by $\vec{\varepsilon}=(\varepsilon_1,\cdots,\varepsilon_n)^T$, with $\varepsilon_i=Y_i-\mathbb{E}_{f_0}(Y_i),$ for all $f$, we have $$\begin{aligned} \label{decompgamma} \gamma(f)=\gamma_n(f)+\frac{1}{n}\sum_{i=1}^n \varepsilon_i f(x_i)=: \gamma_n(f)+\langle\vec{\varepsilon},f\rangle_n.\end{aligned}$$ Easy calculations show that for $\gamma$ defined in [(\[gamma\])]{} we have $$\begin{aligned} \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f}^{(n)})&=&\frac{1}{n}\int \log\left( \frac{\mathbb{P}_{f_0}^{(n)}}{\mathbb{P}_{f}^{(n)}}\right) d\mathbb{P}_{f_0}^{(n)} =\gamma(f)-\gamma(f_0)\\ &=&\frac{1}{n}\sum_{i=1}^n \left[ \pi_{f_0}(x_i)\log\left( \frac{\pi_{f_0}(x_i)}{ \pi_{f}(x_i) }\right) +(1-\pi_{f_0}(x_i))\log\left( \frac{1-\pi_{f_0}(x_i)}{ 1-\pi_{f}(x_i) }\right) \right].\end{aligned}$$ Let us recall the usual bounds (see Castellan [-@Castellan2003]) for the Kullback-Leibler information: \[borneK\] For positive densities $p$ and $q$ with respect to $\mu$, if $f=\log(q/p)$, then $$\begin{aligned} \frac{1}{2}\int f^2 (1\wedge e^f) p\, d\mu \leqslant \mathcal{K}(p,q)\leqslant \frac{1}{2}\int f^2 (1\vee e^f) p \,d\mu .\end{aligned}$$ Proof of Proposition \[borne1\]: -------------------------------- By definition of $\hat{f}_m$, for all $f \in S_m\cap \mathbb{L}_\infty(C_0)$, $\gamma_n(\hat{f}_m)-\gamma_n(f)\leqslant 0.$ We apply [(\[decompgamma\])]{}, with $f=f_m$ and $f=\hat f_m$, $$\begin{aligned} \gamma(\hat{f}_m)-\gamma(f_0)\leqslant
\gamma(f_m)-\gamma(f_0)+\langle\vec{\varepsilon},\hat{f}_m-f_m\rangle_n.\end{aligned}$$ As usual, the main part of the proof relies on the study of the empirical process $\langle\vec{\varepsilon},\hat{f}_m-f_m\rangle_n$. Since $\hat{f}_m-f_m$ belongs to $S_m$, $\hat{f}_m-f_m=\sum_{j=1}^{D_m}\alpha_j\psi_j$, where $\{\psi_1,\dots,\psi_{D_m}\}$ is an orthonormal basis of $S_m$, and consequently $$\langle\vec{\varepsilon},\hat{f}_m-f_m\rangle_n= \sum_{j=1}^{D_m}\alpha_j \langle\vec{\varepsilon},\psi_j\rangle_n.$$ Applying the Cauchy-Schwarz inequality, we get $$\begin{aligned} \langle\vec{\varepsilon},\hat{f}_m-f_m\rangle_n&\leqslant& \sqrt{\sum_{j=1}^{D_m}\alpha_j^{2}}\sqrt{\sum_{j=1}^{D_m}\left(\langle\vec{\varepsilon},\psi_j\rangle_n\right)^{2}} \\ &=&\lVert \hat{f}_m-f_m \rVert_n\sqrt{\sum_{j=1}^{D_m}\left(\frac{1}{n}\sum_{i=1}^{n}\varepsilon_{i}\psi_j(x_i)\right)^{2}}.\end{aligned}$$ We now apply Lemma \[lm\] (see Section \[appendix\] for its proof). \[lm\] Let $\mathcal{S}_m$ be the model defined in [(\[model\_col\_gen\])]{} and let $\{\psi_1,\dots,\psi_{D_m}\}$ be an orthonormal basis of the linear span $\{\phi_k,~ k\in m\}$. We also denote by $\Lambda_m$ the set of $\beta=(\beta_1,...,\beta_D)$ such that $f_\beta(.)=\sum_{j=1}^{D}\beta_j\psi_j(.)$ satisfies $f_{\beta}\in \mathcal{S}_m\cap \mathbb{L}_\infty(C_0)$. Let $\beta^{*}$ be any minimizer of the function $\beta \rightarrow \gamma(f_\beta)$ over $\Lambda_m$. Then we have $$\frac{\mathcal{U}_0^{2}}{2}\lVert f_{\beta}-f_{\beta^{*}}\rVert^2_{n}\leq \gamma(f_\beta)-\gamma(f_{\beta^{*}}),$$ where $\mathcal{U}_0=e^{C_0}/(1+e^{C_0})^2$.
Then we have $$\langle\vec{\varepsilon},\hat{f}_m-f_m\rangle_n\quad \leqslant \quad \sqrt{\sum_{j=1}^{D_m}\left(\langle\vec{\varepsilon},\psi_j\rangle_n\right)^{2}}\frac{\sqrt{2}}{\mathcal{U}_0} \sqrt{\gamma(\hat{f}_m)-\gamma(f_m)}.$$ Now we use that, for all positive numbers $a$, $b$ and $x$, $ab\leqslant (x/2)a^2+ [1/(2x)] b^2$, and infer that $$\gamma(\hat{f}_m)-\gamma(f_0)\leq \gamma(f_m)-\gamma(f_0)+ \frac{x}{\mathcal{U}_0^{2}}\sum_{j=1}^{D_m}\left(\langle\vec{\varepsilon},\psi_j\rangle_n\right)^{2} + (1/2x)( \gamma(\hat{f}_m)-\gamma(f_m)).$$ For $x>1/2$, it follows that $$\begin{aligned} \mathbb{E}_{f_0}[\gamma( \hat{f}_m)-\gamma(f_0)]\leqslant \gamma(f_m)-\gamma(f_0)+\frac{2x^{2}}{(2x-1)\mathcal{U}_0^{2}} \mathbb{E}_{f_0}\left[\sum_{j=1}^{D_m}\left(\langle\vec{\varepsilon},\psi_j\rangle_n\right)^{2}\right] .\end{aligned}$$ We conclude the proof by using that $$\mathbb{E}_{f_0}\left[\sum_{j=1}^{D_m}\left(\langle\vec{\varepsilon},\psi_j\rangle_n\right)^{2}\right]\leqslant \frac{D_m}{4n}.$$ Proof of Theorem \[theo1\] -------------------------- By definition, for all $m \in \mathcal{M}$, $$\gamma_n(\hat f_{\hat{m}})+\mbox{pen}(\hat{m})\leqslant \gamma_n(\hat{f_m})+\mbox{pen}(m) \leqslant \gamma_n(f_m)+\mbox{pen}(m).$$ Applying [(\[decompgamma\])]{}, we have $$\label{decomp1} \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_{\hat m}}^{(n)})\leqslant \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\langle \vec{\varepsilon},\hat{f}_{\hat m}- f_m\rangle_n+\mbox{pen}(m)-\mbox{pen}(\hat{m}).$$ It remains to study $\langle \vec{\varepsilon}, \hat{f}_{\hat m}-f_m\rangle_n$, using the following lemma, which is a modification of Lemma 1 in Durot *et al.* [-@DurotLebarbierTocquet].
\[control\] For every $D$, $D^{\prime}$ and $x\geqslant 0$ we have $$\mathbb{P}\left(\sup_{u\in \Big(S_{D}\cap \mathbb{L}_\infty(C_0)+S_{D^{\prime}}\cap \mathbb{L}_\infty(C_0)\Big)}\frac{\langle\vec{\varepsilon} ,u\rangle_n}{\parallel u \parallel_n}>\sqrt{\frac{ D+D^{\prime}}{4n}}+\sqrt{\frac{5x}{n}}\right)\leqslant \exp{(-x)}.$$ Fix $\xi>0$ and let $\Omega_\xi(m)$ denote the event $$\Omega_\xi(m)=\bigcap_{m^{\prime}\in \mathcal{M}}\left\{ \sup_{u\in \Big(S_{m}\cap \mathbb{L}_\infty(C_0)+S_{m^{\prime}}\cap \mathbb{L}_\infty(C_0)\Big)}\frac{\langle\vec{\varepsilon} ,u\rangle_n}{\parallel u \parallel_n} \leq \sqrt{\frac{ D_m+D_{m^{\prime}}}{4n}}+\sqrt{5(L_{m^{\prime}}D_{m^{\prime}}+\xi)/n} \right\}.$$ Then we have $$\label{prob_sigma} \mathbb{P}\left(\Omega_\xi(m)\right)\geqslant 1-\Sigma\exp(-\xi).$$ See the Appendix for the proof of this lemma. Fix $\xi>0$. Applying Lemma \[control\], we infer that on the event $\Omega_\xi(m),$ $$\begin{aligned} \langle \vec{\varepsilon}, \hat{f}_{\hat m}-f_m\rangle_n& \leqslant \left(\sqrt{\frac{ D_m+D_{\hat{m}}}{4n}}+\sqrt{5\frac{L_{\hat{m}}D_{\hat{m}}+\xi}{n}}\right)\parallel \hat{f}_{\hat m}-f_m\parallel_{n}\\ &\leqslant \left(\sqrt{\frac{ D_m+D_{\hat{m}}}{4n}}+\sqrt{5\frac{L_{\hat{m}}D_{\hat{m}}+\xi}{n}}\right)\left(\parallel \hat{f}_{\hat m}-f_{0}\parallel_n+\parallel f_{0}-f_m\parallel_{n}\right)\\ &\leqslant \left(\sqrt{D_{\hat{m}}}\left(\frac{1}{\sqrt{4n}}+\sqrt{\frac{5L_{\hat{m}}}{n}}\right)+\sqrt{\frac{D_m}{4n}}+\sqrt{5\frac{\xi}{n}}\right)\left(\parallel \hat{f}_{\hat m}-f_{0}\parallel_n+\parallel f_{0}-f_m\parallel_{n}\right).\end{aligned}$$ Applying that $2xy\leqslant\theta x^{2}+ \theta^{-1}y^{2}$, for all $ x>0$, $y>0$, $\theta>0$, we get that on $\Omega_\xi(m)$ and for every $\eta\in ]0,1[$, $$\begin{aligned} \langle \vec{\varepsilon}, \hat{f}_{\hat m}-f_m\rangle_n\!\!\!&\leqslant &\!\!\!\frac{1-\eta}{2}\left[(1+\eta)\parallel \hat{f}_{\hat m}-f_{0}\parallel_{n}^{2}+(1+\eta^{-1})\parallel
f_{0}-f_m\parallel_{n}^{2}\right]\\ &+&\frac{1}{2(1-\eta)}\left[(1+\eta) D_{\hat{m}}\left(\frac{1}{\sqrt{4n}}+\sqrt{\frac{5L_{\hat{m}}}{n}}\right)^{2}+(1+\eta^{-1})\left(\sqrt{\frac{D_m}{4n}}+\sqrt{\frac{5\xi}{n}}\right)^{2}\right]\\ \!\!\!&\leqslant&\!\!\! \frac{1-\eta^{2}}{2}\parallel \hat{f}_{\hat m}-f_{0}\parallel_{n}^{2}+ \frac{\eta^{-1}-\eta}{2}\parallel f_{0}-f_m\parallel_{n}^{2}+\frac{1+\eta}{2(1-\eta)}D_{\hat{m}}\left(\frac{1}{\sqrt{4n}}+\sqrt{\frac{5L_{\hat{m}}}{n}}\right)^{2}\\ &&+\frac{1+\eta^{-1}}{1-\eta}\Big(\frac{D_m}{4n}+\frac{5\xi}{n}\Big).\end{aligned}$$ If $\mbox{pen}(m)\geqslant \Big(\lambda D_m\left(\frac{1}{2}+\sqrt{5L_m}\right)^{2}\Big)/n, $ with $\lambda>0$, we have $$\begin{aligned} \langle \vec{\varepsilon}, \hat{f}_{\hat m}-f_m\rangle_n \!\!\!&\leqslant&\!\!\! \frac{1-\eta^{2}}{2}\parallel \hat{f}_{\hat m}-f_{0}\parallel_{n}^{2}+ \frac{\eta^{-1}-\eta}{2}\parallel f_{0}-f_m\parallel_{n}^{2}+\frac{1+\eta}{2(1-\eta)\lambda}\mbox{pen}(\hat{m})+\frac{1+\eta^{-1}}{(1-\eta)\lambda}\mbox{pen(m)}\\ &&+\frac{1+\eta^{-1}}{1-\eta}\frac{5\xi}{n}.\end{aligned}$$ It follows from (\[decomp1\]) that $$\begin{aligned} \mathcal{K}(\mathbb{P}_{f_0}^{(n)}, \mathbb{P}_{\hat{f}_{\hat m}}^{(n)})&\leqslant&\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\frac{1-\eta^{2}}{2}\parallel \hat{f}_{\hat m}-f_{0}\parallel_{n}^{2}+\frac{\eta^{-1}-\eta}{2}\parallel f_{0}-f_m\parallel_{n}^{2}\\&&+\frac{1+\eta}{2(1-\eta)\lambda}\mbox{pen}(\hat{m})+\frac{1+\eta^{-1}}{(1-\eta)\lambda}\mbox{pen(m)}+\frac{1+\eta^{-1}}{1-\eta}\frac{5\xi}{n}+\mbox{pen}(m)-\mbox{pen}(\hat{m}).\end{aligned}$$ Taking $\lambda=(\eta+1)/(2(1-\eta))$, we have $$\begin{gathered} \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_{\hat m}}^{(n)})\leqslant \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})\\+\frac{4\lambda}{(2\lambda+1)^{2}}\parallel \hat{f}_{\hat m}-f_{0}\parallel_{n}^{2}+ \frac{4\lambda}{4\lambda^{2}-1}\parallel 
f_{0}-f_m\parallel_{n}^{2}+\frac{6\lambda+1}{2\lambda-1}\mbox{pen}(m)+\frac{10\lambda(2\lambda+1)}{2\lambda-1}\frac{\xi}{n}.\end{gathered}$$ Now we use the following lemma (see Lemma 6.1 in Kwemou [-@kwemou]), which allows us to connect the empirical norm and the Kullback-Leibler divergence. \[l81\] Under Assumption (\[A1\]), for all $m\in \mathcal{M}$ and all $t\in S_m\cap \mathbb{L}_\infty(C_0)$, we have $$c_{min}\lVert t-f_{0}\rVert_{n}^{2}\leqslant \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{t}^{(n)}) \leqslant c_{max}\lVert t-f_{0}\rVert_{n}^{2},$$ where $c_{min}$ and $c_{max}$ are constants depending on $C_{0}$ and $c_{1}.$ Consequently, $$\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_{\hat m}}^{(n)})\leqslant C(c_{min})\left\{\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)}) +\mbox{pen}(m)\right\}+C_1(c_{min})\frac{\xi}{n},$$ where $$C(c_{min})=\max\left\{ \frac{1+\frac{4\lambda}{(4\lambda^{2}-1)c_{min}}}{1-\frac{4\lambda}{c_{min}(2\lambda+1)^{2}}}; \frac{\frac{6\lambda+1}{2\lambda-1}}{1-\frac{4\lambda}{c_{min}(2\lambda+1)^{2}}}\right\}~\mbox{and} ~~C_1(c_{min})=\frac{\frac{10\lambda(2\lambda+1)}{2\lambda-1}}{1-\frac{4\lambda}{c_{min}(2\lambda+1)^{2}}}.$$ Thus we take $\lambda$ such that $$\label{cond_lambda} 1-\frac{4\lambda}{c_{min}(2\lambda+1)^{2}}>0,$$ where $c_{min}$ depends on the bound of the true function $f_0$.
By definition of $\Omega_\xi(m)$ and [(\[prob\_sigma\])]{}, there exists a random variable $V\geqslant 0$ with $\mathbb{P} (V>\xi)\leqslant \Sigma\exp{(-\xi)}$ and $\mathbb{E}_{f_0}(V)\leqslant \Sigma,$ such that $$\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_{\hat m}}^{(n)})\leqslant C(c_{min})\left\{\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mbox{pen}(m)\right\}+C_1(c_{min})\frac{V}{n},$$ which implies that for all $m\in \mathcal{M}$, $$\mathbb{E}_{f_0}[\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_{\hat m}}^{(n)})]\leqslant C(c_{min})\left\{\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mbox{pen}(m)\right\}+C_1(c_{min})\frac{\Sigma}{n}.$$ This concludes the proof. Proof of Proposition \[borne2\]: -------------------------------- [Let $f_m$, $\hat f_m$, $\pi_{f_m}$ and $\pi_{\hat f_m}$ be as given in Lemma \[Projhisto\], which is proved in the Appendix. In the following, $D_m=\vert m\vert.$ For $\delta>0$, let $\Omega_m(\delta)$ be the event $$\begin{aligned} \label{Omega} \Omega_m(\delta) = \bigcap_{J\in m} \left\lbrace \left\vert \frac{\pi_{\hat f_m}^{(J)}}{ \pi_{ f_m}^{(J)} } -1\right\vert \leqslant \delta\right\rbrace \bigcap \left\lbrace \left\vert \frac{1-\pi_{\hat f_m}^{(J)}}{ 1-\pi_{ f_m}^{(J)} } -1\right\vert \leqslant \delta \right\rbrace.\end{aligned}$$ According to a Pythagoras-type identity and Lemma \[Projhisto\], we write $$\begin{aligned} \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f_{m}}}^{(n)})=\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mathcal{K}(\mathbb{P}_{f_m}^{(n)},\mathbb{P}_{\hat{f_{m}}}^{(n)}){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m(\delta)}+ \mathcal{K}(\mathbb{P}_{f_m}^{(n)},\mathbb{P}_{\hat{f_{m}}}^{(n)}){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m^c(\delta)},\end{aligned}$$ where $$\begin{aligned} \label{k_m_m} \mathcal{K}(\mathbb{P}_{f_m}^{(n)},\mathbb{P}_{\hat{f_{m}}}^{(n)})\!\!\!&=&\!\!\!\frac{1}{n}\sum_{i=1}^n \left[\pi_{f_m}(x_i)\log\left(\frac{\pi_{f_m}(x_i)}{\pi_{\hat
f_m}(x_i)}\right) +(1-\pi_{f_m}(x_i))\log\left(\frac{1-\pi_{f_m}(x_i)}{1-\pi_{\hat f_m}(x_i)}\right)\right]\\ \notag \!\!\!&=&\!\!\! \frac{1}{n}\sum_{J\in m} \vert J \vert \left[\pi_{f_m}^{(J)}\log\left(\frac{\pi_{f_m}^{(J)}}{\pi_{\hat f_m}^{(J)}}\right) +(1-\pi_{f_m}^{(J)})\log\left(\frac{1-\pi_{f_m}^{(J)}}{1-\pi_{\hat f_m}^{(J)}}\right)\right].\end{aligned}$$ The first step consists in showing that $$\begin{aligned} \label{step1} \frac{1-\delta}{2(1+\delta)^2}\mathcal{X}_m^2 {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m(\delta)}\leqslant \mathcal{K}(\mathbb{P}_{f_m}^{(n)},\mathbb{P}_{\hat{f_{m}}}^{(n)}) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m(\delta)} \leqslant \frac{1+\delta}{2(1-\delta)^2}\mathcal{X}_m^2{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m(\delta)} ,\end{aligned}$$ where $$\begin{aligned} \label{chideux} \mathcal{X}_m^2=\frac{1}{n}\sum_{J\in m} \frac{(\sum_{k\in J}\varepsilon_k)^2}{\vert J\vert\pi_{f_m}^{(J)}[1-\pi_{f_m}^{(J)}]}, \mbox{ with }\qquad \frac{4\rho^2 D_m}{n} \leqslant \mathbb{E}_{f_0}[ \mathcal{X}_m^2]\leqslant \frac{2D_m}{n}.\end{aligned}$$ The second step consists in proving that $$\begin{aligned} \label{step2} \Big\vert \mathbb{E}_{f_0}\left(\mathcal{K}(\mathbb{P}_{f_m}^{(n)},\mathbb{P}_{\hat{f_{m}}}^{(n)}){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m^c(\delta)}\right)\Big\vert \leqslant 2\log\left( \frac{1}{\rho}\right)\mathbb{P}[ \Omega_m^c(\delta)] .\end{aligned}$$ The last step consists in showing that, for $\epsilon >0$, since for all $J\in m$, $\vert J\vert \geq \Gamma [\log(n)]^2$ with $\Gamma>0$ an absolute constant, we have $$\begin{aligned} \label{step3}\mathbb{P}[ \Omega_m^c(\delta)] \leqslant 4\vert m\vert \exp\left(-\frac{\delta^2}{2(1+\delta/3)}\rho^2 \Gamma [\log(n)]^2 \right)\leq\frac{\kappa(\rho,\delta,\Gamma,\epsilon)}{n^{(1+\epsilon)}}.\end{aligned}$$ Gathering [(\[step1\])]{}-[(\[step3\])]{}, we conclude that $$\begin{aligned}
\mathbb{E}_{f_0}[\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f_{m}}}^{(n)})]&\leqslant& \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_{m}}^{(n)})+\frac{(1+\delta)\vert m\vert}{(1-\delta)^2n} +2\log\left( \frac{1}{\rho}\right)\mathbb{P}[ \Omega_m^c(\delta)]\\ &\leqslant& \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_{m}}^{(n)})+\frac{(1+\delta)\vert m\vert}{(1-\delta)^2n} +\frac{\kappa(\rho,\delta,\Gamma,\epsilon)}{n^{(1+\epsilon)}}.\end{aligned}$$ We finish by proving [(\[step1\])]{}, [(\[chideux\])]{}, [(\[step2\])]{} and [(\[step3\])]{}. ]{} #### $\bullet$ Proof of [(\[step1\])]{} and [(\[chideux\])]{} : [ Arguing as in Castellan [-@Castellan2003] and using Lemma \[borneK\], we have $$\begin{aligned} \mathcal{K}(\mathbb{P}_{f_m}^{(n)},\mathbb{P}_{\hat{f_{m}}}^{(n)}) \!\!\!&\geqslant&\!\!\!\frac{1}{2n} \sum_{J\in m}\vert J\vert\left[\pi_{f_m}^{(J)}\left(1\wedge \frac{\pi_{\hat f_m }^{(J)} }{\pi_{f_m} ^{(J)} } \right)\log^2\left(\frac{\pi_{f_m}^{(J)} }{\pi_{\hat f_m}^{(J)} }\right) +(1-\pi_{f_m}^{(J)} ) \left(1\wedge \frac{1-\pi_{\hat f_m }^{(J)} }{1-\pi_{f_m} ^{(J)} } \right)\log^2\left(\frac{1-\pi_{f_m}^{(J)}}{1-\pi_{\hat f_m}^{(J)} }\right)\right]\end{aligned}$$ and $$\begin{aligned} \mathcal{K}(\mathbb{P}_{f_m}^{(n)},\mathbb{P}_{\hat{f_{m}}}^{(n)}) \!\!\!&\leqslant& \!\!\! \frac{1}{2n} \sum_{J\in m} \vert J\vert\left[\pi_{f_m}^{(J)} \left(1\vee \frac{\pi_{\hat f_m }^{(J)} }{\pi_{f_m}^{(J)} } \right)\log^2\left(\frac{\pi_{f_m}^{(J)} }{\pi_{\hat f_m}^{(J)} }\right) +(1-\pi_{f_m}^{(J)} ) \left(1\vee \frac{1-\pi_{\hat f_m }^{(J)} }{1-\pi_{f_m} ^{(J)} } \right)\log^2\left(\frac{1-\pi_{f_m}^{(J)} }{1-\pi_{\hat f_m}^{(J)} }\right)\right].
\end{aligned}$$ It follows that $$\begin{aligned} \label{encadrK} \frac{1-\delta}{2} V^2( \pi_{f_m},\pi_{\hat f_m}) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m(\delta)}\leqslant \mathcal{K}(\mathbb{P}_{f_m}^{(n)},\mathbb{P}_{\hat{f_{m}}}^{(n)}) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m(\delta)}\leqslant \frac{1+\delta}{2} V^2( \pi_{f_m},\pi_{\hat f_m}){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m(\delta)},\end{aligned}$$ where $V^2( \pi_{f_m},\pi_{\hat f_m})$ is defined by $$\begin{gathered} \label{V} V^2( \pi_{f_m},\pi_{\hat f_m})=\frac{1}{n} \sum_{J\in m} \vert J\vert \frac{[\pi_{\hat f_m}^{(J)} -\pi_{f_m}^{(J)} ] ^2}{\pi_{f_m}^{(J)} }\left[ \frac{\log[\pi_{\hat f_m}^{(J)}/\pi_{f_m}^{(J)}] }{\pi_{\hat f_m}^{(J)}/\pi_{f_m}^{(J)} -1 } \right]^2\\ +\frac{1}{n} \sum_{J\in m} \vert J\vert \frac{[\pi_{\hat f_m}^{(J)} -\pi_{f_m}^{(J)} ] ^2}{1-\pi_{f_m}^{(J)} }\left[ \frac{\log[(1-\pi_{\hat f_m}^{(J)})/(1-\pi_{f_m}^{(J)})] }{(1-\pi_{\hat f_m}^{(J)})/(1-\pi_{f_m}^{(J)}) -1 } \right]^2.\end{gathered}$$ Now we use that, for all $x >0$, $$\begin{aligned} \frac{1}{1\vee x}\leqslant \frac{\log(x)}{x-1}\leqslant\frac{1}{1\wedge x}.\end{aligned}$$ Hence we infer that $$\frac{1}{(1+\delta)^2} \mathcal{X}_m^2{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m(\delta)}\leqslant V^2( \pi_{f_m},\pi_{\hat f_m}) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m(\delta)}\leqslant \frac{1}{(1-\delta)^2} \mathcal{X}_m^2{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m(\delta)},$$ with $\mathcal{X}_m^2$ defined in [(\[chideux\])]{}. This entails that [(\[step1\])]{} is proved.
It remains now to check that $$\frac{4\rho^2 \vert m\vert}{n} \leqslant \mathbb{E}_{f_0}[ \mathcal{X}_m^2]\leqslant\frac{ 2\vert m\vert}{n}.$$ According to Lemma \[Projhisto\], for any $J\in m$ and any $x_i \in J$, $$\begin{aligned} \pi_{\hat f_m}(x_i)=\pi_{\hat f_m}^{(J)},\qquad \mbox{ with } &&\qquad \pi_{\hat f_m}^{(J)}=\frac{1}{\vert J\vert}\sum_{i\in J}Y_i,\\ \mbox{ and } \pi_{f_m}(x_i)=\pi_{ f_m}^{(J)}, \qquad \mbox{ with }&&\pi_{f_m}^{(J)}=\frac{1}{\vert J\vert}\sum_{i\in J}\pi_{f_0}(x_i).\end{aligned}$$ Consequently, $$\begin{aligned} \label{chideux2} \mathcal{X}_m^2=\frac{1}{n}\sum_{J\in m} \vert J\vert \frac{(\sum_{k\in J}\varepsilon_k)^2}{\sum_{k\in J}\pi_{f_0}(x_k)[\vert J\vert-\sum_{k\in J}\pi_{f_0}(x_k)]} =\frac{1}{n}\sum_{J\in m} \frac{(\sum_{k\in J}\varepsilon_k)^2}{\vert J\vert\pi_{f_m}^{(J)}[1-\pi_{f_m}^{(J)}]},\notag\end{aligned}$$ and finally $$\begin{aligned} \mathbb{E}_{f_0}(\mathcal{X}_m^2)= \frac{1}n \sum_{J \in m} \mathbb{E}\left(\frac{(\sum_{k\in J}\varepsilon_k)^2}{ \vert J\vert\pi_{f_m}^{(J)} [1-\pi_{f_m}^{(J)}] }\right)= \frac{1 }{n}\sum_{J \in m} \left(\frac{1}{\vert J\vert\pi_{f_m}^{(J)}[ 1-\pi_{f_m}^{(J)}] }\right)\sum_{k\in J}\mbox{Var}\left(Y_k\right).\end{aligned}$$ Consequently, $$\begin{aligned} \mathbb{E}_{f_0}(\mathcal{X}_m^2) =\frac{1 }{n}\sum_{J \in m} \frac{\sum_{i\in J}\pi_{f_0}(x_i)(1-\pi_{f_0}(x_i))}{\vert J\vert\pi_{f_m}^{(J)} [1-\pi_{f_m}^{(J)}] }.\end{aligned}$$ Now, according to Assumption [(\[H0\])]{} and Lemma \[Projhisto\], for all $m$, all $J\in m$ and all $x_i \in J$, $$\begin{aligned} 0<\rho^2 \leqslant \pi_{f_0}(x_i)(1-\pi_{f_0}(x_i)) \leqslant 1/4 , \mbox{ and } 0<\rho \leqslant \pi_{f_m}^{(J)} \mbox{ and } 0<\rho \leqslant (1-\pi_{f_m}^{(J)}).
\end{aligned}$$ It follows that $$\begin{aligned} 4\rho^2 \leqslant \frac{\sum_{k\in J} \pi_{f_0}(x_k) (1-\pi_{f_0}(x_k))}{ \vert J\vert\pi_{f_m}^{(J)} [1-\pi_{f_m}^{(J)}]}=\frac{\sum_{k\in J} \pi_{f_0}(x_k) (1-\pi_{f_0}(x_k))}{ \vert J\vert\pi_{f_m}^{(J)}}+\frac{\sum_{k\in J} \pi_{f_0}(x_k) (1-\pi_{f_0}(x_k))}{ \vert J\vert[1-\pi_{f_m}^{(J)}]} \leqslant 2,\end{aligned}$$ and thus $$\begin{aligned} \frac{4\rho^2 \vert m\vert}{n}\leqslant\frac{1 }{n}\sum_{J \in m} \frac{\sum_{i\in J}\pi_{f_0}(x_i)(1-\pi_{f_0}(x_i))}{\vert J\vert\pi_{f_m}^{(J)}[1-\pi_{f_m}^{(J)}]} \leqslant \frac{2\vert m \vert }{n}.\end{aligned}$$ In other words, $$\begin{aligned} \frac{4\rho^2 \vert m\vert }{n}\leqslant\mathbb{E}_{f_0}(\mathcal{X}_m^2)\leqslant \frac{2\vert m\vert}{n}.\end{aligned}$$ This ends the proof of [(\[step1\])]{} and [(\[chideux\])]{}.]{} #### $ \bullet$ Proof of [(\[step2\])]{} : We start from [(\[k\_m\_m\])]{} and apply Assumption [(\[H0\])]{} and Lemma \[Projhisto\]; then [(\[step2\])]{} is checked since $$\begin{aligned} \vert \mathbb{E}\left(\mathcal{K}(\mathbb{P}_{f_m}^{(n)},\mathbb{P}_{\hat{f_{m}}}^{(n)}){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m^c(\delta)}\right)\vert &\leqslant & \frac{1}{n}\sum_{i=1}^n\mathbb{E}\left\vert \left[\log \left( \frac{\pi_{f_m}(x_i)}{\pi_{\hat f_m}(x_i)}\right) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m^c(\delta)}\right]\right\vert + \frac{1}{n}\sum_{i=1}^n\mathbb{E} \left\vert \left[\log \left( \frac{(1-\pi_{f_m}(x_i))}{(1-\pi_{\hat f_m}(x_i))}\right) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_m^c(\delta)}\right]\right\vert \\ &\leqslant & 2\log\left( \frac{1}{\rho}\right)\mathbb{P}[ \Omega_m^c(\delta)].\end{aligned}$$ #### $\bullet$ Proof of [(\[step3\])]{}: We come to the control of $\mathbb{P}_{f_0}[ \Omega_m^c(\delta)]$.
Since $$\begin{aligned} \mathbb{P}[ \Omega_m^c(\delta)] &\leqslant& \sum_{J\in m} \mathbb{P}\left\lbrace \left\vert \frac{\pi_{\hat f_m}^{(J)}}{ \pi_{ f_m}^{(J)} } -1\right\vert \geqslant \delta\right\rbrace +\sum_{J\in m} \mathbb{P}\left\lbrace\left\vert \frac{1-\pi_{\hat f_m}^{(J)}}{ 1-\pi_{ f_m}^{(J)} } -1\right\vert\geqslant \delta\right\rbrace,\end{aligned}$$ by applying Lemma \[Projhisto\], we infer that $$\begin{aligned} \mathbb{P}\left\lbrace \left\vert \frac{\pi_{\hat f_m}^{(J)}}{ \pi_{ f_m}^{(J)} } -1\right\vert \geqslant \delta\right\rbrace &=&\mathbb{P}\left\lbrace \left\vert \frac{\sum_{k\in J} \varepsilon_k}{\sum_{k\in J}\pi_{ f_0}(x_k)} \right\vert \geqslant \delta\right\rbrace= \mathbb{P}\left\lbrace \left\vert \sum_{k\in J} \varepsilon_k \right\vert \geqslant \delta \sum_{k\in J}\pi_{ f_0}(x_k)\right\rbrace, \end{aligned}$$ and $$\begin{aligned} \mathbb{P}\left\lbrace \left\vert \frac{1-\pi_{\hat f_m}^{(J)}}{ 1-\pi_{ f_m}^{(J)} } -1\right\vert \geqslant \delta\right\rbrace &=&\mathbb{P}\left\lbrace \left\vert \frac{\sum_{k\in J} \varepsilon_k}{\sum_{k\in J}(1-\pi_{ f_0}(x_k))} \right\vert \geqslant \delta\right\rbrace = \mathbb{P}\left\lbrace \left\vert \sum_{k\in J} \varepsilon_k \right\vert \geqslant \delta \sum_{k\in J}(1-\pi_{ f_0}(x_k))\right\rbrace.\end{aligned}$$ We write $$\begin{aligned} \mathbb{P}\left\lbrace \left\vert \sum_{k\in J} \varepsilon_k \right\vert \geqslant \delta \sum_{k\in J}\pi_{ f_0}(x_k)\right\rbrace \!\!\!&\leqslant&\!\!\! \mathbb{P}\left\lbrace \left\vert \sum_{k\in J} \varepsilon_k \right\vert \geqslant \delta \sum_{k\in J}\pi_{ f_0}(x_k) (1-\pi_{f_0}(x_k)) \right\rbrace\end{aligned}$$ and $$\begin{aligned} \mathbb{P}\left\lbrace \left\vert \sum_{k\in J} \varepsilon_k \right\vert \geqslant \delta \sum_{k\in J}(1-\pi_{ f_0}(x_k))\right\rbrace \!\!\!&\leqslant&\!\!\! 
\mathbb{P}\left\lbrace \left\vert \sum_{k\in J} \varepsilon_k \right\vert \geqslant \delta \sum_{k\in J}\pi_{ f_0}(x_k) (1-\pi_{f_0}(x_k)) \right\rbrace.\end{aligned}$$ Then we have $$\begin{aligned} \mathbb{P}[ \Omega_m^c(\delta)] &\leqslant& 2\sum_{J\in m}\mathbb{P}\left\lbrace \left\vert \sum_{k\in J} \varepsilon_k \right\vert \geqslant \delta \sum_{k\in J}\pi_{ f_0}(x_k) (1-\pi_{f_0}(x_k)) \right\rbrace.\end{aligned}$$ Now we apply Bernstein's concentration inequality (see for example Massart [-@massart2007]) to the right-hand side of the previous inequality, starting by recalling this inequality. \[Bernstein\] Let $Z_1,\cdots,Z_n$ be independent real-valued random variables. Assume that there exist some positive numbers $v$ and $c$ such that for all $k\geqslant 2$, $$\sum_{i=1}^n\mathbb{E}\left[ \vert Z_i\vert ^k\right] \leqslant \frac{k!}{2} v c^{k-2}.$$ Then for any positive $z$, $$\mathbb{P}\left( \sum_{i=1}^n (Z_i-\mathbb{E}(Z_i)) \geqslant \sqrt{2vz}+cz\right) \leqslant \exp(-z),\mbox{ and } \mathbb{P}\left( \sum_{i=1}^n (Z_i-\mathbb{E}(Z_i)) \geqslant z\right) \leqslant \exp\left(-\frac{z^2}{2(v+cz)}\right).$$ In particular, if $\vert Z_i \vert \leqslant b$ for all $i$, then $$\begin{aligned} \label{casborne} \mathbb{P}\left( \sum_{i=1}^n (Z_i-\mathbb{E}(Z_i)) \geqslant z\right) \leqslant \exp\left(-\frac{z^2}{2(\sum_{i=1}^n \mathbb{E}(Z_i^2)+bz/3)}\right).\end{aligned}$$ Applying [(\[casborne\])]{} with $z=\delta \sum_{k\in J}\pi_{ f_0}(x_k) (1-\pi_{f_0}(x_k) )$, $b=1$ and $v=\sum_{k\in J} \pi_{ f_0}(x_k)(1-\pi_{f_0}(x_k) ),$ we get that $$\mathbb{P}\left\lbrace \left\vert \sum_{k\in J} \varepsilon_k \right\vert \geqslant \delta \sum_{k\in J}\pi_{ f_0}(x_k) (1-\pi_{f_0}(x_k)) \right\rbrace$$ is less than $$\begin{aligned} 2\exp\left( -\frac{\delta^2 [ \sum_{k\in J}\pi_{ f_0}(x_k) (1-\pi_{f_0}(x_k))]^2}{2\left( \sum_{k\in J}\pi_{ f_0}(x_k)(1-\pi_{f_0}(x_k) ) +(\delta/3) \sum_{k\in J}\pi_{ f_0}(x_k) (1-\pi_{f_0}(x_k) )\right)}\right),\end{aligned}$$ and
consequently $$\begin{aligned} \mathbb{P}\left\lbrace \left\vert \sum_{k\in J} \varepsilon_k \right\vert \geqslant \delta \sum_{k\in J}\pi_{ f_0}(x_k) (1-\pi_{f_0}(x_k)) \right\rbrace \!\!\!&\leqslant& \!\!\! 2\exp\left[ -\frac{\delta^2}{2(1+\delta/3)} \left( \sum_{k\in J}\pi_{ f_0}(x_k) (1-\pi_{f_0}(x_k)) \right)\right]\\ \!\!\!&\leqslant&\!\!\! 2\exp\left[ -\frac{\delta^2}{2(1+\delta/3)} \vert J\vert \rho^2\right].\end{aligned}$$ Hence, $$\begin{aligned} \mathbb{P}[ \Omega_m^c(\delta)] &\leqslant& 4\vert m \vert \exp(-\Delta \rho^2\Gamma [\log(n)]^2), \qquad \mbox{ with } \qquad \Delta=\frac{\delta^2}{2(1+\delta/3)},\end{aligned}$$ where $\Gamma$ is given by [(\[def\_Gam\])]{}. For $\epsilon>0$ and $\delta$ such that $$\begin{aligned} \label{conddelta} \frac{\delta^2}{2(1+\delta/3)}\rho^2\Gamma \log(n) \geqslant 2+\epsilon,\end{aligned}$$ the fact that $\vert m\vert \leqslant n$ implies that $$\begin{aligned} 4\vert m\vert \exp\left(-\frac{\delta^2}{2(1+\delta/3)}\rho^2 \Gamma [\log(n)]^2 \right) \leqslant \frac{\kappa}{n^{(1+\epsilon)}},\end{aligned}$$ and Result [(\[step3\])]{} follows. Proof of Theorem \[theo2\] --------------------------  \ By definition, for all $m \in \mathcal{M}$, $$\gamma_n(\hat{f}_{\hat m})+\mbox{pen}(\hat{m})\leqslant \gamma_n(\hat{f_m})+\mbox{pen}(m) \leqslant \gamma_n(f_m)+\mbox{pen}(m).$$ Applying Formula [(\[decompgamma\])]{}, we have $$\label{decomp2} \gamma(\hat{f}_{\hat m})-\gamma(f_{0})\leqslant \gamma(f_m)-\gamma(f_{0})+\langle \vec{\varepsilon}, \hat{f}_{\hat m}-f_m\rangle_n+\mbox{pen}(m)-\mbox{pen}(\hat{m}).$$ Following Baraud [-@Baraud2000] or Castellan [-@Castellan2003], instead of bounding the supremum of the empirical process $\langle \vec{\varepsilon}, \hat{f}_{\hat m}-f_m\rangle_n$, we split it into three terms.
Let $$\overline{\gamma}_n(t)= {\gamma}_n(t)-\mathbb{E}_{f_0}(\gamma_n(t))=- \langle\vec{\varepsilon},t\rangle_n$$ with $\langle\vec{\varepsilon},t\rangle_n$ defined in [(\[decompgamma\])]{}, and write $$\begin{aligned} \gamma(\hat{f}_{\hat m})-\gamma(f_{0})&\leqslant& \gamma(f_m)-\gamma(f_{0})+\mbox{pen}(m)-\mbox{pen}(\hat{m})\nonumber\\ &&+\overline{\gamma}_n(f_m)-\overline{\gamma}_n(f_0) +\overline{\gamma}_n(f_0)-\overline{\gamma}_n(f_{\hat m})+\overline{\gamma}_n(f_{\hat m})-\overline{\gamma}_n(\hat{f}_{\hat m}).\end{aligned}$$ In other words, $$\begin{aligned} \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_{\hat m}}^{(n)})&\leqslant& \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{ f_{m}}^{(n)})+\mbox{pen}(m)-\mbox{pen}(\hat{m})\nonumber \\ &&+\overline{\gamma}_n(f_m)-\overline{\gamma}_n(f_0) +\overline{\gamma}_n(f_0)-\overline{\gamma}_n(f_{\hat m})+\overline{\gamma}_n(f_{\hat m})-\overline{\gamma}_n(\hat{f}_{\hat m}).\label{decomp3}\end{aligned}$$ The proof of Theorem \[theo2\] can be decomposed into three steps: 1. \[biais1\] We prove that for $\epsilon > 0,$ $$\mathbb{E}_{f_0}\big[(\overline{\gamma}_n(f_m)-\overline{\gamma}_n(f_0)){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_{f}}(\delta)}\big] \leqslant \frac{\kappa^{\prime}(\rho,\delta,\Gamma,\epsilon)}{n^{(1+\epsilon)}}.$$ 2. \[deviation\] Let $\Omega_1(\xi)$ be the event $$\begin{aligned} \Omega_1(\xi)=\bigcap_{m^\prime\in \mathcal{M}}\left\{ \chi_{m^\prime}^{2}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_{f}}(\delta)}\right. \!\!\!&\leqslant& \!\!\!\left. \frac{2}{n}\vert m^\prime\vert+\frac{16}{n}\Big(1+\frac{\delta}{3}\Big)\sqrt{(L_{m^{\prime}}\vert m^{\prime}\vert+\xi)\vert m^\prime\vert}+\frac{8}{n}\Big(1+\frac{\delta}{3}\Big)(L_{m^{\prime}}\vert m^{\prime}\vert+\xi)\right\},\end{aligned}$$ where $(L_{m^{\prime}})_{m'\in{\mathcal M}}$ satisfies Condition (\[sigma\]) and $m_{f}$ is given by Definition \[def\_mf\].
For all $m^{\prime}$ in $\mathcal{M}$ we prove that on $\Omega_1(\xi)$ $$\begin{aligned} \label{chy2} \notag\Big( \overline{\gamma}_n(f_{m^{\prime}})-\overline{\gamma}_n(\hat{f}_{m^{\prime}})\Big){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_{f}}(\delta)} \leqslant &&\frac{1}{2n}\Big(\frac{1+\delta}{1-\delta}\Big)\vert m^{\prime}\vert \Big[2+\Big(1+\frac{\delta}{3}\Big)\Big(2\delta +8L_{m^\prime}+16\sqrt{L_{m^\prime}}\Big) \Big]\\ &&+\frac{4\xi}{n} \Big(\frac{1+\delta}{1-\delta}\Big)\Big(1+\frac{\delta}{3}\Big)\Big(1+\frac{4}{\delta}\Big)+\frac{1}{1+\delta} \mathcal{K}(\mathbb{P}_{f_{m^{\prime}}}^{(n)},\mathbb{P}_{\hat{f}_{m^{\prime}}}^{(n)}){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_{f}}(\delta)}, \end{aligned}$$ and $$\label{probchy2} \mathbb{P}(\Omega_1(\xi)^{c}) \leqslant 2\Sigma e^{-\xi}.$$ 3. \[biais2\] Let $\Omega_2(\xi)$ be the event $$\Omega_2(\xi)=\bigcap_{m^\prime\in \mathcal{M}}\left[(\overline{\gamma}_n(f_0)-\overline{\gamma}_n(f_{m^{\prime}})) \leqslant \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_{m^\prime}}^{(n)})-2 h^2(\mathbb{P}_{f_0}^{(n)}, \mathbb{P}_{f_{m^\prime}}^{(n)})+\frac{2}{n}(L_{m^{\prime}}\vert m^{\prime}\vert+\xi)\right].$$ We prove that $\mathbb{P}(\Omega_2(\xi)^{c}) \leqslant \Sigma e^{-\xi}.$ Now, we will prove the result of Theorem \[theo2\] using (R-\[biais1\]), (R-\[deviation\]) and (R-\[biais2\]).\ According to [(\[decomp3\])]{}, we can write $$\begin{aligned} \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_{\hat m}}^{(n)}) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}&\leqslant& \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mbox{pen}(m)-\mbox{pen}(\hat{m})\\ &&+(\overline{\gamma}_n(f_m)-\overline{\gamma}_n(f_0)) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}+(\overline{\gamma}_n(f_0)-\overline{\gamma}_n(f_{\hat m})){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}+(\overline{\gamma}_n(f_{\hat m})-\overline{\gamma}_n(\hat{f}_{\hat m})) {{{{1}}\hspace{-1,1mm}{\mathrm
I}}}_{\Omega_{m_f}(\delta)}.\end{aligned}$$ Combining (R-\[deviation\]) and (R-\[biais2\]) with $m^{\prime}=\hat{m}$, we infer that on $\Omega_1(\xi)\bigcap \Omega_2(\xi)$ $$\begin{aligned} \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_{\hat m}}^{(n)}) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}&\leqslant& \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mbox{pen}(m)-\mbox{pen}(\hat{m})+(\overline{\gamma}_n(f_m)-\overline{\gamma}_n(f_0)) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}\\ &&+\frac{1}{2n}\Big(\frac{1+\delta}{1-\delta}\Big)\vert\hat{m}\vert \Big[2+\Big(1+\frac{\delta}{3}\Big)\Big(2\delta +8L_{\hat{m}}+16\sqrt{L_{\hat{m}}}\Big) \Big]+2L_{\hat{m}}\frac{\vert \hat{m}\vert}{n}\\ &&+\frac{4\xi}{n}\Big[\frac{1}{2}+\Big(\frac{1+\delta}{1-\delta}\Big)\Big(1+\frac{\delta}{3}\Big)\Big(1+\frac{4}{\delta}\Big)\Big]\\ &&+\Big[\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_{\hat{m}}}^{(n)})-2 h^2(\mathbb{P}_{f_0}^{(n)}, \mathbb{P}_{f_{\hat{m}}}^{(n)})+\frac{1}{1+\delta} \mathcal{K}(\mathbb{P}_{f_{\hat{m}}}^{(n)},\mathbb{P}_{\hat{f}_{\hat{m}}}^{(n)})\Big]{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_{f}}(\delta)}.\end{aligned}$$ This implies that $$\begin{aligned} \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_{\hat m}}^{(n)}) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}&\leqslant& \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mbox{pen}(m)-\mbox{pen}(\hat{m})+(\overline{\gamma}_n(f_m)-\overline{\gamma}_n(f_0)) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}\\ && + \frac{\vert\hat{m}\vert}{n}\Big[\Big(\frac{1+\delta}{1-\delta}\Big)+\Big(\frac{\delta(1+\delta)^{2}}{1-\delta}\Big) +\Big(\frac{(1+\delta)^{2}}{1-\delta}\Big)\Big(6L_{\hat{m}}+8\sqrt{L_{\hat{m}}}\Big)\Big]\\ &&+\frac{4\xi}{n}\Big[\frac{1}{2}+\Big(\frac{1+\delta}{1-\delta}\Big)\Big(1+\frac{\delta}{3}\Big)\Big(1+\frac{4}{\delta}\Big)\Big]\\ 
&&+\Big[\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_{\hat{m}}}^{(n)})-2 h^2(\mathbb{P}_{f_0}^{(n)}, \mathbb{P}_{f_{\hat{m}}}^{(n)})+\frac{1}{1+\delta} \mathcal{K}(\mathbb{P}_{f_{\hat{m}}}^{(n)},\mathbb{P}_{\hat{f}_{\hat{m}}}^{(n)})\Big]{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_{f}}(\delta)}.\end{aligned}$$ Since $$\Big\{\Big(\frac{1+\delta}{1-\delta}\Big)(1+\delta(1+\delta))\vee \Big(\frac{(1+\delta)^{2}}{1-\delta}\Big)\Big\} \leqslant C(\delta) \mbox{ with } C(\delta):=\Big(\frac{1+\delta}{1-\delta}\Big)^{3},$$ we infer $$\begin{aligned} \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_{\hat m}}^{(n)}) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}&\leqslant& \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mbox{pen}(m)-\mbox{pen}(\hat{m})+(\overline{\gamma}_n(f_m)-\overline{\gamma}_n(f_0)) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}\\ && +\frac{\vert\hat{m}\vert}{n}C(\delta) \Big[1+6L_{\hat{m}}+8\sqrt{L_{\hat{m}}}\Big]+\frac{4\xi}{n}\Big[\frac{1}{2}+\Big(\frac{1+\delta}{1-\delta}\Big)\Big(1+\frac{\delta}{3}\Big)\Big(1+\frac{4}{\delta}\Big)\Big]\\ &&+\Big[\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_{\hat{m}}}^{(n)})-2 h^2(\mathbb{P}_{f_0}^{(n)}, \mathbb{P}_{f_{\hat{m}}}^{(n)})+\frac{1}{1+\delta} \mathcal{K}(\mathbb{P}_{f_{\hat{m}}}^{(n)},\mathbb{P}_{\hat{f}_{\hat{m}}}^{(n)})\Big]{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_{f}}(\delta)}.\end{aligned}$$ Using the Pythagorean-type identity $\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_{\hat m}}^{(n)})=\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_{\hat{m}}}^{(n)})+\mathcal{K}(\mathbb{P}_{f_{\hat{m}}}^{(n)},\mathbb{P}_{\hat{f}_{\hat m}}^{(n)})$ (see Equation (7.42) in Massart [-@massart2007]) we have $$\begin{aligned} \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_{\hat m}}^{(n)}) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}&\leqslant&
\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mbox{pen}(m)-\mbox{pen}(\hat{m})+(\overline{\gamma}_n(f_m)-\overline{\gamma}_n(f_0)) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}\\ && +\frac{\vert\hat{m}\vert}{n}C(\delta) \Big[1+6L_{\hat{m}}+8\sqrt{L_{\hat{m}}}\Big]+\frac{4\xi}{n}\Big[\frac{1}{2}+\Big(\frac{1+\delta}{1-\delta}\Big)\Big(1+\frac{\delta}{3}\Big)\Big(1+\frac{4}{\delta}\Big)\Big]\\ &&+\Big[\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat f_{\hat{m}}}^{(n)})-2 h^2(\mathbb{P}_{f_0}^{(n)}, \mathbb{P}_{f_{\hat{m}}}^{(n)})-\frac{\delta}{1+\delta} \mathcal{K}(\mathbb{P}_{f_{\hat{m}}}^{(n)},\mathbb{P}_{\hat{f}_{\hat{m}}}^{(n)})\Big]{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_{f}}(\delta)}.\end{aligned}$$ Now, we successively use - the relation between Kullback-Leibler information and the Hellinger distance $ \mathcal{K}(\mathbb{P}_{f_{\hat{m}}}^{(n)},\mathbb{P}_{\hat{f}_{\hat{m}}}^{(n)})\geq 2 h^2(\mathbb{P}_{f_{\hat{m}}}^{(n)},\mathbb{P}_{\hat{f}_{\hat{m}}}^{(n)})$ (see Lemma 7.23 in Massart [-@massart2007]), - and the inequality $ h^2(\mathbb{P}_{f_0}^{(n)}, \mathbb{P}_{\hat{f}_{\hat{m}}}^{(n)})\leqslant2[h^2(\mathbb{P}_{f_0}^{(n)}, \mathbb{P}_{f_{\hat{m}}}^{(n)})+ h^2(\mathbb{P}_{f_{\hat{m}}}^{(n)}, \mathbb{P}_{\hat{f}_{\hat{m}}}^{(n)})]$.
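These two inequalities can be checked numerically in the Bernoulli case relevant here. The sketch below (plain Python, with the convention $h^2=1-\mbox{affinity}$, so that $h^2\leqslant 1$; the parameter grid is illustrative) is only a sanity check, not part of the proof:

```python
import math

def kl_bern(p, q):
    # Kullback-Leibler divergence K(Ber(p), Ber(q))
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def hell2_bern(p, q):
    # squared Hellinger distance h^2 = 1 - affinity
    return 1.0 - (math.sqrt(p * q) + math.sqrt((1 - p) * (1 - q)))

grid = [i / 20 for i in range(1, 20)]

# K >= 2 h^2 (Lemma 7.23 in Massart)
ok_kl = all(kl_bern(p, q) >= 2 * hell2_bern(p, q) - 1e-12
            for p in grid for q in grid)

# h^2(P0,P2) <= 2 [h^2(P0,P1) + h^2(P1,P2)]
# (h is a metric, then (a+b)^2 <= 2 a^2 + 2 b^2)
ok_tri = all(hell2_bern(p, r) <= 2 * (hell2_bern(p, q) + hell2_bern(q, r)) + 1e-12
             for p in grid for q in grid for r in grid)
```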
Consequently, on $\Omega_1(\xi)\bigcap \Omega_2(\xi)$ $$\begin{aligned} \frac{\delta}{1+\delta}h^{2}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{\hat{f}_{\hat m}}^{(n)}) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}&\leqslant& \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mbox{pen}(m)-\mbox{pen}(\hat{m})+(\overline{\gamma}_n(f_m)-\overline{\gamma}_n(f_0)) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}\\ && +\frac{\vert\hat{m}\vert }{n} C(\delta) \Big[1+6L_{\hat{m}}+8\sqrt{L_{\hat{m}}}\Big]+\frac{4\xi}{n} \Big[\frac{1}{2}+\Big(\frac{1+\delta}{1-\delta}\Big)\Big(1+\frac{\delta}{3}\Big)\Big(1+\frac{4}{\delta}\Big)\Big].\end{aligned}$$ Since $\mbox{pen}(\hat{m})\geq \mu \vert \hat{m}\vert\Big[1+6L_{\hat{m}}+8\sqrt{L_{\hat{m}}}\Big]/n$, taking $\mu=C(\delta)$ yields that on $\Omega_1(\xi)\bigcap \Omega_2(\xi)$ $$\begin{aligned} h^{2}(\mathbb{P}_{f_0},\mathbb{P}_{\hat{f}_{\hat m}}) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}&\leqslant& \frac{2\mu^{1/3}}{\mu^{1/3}-1}\Big( \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mbox{pen}(m)+(\overline{\gamma}_n(f_m)-\overline{\gamma}_n(f_0)) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}\Big)+ \frac{\xi}{n} C_1(\mu).\end{aligned}$$ Then, using that $$\mathbb{P}(\Omega_1(\xi)^{c}\cup \Omega_2(\xi)^{c}) \leqslant 3\Sigma e^{-\xi},$$ we deduce that $\mathbb{P}(\Omega_1(\xi)\cap \Omega_2(\xi))\geq 1-3\Sigma e^{-\xi}.$ We now integrate with respect to $\xi$ and use (R-\[biais1\]) to write that $$\begin{aligned} \mathbb{E}_{f_0}\Big[h^{2}(\mathbb{P}_{f_0},\mathbb{P}_{\hat{f}_{\hat m}}) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}\Big]&\leqslant& \frac{2\mu^{1/3}}{\mu^{1/3}-1}\Big( \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mbox{pen}(m)\Big)+ \frac{\kappa_1(\rho,\mu,\Gamma,\epsilon)}{n^{(1+\epsilon)}}+\frac{C_2(\mu,\Sigma)}{n} .\end{aligned}$$ Furthermore, since $h^{2}(\mathbb{P}_{f_0},\mathbb{P}_{\hat{f}_{\hat m}})\leqslant1,$ by
applying Inequality (\[step3\]) we have, $$\mathbb{E}_{f_0}\Big[h^{2}(\mathbb{P}_{f_0},\mathbb{P}_{\hat{f}_{\hat m}}) {{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}^{c}(\delta)}\Big]\leq\frac{\kappa_2(\rho,\mu,\Gamma,\epsilon)}{n^{(1+\epsilon)}}.$$ Hence we conclude that $$\begin{aligned} \mathbb{E}_{f_0}\Big[h^{2}(\mathbb{P}_{f_0},\mathbb{P}_{\hat{f}_{\hat m}})\Big]&\leqslant& \frac{2\mu^{1/3}}{\mu^{1/3}-1}\Big( \mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{f_m}^{(n)})+\mbox{pen}(m)\Big)+ \frac{\kappa_3(\rho,\mu,\Gamma,\epsilon)}{n^{(1+\epsilon)}}+\frac{C_2(\mu,\Sigma)}{n} ,\end{aligned}$$ and minimizing over $\mathcal{M}$ leads to the result of Theorem \[theo2\].\ We now come to the proofs of (R-\[biais1\]), (R-\[deviation\]) and (R-\[biais2\]).\ $\bullet$ Proof of (R-\[biais1\])\ We know that $$\begin{aligned} \Big\vert \mathbb{E}_{f_0}\Big[(\overline{\gamma}_n(f_m)-\overline{\gamma}_n(f_0)){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}\Big] \Big\vert\!\!\!&=&\!\!\! \Big\vert \mathbb{E}_{f_0}\Big[(\overline{\gamma}_n(f_m)-\overline{\gamma}_n(f_0)){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}^{c}(\delta)}\Big] \Big\vert\\ \!\!\!&\leq&\!\!\! \mathbb{E}_{f_0}\Big[\frac{1}{n}\sum_{i=1}^{n}\Big\{\Big\vert\epsilon_{i}\log\{\frac{\pi_{f_{m}}(x_i)}{\pi_{f_{0}}(x_i)}\} \Big\vert + \Big\vert \epsilon_i \log\{\frac{1-\pi_{f_m}(x_i)}{1-\pi_{f_0}(x_i)}\}\Big\vert\Big\}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}^{c}(\delta)}\Big]\\ \!\!\!&\leq&\!\!\! 
2\log\left\lbrace\frac{1}{\rho}\right\rbrace\mathbb{P}(\Omega_{m_f}^{c}(\delta)).\end{aligned}$$ We conclude the proof of (R-\[biais1\]) by using Inequality (\[step3\]), which implies that $$\Big\vert \mathbb{E}_{f_0}\Big[(\overline{\gamma}_n(f_m)-\overline{\gamma}_n(f_0)){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}\Big] \Big\vert\leq 2\log\left\lbrace\frac{1}{\rho}\right\rbrace\frac{\kappa(\rho,\delta,\Gamma,\epsilon)}{n^{(1+\epsilon)}}=\frac{\kappa^{\prime}(\rho,\delta,\Gamma,\epsilon)}{n^{(1+\epsilon)}}.$$ $\bullet$ Proof of (R-\[deviation\])\ We start with the proof of (\[chy2\]): $$\begin{aligned} \overline{\gamma}_n(f_{m^{\prime}})-\overline{\gamma}_n(\hat{f}_{m^{\prime}}) &=& -\frac{1}{n}\sum_{i=1}^{n}\Big\{\epsilon_{i}\log\Big(\frac{\pi_{f_{m^{\prime}}}(x_i)}{\pi_{\hat{f}_{m^{\prime}}}(x_i)}\Big) - \epsilon_i \log\Big(\frac{1-\pi_{f_{m^{\prime}}}(x_i)}{1-\pi_{\hat{f}_{m^{\prime}}}(x_i)}\Big)\Big\}\\ &=& -\frac{1}{n}\sum_{J\in m^{\prime}}\Big(\sum_{i\in J}\epsilon_{i}\Big)\Big[\frac{\sqrt{\vert J \vert\pi_{f_{m^{\prime}}}^{(J)}}}{\sqrt{\vert J \vert\pi_{f_{m^{\prime}}}^{(J)}}}\log\Big(\frac{\pi_{f_{m^{\prime}}}^{(J)}}{\pi_{\hat{f}_{m^{\prime}}}^{(J)}}\Big) - \frac{\sqrt{\vert J \vert (1-\pi_{f_{m^{\prime}}}^{(J)})}}{\sqrt{\vert J \vert (1-\pi_{f_{m^{\prime}}}^{(J)})}} \log\Big(\frac{1-\pi_{f_{m^{\prime}}}^{(J)}}{1-\pi_{\hat{f}_{m^{\prime}}}^{(J)}}\Big)\Big].\end{aligned}$$ By the Cauchy-Schwarz inequality, we have $$\begin{gathered} \overline{\gamma}_n(f_{m^{\prime}})-\overline{\gamma}_n(\hat{f}_{m^{\prime}}) \leq \sqrt{\frac{1}{n}\sum_{J\in m^{\prime}}\vert J\vert \Big[ \pi_{f_{m^{\prime}}}^{(J)}\log^{2}{\Big(\frac{\pi_{\hat{f}_{m^{\prime}}}^{(J)}}{\pi_{f_{m^{\prime}}}^{(J)}}\Big)}+(1-\pi_{f_{m^{\prime}}}^{(J)})\log^{2}\Big({\frac{1-\pi_{\hat{f}_{m^{\prime}}}^{(J)}}{1-\pi_{f_{m^{\prime}}}^{(J)}}\Big)}\Big]}\\ \\ \times\sqrt{\frac{1}{n}\sum_{J \in m^{\prime}}\Big[\frac{\Big(\sum_{i\in J}\epsilon_{i}\Big)^{2}}{\vert J \vert\pi_{f_{m^{\prime}}}^{(J)}}
+\frac{\Big(\sum_{i\in J}\epsilon_{i}\Big)^{2}}{\vert J \vert(1-\pi_{f_{m^{\prime}}}^{(J)})}\Big]} ~ ,\end{gathered}$$ in other words $$\begin{aligned} \overline{\gamma}_n(f_{m^{\prime}})-\overline{\gamma}_n(\hat{f}_{m^{\prime}}) \leq \sqrt{\mathcal{X}^{2}_{m^{\prime}}}\times \sqrt{V^{2}(\pi_{f_{m^{\prime}}},\pi_{\hat{f}_{m^{\prime}}})},\end{aligned}$$ where $\mathcal{X}^{2}_{m^{\prime}}$ and $V^{2}(\pi_{f_{m^{\prime}}},\pi_{\hat{f}_{m^{\prime}}})$ are defined in [(\[chideux\])]{} and [(\[V\])]{}, respectively. Using both the inequality $2xy\leqslant\theta x^{2}+ \theta^{-1}y^{2}$ for all $x>0$, $y>0$, with $\theta=(1+\delta)/(1-\delta)$, and Inequality (\[encadrK\]), we obtain on $\Omega_{m_{f}}(\delta)$ that $$\begin{aligned} \overline{\gamma}_n(f_{m^{\prime}})-\overline{\gamma}_n(\hat{f}_{m^{\prime}}) \!\!\!&\leq&\!\!\! \frac{1}{2}\Big(\frac{1+\delta}{1-\delta}\Big) \chi^{2}_{m^{\prime}}+\frac{1}{1+\delta} \mathcal{K}(\mathbb{P}_{f_{m^{\prime}}}^{(n)},\mathbb{P}_{\hat{f}_{m^{\prime}}}^{(n)}).
\end{aligned}$$ Consequently, on $\Omega_{1}(\xi)$ $$\begin{aligned} (\overline{\gamma}_n(f_{m^{\prime}})-\overline{\gamma}_n(\hat{f}_{m^{\prime}})){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_{f}}(\delta)} &\leq&\frac{1}{2n}\Big(\frac{1+\delta}{1-\delta}\Big) \Big[2\vert m^\prime\vert+16\Big(1+\frac{\delta}{3}\Big)\sqrt{(L_{m^{\prime}}\vert m^{\prime}\vert+\xi)\vert m^\prime\vert}+8\Big(1+\frac{\delta}{3}\Big)(L_{m^{\prime}}\vert m^{\prime}\vert+\xi)\Big]\\ &+&\frac{1}{1+\delta} \mathcal{K}(\mathbb{P}_{f_{m^{\prime}}}^{(n)},\mathbb{P}_{\hat{f}_{m^{\prime}}}^{(n)}){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_{f}}(\delta)}.\end{aligned}$$ Using the inequalities $\vert x+y\vert^{1/2}\leqslant\vert x\vert^{1/2}+ \vert y \vert^{1/2}$ and $2xy\leqslant\theta x^{2}+ \theta^{-1}y^{2}$ with $\theta=\delta/4$, we infer that [(\[chy2\])]{} follows, since $$\begin{aligned} (\overline{\gamma}_n(f_{m^{\prime}})-\overline{\gamma}_n(\hat{f}_{m^{\prime}})){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_{f}}(\delta)} \!\!\!&\leq&\!\!\!\frac{1}{2n}\Big(\frac{1+\delta}{1-\delta}\Big) \Big[2\vert m^\prime\vert+\Big(1+\frac{\delta}{3}\Big)\Big(16\sqrt{L_{m^{\prime}}}\vert m^{\prime}\vert+8L_{m^{\prime}}\vert m^\prime\vert+2\delta\vert m^{\prime}\vert\Big)\\ \!\!\!&&\!\!\!
+8\xi\Big(1+\frac{\delta}{3}\Big)(1+\frac{4}{\delta})\Big]+\frac{1}{1+\delta} \mathcal{K}(\mathbb{P}_{f_{m^{\prime}}}^{(n)},\mathbb{P}_{\hat{f}_{m^{\prime}}}^{(n)}){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_{f}}(\delta)}\\ \!\!\!&\leq&\frac{1}{2n}\Big(\frac{1+\delta}{1-\delta}\Big)\vert m^{\prime}\vert \Big[2+\Big(1+\frac{\delta}{3}\Big)\Big(2\delta +8L_{m^\prime}+16\sqrt{L_{m^\prime}}\Big) \Big]\\ \!\!\!&&+\frac{ 4\xi}{n}\Big(\frac{1+\delta}{1-\delta}\Big)\Big(1+\frac{\delta}{3}\Big)\Big(1+\frac{4}{\delta}\Big)+\frac{1}{1+\delta} \mathcal{K}(\mathbb{P}_{f_{m^{\prime}}}^{(n)},\mathbb{P}_{\hat{f}_{m^{\prime}}}^{(n)}){{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_{f}}(\delta)}.\end{aligned}$$ $\bullet$ Proof of [(\[probchy2\])]{}:\ Write $\mathcal{X}_{m^{\prime}}^2=\sum_{J\in m^{\prime}}\{Z_{1,J}+Z_{2,J}\},$ where $$Z_{1,J}=\frac{1}{n}\frac{(\sum_{k\in J}\varepsilon_k)^2}{\vert J\vert\pi_{f_{m^{\prime}}}^{(J)}}~~ \mbox{and}~~ Z_{2,J}=\frac{1}{n}\frac{(\sum_{k\in J}\varepsilon_k)^2}{\vert J\vert (1-\pi_{f_{m^{\prime}}}^{(J)})}.$$ We will control $\sum_{J\in m^{\prime}}Z_{1,J}$ and $\sum_{J\in m^{\prime}}Z_{2,J}$ separately. In order to use the Bernstein inequality (see Theorem \[Bernstein\]), we need an upper bound on $\sum_{J\in m^{\prime}}\mathbb{E}[Z_{1,J}^{p}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}]$, for every $p\geq 2$.
By definition $$\begin{aligned} \mathbb{E}[Z_{1,J}^{p}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}]&=&\frac{1}{\Big(n\vert J\vert\pi_{f_{m^{\prime}}}^{(J)}\Big)^{p}}\int_{0}^{\infty}2px^{2p-1} \mathbb{P}\Big(\Big\{\vert\sum_{k\in J}\varepsilon_{k}\vert\geq x\Big\}\cap \Omega_{m_f}(\delta)\Big)dx.\end{aligned}$$ For every $m^{\prime}$ constructed on the grid $m_f$, for all $J\in m^{\prime}$, on $\Omega_{m_f}(\delta)\cap \Big\{x\leqslant\vert\sum_{k\in J}\varepsilon_{k}\vert\Big\},$ we have $$x\leqslant\vert\sum_{k\in J}\varepsilon_{k}\vert\leqslant\delta \sum_{k\in J}\pi_{f_0}(x_k).$$ Combining the previous inequality and the Bernstein inequality [(\[casborne\])]{} with the fact that $\vert\varepsilon_k\vert\leqslant1$, we infer that $$\begin{aligned} \mathbb{E}[Z_{1,J}^{p}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}]&\leq&\frac{1}{\Big(n \sum_{k\in J}\pi_{f_{0}}(x_k)\Big)^{p}}\int_{0}^{\delta \sum_{k\in J}\pi_{f_0}(x_k)}2px^{2p-1} \mathbb{P}\Big(\vert\sum_{k\in J}\varepsilon_{k}\vert\geq x\Big)dx\\ &\leq& \frac{1}{\Big(n \sum_{k\in J}\pi_{f_{0}}(x_k)\Big)^{p}}\int_{0}^{\delta \sum_{k\in J}\pi_{f_0}(x_k)} 4px^{2p-1}\exp\Big(-\frac{x^{2}}{2\Big( \frac{x}{3}+\sum_{k\in J}\pi_{f_{0}}(x_k) \Big)}\Big)dx\\ &\leq& \frac{1}{\Big(n \sum_{k\in J}\pi_{f_{0}}(x_k)\Big)^{p}}\int_{0}^{\delta \sum_{k\in J}\pi_{f_0}(x_k)} 4px^{2p-1}\exp\Big(-\frac{x^{2}}{2\Big( 1+\frac{\delta}{3}\Big)\sum_{k\in J}\pi_{f_{0}}(x_k)}\Big)dx\\ &\leq&\frac{1}{n^{p}}2^{p+1}(1+\frac{\delta}{3})^{p}p\int_{0}^{\infty}t^{p-1}\exp(-t)dt\\ &\leq& \frac{1}{n^{p}}2^{p+1}p(1+\frac{\delta}{3})^{p}(p!).\end{aligned}$$ Consequently $$\sum_{J\in m^{\prime}}\mathbb{E}[Z_{1,J}^{p}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}]\leqslant\frac{1}{n^{p}}2^{p+1}p(1+\frac{\delta}{3})^{p}(p!)\times\vert m^{\prime}\vert .$$ Now, since $p\leqslant2^{p-1}$, we have $$\sum_{J\in m^{\prime}}\mathbb{E}[Z_{1,J}^{p}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}]\leqslant
\frac{p!}{2}\times\Big[\frac{32}{n^{2}}(1+\frac{\delta}{3})^{2}\vert m^{\prime}\vert \Big]\times\Big[\frac{4}{n}(1+\frac{\delta}{3})\Big]^{p-2} .$$ Using the Bernstein inequality and the fact that $\mathbb{E}\Big[\sum_{J\in m^{\prime}}Z_{1,J}\Big]\leqslant\vert m^{\prime}\vert/n$, we have that for every positive $x$ $$\mathbb{P}\Big(\sum_{J\in m^{\prime}}Z_{1,J}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}\geq \frac{\vert m^{\prime}\vert}{n} + \frac{8}{n}(1+\frac{\delta}{3})\sqrt{x\vert m^{\prime}\vert}+ \frac{4}{n}(1+\frac{\delta}{3})x\Big)\leqslant\exp(-x).$$ In the same way we prove that $$\mathbb{P}\Big(\sum_{J\in m^{\prime}}Z_{2,J}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}\geq \frac{\vert m^{\prime}\vert}{n} + \frac{8}{n}(1+\frac{\delta}{3})\sqrt{x\vert m^{\prime}\vert}+ \frac{4}{n}(1+\frac{\delta}{3})x\Big)\leqslant\exp(-x).$$ Hence $$\mathbb{P}\Big(\mathcal{X}_{m^{\prime}}^{2}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{\Omega_{m_f}(\delta)}\geq \frac{2\vert m^{\prime}\vert}{n} + \frac{16}{n}(1+\frac{\delta}{3})\sqrt{x\vert m^{\prime}\vert}+ \frac{8}{n}(1+\frac{\delta}{3})x\Big)\leqslant2\exp(-x),$$ and we conclude that $ \mathbb{P}(\Omega_1^{c}(\xi))\leqslant2\sum_{m^{\prime}}\exp(-L_{m^{\prime}}\vert m^{\prime}\vert-\xi)=2\Sigma e^{-\xi}.$ This ends the proof of (R-\[deviation\]). $\bullet$ Proof of (R-\[biais2\])\ Recall that $\overline{\gamma}_n(f)=\gamma_n(f)-\mathbb{E}(\gamma_n(f))$ for every $f$.
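The exponential-moment computation used in the Chernoff argument that follows rests on the per-observation identity $\mathbb{E}\big[(\pi_g/\pi_{f_0})^{Y/2}((1-\pi_g)/(1-\pi_{f_0}))^{(1-Y)/2}\big]=\sqrt{\pi_g\pi_{f_0}}+\sqrt{(1-\pi_g)(1-\pi_{f_0})}$ for $Y\sim\mathcal{B}(\pi_{f_0})$, together with $\log(1-u)\leqslant -u$. A minimal numerical check (plain Python; the parameter grid is illustrative, and this is only a sanity check, not part of the proof):

```python
import math

def affinity_moment(p0, pg):
    # exact expectation over Y ~ Bernoulli(p0) of
    # (pg/p0)^(Y/2) * ((1-pg)/(1-p0))^((1-Y)/2)
    return p0 * math.sqrt(pg / p0) + (1 - p0) * math.sqrt((1 - pg) / (1 - p0))

def bhattacharyya(p0, pg):
    # affinity sqrt(p0 pg) + sqrt((1-p0)(1-pg)) = 1 - per-point Hellinger term
    return math.sqrt(p0 * pg) + math.sqrt((1 - p0) * (1 - pg))

grid = [i / 20 for i in range(1, 20)]

# the moment equals the Bhattacharyya affinity
ok_id = all(abs(affinity_moment(p0, pg) - bhattacharyya(p0, pg)) < 1e-12
            for p0 in grid for pg in grid)

# log affinity <= -(1/2)[(sqrt(p0)-sqrt(pg))^2 + (sqrt(1-p0)-sqrt(1-pg))^2]
ok_log = all(math.log(bhattacharyya(p0, pg))
             <= -0.5 * ((math.sqrt(p0) - math.sqrt(pg)) ** 2
                        + (math.sqrt(1 - p0) - math.sqrt(1 - pg)) ** 2) + 1e-12
             for p0 in grid for pg in grid)
```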
By Markov's inequality, for $b>0$, $$\begin{aligned} \mathbb{P}((\overline{\gamma}_n(f_{0})-\overline{\gamma}_n(g))\geq b) &=&\mathbb{P}\Big(\exp\Big(\frac{n}{2}(\overline{\gamma}_n(f_{0})-\overline{\gamma}_n(g))\Big)\geq \exp\Big(\frac{nb}{2}\Big)\Big)\\ &\leq& \exp\Big(\frac{-nb}{2}\Big)\mathbb{E}\Big[\exp\Big(\frac{n}{2}(\overline{\gamma}_n(f_{0})-\overline{\gamma}_n(g))\Big)\Big]\\ &=& \exp\Big[\frac{-nb}{2}+\log\mathbb{E}\Big[\exp\Big(\frac{n}{2}\Big(\gamma_n(f_{0})-\gamma_n(g)\Big)+\frac{n}{2}\mathbb{E}\Big[\gamma_n(g)-\gamma_n(f_{0}) \Big]\Big)\Big]\Big]\\ &\leq& \exp\Big[\frac{-nb}{2}+\frac{n}{2}\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{g}^{(n)})+\log\mathbb{E}\Big[\exp\Big(\frac{n}{2}\Big(\gamma_n(f_{0})-\gamma_n(g)\Big)\Big)\Big]\Big].\end{aligned}$$ Now, $$\begin{aligned} \log\mathbb{E}\Big[\exp\Big(\frac{n}{2}\Big(\gamma_n(f_{0})-\gamma_n(g)\Big)\Big)\Big]&=& \log \mathbb{E}\Big[\exp\Big( \frac{1}{2}\sum_{i=1}^{n}\Big\{Y_i\log\Big(\frac{\pi_{g}(x_i)}{\pi_{f_0}(x_i)}\Big)+(1-Y_i)\log\Big(\frac{1-\pi_{g}(x_i)}{1-\pi_{f_0}(x_i)}\Big)\Big\}\Big) \Big]\\ &=& \log \mathbb{E}\Big[\prod_{i=1}^{n}\Big\{\Big(\frac{\pi_{g}(x_i)}{\pi_{f_0}(x_i)}\Big)^{Y_i/2}\times\Big(\frac{1-\pi_{g}(x_i)}{1-\pi_{f_0}(x_i)}\Big)^{(1-Y_i)/2}\Big\} \Big]\\ &=& \log\prod_{i=1}^{n}\Big\{\sqrt{\frac{\pi_{g}(x_i)}{\pi_{f_0}(x_i)}}\pi_{f_0}(x_i)+\sqrt{\frac{1-\pi_{g}(x_i)}{1-\pi_{f_0}(x_i)}}(1-\pi_{f_0}(x_i))\Big\} \\ &=&\sum_{i=1}^{n}\log\Big\{\sqrt{\pi_{g}(x_i)\pi_{f_0}(x_i)}+\sqrt{(1-\pi_{g}(x_i))(1-\pi_{f_0}(x_i))}\Big\}.\end{aligned}$$ In other words, $$\begin{gathered} \log\mathbb{E}\Big[\exp\Big(\frac{n}{2}\Big(\gamma_n(f_{0})-\gamma_n(g)\Big)\Big)\Big]=\\ \sum_{i=1}^{n}\log\Big\{ 1-\frac{1}{2}\Big[\Big(\sqrt{\pi_{f_0}(x_i)}-\sqrt{\pi_{g}(x_i)}\Big)^{2} +\Big(\sqrt{1-\pi_{f_0}(x_i)}-\sqrt{1-\pi_{g}(x_i)}\Big)^{2}\Big]\Big\}.\end{gathered}$$ This implies that $$\begin{aligned} \log\mathbb{E}\Big[\exp\Big(\frac{n}{2}\Big(\gamma_n(f_{0})-\gamma_n(g)\Big)\Big)\Big] &\leq&
\sum_{i=1}^{n}-\frac{1}{2}\Big[\Big(\sqrt{\pi_{f_0}(x_i)}-\sqrt{\pi_{g}(x_i)}\Big)^{2} +\Big(\sqrt{1-\pi_{f_0}(x_i)}-\sqrt{1-\pi_{g}(x_i)}\Big)^{2}\Big]\\ &=& -nh^{2}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{g}^{(n)}).\end{aligned}$$ Consequently $$\mathbb{P}(\overline{\gamma}_n(f_{0})-\overline{\gamma}_n(g)\geq b)\leqslant\exp\Big[\frac{-nb}{2}+ \frac{n}{2}\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{g}^{(n)})-nh^{2}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{g}^{(n)})\Big],$$ and, if, for positive $x$, we choose $$b= \frac{2x}{n}+\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{g}^{(n)})-2h^{2}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{g}^{(n)})>0,$$ we have $$\mathbb{P}\Big(\overline{\gamma}_n(f_{0})-\overline{\gamma}_n(g)\geq \frac{2x}{n}+\mathcal{K}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{g}^{(n)})-2h^{2}(\mathbb{P}_{f_0}^{(n)},\mathbb{P}_{g}^{(n)})\Big)\leqslant\exp(-x).$$ We conclude that $ \mathbb{P}(\Omega_2^{c}(\xi))\leqslant\sum_{m^{\prime}}\exp(-L_{m^{\prime}}\vert m^{\prime}\vert-\xi)\leq\Sigma e^{-\xi},$ which ends the proof of (R-\[biais2\]). Appendix {#S6} ======== \[appendix\] Proof of Lemma \[Projhisto\]. ----------------------------- By definition $$f_m=\arg\min_{f \in S_m} \left[ \sum_{i=1}^n \log(1+\exp(f(x_i)))-\pi_{f_0}(x_i)f(x_i) \right].$$ For all $f\in S_m$, for all $J\in m$ and for all $x\in J$, we have $f(x)=f^{(J)}$. Hence $f_m(x)=\overline{f}_m^{(J)}$ for all $x$ in $J$, and for all $J$ in $m$, we aim at finding $\overline{f}_m^{(J)}$ such that $$\overline{f}_m^{(J)}=\arg\min_{f^{(J)} } \left[ |J|\log(1+\exp(f^{(J)}))-\sum_{i\in J}\pi_{f_0}(x_i)f^{(J)} \right]$$ where $|J|=\mbox{card} \{i\in\{1,...,n\} ; x_i\in J\}$.
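This one-dimensional minimization is strictly convex and can be checked numerically. The sketch below (plain Python; the cell size $|J|$ and the value of $\sum_{i\in J}\pi_{f_0}(x_i)$ are hypothetical) verifies that the minimizer is the logit of the cell mean, so that the fitted probability is the cell average of $\pi_{f_0}$:

```python
import math

def cell_objective(f, card_J, S):
    # |J| log(1 + e^f) - S f, the per-cell contribution to the contrast
    return card_J * math.log(1.0 + math.exp(f)) - S * f

card_J, S = 7, 2.3   # hypothetical |J| and S = sum_{i in J} pi_{f0}(x_i)

# closed-form candidate: logit of the cell mean S/|J|
f_star = math.log((S / card_J) / (1.0 - S / card_J))

# crude grid search around the candidate confirms it is the minimizer
grid = [f_star + (k - 500) * 1e-3 for k in range(1001)]
f_hat = min(grid, key=lambda f: cell_objective(f, card_J, S))

# the fitted probability is the cell average of pi_{f0}
pi_hat = 1.0 / (1.0 + math.exp(-f_hat))
```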
Easy calculations show that the coefficient $\overline{f}^{(J)}_m$ satisfies $$\vert J\vert \frac{\exp(\overline{f}_m^{(J)})}{1+\exp (\overline{f}_m^{(J)})}-\sum_{i\in J}\pi_{f_0}(x_i)=0,$$ that is $$\begin{aligned} \label{projhisto}\overline{f}_m^{(J)}=\log \left( \frac{ \sum_{i\in J}\pi_{f_0}(x_i) }{\vert J \vert (1-\sum_{i\in J}\pi_{f_0}(x_i) / \vert J \vert ) }\right).\end{aligned}$$ Consequently, $\pi_{f_m}$ defined as in [(\[pif\])]{} satisfies that $\pi_{f_m}(x)=\pi_{f_m}^{(J)}$ for all $x\in J$, where $$\pi_{f_m}^{(J)}=\frac{1}{\vert J\vert}\sum_{i\in J}\pi_{f_0}(x_i),$$ and hence $\pi_{f_m}=\arg\min_{t\in S_m}\parallel t-\pi_{f_0}\parallel_n$ is the usual projection of $\pi_{f_0}$ onto $S_m=<\Phi_j, j\in m>.$ In the same way, $\hat{f}_m$ defined by [(\[fchapD\])]{} satisfies $\hat f_m(t)=\hat{f}_m^{(J)} $ for all $t\in J$, where $$\hat{f}_m^{(J)}=\log \left( \frac{ \sum_{i\in J}Y_i }{\vert J \vert (1-\sum_{i\in J}Y_i/ \vert J \vert ) }\right).$$ In other words, $\pi_{\hat f_m}$, defined as $\pi_f$ with $f$ replaced by $\hat f_m$, satisfies $\pi_{\hat f_m}(x)=\pi_{\hat f_m}^{(J)}$, $ x\in J$, with $$\pi_{\hat f_m}^{(J)}=\frac{1}{\vert J\vert}\sum_{i\in J}Y_i.$$ Proof of Lemma \[lm\]. ---------------------- In the following, for the sake of notation simplicity, we will use $\gamma(\beta)$ for $\gamma(f_{\beta})$.
A second-order Taylor expansion of the function $\gamma(\cdot)$ around $\beta^{*}$ gives for any $\beta\in \Lambda_m$ $$\begin{gathered} \gamma(\beta)=\gamma(\beta^{*})+\nabla_{\beta}\gamma(\beta^{*})(\beta-\beta^{*}) \\+ \int_{0}^{1}(1-t)\sum_{i_1+\dots+i_D=2}\frac{2!}{i_1!\dots i_D!}(\beta_1-\beta_{1}^{*})^{i_1}\dots (\beta_D-\beta_{D}^{*})^{i_D}\frac{\partial^{2}\gamma}{\partial\beta_{1}^{i_1}\dots\partial\beta_{D}^{i_D}}(\beta^{*}+t(\beta-\beta^{*}))dt .\end{gathered}$$ An easy calculation shows that $$\begin{aligned} && \sum_{i_1+\dots+i_D=2}\frac{2!}{i_1!\dots i_D!}(\beta_1-\beta_{1}^{*})^{i_1}\dots (\beta_D-\beta_{D}^{*})^{i_D}\frac{\partial^{2}\gamma}{\partial\beta_{1}^{i_1}\dots\partial\beta_{D}^{i_D}}(\beta^{*}+t(\beta-\beta^{*}))\\ &=&\sum_{j=1}^{D}\frac{1}{n}\sum_{i=1}^{n}\psi_{j}^{2}(x_{i})(\beta_{j}-\beta^{*}_{j})^{2}\pi\left(f_{\beta^{*}+t(\beta-\beta^{*})}(x_i)\right)\left[1-\pi\left(f_{\beta^{*}+t(\beta-\beta^{*})}(x_i)\right)\right]\\ &+&2\sum_{l\neq k}\frac{1}{n}\sum_{i=1}^{n}\psi_{l}(x_{i})\psi_{k}(x_i)(\beta_l-\beta_l^{*})(\beta_k-\beta_k^{*})\pi\left(f_{\beta^{*}+t(\beta-\beta^{*})}(x_i)\right)\left[1-\pi\left(f_{\beta^{*}+t(\beta-\beta^{*})}(x_i)\right)\right]\\ &=&\frac{1}{n}\sum_{i=1}^{n}\pi\left(f_{\beta^{*}+t(\beta-\beta^{*})}(x_i)\right)\left[1-\pi\left(f_{\beta^{*}+t(\beta-\beta^{*})}(x_i)\right)\right](f_\beta(x_i)-f_{\beta^{*}}(x_i))^{2}.\end{aligned}$$ This implies that $$\gamma(\beta)\geq\gamma(\beta^{*})+\nabla_\beta\gamma(\beta^{*})(\beta-\beta^{*})+\frac{\mathcal{U}_{0}^{2}}{2}\lVert f_{\beta}-f_{\beta^{*}}\rVert^2_{n}.$$ Since $\beta^{*}$ is the minimizer of $\gamma(\cdot)$ over the set $\Lambda_m$, we have $\nabla_\beta\gamma(\beta^{*})(\beta-\beta^{*})\geq 0$ for all $\beta \in \Lambda_m$. Thus the result follows. Proof of Lemma \[control\] -------------------------- Let $S_{D}$ and $S_{D^{\prime}}$ be two vector spaces of dimension $D$ and $D^{\prime}$ respectively.
Set $S=S_{D}\cap \mathbb{L}_\infty(C_0)+S_{D^{\prime}}\cap \mathbb{L}_\infty(C_0)$ and let $\vec{\varepsilon}^{\prime}$ be an independent copy of $\vec{\varepsilon}.$ Set $$\label{Z} Z=\sup_{u\in S}\frac{\langle\vec{\varepsilon} ,u\rangle_n}{\parallel u \parallel_n} , \mbox{ and for all }i=1,\dots,n, \quad Z^{(i)}=\sup_{u\in S}\frac{1}{\parallel u \parallel_n}\left(\frac{1}{n}\sum_{k\neq i}\varepsilon_{k}u(x_{k})+ \varepsilon_{i}^{\prime}u(x_{i}) \right).$$ By the Cauchy-Schwarz inequality, the supremum in (\[Z\]) is achieved at $\Pi_S(\vec{\varepsilon}).$ Consequently, $$Z-Z^{(i)}\leqslant \frac{(\varepsilon_{i}-\varepsilon_{i}^{\prime})\,\Pi_S(\vec{\varepsilon})(x_i)}{n\parallel\Pi_S(\vec{\varepsilon})\parallel_n}, \qquad \mbox{ and }\qquad \mathbb{E}_{f_0}[(Z-Z^{(i)})^{2}|\vec{\varepsilon}] \leq \mathbb{E}_{f_0}\left[\frac{(\varepsilon_{i}-\varepsilon_{i}^{\prime})^{2}[\Pi_S(\vec{\varepsilon})(x_i)]^2}{n^2\parallel\Pi_S(\vec{\varepsilon})\parallel_n^2}|\vec{\varepsilon}\right]$$ with $$\begin{aligned} \mathbb{E}_{f_0}\left[\frac{(\varepsilon_{i}-\varepsilon_{i}^{\prime})^{2}[\Pi_S(\vec{\varepsilon})(x_i)]^2}{n^2\parallel\Pi_S(\vec{\varepsilon})\parallel_n^2}|\vec{\varepsilon}\right] &=&\frac{[\Pi_S(\vec{\varepsilon})(x_i)]^2}{n^2\parallel\Pi_S(\vec{\varepsilon})\parallel_n^2}\mathbb{E}_{f_0}\left[(\varepsilon_{i}-\varepsilon_{i}^{\prime})^{2}|\vec{\varepsilon} \right]\\&=&\frac{[\Pi_S(\vec{\varepsilon})(x_i)]^2}{n^2\parallel\Pi_S(\vec{\varepsilon})\parallel_n^2}\left( \varepsilon_i^2+\mathbb{E}_{f_0}(\varepsilon_i^2) \right)\leq \frac{5[\Pi_S(\vec{\varepsilon})(x_i)]^2}{4n^2\parallel\Pi_S(\vec{\varepsilon})\parallel_n^2} .\end{aligned}$$ This implies that $$\sum_{i=1}^{n}\mathbb{E}_{f_0}[(Z-Z^{(i)})^{2}{{{{1}}\hspace{-1,1mm}{\mathrm I}}}_{Z>Z^{(i)}}|\vec{\varepsilon}] \leq \frac{5}{4n}.$$ We now apply Lemma \[Bouch\] from Boucheron *et al.* [-@boucheron], which is recalled here.
\[Bouch\] Let $X_{1} ,\dots, X_{n}$ be independent random variables taking values in a measurable space $\mathcal{X}$. Denote by $X_{1}^{n}$ the vector of these $n$ random variables. Set $Z=f(X_{1},\dots,X_{n})$   and   $Z^{(i)}=f(X_{1},\dots,X_{i-1},X_{i}^{\prime},X_{i+1},\dots,X_{n}),$ where $X_{1}^{\prime} ,\dots, X_{n}^{\prime}$ denote independent copies of $X_{1} ,\dots, X_{n}$ and $f:\mathcal{X}^{n} \rightarrow \mathbb{R}$ is a measurable function. Assume that there exists a positive constant $c$ such that $\mathbb{E}_{f_0}\left[\sum_{i=1}^{n}(Z-Z^{(i)})^{2}\mathds{1}_{Z> Z^{(i)}}|X_{1}^{n}\right]\leqslant c$. Then for all $t > 0$, $$\mathbb{P}_{f_0} (Z>\mathbb{E}_{f_0}(Z)+t)\leqslant e^{-t^{2}/4c}.$$ Applying Lemma \[Bouch\] to $Z$ defined in [(\[Z\])]{}, we obtain that for all $x>0$, $$\mathbb{P}_{f_0}\left(\sup_{u\in S}\frac{\langle\vec{\varepsilon} ,u\rangle_n}{\parallel u \parallel_n}> \mathbb{E}_{f_0}\left[\sup_{u\in S}\frac{\langle\vec{\varepsilon} ,u\rangle_n}{\parallel u \parallel_n}\right]+\sqrt{\frac{5x}{n}}\right)\leqslant \exp{(-x)}.$$ Let $\{ \psi_1,\dots,\psi_{D+D^{\prime}}\}$ be an orthonormal basis of $S_{D}+S_{D^{\prime}}$. Using Jensen’s inequality, we write $$\begin{aligned} \mathbb{E}_{f_0}\left[\sup_{u\in S}\frac{\langle\vec{\varepsilon} ,u\rangle_n}{\parallel u \parallel_n}\right]= \mathbb{E}_{f_0}(\parallel\Pi_S(\vec{\varepsilon})\parallel_n)&=\mathbb{E}_{f_0}\left[\left(\sum_{k=1}^{D+D^{\prime}}(\langle\vec{\varepsilon} ,\psi_k\rangle_n)^2\right)^{1/2}\right]\\ &\leq\left(\sum_{k=1}^{D+D^{\prime}}\mathbb{E}_{f_0}\left[(\langle\vec{\varepsilon} ,\psi_k\rangle_n)^2\right]\right)^{1/2}\\ &\leqslant \sqrt{\frac{ D+D^{\prime}}{4n}}.\end{aligned}$$ This concludes the proof of Lemma \[control\].
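The final chain of bounds can be checked numerically. The sketch below is not part of the proof: the design points, basis, sample size, and noise law are arbitrary stand-ins. It draws centered Bernoulli-type errors with variance $1/4$ and verifies that the empirical mean of $\sup_{u\in S}\langle\vec{\varepsilon},u\rangle_n/\parallel u\parallel_n=\parallel\Pi_S(\vec{\varepsilon})\parallel_n$ stays below $\sqrt{(D+D^{\prime})/4n}$ (here the supremum is taken over the full subspace, ignoring the $\mathbb{L}_\infty(C_0)$ truncation).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5  # sample size and dim(S_D + S_D')

# Orthonormal basis for the empirical inner product <u,v>_n = (1/n) sum u_i v_i
A = rng.standard_normal((n, d))
Q, _ = np.linalg.qr(A)      # columns orthonormal for the standard inner product
Psi = Q * np.sqrt(n)        # now (1/n) * Psi.T @ Psi = identity

# eps_i = B_i - 1/2 with B_i ~ Bernoulli(1/2): centered, variance 1/4
trials = 4000
norms = np.empty(trials)
for t in range(trials):
    eps = rng.integers(0, 2, size=n) - 0.5
    coords = (Psi.T @ eps) / n               # <eps, psi_k>_n for each k
    norms[t] = np.sqrt(np.sum(coords ** 2))  # = ||Pi_S(eps)||_n

bound = np.sqrt(d / (4 * n))
print(norms.mean(), bound)  # empirical mean should not exceed the bound
```

The gap between the empirical mean and the bound is the Jensen slack in the last display.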
--- abstract: 'We present a sample of 407 $z\sim3$ Lyman break galaxies (LBGs) to a limiting isophotal $u$-band magnitude of 27.6 mag in the Hubble Ultra Deep Field (UDF). The LBGs are selected using a combination of photometric redshifts and the $u$-band drop-out technique enabled by the introduction of an extremely deep $u$-band image obtained with the Keck I telescope and the blue channel of the Low Resolution Imaging Spectrometer. The Keck $u$-band image, totaling 9 hrs of integration time, has a 1$\sigma$ depth of $30.7$ mag arcsec$^{-2}$, making it one of the most sensitive $u$-band images ever obtained. The $u$-band image also substantially improves the accuracy of photometric redshift measurements of $\sim50\%$ of the $z\sim3$ LBGs, significantly reducing the traditional degeneracy of colors between $z\sim3$ and $z\sim0.2$ galaxies. This sample provides the most sensitive, high-resolution multi-filter imaging of reliably identified $z\sim3$ LBGs for morphological studies of galaxy formation and evolution and the star formation efficiency of gas at high redshift.' author: - 'Marc Rafelski, Arthur M. Wolfe, Jeff Cooke, Hsiao-Wen Chen, Taft E. Armandroff, & Gregory D. Wirth' bibliography: - 'lbgbib.bib' title: | Deep Keck $u$-band imaging of the Hubble Ultra Deep Field:\ A catalog of $z\sim3$ Lyman Break Galaxies\ --- Introduction ============ The Hubble Ultra Deep Field [UDF; @Beckwith:2006p1529] provides the most sensitive high-resolution images ever taken, yielding a unique data set for studying galaxy evolution. These data have contributed to many scientific advances, including constraining the star formation efficiency of gas at $z\sim3$ [@Wolfe:2006p474], aiding in determining the luminosity function in the redshift range $4\lesssim z \lesssim 6$ [@Bouwens:2007p4335], bringing insight into the merger fractions of galaxies [@Conselice:2008p5047], and yielding the discovery of clumpy galaxies at high redshift [@Elmegreen:2007p1537]. 
Knowledge of galaxy redshifts is essential to understanding their nature, and various approaches are used to estimate this key attribute. A number of studies identify objects in the redshift range $4\lesssim z \lesssim 6$ using the deep multi-filter (BVIZJH) UDF images by identifying so-called “dropout” galaxies which are detectable in certain broadband filters but not in others [@Beckwith:2006p1529; @Bouwens:2006p4586; @Bouwens:2007p4335]. Others use photometric redshifts derived across the entire redshift range $0 < z \lesssim 6$ by analyzing the colors of galaxies in a wider range of filters [@Coe:2006p1519]. This latter approach has the potential to provide relatively accurate [$\sigma_{\Delta z/(1+z)} \lesssim 0.1$, @FernandezSoto:2001p2773] redshift estimates for a large number of galaxies in a given field, but traditionally has serious problems producing accurate results near $z\sim3$, a redshift range for which key spectral energy distribution (SED) features fall blueward of the previously available filter set. The majority of galaxies observed at $z\sim3$ are Lyman break galaxies (LBGs): star-forming galaxies that are selected based on the break in their SED at the 912 Å Lyman limit, caused primarily by absorption from interstellar gas intrinsic to the galaxy, as well as a flux decrement shortward of 1216 Å in the galaxy rest frame due to absorption by the Lyman series of optically thick hydrogen gas along the line of sight. This Lyman limit discontinuity in the SED significantly dims these galaxies shortward of $\sim3500$ Å, allowing them to be found with the U-band dropout technique [@Steidel:1992p1911; @Steidel:1995p1873; @Steidel:1996p5981; @Steidel:1996p5985]. Previously, the available observations in the UDF did not include deep $u$-band imaging. The next prominent broadband signature in the SED is the 4000 Å break, which is redshifted to the near infrared (IR) for $z\sim3$ LBGs.
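The wavelength bookkeeping behind these statements is simply $\lambda_{\mathrm{obs}}=(1+z)\lambda_{\mathrm{rest}}$; a minimal sketch for $z=3$:

```python
# Observed-frame wavelengths of the key SED breaks for a z ~ 3 galaxy.
def observed_wavelength(rest_angstrom: float, z: float) -> float:
    """lambda_obs = (1 + z) * lambda_rest."""
    return (1.0 + z) * rest_angstrom

z = 3.0
breaks = {"Lyman limit (912 A)": 912.0,
          "Ly-alpha (1216 A)": 1216.0,
          "4000 A break": 4000.0}
for name, lam in breaks.items():
    print(f"{name}: {observed_wavelength(lam, z):.0f} A observed")
# At z = 3 the Lyman limit lands at 3648 A (inside a u-band filter),
# while the 4000 A break is redshifted to 16000 A, i.e. the near IR.
```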
Without the $u$-band or very deep IR coverage, it is very difficult to determine from broadband imaging alone whether an observed decrement in the SED near observed-frame $\sim4800$ Å arises from a low-redshift galaxy with a 4000 Å break or from a high-redshift galaxy with a decrement from the 1216 Å break. This degeneracy causes “catastrophic” errors in the photometric redshifts of galaxies at $z\sim3$ without $u$-band data [@Ellis:1997p3771; @FernandezSoto:1999p2784; @Benitez:2000p3572]. The purpose of this paper is to present a reliable sample of LBGs at $z\sim3$ in the UDF through the introduction of ultra-deep $u$-band imaging acquired with the 10 m Keck I telescope. The Keck I telescope and the Low Resolution Imaging Spectrometer [LRIS; @Oke:1995p6046; @McCarthy:1998p6102] form an ideal combination to probe galaxies at the $z\sim3$ epoch due to the light-gathering power of the 10 m primary mirror and the outstanding efficiency of the LRIS blue channel in the near UV. This allows the $u$-band filter (with effective wavelength $\lambda_{o} \sim3400$ Å, FWHM$\sim690$ Å) used with the blue arm (LRIS-B) to be $\sim300$ Å bluer and $\sim360$ Å wider than the $u$-band filter on the Visible Multi-Object Spectrograph [VIMOS; @LeFevre:2003p8774] instrument ($\lambda_{o} \sim3700$ Å, FWHM$\sim330$ Å) at the Very Large Telescope (VLT) and still be effective. Consequently, this enables us to probe the Lyman break efficiently to a lower limit of $z\sim2.5$, versus the VIMOS limit of $z\sim2.9$ [as shown by the VIMOS observations of GOODS-South by @Nonino:2009p9926]. In addition, although the UDF field can only be observed at high air mass from Mauna Kea, the total throughput of LRIS-B and its bluer $u$-band is approximately twice that of VIMOS with its redder $u$-band. This gives Keck the unique ability to select lower redshift LBGs via their Lyman break.
We assemble a reliable sample of LBGs in the UDF with a combination of the $u$-band dropout technique and photometric redshifts, and provide photometric redshifts for all galaxies with good $u$-band photometry. We find that the $u$-band imaging improves the photometric redshifts of $z\sim3$ galaxies by reducing the degeneracy between low and high-redshift galaxies. Furthermore, the combination of deep LRIS $u$-band and high-resolution multiband Hubble Space Telescope (HST) imaging provides an unprecedented view of $z\sim3$ LBGs to improve our understanding of their highly irregular rest-frame UV morphologies [@Law:2007p5043] down to unprecedented depths. The deep, high-resolution LBG sample will also extend current constraints on the star formation efficiency of gas at $z\sim3$. Reservoirs of neutral gas are needed to provide the fuel for star formation. Damped Ly${\alpha}$ systems (DLAs), selected for their neutral hydrogen column densities of $N_{\rm H I} \geq2\times10^{20}$cm$^{-2}$, dominate the neutral-gas content of the universe in the redshift interval $0<z<5$. DLAs contain enough gas to account for 50% of the mass content of visible matter in modern galaxies [see @Wolfe:2005p382 for a review] and may act as neutral-gas reservoirs for star formation, since stars form when local values of the ${\rm H~I}$ column density, $N_{\rm H I}$, exceed a critical value. At high redshift, the star formation rate (SFR) is assumed to follow the Kennicutt-Schmidt (KS) law (established for nearby galaxies) which states how the SFR per unit physical area, $\dot{\Sigma_{*}}$, relates to the neutral gas (i.e., ${\rm H~I}$ and ${\rm H_2}$) column density: $\dot{\Sigma_{*}} \propto N_H^{1.4}$ [@Kennicutt:1998p3174]. Strong DLAs with $N_{\rm H I} \geq1.6\times10^{21}$cm$^{-2}$ in the redshift range $2.5\lesssim z \lesssim 3.5$ are predicted to have emission from star formation in the rest-frame FUV redshifted into the optical such that they are detectable in the UDF F606W image. 
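To make the scales concrete, the KS law can be evaluated at the strong-DLA threshold column density quoted above. The sketch below uses the local @Kennicutt:1998p3174 normalization $\dot{\Sigma}_{*}=2.5\times10^{-4}\,(\Sigma_{\rm gas}/{\rm M_\odot\,pc^{-2}})^{1.4}\,{\rm M_\odot\,yr^{-1}\,kpc^{-2}}$ and ignores helium; treat it as an illustrative order-of-magnitude estimate, not a calculation from this paper:

```python
# Illustrative Kennicutt-Schmidt estimate at the strong-DLA column density.
M_H = 1.673e-24      # hydrogen atom mass [g]
PC_CM = 3.086e18     # parsec [cm]
M_SUN = 1.989e33     # solar mass [g]

def gas_surface_density(n_hi_cm2: float) -> float:
    """Convert an H I column density [cm^-2] to Msun/pc^2 (H only, no helium)."""
    return n_hi_cm2 * M_H * PC_CM**2 / M_SUN

def ks_sfr_density(sigma_gas: float, a: float = 2.5e-4, n: float = 1.4) -> float:
    """Kennicutt (1998) law: SFR surface density [Msun/yr/kpc^2]."""
    return a * sigma_gas**n

sigma = gas_surface_density(1.6e21)  # strong-DLA threshold
print(sigma, ks_sfr_density(sigma))  # ~12.8 Msun/pc^2 and ~9e-3 Msun/yr/kpc^2
```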
@Wolfe:2006p474 searched for low surface-brightness emission from DLAs in the UDF and found the [*in situ*]{} SFR efficiency of DLAs to be less than 5$\%$ of the KS law. In other words, star formation must occur at much lower rates in DLAs at $z\sim3$ than in modern galaxies. Whereas the @Wolfe:2006p474 results set sensitive upper limits on [*in situ*]{} star formation in DLAs excluding known galaxy regions, no such limits exist for DLAs containing LBGs. The sample presented here enables a search for spatially extended low-surface-brightness emission around $z\sim3$ LBGs which will yield constraints on the star formation efficiency at high redshift (Rafelski et al. in prep). This is one of the main motivations for constructing this sample, and therefore we construct our sample conservatively to minimize the number of potential interlopers. We present our catalog of $z\sim3$ LBGs, provide photometric redshifts for the entire sample of objects that have reliable $u$-band photometry, and make the $u$-band image available to the public. The observations are described in §2, and the data reduction and analysis in §3. We discuss the photometric selection of $z\sim3$ galaxies in §4, and summarize our major findings in §5. Throughout this paper, we adopt the AB magnitude system and an $(\Omega_M, \Omega_\Lambda, h)=(0.3,0.7,0.7)$ cosmology with parameters $\Gamma=0.21, n=1$, which are largely consistent with recent values [@Hinshaw:2009p8215]. Observations ============ The $u$-band ($\lambda_{o} \sim3400$ Å, FWHM$\sim690$ Å) images of the UDF ($\alpha(J2000) = 03^{h}32^{m} 39^{s}$, $\delta(J2000) = -27^{\circ}$47$\tt'$29.$\tt''1$) were obtained with the 10 m Keck I telescope using the LRIS. The $u$-band data were taken with the new Cassegrain Atmospheric Dispersion Corrector [Cass ADC; @Phillips:2006p6047] to minimize image distortions from differential atmospheric refraction. 
The Cass ADC was critical to the success of these observations because of the low elevation of the UDF from Mauna Kea, and the blue wavelength of our primary band. The data also benefit from the backside-illuminated, dual UV-optimized Marconi $2048\times4096$ pixel CCDs on LRIS-B, with 0.$\tt''$135 pixels and very high UV quantum efficiency ($\sim$ 50% at $\lambda_{o} \sim3450$ Å). Because the quantum efficiency of the two CCD chips varies $\sim$30%–35% in the $u$-band, we placed the UDF entirely on the more sensitive chip (CCD1). We used a dichroic beam splitter (D460) to simultaneously observe the $u$-band on the blue side, and the $V$-band and $R$-band on the red side. The red channel data were taken for astrometric and photometric consistency checks, and were not used in this study other than for calibration purposes due to the much deeper and higher resolution UDF images available over those wavelength ranges. The observations were carried out in dark time over two runs, 2007 October 7–9 (three half nights) and 2007 December 3–4 (two half nights). We lost the entire first night of the first run to weather, and had moderate weather and seeing conditions throughout both runs that yielded a median seeing FWHM of $\sim$1.$\tt''$3 in the $u$-band. In order to maximize time on sky, we adopted the dither strategy of @Sawicki:2005p1714 [see their Eqn. 1], setting the red channel exposure times such that the last readout of the red channel coincided with the end of the blue channel readout. The $u$-band images were acquired as a series of $36\times900$s exposures to avoid the nonlinear regime of the CCD, totaling 9 hrs of integration time on target. We executed a nine-point dither pattern with 10$\tt''$ dithers to deal with bad pixels and to create a super-sky flat. The UDF can only be observed at large air masses from Mauna Kea, which can affect the shape of the $u$-band throughput.
We show a histogram of our observed air masses and the variation of the filter throughput for different air masses in Figure \[airmass\]. We find that the variations in air masses in our sample do not significantly affect the blue-side cutoff of our filter. Specifically, we find that the variation between our best and worst air mass yields a change in $\lambda_{o}$ of $\lesssim 10$ Å, with typical changes in $\lambda_{o}$ being $\lesssim 5$ Å. Given the small variation of $\lambda_{o}$, we derive the final $u$-band filter transmission curve by convolving the measured filter throughput with the atmospheric attenuation at our average air mass of 1.57 and the CCD quantum efficiency. Throughout the paper we utilize the $B$, $V$, $i^\prime$, and $z^\prime$ band (F435W, F606W, F775W, and F850LP, respectively) observations of the UDF [@Beckwith:2006p1529], obtained with the Wide Field Camera (WFC) on the HST Advanced Camera for Surveys [ACS; @Ford:2002p6197]. These images cover 12.80 arcmin$^{2}$, although we prune our catalog to the central 11.56 arcmin$^{2}$ which contains at least half the average depth of the whole image and overlaps our $u$-band image with uniform depth. In addition to these ACS images, we also include observations taken with the NICMOS camera NIC3 in the $J$ and $H$ bands [F110W and F160W; @Thompson:2006p1569]. These red wavelengths cover the central 5.76 arcmin$^{2}$ of the UDF, so we only use them wherever the field of view (FOV) overlaps. Figure \[filter\] plots the total throughput of the filters used in this paper: the Keck LRIS-B $u$-band, the HST ACS $B$, $V$, $i^\prime$, and $z^\prime$ bands, and the HST NICMOS $J$ and $H$ bands; the plotted curves include the CCD quantum efficiency and atmospheric attenuation. Data Reduction and Analysis =========================== Image processing ---------------- The LRIS-B data were processed with a combination of custom IDL code and standard data reduction algorithms from IRAF[^1].
The images were bias subtracted, first from the overscan region and then from separate bias frames to remove any residuals, before being trimmed to remove any vignetted regions of LRIS. Super-sky flats were created using IRAF from the median of all the unregistered images with sigma clipping to remove objects and cosmic rays, which were then used to flat-field the images. These prove superior to dome and twilight sky flats in determining the CCD response in the $u$-band because of the short wavelengths, and yield excellent flats. Dithering offsets were determined using custom code and $\tt SExtractor$ [@Bertin:1996p6133] to locate bright objects common to all the images. In order to [*drizzle*]{} [@Fruchter:2002p6141] only once and keep correlated noise to a minimum, the images were distortion corrected using the solution provided by J. Cohen & W. Huang (private communication, February 2008) and shifted to correct for the dithering offsets all at once using the $\tt geotran$ package in IRAF. We drizzled with $\tt pixfrac = 0.5$ to improve the point-spread function (PSF) and set the pixel scale such that the pixels are integer multiples of the UDF pixels, 0.$\tt''$12, for reasons explained in §3.4. To maximize the signal to noise (S/N), the images were weighted by their inverse variances. The drizzled images were then combined using the IRAF task $\tt imcombine$, with sigma clipping to remove bad pixels and cosmic rays. These images were trimmed to include regions of uniform depth (11.56 arcmin$^{2}$), normalized to an effective exposure time of 1s, and then background subtracted using the global background determined with $\tt SExtractor$. An astrometric solution was applied with the IRAF task $\tt ccmap$ by matching bright and nearly unresolved objects in the UDF to those in the final stacked image. The final rms astrometric errors are between 0.$\tt''$02 and 0.$\tt''$03, negligible in comparison to the 1.$\tt''$3 FWHM of the $u$-band PSF. 
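The inverse-variance weighting used before combining the drizzled exposures can be sketched in a few lines (the arrays below are toy stand-ins, not real data):

```python
import numpy as np

def ivar_combine(images, variances):
    """Inverse-variance weighted mean of registered exposures.
    For Gaussian noise this weighting maximizes the S/N of the stack."""
    images = np.asarray(images, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)[:, None, None]
    return (w * images).sum(axis=0) / w.sum(axis=0)

# Three fake 2x2 exposures of the same patch of sky with different depths:
imgs = [np.full((2, 2), 10.0), np.full((2, 2), 12.0), np.full((2, 2), 11.0)]
combined = ivar_combine(imgs, variances=[1.0, 4.0, 2.0])
print(combined[0, 0])  # weighted toward the lowest-variance exposure
```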
Photometric Calibration ----------------------- Moderate weather yielded no completely photometric night for our calibration, and we therefore calibrated to the Multi-wavelength Survey by Yale–Chile [MUSYC; @Gawiser:2006p2052], which covers the same part of the sky as our primary observations. We include all our filters ($u$, $V$, and $R$) for this calibration. All our calibrations are in the AB95 system of @Fukugita:1996p2320, hereafter referred to as AB magnitudes. We used the IRAF tasks $\tt phot$ and $\tt fitparams$ to solve for zero-point magnitudes, air-mass correction coefficients, and appropriate color correction coefficients. Specifically, we use the equation: $$\rm m = -2.5 \log_{10}(F) + Z - c X - Y, \label{eq:photcal}$$ where $F$ is the flux in counts/s, $Z$ is the zero-point magnitude, $c$ is the air-mass coefficient, $X$ is the air mass, and $Y$ is the color term. We allow a color term to account for any differences between the $U$-band filter used by MUSYC and the $u$-band filter in our observations, similar to the color term used by @Gawiser:2006p2052 to correct for their differences compared to the Johnson-Cousins filter set. The Galactic extinction of 0.0384 mag is subtracted from the zero-point magnitude, using the relation $A(u)=4.8E(B-V)$ interpreted from @Cardelli:1989p2011, where $E(B-V)=0.008$ [@Beckwith:2006p1529]. The final results for the $u$-band calibration are a zero-point magnitude $Z=27.80\pm0.03$, an air-mass coefficient term $c=0.41$, an average air mass $X=1.57$, and a color term $Y=(0.13\pm0.02)\times(U-B)_{AB}$. We double-check our calibration using multiple observed photometric standard stars [@Landolt:1992p6144] over a range of air masses, and the zero-point and air-mass correction coefficients are consistent with those found when calibrating to the MUSYC catalog. As a result, we are confident that the $u$-band image is well calibrated.
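With the fitted coefficients, Equation (\[eq:photcal\]) reduces to a one-line conversion from measured count rate to calibrated magnitude. A sketch using the $u$-band solution quoted above (the count rate and color below are made-up inputs for illustration):

```python
import math

def calibrated_mag(flux_counts_per_s: float, Z: float = 27.80, c: float = 0.41,
                   X: float = 1.57, color_coeff: float = 0.13,
                   u_minus_b: float = 0.0) -> float:
    """m = -2.5 log10(F) + Z - c*X - Y, with color term Y = 0.13*(U-B)_AB."""
    Y = color_coeff * u_minus_b
    return -2.5 * math.log10(flux_counts_per_s) + Z - c * X - Y

# A hypothetical source detected at 1 count/s with zero (U-B) color:
print(round(calibrated_mag(1.0), 2))  # 27.80 - 0.41*1.57 = 27.16
```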
Depth of the $u$-band Image --------------------------- It is useful to characterize the depth of the $u$-band image; however, different definitions exist to describe the sensitivity of an image. Two commonly quoted limits are presented here: a measurement of the sky fluctuations of the image, and a limiting magnitude corresponding to a 50% decrease in object counts through Monte Carlo simulations. The sky noise of the image is measured via the pixel-to-pixel rms fluctuations in the image, best measured by fitting a Gaussian to the histogram of all pixels without sources. Sources are identified with the program $\tt SExtractor$, with the threshold set such that the negative image has no detections. This yields a depth of 31.0 mag arcsec$^{-2}$, $1\sigma_{u}$ sky fluctuations. However, since the image is drizzled, correlated noise between the pixels is introduced. The theoretical increase in noise due to $\tt pixfrac = 0.5$ using equation 10 from @Fruchter:2002p6141 is 20%. Alternatively, the correlated noise can be estimated empirically with equation 2 from @FernandezSoto:1999p2784, which uses a covariance matrix to determine a small overestimate of the real error[^2]. This results in a slightly more conservative depth of 30.7 mag arcsec$^{-2}$, $1\sigma_{u}$ sky fluctuations, which is what we quote here. Additionally, to get a better sense of the usable depth of the image, a limiting magnitude is often quoted [e.g., @Chen:2002p2597; @Sawicki:2005p1714]. We define $u_{\mathrm{lim}}$ as the magnitude limit at which more than $50\%$ of the objects are detected. The best way to determine $u_{\mathrm{lim}}$ is through Monte Carlo simulations, which take into account both the sky surface brightness and the seeing in our image. Since our image PSF has more flux in the wings than a Gaussian, we plant both Gaussian objects and objects modeled to fit our PSF using a two-dimensional (2-D) Moffat profile[^3].
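The Monte Carlo logic behind a 50% completeness limit can be sketched with a simplified, aperture-level simulation: for each trial magnitude, add Gaussian aperture noise to the source flux and count the fraction recovered above the detection threshold. The zero point, noise level, and threshold below are arbitrary stand-ins, not the values measured for our image:

```python
import numpy as np

rng = np.random.default_rng(1)
ZP = 27.0        # stand-in zero point: a flux of 1 unit corresponds to 27.0 mag
sigma_ap = 1.0   # stand-in aperture noise [flux units]
k = 3.0          # detection threshold in sigma

def completeness(mag: float, trials: int = 20000) -> float:
    """Fraction of planted sources recovered above the k-sigma threshold."""
    flux = 10 ** (-0.4 * (mag - ZP))
    measured = flux + sigma_ap * rng.standard_normal(trials)
    return float(np.mean(measured > k * sigma_ap))

mags = np.arange(24.0, 27.5, 0.05)
fracs = np.array([completeness(m) for m in mags])
m_lim = mags[np.argmin(np.abs(fracs - 0.5))]

# Analytically, 50% completeness falls where flux = k * sigma_ap:
m_lim_analytic = ZP - 2.5 * np.log10(k * sigma_ap)
print(m_lim, m_lim_analytic)
```

The simulation recovers the analytic 50% crossing point to within the grid spacing, illustrating why the completeness curve, not the sky rms alone, defines $u_{\mathrm{lim}}$.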
Custom IDL code was used to extract bright unresolved objects in the $u$-band image, take the median of all these objects, and create a composite object stack. The composite image was then fit both by a Gaussian and by a 2-D Moffat profile using $\tt MPFIT$ [@Markwardt:2009p7396], with a modification to ensure that the wings of the profile go to 0 for the Moffat profile. We semi-randomly insert these objects with a range of fluxes into the $u$-band image. The locations of the planted objects are constrained such that they do not: 1) fall off the edges, 2) fall on a real detected object, and 3) fall on any previously planted objects. We find a total $u_{\mathrm{lim}}$ of 27.3 mag for the Gaussian and 27.2 mag for the Moffat profile (see Figure \[det\]), where the total magnitudes are based on $\tt SExtractor$’s $\tt mag\_auto$ apertures, which are Kron-like [@Kron:1980p9237] elliptical apertures corrected for possible contamination. While total magnitudes are generally reported, isophotal apertures are more appropriate for LBGs, and yield a $u_{\mathrm{lim}}$ of 27.7 mag for the Gaussian and 27.6 mag for the Moffat profile. This is one of the deepest $u$-band images ever obtained, with our sensitivity being similar to that reported in the Keck Deep Fields [@Sawicki:2005p1714]. This detection method does not match the detection method used in §3.4, and therefore does not constrain our detection efficiency of LBGs. As explained below, we use our prior knowledge of the positions of the sources, which yields a different completeness limit. However, this result gives the depth of our $u$-band image for comparison to other studies. Photometry through Template Fitting ----------------------------------- In order to obtain robust colors across images with varied PSFs, it is necessary to match apertures and correct for PSF differences.
If the difference is minor, then methods to apply aperture corrections to account for the variations are appropriate, such as the $\tt ColorPro$ software by @Coe:2006p1519. However, such algorithms do not perform well when the difference in the PSF FWHM is large, such as when combining the high-resolution data from the HST (0.$\tt''$09 FWHM) with the low-resolution images obtained in this study with Keck (1.$\tt''$3 FWHM). In this case, the uncertainties in aperture corrections are unreasonably large, and the low-resolution images are crowded such that objects overlap, making object definitions that are valid in both high- and low-resolution images difficult to determine. In order to avoid these uncertainties, we use the $\tt TFIT$ [@Laidler:2007p2733] template-fitting method that uses prior knowledge of the existence, locations, and morphologies of sources in the deeper high-resolution UDF images to improve the photometric measurements in our low-resolution $u$-band image. This method creates a template of every object by convolving each object in the high-resolution image with the PSF of the low-resolution image. These templates are then fit to the low-resolution image to determine the flux of all the objects in the $u$-band, relative to the flux in the UDF $V$-band image. We chose the $V$-band as the high-resolution reference image because it is closest in wavelength to the $u$-band, without being affected by the Ly$\alpha$ forest over the redshift interval $2.5\lesssim z \lesssim 3.5$. The result is a very robust color which relates every object in the high-resolution $V$-band image to the low-resolution $u$-band, avoiding the problem of aperture matching between the two images while intrinsically correcting for the PSF difference. Using the $V$-band flux, the color is converted to a $u$-band flux, which inherits the same isophotal aperture as the high-resolution $V$-band image.
The aperture correction used to obtain total fluxes for the $V$-band image is then also valid to obtain total fluxes for the $u$-band. For a more in-depth explanation, see @Laidler:2007p2733. This technique is similar to others in the literature [@FernandezSoto:1999p2784; @Labbe:2005p6209; @Shapley:2005p4909; @Grazian:2006p6244], with the original version based on @Papovich:2001p4910 [@Papovich:2004p6306]. We chose to use $\tt TFIT$ as it is publicly available, well documented, and carefully tested. As with the other methods, some constraints had to be met to use this algorithm. The first is that the pixel scale of the $u$-band image must be an integer multiple of the pixel scale of the UDF $V$-band image. This was accomplished by drizzling the images such that the pixel scale of the $u$-band is 4 times larger than the $V$-band, as mentioned in §3.1. The second and third requirements are that the images must not be rotated with respect to each other, and the corner of the $V$-band image must coincide with the corner of the $u$-band image. These two requirements were met by rotating and trimming the $V$-band image using IDL. The resultant image was compared to the original image, and the difference in photometry was negligible compared to the intrinsic uncertainties. We also improved our fit by source-weighting the rms map before providing it to the $\tt TFIT$ pipeline, as suggested by @Laidler:2007p2733. In order to avoid proliferating different catalogs with minor differences in object definitions, we adopt the object definitions of @Coe:2006p1519. These definitions include the catalogs of @Beckwith:2006p1529 and @Thompson:2006p1569, as well as detections performed on a white light image by @Coe:2006p1519.
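At its core, the template-fitting step described above is an ordinary linear least-squares problem: each high-resolution source, convolved with the low-resolution PSF, becomes a template whose amplitude is fit to the low-resolution pixels. A one-dimensional toy version (all profiles, fluxes, and noise levels are invented for illustration; this is not the $\tt TFIT$ implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.arange(200.0)

def gaussian(center: float, sigma: float) -> np.ndarray:
    g = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    return g / g.sum()  # unit total flux

# Two blended "high-resolution" sources and a broad low-resolution PSF.
profiles = [gaussian(90, 2.0), gaussian(105, 2.0)]
psf = gaussian(100, 10.0)

templates = np.column_stack(
    [np.convolve(p, psf, mode="same") for p in profiles])

# Simulated low-resolution image: true fluxes 5.0 and 2.0 plus noise.
true_flux = np.array([5.0, 2.0])
image = templates @ true_flux + 0.001 * rng.standard_normal(x.size)

fitted, *_ = np.linalg.lstsq(templates, image, rcond=None)
print(fitted)  # close to [5.0, 2.0] despite the heavy blending
```

Because both templates are fit simultaneously, the flux of each source is deblended even though the PSF-convolved profiles overlap heavily, which is exactly why the method sidesteps aperture matching.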
By using object definitions identical to those of @Coe:2006p1519, we can use the careful photometry for the $B$, $V$, $i^\prime$, $z^\prime$, $J$, and $H$ bands already determined, and compare our redshift determinations knowing we have used identical apertures. $\tt TFIT$ requires a $\tt SExtractor$ catalog of the $V$-band image as an input to the pipeline. The program $\tt sexseg$ [@Coe:2006p1519] was run on the segmentation map of @Coe:2006p1519 to provide the necessary information to $\tt TFIT$ while using the desired object definitions of @Coe:2006p1519. The segmentation map defines which pixels belong to each identified $V$-band object. The $\tt sexseg$ program forces $\tt SExtractor$ to run using a predefined segmentation map [for details, see @Coe:2006p1519]. $\tt TFIT$ also requires a representative 2-D model of the $u$-band PSF, and is sensitive to the quality of its construction. We use the same Moffat profile fit described in §3.3 to model the PSF of the $u$-band image, which $\tt TFIT$ then uses as the transfer kernel to convolve the $V$-band galaxy cutouts. We note that there is no significant spatial variation of the PSF across the field. In general, the higher the resolution and sensitivity of the high-resolution image, the better $\tt TFIT$ can model the sources for the low-resolution image. If an input catalog is not complete enough, then unmodeled objects can act as an unsubtracted background, slightly increasing the flux of all objects [@Laidler:2007p2733]. However, this has a limit, and eventually there are so many sources that are too faint to detect in the low-resolution image that galaxies are not well constrained given the substantial number of priors. This yields a large number of galaxies with unconstrained fluxes that increase the uncertainties of the nearby objects without yielding any new information.
The UDF $V$-band image has substantially higher resolution and is deeper than the $u$-band image, and therefore a limit was put on the faintest galaxy used as a prior in $\tt TFIT$. Only galaxies brighter than $V=29$ mag are included in the input catalog to $\tt TFIT$. The galaxies fainter than $V=29$ mag are too faint to be constrained by the $u$-band image, and only add noise to the $\tt TFIT$ results. We stress that this is a conservative cut, and does not introduce an unsubtracted background. The quality of the resulting photometric fits can be evaluated through Figure \[TFIT\], which depicts four panels: the $V$-band image from the UDF, the $u$-band image from Keck, the model image, and the residual image. The model and residual images are diagnostics produced by $\tt TFIT$, and are not used in the fitting process. The model image is a collage of the $V$-band galaxies convolved with the PSF of the $u$-band image, scaled by the $\tt TFIT$ flux measurement for each object. The residual image is the difference of the model and the $u$-band images. Ideally the residual image would be zero, but this is not the case (especially for bright objects), with multiple effects contributing to the imperfect residual. For instance, if the object in the $V$-band image is saturated, then it has the wrong profile for the $u$-band and leaves a residual. Alternatively, imperfections in the modeled PSFs when scaled to large flux measurements of bright objects will also leave a residual. This effect was minimized by using a source-weighted rms map, although photometry of the brightest objects is imperfect. In practice, it is very difficult to perfectly align two images, the distortion correction is not perfect, and images generally have spatially varying PSFs. To minimize these effects, $\tt TFIT$ does a “registration dance”, where it cross-correlates each region of the model with the corresponding region of the data to find any local shifts.
This registration dance was performed, which slightly improved the residual image and led to more robust photometry. Sample Selection ---------------- Our aim is to identify a sample of high-redshift galaxies that are suitable for constraining the star formation efficiency of gas at $z\sim3$. A large fraction of objects for which fluxes are measured with $\tt TFIT$ are too faint to yield sufficient information regarding the object’s redshift, and we therefore limit our sample to those objects with high S/N. We select objects based on their $V$-band magnitudes, since cuts in $u$-band would preferentially remove LBGs. The median $u$-band S/N of all objects decreases as a function of the $V$-band magnitude and drops below 3$\sigma$ at $V\geq27.6$ mag. We adopt this $V$-band magnitude cut to include the majority of high-S/N $u$-band objects, while removing S/N $<$ 3 objects. We note that this is a conservative 3$\sigma$ cut since most LBGs will not be detected in the $u$-band, reducing the overall median S/N. In addition to removing low-S/N objects, we wish to remove objects with photometry affected by nearby neighbors. $\tt TFIT$ can identify such objects with the covariance index diagnostic that uses the covariance matrix [@Laidler:2007p2733]. During the fitting of an object’s photometry as described in §3.4, $\tt TFIT$ uses the singular value decomposition routine to perform a chi-square ($\chi^2$) minimization. This yields a covariance matrix which is used to calculate uncertainties via the square root of the variance (the diagonal element), as well as the covariance (the off-diagonal elements) of all objects in the fit. The covariance index is the absolute value of the ratio of the off-diagonal and the diagonal elements [@Laidler:2007p2733]. The maximum value of the covariance index is saved, along with the corresponding object ID, and yields information about how an object’s photometry is affected by its most influential neighbor.
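The covariance index is just a ratio of covariance-matrix elements. A toy sketch of how it could be computed from a fit covariance matrix (the matrix below is invented; this is not the $\tt TFIT$ code):

```python
import numpy as np

def covariance_index(cov: np.ndarray):
    """For each object i, return max over j != i of |cov[i, j]| / cov[i, i],
    along with the index j of that most influential neighbor."""
    n = cov.shape[0]
    idx = np.empty(n)
    neighbor = np.empty(n, dtype=int)
    for i in range(n):
        ratios = np.abs(cov[i]) / cov[i, i]
        ratios[i] = -np.inf  # exclude the diagonal (variance) element
        neighbor[i] = int(np.argmax(ratios))
        idx[i] = ratios[neighbor[i]]
    return idx, neighbor

# Invented 3-object covariance matrix: object 0 is isolated,
# while objects 1 and 2 are strongly blended with each other.
cov = np.array([[1.00, 0.02, 0.01],
                [0.02, 0.50, 0.45],
                [0.01, 0.45, 0.60]])
idx, nb = covariance_index(cov)
print(idx, nb)  # indices ~[0.02, 0.9, 0.75], neighbors [1, 2, 1]
```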
Objects for which this ratio is much less than 1 are generally isolated objects, while objects with large covariance index values can have unreliable photometry. Multiple cuts are implemented to remove objects whose photometry has been significantly affected. First, all objects with a covariance index greater than 1 are cut because their measurements are not considered reliable. All remaining objects are kept if one of two conditions apply: either they have a covariance index less than 0.5, or they have a $V$-band flux greater than twice that of the nearest neighbor. This approach balances the desire for a large sample with the need to obtain reliable photometry. Lastly, we only consider objects that are detected in all four ACS bands ($B$, $V$, $i^\prime$, and $z^\prime$) to facilitate and improve color selection in §4.2. This requirement removes most galaxies at $z>4$, as they have little flux in the $B$ or redder bands. Table \[tab1\] lists the 1457 galaxies that are left after the $V$-band magnitude, covariance index, and $V$-band flux ratio cuts. Each entry includes the object ID (matching those from @Coe:2006p1519), R.A., decl., $u$-band magnitudes and uncertainties, and $u$-band S/N. It also lists information regarding the most significant neighbor, namely, its object ID, covariance index, $V$-band flux ratio, and separation distance. The $u$-band magnitude uncertainties include the uncertainties due to the $V$-band aperture correction made in @Coe:2006p1519, as the accuracy of our $u$-band magnitudes depends on the accuracy of the $V$-band magnitudes. Objects that are observed with a $u$-band flux less than 3$\sigma$ significance are considered undetected in the $u$-band and are assigned a 3$\sigma$ upper limit. The magnitude limit is set to: $$m_{3\sigma} = -2.5 \log_{10}(3\sigma_{\mathrm{TFIT}}) + Z_{\mathrm{LRIS}}, \label{eq:maglimit}$$ where $Z_{\mathrm{LRIS}}$ is the zero point magnitude. 
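Equation \[eq:maglimit\] translates directly into code. The sketch below applies the 3$\sigma$ detection test and falls back to the upper limit for undetected objects; the zero point value in the example is arbitrary, not the actual LRIS zero point.

```python
import math

def mag_3sigma_limit(sigma_tfit, zeropoint):
    """3-sigma upper limit of Eq. [eq:maglimit]: the faintest
    detectable flux (3 x the TFIT flux uncertainty) as a magnitude."""
    return -2.5 * math.log10(3.0 * sigma_tfit) + zeropoint

def u_magnitude(flux, sigma, zeropoint):
    """Return (magnitude, is_upper_limit); objects below 3-sigma
    significance are assigned the limit instead of a magnitude."""
    if flux < 3.0 * sigma:
        return mag_3sigma_limit(sigma, zeropoint), True
    return -2.5 * math.log10(flux) + zeropoint, False

# with an illustrative zero point of 27.0 mag:
mag, is_limit = u_magnitude(10.0, 1.0, 27.0)   # detected: 24.5 mag
lim, is_limit2 = u_magnitude(2.0, 1.0, 27.0)   # undetected: 3-sigma limit
```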
The error distribution from $\tt TFIT$ is normally distributed and the upper limits from $\tt TFIT$ are robust, as evaluated in @Laidler:2007p2733. Photometric Selection of $z\sim3$ Galaxies ========================================== Star-forming galaxies early in their history, such as LBGs, exhibit a clear break in their Spectral Energy Distribution (SED) at the 912 Å Lyman limit (Lyman break), as well as multiple absorption lines shortward of 1216 Å by the Lyman series. Photons bluer than the Lyman limit are not observed because the interstellar gas intrinsic to the galaxy and components of the foreground gas along the line of sight of the galaxy are optically thick at $\lambda \leq912$ Å. At a redshift of $\gtrsim2.5$, these spectral features are redshifted to optical wavelengths, with the Lyman break entering the $u$-band ($\sim$3500 Å). The ability to observe the Lyman break optically allows large samples of high-redshift galaxies to be identified based on multicolor photometry where LBGs are selected by a strong flux decrement shortward of the Lyman limit, and a continuum longward of rest frame Ly$\alpha$ (1215 Å) [e.g., @Steidel:1992p1911; @Steidel:1995p1873; @Steidel:1996p5981; @Steidel:1996p5985]. In addition, photometric redshift determination algorithms can estimate object redshifts using galaxy SED templates[^4], which include additional information other than the Lyman break, such as the slope of the SED, the Balmer break at 3646 Å, and the more pronounced 4000 Å break, due to the sudden onset of stellar photospheric opacity by ionized metals and the CaII HK doublet [@Hamilton:1985p8966]. Although color selection and photometric redshifts utilize the same SEDs for selecting $z\sim3$ galaxies, they each have their strengths and weaknesses. We use a combination of color selection and photometric redshifts to create our sample of LBGs (§4.2). 
We provide a description of the photometric redshift process, color selection method, and a catalog of all objects and their photometric redshifts below. Photometric Redshifts --------------------- Photometric redshifts (hereafter, photo-$z$’s) are a well known and robust procedure to determine redshifts of galaxies when spectra are unavailable [e.g. @Koo:1985p9108; @Lanzetta:1998p9100; @Benitez:2000p3572; @Coe:2006p1519; @Hildebrandt:2008p2281]. They have the advantage over color selection that they take into account all the colors available simultaneously in $\chi^2$ fits to template SEDs and yield more precise redshift information with clear redshift confidence limits. The photo-$z$’s also sample $z\sim3$ galaxies in regions of color space that color selected samples avoid because of low-redshift galaxies, and therefore can provide a larger sample. However, photo-$z$ uncertainties do not always include systematic errors caused by variations and evolution of galaxy SEDs compared to SED templates, and possible mismatches of SED templates (see §4.1.2). Such systematic problems and the lack of a large spectroscopic sample make it difficult to characterize the contamination fraction of photo-$z$ selected $z\sim3$ galaxies (see §4.1.3). Nonetheless, they provide the largest sample of LBGs for study. For each galaxy, photo-$z$ codes produce a probability distribution function, $P(z)$, representing the probability of a galaxy being at any specific redshift. However, the $P(z)$ can have multiple peaks, especially at $z\sim3$ where there is a degeneracy with galaxies at $z\sim0.2$, which then translates into large uncertainties for the photo-$z$ [@Benitez:2000p3572]. The introduction of the $u$-band data helps resolve the photo-$z$ degeneracy and improve the photo-$z$ fits for $z\sim3$ galaxies as it targets the most dominant signature in their SED, the Lyman break. 
We present photo-$z$’s for the entire sample of galaxies with $u$-band data from §3.5 in Table \[tab2\], but caution against using them blindly to select galaxies at $z\sim3$. We recommend making cuts on the sample to select galaxies with good $\chi^2_{\mathrm{mod}}$ and $\tt ODDS$ (for a description of these parameters, see §4.1.1). ### Bayesian Photometric Redshifts There are many different photo-$z$ codes available, and @Hildebrandt:2008p2281 explain the benefits of the different algorithms. We chose to use the Bayesian photo-$z$’s (BPZ) [@Benitez:2000p3572; @Benitez:2004p3578; @Coe:2006p1519] to be consistent with past photo-$z$’s determined for the UDF without $u$-band data [@Coe:2006p1519]. @Hildebrandt:2008p2281 advise using the SED templates supplied with their respective codes, since user-supplied SED templates can cause problems. However, a re-calibration of the SED template set improves the performance of the photo-$z$’s, and we use the re-calibrated SED templates from @Benitez:2004p3578 and @Coe:2006p1519 that have been extensively tested with BPZ. These re-calibrated SEDs are based on the star-forming (Im), spiral (Scd, Sbc), and elliptical (El) galaxy templates from @Coleman:1980p4084, the starbursting galaxy templates with different reddening (SB2, SB3) from @Kinney:1996p6459, and the faint blue galaxy SEDs with ages of 25 and 5 Myr and a metallicity of $Z=0.08$ without dust from @Bruzual:2003p4897, described in §4.1 of @Coe:2006p1519. We interpolate between adjacent galaxy SED templates to create two additional SED templates for the photo-$z$ fit, similar to @Benitez:2004p3578 and @Coe:2006p1519. SED templates are not always a good match for each specific galaxy, and when a good fit to the SED templates is not possible, the resulting redshift may not be accurate. As a diagnostic of the goodness of fit, the BPZ code provides a reduced chi-square ($\chi^2_\nu$) value. 
However, high $\chi^2_\nu$ values do not always indicate an unreliable redshift. Bright galaxies with small photometric uncertainties will have larger $\chi^2_\nu$ values than faint galaxies with larger photometric uncertainties for the same flux residuals (the numerator in $\chi^2$), yet have more reliable redshifts [see Figure 21 in @Coe:2006p1519]. This problem occurs because the systematic uncertainties of the SED templates are not taken into account, so that $\chi^2_\nu$ no longer represents the relative quality of the fit. Nonetheless, a mechanism to evaluate the quality of the fits is required to trim the sample to reliable redshifts. To this end, @Coe:2006p1519 introduce a modified reduced chi-square ($\chi^2_{\mathrm{mod}}$) value that assigns an uncertainty to the SED templates in addition to the uncertainty in the photometry of the galaxy. For clarity, we reproduce the equation from @Coe:2006p1519 here: $$\chi^2_{\mathrm{mod}}=\frac{1}{\nu}\sum_{\alpha}\frac{(f_\alpha-f_{T\alpha})^2}{\sigma^2_{f_\alpha}+\sigma^2_{f_{T\alpha}}}, \label{eq:chi}$$ where $f_\alpha$ are the observed fluxes, $\sigma_{f_\alpha}$ are the errors in the observed fluxes, and $f_{T\alpha}$ are the model fluxes, normalized to the observed fluxes. $\sigma_{f_{T\alpha}}$ represents the model flux error, which is set by @Coe:2006p1519 to $\sigma_{f_{T\alpha}}=\max_\alpha(f_{T\alpha})/15$. While the definition of $\sigma_{f_{T\alpha}}$ is arbitrary, it was picked such that the resultant $\chi^2_{\mathrm{mod}}$ is a more realistic measure of the goodness of fit. This is especially important for bright galaxies, as uncertainties in the templates dominate the error budget, and the $\chi^2_{\nu}$ values are not useful. The reported $\chi^2_{\mathrm{mod}}$ values are reduced chi-square values, obtained by dividing by the number of degrees of freedom, $\nu$. 
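Equation \[eq:chi\] can be sketched in a few lines of code. This is a minimal illustration of the formula as reproduced above (a single template flux error, $\max_\alpha(f_{T\alpha})/15$, added in quadrature to each photometric error); the flux values in the test are invented.

```python
import numpy as np

def chi2_mod(f_obs, sig_obs, f_model, nu):
    """Modified reduced chi-square of Eq. [eq:chi]: one template flux
    error, max(f_model)/15, is added in quadrature to the photometric
    errors, and the sum is divided by nu degrees of freedom."""
    sig_t = np.max(f_model) / 15.0
    return np.sum((f_obs - f_model) ** 2 /
                  (sig_obs ** 2 + sig_t ** 2)) / nu
```

Because `sig_t` never vanishes, a bright galaxy with tiny photometric errors no longer produces a runaway $\chi^2$, which is exactly the behavior motivating the modification.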
The number of degrees of freedom is the difference between the number of filters observed and the number of fit parameters (in this case three: redshift, template, and amplitude). The minimum number of filters used is 5 and the maximum is 7, so the range of $\nu$ in our study is $2\leq \nu \leq4$. We note that $\chi^2_{\mathrm{mod}}$ is calculated after the photo-$z$ determinations, and does not affect $P(z)$. If the quality of the fit to the SED template is good, then the $\tt ODDS$ parameter is useful in measuring the spread in $P(z)$. A galaxy with high $\tt ODDS$ has a single peak in $P(z)$, while multiple or very wide peaks yield low $\tt ODDS$. In general, restricting the photometric sample to those objects with $\tt ODDS$ $> 0.9$--$0.99$ yields clearly defined redshifts [@Benitez:2000p3572; @Benitez:2004p3578; @Coe:2006p1519]. In this paper we are conservative and restrict our sample to objects with the best values of $\tt ODDS$, those with $\tt ODDS$ $>0.99$. Additionally, selecting galaxies based on SED template type ($t_b$) can be useful when selecting a specific type of galaxy, where 1=El\_cww, 2=Scd\_cww, 3=Sbc\_cww, 4=Im\_cww, 5=SB3\_kin, 6=SB2\_kin, 7=25Myr, and 8=5Myr. For instance, in selecting LBGs we constrain ourselves to galaxies with $t_b>3$, which only includes the star-forming galaxy templates. The redshifts, redshift uncertainties for a 95% confidence interval, $t_b$, $\tt ODDS$, $\chi^2_\nu$, and $\chi^2_{\mathrm{mod}}$ are all tabulated in Table \[tab2\]. ### Photometric Redshift Measurement Uncertainties It is important to understand the origins of photo-$z$ uncertainties in order to have confidence in their values. There are two types of uncertainties in photo-$z$’s: 1) photometric measurement uncertainties and 2) template mismatch variance [@Lanzetta:1998p9100; @FernandezSoto:2001p2773; @FernandezSoto:2002p9103; @Chen:2003p8912]. 
Photometric measurement uncertainties are well understood, and they are responsible for the width of $P(z)$, which determines the reported photo-$z$ uncertainties. Faint galaxies with larger photometric measurement uncertainties will yield larger uncertainties in the photo-$z$’s than brighter galaxies. If the photometric measurement uncertainties are very large, then the photo-$z$ will be poorly constrained, because this results in multiple peaks in $P(z)$ corresponding to many possible redshifts. The other possible uncertainty comes from template mismatches, which is a systematic error due to the finite number of templates used in the photo-$z$ determination. Not all galaxies will be well represented by our template SEDs, yielding large $\chi^2_{\nu}$ values. One method to decrease template mismatch errors is to introduce more template SEDs, since that increases the chance that there exist good matching SED templates for each galaxy. While this can improve low-redshift performance, it also increases the number of degenerate solutions and therefore gives poorer high-redshift performance [@Hildebrandt:2008p2281]. Degenerate photo-$z$’s occur when different SED templates fit the photometry equally well, resulting in multiple peaks in $P(z)$ and therefore multiple possible redshifts. We consider galaxy redshifts degenerate if they have two or more peaks in $P(z)$ at 95% confidence separated by $\Delta z > 1$. In these cases, the resultant reported uncertainties are very large and the galaxy redshift is poorly constrained. We are mainly interested in good performance at high redshift and therefore do not increase the number of SED templates. Template mismatches are also the cause for “catastrophic" photo-$z$ errors that occur, where the photo-$z$ is incorrect and the uncertainty does not include the correct redshift [@Ellis:1997p3771; @FernandezSoto:1999p2784; @Benitez:2000p3572]. 
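Our degeneracy criterion (two or more $P(z)$ peaks within the 95% confidence region separated by $\Delta z > 1$) can be sketched as a simple check on a gridded $P(z)$. The peak-finding and highest-density-region details below are our own simplification for illustration, not the BPZ implementation, and the two-Gaussian $P(z)$ is synthetic.

```python
import numpy as np

def is_degenerate(z, pz, min_sep=1.0, conf=0.95):
    """Flag a gridded P(z) with >= 2 peaks inside the `conf`
    highest-density region whose redshifts differ by > `min_sep`."""
    dz = z[1] - z[0]
    pz = pz / (pz.sum() * dz)              # normalize to integrate to 1
    order = np.argsort(pz)[::-1]           # grid points, most probable first
    csum = np.cumsum(pz[order]) * dz
    keep = np.zeros(len(z), dtype=bool)    # 95% highest-density region
    keep[order[:np.searchsorted(csum, conf) + 1]] = True
    # local maxima of P(z) that fall inside the kept region
    peaks = [zi for i, zi in enumerate(z[1:-1], start=1)
             if keep[i] and pz[i] >= pz[i - 1] and pz[i] >= pz[i + 1]]
    return len(peaks) >= 2 and (max(peaks) - min(peaks)) > min_sep

z = np.linspace(0.0, 5.0, 501)
# synthetic z~0.2 / z~3 degeneracy vs. a well-constrained galaxy
bimodal = np.exp(-(z - 0.2) ** 2 / 0.005) + np.exp(-(z - 3.0) ** 2 / 0.005)
single = np.exp(-(z - 3.0) ** 2 / 0.02)
```

The bimodal case reproduces the $z\sim0.2$ versus $z\sim3$ degeneracy discussed in the text, while the single-peaked $P(z)$ passes.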
Catastrophic photo-$z$ errors typically occur because multiple peaks in $P(z)$ are incorrectly suppressed, leaving only one peak. The suppression of multiple peaks in bright galaxies likely occurs because their small photometric uncertainties yield large $\chi^2_\nu$ values that do not take into account the systematic uncertainties in the SED templates, which exaggerates the differences between the SED template fits that correspond to different peaks in $P(z)$. This suppresses the peaks at other redshifts, resulting in possible “catastrophic" photo-$z$ errors (Dan Coe, private communication). It is likely that the incorrect suppression of peaks in $P(z)$ can be fixed through the introduction of SED uncertainties in the initial $\chi^2$ fit, although such an addition requires an understanding of those uncertainties using large surveys with both photo-$z$’s and spec-$z$’s. This is beyond the scope of both the UDF and this paper, and is being investigated elsewhere (Coe et al. in prep). In the meantime, the best we can do is reject photo-$z$’s based on their $\chi^2_{\mathrm{mod}}$ values. However, $\chi^2_{\mathrm{mod}}$ is not a true statistical test like $\chi^2_{\nu}$, and therefore cuts normally appropriate for $\chi^2_{\nu}$ are not valid for $\chi^2_{\mathrm{mod}}$. Additionally, galaxies with $J$- and $H$-band data have median $\chi^2_{\mathrm{mod}}$ larger by $\sim0.7$ than those without infrared (IR) data, although the inclusion of the IR data improves the reliability of the photo-$z$’s because of the significantly increased lever arm [@Coe:2006p1519]. We therefore do not use the same cut as used in @Coe:2006p1519 ($\chi^2_{\mathrm{mod}} < 1$), but rather use $\chi^2_{\mathrm{mod}}$ to conservatively remove possible bad photo-$z$ fits. While we do present all the photo-$z$ fits, we only include those with $\chi^2_{\mathrm{mod}} < 4$ in our $z\sim3$ galaxy sample. 
This cut only reduces the total number of galaxies with photo-$z$’s by $\sim$5% (to 1385 galaxies). ### Comparison of Photometric and Spectroscopic Redshifts In order to test the accuracy of the photo-$z$’s, we compare the redshifts with spectroscopic redshifts (spec-$z$’s). We compile a list of 100 reliable spec-$z$’s in the UDF that match our sample from §3.5 (see Table \[tab3\])[^5]. In this sample, 18 spec-$z$’s are from the VIMOS VLT Deep Survey [VVDS; @LeFevre:2004p3988], where we only include redshifts with 95% confidence and multiple lines. Another 22 redshifts come from the GOODS VLT VIMOS survey [VIMOS; @Popesso:2009p6629], where we include redshifts with A or B quality spectra. These spectra have good cross-correlation coefficients of the spectra with the templates and multiple lines are well identified. An additional 6 redshifts are from @Szokoly:2004p4004 using the VLT FORS1/FORS2 spectrographs, where we use only those flagged as ’reliable’ redshifts (quality flags “2" or “2+"). The remaining 57 redshifts are from the GOODS VLT FORS2 survey [FORS2; @Vanzella:2005p6608; @Vanzella:2006p6605; @Vanzella:2008p3704; @Vanzella:2009p9955], where we only include redshifts from A or B quality spectra. We do not include redshifts from the slitless spectra obtained as part of the Grism ACS Program for Extragalactic Science (GRAPES) [@Pirzkal:2004p9474], since the redshift determinations do not provide an independent check to photo-$z$’s because photo-$z$’s were used to help identify the emission lines [@Xu:2007p4282]. The photo-$z$’s agree relatively well with the spec-$z$’s (see left panel of Figure \[photzspec\]), and have 100% agreement in the redshift interval of interest ($2.5\lesssim z \lesssim 3.5$), although only five objects have spec-$z$’s at these redshifts. There are clearly some galaxies that have incorrect photo-$z$’s at lower redshift where the $u$-band does not sample the Lyman break. 
Of the 100 galaxies from Table \[tab3\], 97 have $\chi^2_{\mathrm{mod}} < 4$, of which 93 have $\tt ODDS$ $ > 0.99$. The galaxy at a spec-$z$ of 1.99 (ID 6834) has a $\chi^2_{\mathrm{mod}} \sim17$ and the resultant photo-$z$ should be ignored. Of the objects that meet our criteria, only object 8585 has a significantly different photo-$z$ than its spec-$z$ and does not include the correct redshift in its $P(z)$. This object, with a spec-$z$ of $z=0.3775$ [@LeFevre:2004p3988] and a photo-$z$ of $1.45\pm0.24$, is a case where the spec-$z$ may be wrong. In general we tried to minimize this possibility by selecting reliable spec-$z$’s, but the spec-$z$’s can still be wrong, as seen in some comparisons of @FernandezSoto:2001p2773. In our case, the photometric redshift yields a good fit to the SED templates, while the fit to the SED template at the spec-$z$ is poor. The published spectrum shows that the spec-$z$ is mainly determined by the H$\alpha$ line. This could easily be confused with the O$\rm{II}$ line for a galaxy with a redshift of $z=1.43$, which would be consistent with our photo-$z$. This leaves one galaxy with a possible “catastrophic error", although it is not in the redshift interval of interest ($2.5\lesssim z \lesssim 3.5$). ### Improvement of photo-$z$’s with the Addition of the $u$-band A comparison of the redshift interval ($2.5\lesssim z \lesssim 3.5$) in the two panels of Figure \[photzspec\] shows a significant improvement in the photo-$z$’s with $u$-band data. The left panel depicts photo-$z$’s with $u$-band data while the right panel depicts photo-$z$’s without $u$-band data. The $u$-band helps prevent catastrophic redshift errors that can occur because of a similarity in the colors of low-redshift galaxies and high-redshift galaxies [@Ellis:1997p3771; @FernandezSoto:1999p2784; @Benitez:2000p3572]. Usually, this degeneracy causes $P(z)$ to have multiple peaks representing both possible redshifts in the absence of $u$-band data [@Coe:2006p1519]. 
As discussed in §4.1.2, however, sometimes secondary peaks are absent, yielding incorrect redshift uncertainties and possibly incorrect redshifts. For example, three galaxies (IDs 830, 4267, 5491) with $\chi^2_{\mathrm{mod}} < 4$ and $\tt ODDS$$> 0.99$ have catastrophic redshift errors without $u$-band photometry and gain accurate photo-$z$’s, based on their spec-$z$’s, after we include the $u$-band (e.g. see Figure \[obj830\]). As expected, the $u$-band data clearly improve the photo-$z$’s of galaxies at $z\sim3$ when compared to the spec-$z$’s. However, we are constrained to discussing small number statistics, and the true contamination fraction is unknown. We would ideally like to compare a larger number of $z\sim3$ photo-$z$’s to spec-$z$’s to get a better understanding of the improvement of the photo-$z$’s with the addition of the $u$-band, although such spectroscopic data are lacking in the UDF. However, since the $u$-band samples the Lyman break of $z\sim3$ galaxies, and we know that the photo-$z$’s with $u$-band are more reliable than those without, we can investigate the changes in the photo-$z$’s. We therefore compare all the photo-$z$’s with $\chi^2_{\mathrm{mod}} < 4$ of galaxies with and without the $u$-band data in Figure \[photzcomp\], which highlights two effects. The first is that $P(z)$ changes markedly for 125 galaxies, mostly in the redshift interval $2\lesssim z \lesssim 3$, where the $u$-band probes the Lyman break. Of these, 102 change their photo-$z$’s from $z\leq1$ to $z\sim2-3$, whereas 23 switch in the other direction. This change is a result of the code selecting different $P(z)$ peaks as the most probable redshift for galaxies having multiple peaks. The second effect shown in Figure \[photzcomp\] is the removal of the degeneracy of $z\sim3$ and $z\sim0.2$ photo-$z$’s, due to the removal of one of the peaks in $P(z)$. 
One hundred seventy-five galaxies went from being degenerate to non-degenerate, and are marked by the red crosses in Figure \[photzcomp\]. Not all of the degenerate photo-$z$’s are removed, however: 51 galaxies are degenerate between these redshifts, as marked by the blue crosses in Figure \[photzcomp\], of which 17 were not degenerate before the addition of the $u$-band. The new degeneracies occur when the $u$-band best-fit redshift differs from the best fit to the other bands. The old degeneracies that are not removed occur when the $u$-band does not conclusively rule out another template, often because the galaxies are faint. Figure \[photzcomp\] also includes galaxies with $\chi^2_{\mathrm{mod}} < 4$ in the photometric redshift fits selected to be at $z\sim3$ using the color selection method described in §4.2 (below), which is useful when comparing the two methods. Figure \[obj97\] shows an example of both effects, where the $u$-band changes the photo-$z$ and removes a secondary peak in $P(z)$ for a high-redshift galaxy. Out of 1384 galaxies with $\chi^2_{\mathrm{mod}} < 4$, there are 274 galaxies that have photo-$z$’s in the interval $2.5 \leq z \leq 3.5$ without the $u$-band. The addition of the $u$-band increases this number by 91, to 365 galaxies that have a photo-$z$ in this redshift interval either with or without the $u$-band (including the 23 that switched to low redshift). Of these 365 galaxies, 161 either had their photo-$z$ changed or the degeneracy removed with the addition of the $u$-band. This shows that the addition of the $u$-band significantly changed the photo-$z$’s for $\sim50\%$ of the $z\sim3$ galaxy sample. Color Selection --------------- Color selection is an efficient means to select high-redshift galaxies, and extensive research has been carried out to determine the best color criteria to minimize the interloper fraction from low-redshift galaxies or stars, e.g. 
@Steidel:1996p5981 [@Steidel:1996p5985; @Steidel:1999p4108; @Steidel:2003p1769; @Adelberger:2004p3895; @Cooke:2005p484]. The color selection criteria used in these studies are based on predicted colors of model star-forming galaxies at high redshift, which are then confirmed with spec-$z$’s, resulting in known contamination fractions of 3%–5% [@Steidel:2003p1769; @Reddy:2008p4837]. Such low contamination fractions are achieved by avoiding colors where low-redshift galaxies reside. While color selection techniques do not provide a complete sample of LBGs, they do an excellent job of selecting galaxies in a specific redshift range, as evidenced by their contamination fractions. While the UDF data provide an extraordinary data set, they also use a different set of filters than those used in previous color selection studies, meaning we must define new color criteria for LBG selection. We therefore develop and calibrate new color criteria for selecting $z\sim3$ LBGs using the same methodology. Since our motivation is to generate a sample of galaxies to put constraints on the star formation efficiency at high redshift, we choose our color selection criteria to best minimize possible low-redshift interlopers (see below). ### Color Selection Criteria Using the same approach as @Steidel:1996p5981 [@Steidel:1996p5985; @Steidel:1999p4108; @Steidel:2003p1769], @Adelberger:2004p3895, and @Cooke:2005p484, we derive galaxy colors by evolving different galaxy SED templates to high redshift, convolved with the total throughput of the different filters shown in Figure \[filter\]. We include galaxy SED templates consistent with our photometric redshifts described in §4.1, with galaxy SED templates from @Kinney:1996p6459, @Coleman:1980p4084, and @Bruzual:2003p4897. 
In addition we use a 2.0 Gyr Elliptical Galaxy from the @Bruzual:2003p4897 synthesis code (E2G) since it is quite different than the elliptical galaxy SED template from @Coleman:1980p4084 and represents possible low-redshift galaxies we wish to avoid. We apply the $K$-correction for different redshifts and correct for the opacity from the intergalactic medium by using estimates from [@Madau:1995p4114]. The resultant colors and redshifts of the SED template galaxies are used to determine the appropriate color criteria to maximize the number of LBG candidates at $z\sim3$ while minimizing the contamination from objects at other redshifts. We test multiple color–color planes and find that for our set of filters, $z\sim3$ LBGs can best be selected in a ($u-V$) versus ($V-z^\prime$) color–color plane. Figure \[colorcut\] plots the expected colors of different model galaxies at different redshifts in the ($u-V$) versus ($V-z^\prime$) diagram. The region defining candidate $z\sim3$ LBGs is indicated with the dashed black line in Figure \[colorcut\]. In order to avoid selecting low-redshift galaxies, this selection region excludes SED template colors for $z\le2.5$. The deviations from SED templates and photometric errors will cause intrinsic scatter in the color–color plane. We therefore leave $\sim0.2$ mag between our color selection and the low-redshift elliptical galaxy SEDs that cause the largest contamination for galaxies at $z\sim3$. In addition to the cut in the ($u-V$) versus ($V-z^\prime$) diagram, we also apply secondary color cuts using the ($u-B$) color to improve our color selections by removing potential interlopers. We also include a cut on $V$-band magnitude, where the bright end does not remove any LBG candidates and the faint end is the $V$-band magnitude determined in §3.5 to keep the S/N of the $u$-band $>3\sigma$. 
The following conservative constraints are used to select LBG candidates: $$(u-V) \geq 1.0, \label{eq:uv}$$ $$(u-B) \geq 0.8, \label{eq:ub}$$ $$(V-z^\prime) \leq 0.6, \label{eq:vz}$$ $$3 (V-z^\prime) \leq (u-V) -1.2, \label{eq:uvz}$$ $$23.5 \leq V \leq 27.6. \label{eq:vmag}$$ ### Reliability of Color Selection In order to test our selection criteria, we compare our selection of the 100 galaxies with reliable spec-$z$’s as described in §4.1.3 and Table \[tab3\]. These spectra allow us to test the efficacy of selecting targets via the ($u-V$) versus ($V-z^\prime$) color plane (Figure \[colorspec\]). No $z<2.5$ galaxies with reliable spec-$z$’s passed our color cut, confirming that our cut effectively excludes low-redshift galaxies. We note that there are three $z>2.5$ objects that do not meet our color criteria in Figure \[colorspec\]. The $z=3.68$ object (triangle, object 865) is classified as a quasar by @Szokoly:2004p4004 due to active galactic nucleus (AGN) activity. The $z=4.77$ object with ($V-z^\prime$) is a bright $V$-band dropout galaxy, whose $V$-band magnitude (27.26) is just bright enough to remain in the sample before the color cuts. Finally, the $z=3.80$ object is also excluded by our color cut due to a decrease in the $V$-band flux, reddening the color. There are a number of objects that could contaminate our sample of $z\sim3$ LBGs because their spectra are unusual and thus do not match our model SEDs. While at brighter magnitudes the color selection criteria include stars and AGNs, for our $V>23.5$ sample the color selection criteria are mainly contaminated by low-redshift galaxies at $z\lesssim0.2$ [@Reddy:2008p4837]. Additionally, about $1/3$ of the Distant Red Galaxies (DRGs) fall within the color selection sample of Steidel [@vanDokkum:2006p8396], which is similar to our criteria. 
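The five constraints of §4.2.1 (Eqs. \[eq:uv\]–\[eq:vmag\]) amount to a simple boolean test on the AB magnitudes of each object. The sketch below applies them exactly as written; the example magnitudes are invented for illustration.

```python
def is_lbg_candidate(u, B, V, z_prime):
    """Apply the z~3 LBG color cuts of Eqs. [eq:uv]-[eq:vmag]."""
    return ((u - V) >= 1.0 and                     # Lyman break in (u-V)
            (u - B) >= 0.8 and                     # secondary (u-B) cut
            (V - z_prime) <= 0.6 and               # blue UV continuum
            3.0 * (V - z_prime) <= (u - V) - 1.2 and
            23.5 <= V <= 27.6)                     # magnitude range (u-band S/N > 3)

# an invented z~3 candidate and a low-redshift interloper:
candidate = is_lbg_candidate(u=27.5, B=26.2, V=25.5, z_prime=25.3)   # True
interloper = is_lbg_candidate(u=25.0, B=24.8, V=24.5, z_prime=24.0)  # False
```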
DRGs are galaxies at $z\gtrsim$2 that have faint UV luminosities, have previously undergone their episode(s) of star formation [@Franx:2003p8348], and have stellar masses $\gtrsim10^{11}M_{\odot}$ [@vanDokkum:2004p8401; @vanDokkum:2006p8396]. However, given the DRGs’ estimated space density of $(2.2\pm0.6)\times10^{-4}$ Mpc$^{-3}$ [@vanDokkum:2006p8396], @Reddy:2009p6997 conservatively determine that the fractional contribution would be $\sim$2% for UV-faint sources such as our sample. As discussed earlier, to get a sense of the number of possible interlopers, we would like to have a large number of spectra to determine the contamination fraction. However, we only have a relatively small number of spectra (100), as shown in Figure \[colorspec\] and Table \[tab3\]. Instead, we compare our color cuts to the ($Un-G$) versus ($G-R$) color cuts of @Steidel:2003p1769 for redshifts $2.7\leq z \leq3.4$, and assume that our sample has similar contamination. In Figure \[colorcut\], we determine the redshift that corresponds to Steidel’s color cut for a given SED template in ($Un-G$) and ($G-R$). We then determine the ($u-V$) and ($V-z^\prime$) colors that this SED template has at this redshift and compare those resulting colors to our color criteria by marking them with filled black circles in Figure \[colorcut\]. The filled black circles are outside or near the edge of our color selection criteria, which shows that our cut is more conservative than that of @Steidel:2003p1769 for the redder SED templates and about as conservative for the bluer ones. We therefore infer that our contamination fraction is comparable to those of @Steidel:2003p1769 and @Reddy:2008p4837, where the contamination fraction of $z\sim3$ LBGs is $\sim3$% for objects with $23.5 \leq V \leq 25.5$ mag. Sample of z $\sim$ 3 Galaxies ============================= Photometric redshifts and color selection are both good ways to select $z\sim3$ galaxies. 
Photometric redshifts have the advantage of creating a larger sample since they can measure redshifts in regions of color space that color selected samples avoid because of low-redshift galaxies. They also use more information than color selection, including all colors simultaneously to constrain the redshift. However, while the error rate of the photo-$z$’s is not well defined, color selection is efficient, has a clearly defined contamination fraction, and allows direct comparisons to other studies. The completeness of our LBG selection is limited primarily by the $u$-band depth, because the ACS bands from HST are deeper. In order to characterize this completeness, we compare our color selected LBGs with other studies in §5.3. To justify such comparisons, we compare the redshift distribution of this sample in §5.1, and investigate our uncertainties from cosmic variance in §5.2. In choosing our color selection criteria we opted to be conservative and create a less complete catalog with a small contamination fraction of $\sim$3%, similar to that of @Steidel:2003p1769 and @Reddy:2008p4837. Depending on the purpose, a higher contamination fraction is acceptable in exchange for a larger and more complete sample. The photo-$z$ sample yields a more complete sample, and may even have a similar contamination fraction based on our spectroscopic sample, although due to our small numbers at the redshift of interest, this is not clear at this point. If the lowest contamination fraction possible is needed, the subset of $z\sim3$ LBGs that are both color selected and photo-$z$ selected are the most robust candidates available. We present both samples of $z\sim3$ LBGs in Table \[tab4\], for a total of 407 candidates, along with their photo-$z$’s and colors. We distinguish the samples by designating them as either color selected, photo-$z$ selected, or both. 
The photo-$z$ sample consists of galaxies in the redshift interval $2.5\leq z \leq 3.5$ that have $t_b>3$, $\tt ODDS$ $> 0.99$, and $\chi^2_{\mathrm{mod}} < 4$, and contains 365 galaxies. Of the 42 galaxies not included in our photo-$z$ sample, two have $z\sim2.2$, 11 have $z>3.5$, eight others have $\tt ODDS <0.99$, and 21 others have $\chi^2_{\mathrm{mod}} > 4$. The color selected sample contains 260 galaxies, all of which have photo-$z$’s with $z>2$, with 258 that have $z>2.5$ and 11 that have $z>3.5$. However, the overlap of the two samples is only 216 galaxies. We show the final LBG selection in Figure \[objsel\], which plots all objects from Table \[tab1\] on a color–color diagram, with 287 galaxies falling in the color selection region corresponding to constraints \[eq:uv\], \[eq:vz\], and \[eq:uvz\] from §4.2.1. There are 27 galaxies that are in the selection area in this diagram, but are rejected by the ($u-B$) color (constraint \[eq:ub\]), leaving 260 galaxies that are color selected. The objects marked as blue stars that are selected by photo-$z$’s but not by color selection are generally galaxies at $z\sim2.8$ that are missed by our color selection criteria in order to avoid elliptical and low-redshift galaxies. Redshift Distribution --------------------- In order to understand the redshift distribution of the color selected LBG sample, we look at the photo-$z$’s that meet the color selection criteria and our photo-$z$ criteria of $\chi^2_{\mathrm{mod}} < 4$. This leaves a sample of 235 galaxies that have a mean redshift of $3.0\pm0.3$ in the redshift interval $2.4 \lesssim z \lesssim 3.8$ (see left panel in Figure \[zhist\]), with a median uncertainty in the photo-$z$’s of $\pm$0.4. Additionally, we investigate the redshift distribution by adding the probability histograms $P(z)$ of the individual galaxies, which yields a similar result (see the right panel in Figure \[zhist\]). 
The redshifts selected are similar to those of @Steidel:2003p1769, with $\sim$80% of our sample matching their reported redshift interval $2.7<z<3.4$. In fact, their distribution is very similar to ours, with a number of their LBGs falling outside of this interval. Given the large uncertainties in our photo-$z$’s, we conclude that our color selected redshift distribution is similar to that of @Steidel:2003p1769 within our uncertainties. The similar redshift distribution of our color selected sample justifies our comparison of the number densities of LBGs in §5.3. Cosmic Variance --------------- The UDF has a very small volume, with our overlap area consisting of 11.56 arcmin$^{2}$, which in the redshift interval $2.5\leq z \leq 3.5$ is a comoving volume of $\sim38000$ Mpc$^3$. A single pointing with a small solid angle and such a small volume is likely to be affected by cosmic variance, yielding larger-than-Poisson uncertainties in the LBG number counts. It is important to estimate the cosmic variance effect in order to understand the systematic uncertainties for comparisons with number densities of LBGs in the literature. We calculate the cosmic variance using the code from @Newman:2002p7424 and the prescription from @Adelberger:2005p4252 to get the fractional error per count $\sigma/N$ for our given volume in the redshift interval $2.5\lesssim z \lesssim 3.5$. The variance is determined from the integral of the linear regime of the cold dark matter (CDM) power spectrum ($P(k)$), $$\sigma^2_{\mathrm{CDM}} = \frac{1}{8\pi^3}\int P(k)|\tilde{W}(k)|^2d^3k \; \; \; , \label{sigma}$$ where $\tilde{W}(k)$ is the Fourier transform of our survey volume. Since we want the variance of galaxy counts rather than CDM fluctuations, we need to correct for the clustering bias $(b)$ of LBGs to get their variance ($\sigma_g^2$), where $\sigma_g^2 \simeq b^2\sigma^2_{\mathrm{CDM}}$.
The galaxy bias for typical LBGs is then calculated from the ratio of galaxy to CDM fluctuations in spheres of comoving radius 8 $h^{-1} \mathrm{Mpc}$, where $\sigma_8(z)$ represents the CDM fluctuations for our redshift, and $b=\sigma_{8,g} / \sigma_8(z)$ [@Adelberger:2005p4252]. The resulting variance depends on a fit to the LBG correlation function $\xi_g(r)=(r/r_o)^{-\gamma}$, where $r_o$ is the spatial correlation length and $\gamma$ the correlation index. The galaxy variance is then $$\sigma^2_{8,g}=\frac{72(r_o/ 8\; h^{-1} Mpc)^\gamma}{(3-\gamma)(4-\gamma)(6-\gamma)2^\gamma} \; \; \; , \label{sigma2}$$ [@JamesEdwinPeebles:1980p8194 eq. 59.3] from which we can then calculate the fractional error per LBG count. Empirical fits to the correlation function yield differing values for $r_o$ and $\gamma$ depending on the sample, redshift distribution, luminosity range, and redshift, which in turn affect the value of $\sigma^2_{8,g}$ and our fractional uncertainty [@Adelberger:2005p4252; @Hildebrandt:2007p4025; @Hildebrandt:2009p8300; @Kashikawa:2006p4666; @Lee:2006p4240; @Ouchi:2004p4668; @Ouchi:2005p4177; @Yoshida:2008p1419]. We avoid looking at samples covering a small area of the sky such as the HDF [@Giavalisco:2001p7613], as such studies are also plagued by cosmic variance as shown in @Ouchi:2005p4177. The values in the larger studies generally vary between $r_o \sim2.8-5.5\;h^{-1}$ Mpc and $\gamma \sim1.5-2.2$, with $r_o$ and $\gamma$ increasing with luminosity, i.e., the brighter LBGs are more strongly clustered. There is also some minor evolution with redshift, where the higher-redshift galaxies are more clustered [@Hildebrandt:2009p8300]. These values result in $\sigma^2_{8,\mathrm{LBGs}}$ of $\sim0.56-1.1$, which corresponds to a fractional error per count of $\sim0.14-0.28$. Our sample includes the fainter, less clustered LBGs at $z\sim3$, so we are on the less clustered side of this range.
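The closed-form variance above is straightforward to evaluate numerically. The following sketch (not the authors' code) implements Eq. \[sigma2\] and the bias definition; the sample $(r_o,\gamma)$ pairs are illustrative values spanning the published fits quoted in the text:

```python
import math

def sigma2_8g(r_o, gamma):
    """Galaxy variance in spheres of comoving radius 8 h^-1 Mpc for a
    power-law correlation function xi_g(r) = (r/r_o)^(-gamma),
    following Peebles (1980), eq. 59.3; r_o is in h^-1 Mpc."""
    return (72.0 * (r_o / 8.0) ** gamma
            / ((3.0 - gamma) * (4.0 - gamma) * (6.0 - gamma) * 2.0 ** gamma))

def bias(r_o, gamma, sigma8_z):
    """Linear bias b = sigma_{8,g} / sigma_8(z)."""
    return math.sqrt(sigma2_8g(r_o, gamma)) / sigma8_z

# Representative (r_o, gamma) pairs spanning the published LBG fits
for r_o, gamma in [(2.8, 1.5), (4.0, 1.8), (5.5, 2.2)]:
    print(r_o, gamma, round(sigma2_8g(r_o, gamma), 2))
```

Converting $\sigma_{8,g}$ into the fractional error per count $\sigma/N$ additionally requires the survey window function of Eq. \[sigma\] (evaluated in the text with the @Newman:2002p7424 code), which this sketch does not reproduce.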
We therefore adopt a fractional error per count ($\sigma/N$) of $\sim$0.2 in the rest of this study, which suggests that we could detect a relatively large over-density or under-density in our small volume. In fact, there is evidence of an over-density of $z\sim3.7$ galaxies in the Chandra Deep Field South, of which the UDF is a part [@Kang:2009p8043]. However, no clear over-density is indicated in the correlation length measured from the GOODS survey covering the same area on the sky [@Lee:2006p4240] at slightly higher redshift. We use the above estimate of the cosmic variance for our small field of view to constrain our results on the number densities of LBGs in §5.3. LBG Number Counts ----------------- The number counts of $z\sim3$ LBGs per unit of magnitude indicate the completeness of the LBG selection, and can be compared to number counts from other studies. Such data are only available for color-selected samples, and we therefore only use our color-selected sample in our comparison. The same comparison could be accomplished with the luminosity function, which we do not calculate because we only have one pointing and our comoving volume is small. In other words, we have a small number of LBGs that, in conjunction with the uncertainty due to cosmic variance discussed above, would not yield meaningful constraints on the luminosity function. Additionally, calculating the luminosity function requires Monte Carlo simulations as described in @Reddy:2008p4837, which are computationally prohibitive given our complex analysis technique for obtaining reliable photometry with largely varying PSFs [@Laidler:2007p2733]. We therefore compare our results with the number counts from other studies. We stress that we are comparing number counts that are not corrected for completeness, and therefore the counts will fall off at faint magnitudes in each study due to sample incompleteness.
Our ground-based LRIS images have much greater PSF FWHMs ($\sim$1$\tt''$.3) than the HST $V$-band image ($\sim$0.09$\tt''$), thus affecting the observed number counts. The dominant effect is caused by the blending of neighboring objects that affects the color of objects and therefore their selection. In addition, at significantly lower resolution, isolated compact faint objects have part of their flux lost to the noise floor of the background. This causes the faintest objects to go undetected in the low-resolution images, even if they would have been detected in a similarly sensitive high-resolution image. We use the HST $V$-band image to determine our high-resolution-detection (HRD) number counts and correct for this resolution effect. We convolve the HST $V$-band image with the PSF modeled from the LRIS $V$-band image (FWHM of $\sim$1$\tt''$) that was taken concurrently with the $u$-band data. This yields a low-resolution-detection (LRD) image from which a segmentation map is generated using $\tt SExtractor$. The final photometry is measured with this new segmentation map in all bands using $\tt sexseg$ as discussed in §3.4, and then the same color selection is used as discussed in §4.2.1. Figure \[numcounts\] shows the number counts per half magnitude bin per square arcminute for both the HRD and LRD, along with number counts from @Reddy:2008p4837, the Keck Deep Field (KDF) [@Sawicki:2006p1733], and @Steidel:1999p4108. For the HRD sample, we include only Poisson uncertainties for reference, while for the LRD we include both Poisson uncertainties and the fractional error per count of 0.2 to take into account the cosmic variance (see §5.2). The figure shows that we are photometrically complete to $V\sim27$ mag, after which we start losing LBGs due to the sensitivity of the $u$-band image. 
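A rough sketch of this degradation step (not the actual pipeline, which uses a PSF modeled from the LRIS image rather than a Gaussian) is to convolve the high-resolution image with a Gaussian matching kernel. The 0.03″ pixel scale below is an assumed value for the drizzled ACS mosaic, used purely for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

PIX = 0.03        # arcsec/pixel (assumed drizzled ACS scale)
FWHM_HRD = 0.09   # arcsec, HST V-band PSF
FWHM_LRD = 1.0    # arcsec, LRIS V-band PSF

# Kernel FWHM that degrades the high-res PSF to the low-res one,
# valid when both PSFs are approximated as Gaussians
fwhm_kernel = np.sqrt(FWHM_LRD**2 - FWHM_HRD**2)
sigma_pix = fwhm_kernel / (2.0 * np.sqrt(2.0 * np.log(2.0))) / PIX

# Toy image: a single unresolved source at the center
img = np.zeros((201, 201))
img[100, 100] = 1.0
lrd = gaussian_filter(img, sigma_pix)

print("peak after smoothing:", lrd.max())  # flux spread over the seeing disk
print("total flux:", lrd.sum())            # conserved by the convolution
```

The convolution conserves total flux but spreads a point source over the seeing disk, which is why compact faint sources can drop below the detection threshold in the low-resolution image.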
In order to compare to other studies, we convert their $R$-band magnitudes from Keck to $V$-band magnitudes from HST with a $K$-correction at $z\sim3$ for the Im galaxy template described in §4.1, which results in an $R-V$ color of $\sim-0.15$. The LRD number counts in Figure \[numcounts\] agree with the other studies at the brighter end to within our uncertainties; however, at the fainter end ($V>26.0$), our counts are higher than the uncorrected KDF counts, the only survey that probes to equivalent depths. Based on the turnover in the LBG number counts in Figure \[numcounts\], the KDF and our LRD study appear to be complete to $V\gtrsim26$ and $V\sim27$, respectively. The cumulative number counts for $V<26$ are $3.7\pm0.6$ LBGs arcmin$^{-2}$ in our LRD study and $4.3\pm0.2$ LBGs arcmin$^{-2}$ for the KDF, where the uncertainties are Poisson. These results are consistent with each other, without invoking cosmic variance, and suggest that the difference in number counts for $V\gtrsim26$ is due to differing completeness limits. In order to understand this better, we also include the KDF number counts corrected for LBG completeness in Figure \[numcounts\]. This completeness correction is different from the one applied in [@Sawicki:2006p1733], which included the volume correction simultaneously. The correction applied here assumes that all $z\sim3$ LBGs are well-represented by the colors of a fiducial LBG with (1) a fixed age of 100 Myr, (2) a fixed redshift $z=3$, and (3) @Calzetti:1994p4914 dust extinction for $E(B-V)=0.2$. The incompleteness is then calculated by planting objects with these colors in the KDF images and determining how many are recovered, with the uncertainties estimated via bootstrap resampling (Marcin Sawicki, private communication). This is not as careful a correction as applied in [@Sawicki:2006p1733], but serves to investigate the completeness differences in our studies with respect to LBG detection.
Our LRD study number counts are consistent with the KDF LBG completeness corrected number counts down to $V\sim27$. This helps reinforce that we are likely complete in detecting $z\sim3$ LBGs to $V\sim27$ magnitude, making our study the deepest to date. Summary ======= We use newly acquired ground-based $u$-band imaging with a depth of 30.7 mag arcsec$^{-2}$ ($1\sigma_{u}$ sky fluctuations) and an isophotal limiting $u$-band magnitude of 27.6 mag to create a reliable sample of 407 $z\sim3$ LBGs in the UDF. We use the template-fitting method $\tt TFIT$ [@Laidler:2007p2733] to measure accurate photometry without the need for aperture corrections, and obtain robust colors across the largely varying PSFs of the UDF ACS images (0.$\tt''$09 FWHM) and the $u$-band image (1.$\tt''$3 FWHM). The results are as follows: 1\. We calculate photometric redshifts for 1457 galaxies using the Bayesian algorithm of @Benitez:2000p3572, @Benitez:2004p3578, and @Coe:2006p1519, of which 1384 are reliable with $\chi^2_{\mathrm{mod}} < 4$. We find that the previous photo-$z$’s by @Coe:2006p1519 do a good job of determining redshifts even without the $u$-band if their uncertainties are taken into account. However, these uncertainties are often quite large at $z\sim3$ due to the color degeneracy of $z\sim3$ and $z\sim0.2$ galaxies. 2\. The $u$-band significantly improves $z\sim3$ photo-$z$ determinations: out of 1384 galaxies, 175 galaxies no longer have degenerate photo-$z$’s, and 125 of the galaxies changed their primary photo-$z$ with the addition of the $u$-band. In fact, the addition of the $u$-band changed the photo-$z$’s of $\sim50\%$ of the $z\sim3$ galaxy sample. 3\. 
We find that even when using the $u$-band photometry and restricting the sample of photo-$z$’s to those with good $\chi^2_{\mathrm{mod}}$, catastrophic photo-$z$ errors can still occur, although they are rare (only 1 out of 93 galaxies with spectroscopic redshifts and photometric redshifts with $\chi^2_{\mathrm{mod}} < 4$ and $\tt ODDS$ $ > 0.99$). We found *no* catastrophic photo-$z$’s in the redshift interval of interest ($2.5\lesssim z \lesssim 3.5$), although only five objects have spec-$z$’s in this interval. In contrast, three galaxies at $2.5\lesssim z \lesssim 3.5$ had catastrophic photo-$z$’s before the addition of the $u$-band. The contamination fraction of the $z\sim3$ photo-$z$ sample is likely small as we sample the Lyman break for these galaxies, and show excellent spectroscopic agreement. However, given the small numbers, the overall error rate of the photo-$z$’s is not well defined. 4\. We find excellent agreement of our color selected sample with the spectroscopic $z\sim3$ sample, with no low-redshift galaxies falling in our color selection area, confirming our chosen criteria. We specifically chose conservative color criteria that are similar to the cuts of @Steidel:2003p1769 such that we can infer that the $\sim3\%$ contamination fraction for $z\sim3$ LBGs of @Steidel:2003p1769 and @Reddy:2008p4837 applies to our data set. 5\. The completeness of our LBG selection depends largely on the $u$-band depth because the ACS bands from HST are deeper. In order to characterize this completeness, we compare our color selected LBGs with other studies and find that we present the deepest sample of $z\sim3$ LBGs currently available, likely complete to $V\sim27$ mag. This reliable sample of $z\sim3$ LBGs can be used to further the studies of LBGs and star formation efficiency of gas at $z\sim3$ through the most sensitive high-resolution images ever taken: the Hubble Ultra Deep Field.
The authors thank Dan Coe, Victoria Laidler, Eric Gawiser, and Alison Coil for valuable discussions, Marcin Sawicki for providing the KDF number counts, and Narciso Benitez for providing re-calibrated SED templates for BPZ. Support for this work was provided by NSF grant AST 07-09235. J. C. acknowledges generous support by Gary McCue. The W. M. Keck Observatory is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. [*Facility:*]{} , [^1]: IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. [^2]: The equation has a typographic error. The sum should be over $i_1,i_2=1$ to $i_1,i_2=3$. [^3]: The Moffat profile [@Moffat:1969p7422] is a modified Lorentzian with a variable power-law index that takes into account the flux in the wings of the intensity profile which are not included in a Gaussian profile. [^4]: We refer to these templates as SED templates throughout the paper to distinguish them from the galaxy templates discussed in §3. [^5]: Most of these redshifts are based on observations made with ESO Telescopes at the La Silla or Paranal Observatories under programme ID(s) 66.A-0270(A), 67.A-0418(A),171.A-3045, 170.A-0788, 074.A-0709, and 275.A-5060.
--- abstract: 'The calculation of solar absolute fluxes in the near-UV is revisited, discussing in some detail recent updates in theoretical calculations of bound-free opacity from metals. Modest changes in the abundances of elements such as Mg and the iron-peak elements have a significant impact on the atmospheric structure, and therefore self-consistent calculations are necessary. With small adjustments to the solar photospheric composition, we are able to reproduce fairly well the observed solar fluxes between 200 and 270 nm, and between 300 and 420 nm, but find too much absorption in the 270-290 nm window. A comparison between our reference 1D model and a 3D time-dependent hydrodynamical simulation indicates that the continuum flux is only weakly sensitive to 3D effects, with corrections reaching $<10$ % in the near-UV, and $<2$ % in the optical.' address: 'Mullard Space Science Laboratory, University College London, Holmbury St. Mary RH5 6NT, UK' author: - C Allende Prieto bibliography: - 'carlos.bib' title: 'Chemical Abundances from the Continuum.' --- Introduction {#intro} ============ With the exception of stellar effective temperatures, all other atmospheric parameters, including the chemical composition, are typically determined from absorption lines measured in spectra (and see Barklem’s contribution in this volume). The optical and infrared continuum of normal stars is shaped by bound-free and free-free opacity of atomic hydrogen and the ion H$^{-}$, but at blue and UV wavelengths several metals become important as continuum absorbers. In this brief paper, I describe our recent efforts to compile and evaluate the opacity sources relevant for the solar spectrum, and to compute absolute fluxes. 
Bengt Gustafsson created and actively maintained the opacity database used in the construction of model atmospheres and spectrum synthesis calculations with MARCS and associated codes [@gj; @2008arXiv0805.0554G], and so this contribution is also a tribute to Bengt’s role in making sure the right physical processes are included in the calculation of stellar spectra. Computing absolute stellar fluxes {#computing} ================================= Two main ingredients are needed for calculating reliable absolute stellar fluxes: accurate opacities and a realistic equation of state. A flexible model atmosphere code is also necessary if we are interested in exploring the impact of varying chemical abundances on the atmospheric structure. Opacities --------- We take our line opacities from the compilations maintained by R. L. Kurucz and distributed through his website[^1]. The atomic transition probabilities come from a variety of sources, but Kurucz has obviously made an effort to keep his list updated with reliable laboratory sources. We also made modifications to the linelist adopting Van der Waals damping constants computed by when available. Linelists for diatomic molecules are provided by Kurucz for each isotopologue, and we combined them using terrestrial proportions[^2]. ![Upper panel: individual weighted contribution for each Mg I level to the total photoionization cross section $\sigma = \sum_i \sigma_i N_i/N = \sum_i \sigma_i g_i e^{-E_i/(kT)} $. The total cross-section is shown multiplied by a factor of 10 (dashed line). Lower panel: total weighted cross section from the upper panel, with the abscissae changed from frequency to vacuum wavelengths.[]{data-label="mg1"}](carlos_fig1.ps){width="13cm"} Continuum opacities for C, Mg, Al, Si, and Ca, as computed with the R-matrix method by the Opacity Project, were extracted from TOPBASE [@1993BICDS..42...39C] and smoothed according to the expected errors in the theoretical energies following .
TOPBASE provides energy levels, radiative transition probabilities, and photoionization cross-sections. The bound-free opacity for all levels should be considered: taking into account only the lowest levels may lead to missing opacity, as illustrated for the case of Mg I in . This ion contributes an important part of the opacity in the range 200-300 nm, and the opacity bump at $\sim 280$ nm results from the combined photoionization from a number of levels. The computed energies have significant errors, and we correct them to match the energies inferred from wavelengths of lines measured at the laboratory. shows the ratio of the energies from TOPBASE and those from the atomic database at the US National Institute for Standards and Technology (NIST) for several ions. The errors are small in some cases, but in others can reach up to 20%. The Opacity Project calculations have been extended to heavier ions as part of the Iron Project . With help from the scientists involved in the calculations, I have translated the data to the same format used in TOPBASE, building new model atoms (including continuous opacities) for neutral and ionized iron. These are also used here. Equation of state ----------------- The relationship among the main thermodynamical quantities needs to be properly computed according to the chosen chemical composition. We adopt the temperatures and densities from a model atmosphere, and then solve the equations of chemical equilibrium for all species, including molecules, deriving a consistent electron density. For this purpose we use the code Synspec [@synspec], with a number of recent upgrades. A suite of subroutines to solve the molecular equilibrium have been adopted (I. Hubeny, private communication), and the partition functions for both atoms and molecules are now from the data of (and also private comm. from Irwin). The 1D version of the code [ass]{}$\epsilon$[t]{} [@2008ApJ...680..764K] was used to solve the radiative transfer equation. 
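The level-weighted sum in the caption of Figure \[mg1\] can be sketched numerically. The two-level "atom" below uses invented $(g_i, E_i, \sigma_i)$ values purely to illustrate why restricting the sum to the ground state misses opacity; the real Mg I level data come from TOPBASE:

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def weighted_sigma(levels, T):
    """Boltzmann-weighted bound-free cross section per atom,
    sigma = sum_i sigma_i g_i exp(-E_i/kT) / U(T),
    with partition function U(T) = sum_i g_i exp(-E_i/kT)."""
    U = sum(g * math.exp(-E / (K_B_EV * T)) for g, E, _ in levels)
    return sum(s * g * math.exp(-E / (K_B_EV * T)) for g, E, s in levels) / U

# (g_i, E_i [eV], sigma_i [Mb]) -- invented numbers for illustration
levels = [(1, 0.00, 0.1),   # ground state: tiny cross section at this wavelength
          (9, 2.71, 40.0)]  # excited term: sparsely populated, large cross section
print(weighted_sigma(levels, 5777.0))
```

Even though the excited term holds only a few percent of the population at $T=5777$ K, it dominates the weighted cross section here, mirroring the Mg I behavior near 280 nm where the bump results from photoionization out of several levels.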
Model atmosphere code --------------------- A model atmosphere is a prerequisite for computing stellar fluxes. In order to check whether there is feedback to the atmospheric structure from changes in the chemical composition, we also need a model atmosphere code. We have adopted the linux port of Kurucz’s Atlas9, recently published by . To facilitate multiple calculations, I wrote a set of scripts that prepare the input to the code, check for convergence, and adjust the number of iterations accordingly. The Players ----------- Several elements are important when considering absolute fluxes for a solar-like star. Carbon and oxygen do not provide significant continuous opacity, but form molecules (mainly CH, OH, CO) with transitions that block the radiation in some specific regions. Magnesium, aluminum, silicon and iron atoms provide genuine bound-free absorption, while others such as Ca and Na contribute only indirectly to the opacity, donating electrons which may bind with hydrogen to form H$^{-}$, or shifting the iron ionization balance. Finally, if the abundance of helium is increased at the expense of hydrogen, it will indirectly reduce the H and H$^{-}$ opacities. Taking a shortcut {#shortcut} ================= One might naively imagine that the feedback from modest perturbations in the metal abundances to the atmospheric structure would be minor. As is usually done for the analysis of lines, I computed the variations in the emerging fluxes for a solar-like model associated with changes of $+0.2$ dex in the abundances of He, C, O, Mg, and Fe. This exercise was described at another conference [@2007arXiv0709.2194A]. As expected, the UV flux was reduced when the abundances of C, O, Mg, or Fe were enhanced, but a large flux increase was noticed when the ratio He/H was increased. ![Changes in the emergent fluxes as a result of an increase in the abundances of helium (top panel), all metals (central panel), and magnesium (bottom panel).
The results obtained when the composition of the model atmosphere is changed consistently with that adopted for the equation of state and spectral synthesis correspond to the dashed lines. The results from the calculations using a fixed atmospheric structure (solid lines) overestimate the flux changes. []{data-label="change"}](carlos_fig3a.ps "fig:"){width="6.8cm"} ![](carlos_fig3b.ps "fig:"){width="6.8cm"} ![](carlos_fig3c.ps "fig:"){width="6.8cm"} This approximation was of course of dubious validity, in particular for such an abundant element as helium. New calculations in which the composition of the model atmosphere is changed consistently show that the changes in the fluxes were systematically overestimated: the atmospheric structure adjusts in response to changes in the abundances and the variations in the emerging fluxes are much smaller than initially predicted.
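The electron-donor effect described under "The Players" can be made concrete with the Saha equation for H$^{-}$. This is a sketch under stated assumptions: the temperature and electron density below are round numbers for a solar-like photosphere, not values from this work:

```python
import math

H_PLANCK = 6.62607015e-34   # J s
M_E = 9.1093837e-31         # electron mass, kg
K_B = 1.380649e-23          # Boltzmann constant, J/K
EV = 1.602176634e-19        # J per eV
CHI_HMINUS = 0.754 * EV     # H- binding energy

def hminus_to_h(n_e, T):
    """n(H-)/n(H) from the Saha equation for H- <-> H + e-,
    with statistical weights g(H-)=1, g(H)=2, g(e-)=2."""
    # electron thermal de Broglie wavelength
    lam = H_PLANCK / math.sqrt(2.0 * math.pi * M_E * K_B * T)
    return n_e * 0.25 * lam**3 * math.exp(CHI_HMINUS / (K_B * T))

# Assumed photospheric conditions: T = 6000 K, n_e = 1e19 m^-3
print(hminus_to_h(1e19, 6000.0))   # of order 1e-8
```

The ratio scales linearly with $n_e$, which is why metals that donate electrons directly raise the H$^{-}$ opacity even when they absorb nothing themselves.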
The original calculations for enhancements of 0.2 dex in the abundances of He, Mg, and the overall metallicity are shown with solid lines in , while the new calculations with consistent structures are shown with dashed lines. The large correction for helium is not a big surprise, but the flux variations are also reduced significantly for the case of Mg, which only contributes continuum opacity in a limited spectral window. Note that for the self-consistent calculations the changes in the flux at some wavelengths are compensated at others in order to maintain the effective temperature constant. A test with solar observations {#observations} ============================== As an exercise, we computed a grid changing the abundances of all metals, as well as C/H, O/H, and Mg/H, from the reference values by plus and minus 0.2 dex, and then used interpolation to fit solar observations. For consistency, the reference abundances were those recently used by Kurucz for his [*NEW*]{} opacity distribution functions and models [@1998SSRv...85..161G]: $\log {\rm N(X)/N(H)} +12 = 8.52, 8.83, 7.58$ and $7.50$ with X replaced by C, O, Mg, and Fe, respectively. For the solar observations we used an average of SOLSTICE and SUSIM spectra, as discussed by , with a resolution of about 3 Å. shows the best-fitting solution, which corresponds to changes from the reference abundances of $-0.18$ dex in overall metallicity (all metals), and of $+0.12$, $+0.07$, and $+0.12$ dex in C, O and Mg. There is a fair match of the observations for wavelengths between 200-270 nm, and a good match is achieved in the 300-400 nm window, but too much opacity is predicted in the region around the Mg II resonance lines. Given that we have not varied the abundances of important electron donors such as Na, Ca and Si, these results must be considered preliminary. 
‘Three-dimensional’ effects {#3d} =========================== The introduction of 3D hydrodynamical simulations in the analysis of the solar spectrum has shown that corrections to the derived abundances from atomic lines tend to be small, while molecular lines are overall more sensitive to temperature inhomogeneities. The continuum at about 300 nm is formed in deep photospheric regions, but as the opacity increases towards shorter wavelengths the continuum formation is rapidly shifted to higher layers, where inhomogeneities may have a larger impact on line formation. [Ass]{}$\epsilon$[t]{}, a new 3D radiative transfer code capable of handling arbitrarily complex opacities, has been recently introduced by . Computing the entire spectrum for a series of 3D snapshots, sampling the spectrum finely enough to avoid missing line opacity, requires a large investment of computing time, even on a modern supercomputer. Nonetheless, we can explore whether there are any effects on the continuum by using only a few hundred frequencies. shows the comparison between the computed continuum flux (including Balmer lines) for the 3D simulation of and our reference solar 1D Kurucz model. For these spectral synthesis calculations the same opacities and equation of state were used, accounting properly for Rayleigh (atomic H) and electron scattering, which in any case makes a negligible difference here. The fluxes predicted for the 3D model (the average of 100 snapshots covering nearly an hour of solar time; solid line) are similar to those for the 1D model (dashed line), with the difference amounting to about 5–10 % at maximum in the 200-300 nm window, and $<2$ % in the optical and near-infrared. Conclusions =========== We compile the main sources of opacity in the solar photosphere and compute absolute fluxes based on classical one-dimensional LTE model atmospheres.
Metal absorption provides an important contribution to the near UV opacity, and the photoionization cross-sections of many levels need to be considered. The energies predicted by the R-matrix calculations performed within the Opacity Project for some ions have uncertainties of up to 20 %, and therefore it is recommended to use the observed energies instead. Small changes in the abundances of He, Mg and the iron-peak elements can have an important feedback on the atmospheric structure, and thus consistent calculations are needed to obtain the correct results. With modest adjustments to the standard photospheric abundances, we find it possible to reproduce fairly well the observed solar fluxes between 200-270 nm and even better in the range 300-410 nm, while too much absorption is found in the window 270-290 nm. This may hint at excessive Mg bound-free absorption, although further tests are necessary. We compare the continuum fluxes computed with our reference 1D model with those from a 3D time-dependent radiative-hydrodynamical simulation and find limited changes, reaching up to 10 % in the near-UV. I thank my collaborators: Martin Asplund, Manuel Bautista, Ivan Hubeny, Lars Koesterke, David Lambert, and Sultana Nahar – without their contributions the calculations shown in this paper could not have been made. Thanks go to Bob Kurucz for making his codes and data publicly available, as well as to Fiorella Castelli, Luca Sbordone and Piercarlo Bonifacio for porting Atlas to linux and organizing the available documentation on the code. Thank you, Bengt, for all these years of encouragement and good advice. Happy Birthday! References {#references .unnumbered} ========== [^1]: kurucz.harvard.edu [^2]: www.webelements.com
--- abstract: 'We demonstrate the efficient transverse compression of a 12.5 MeV/c muon beam stopped in a helium gas target featuring a vertical density gradient and strong crossed electric and magnetic fields. The vertical spread of the muon stopping distribution was reduced from 10 to 0.7 mm within 3.5 . The simulation including proper cross sections for low-energy $\mu^+$ - $\text{He}$ elastic collisions and the charge exchange reaction $ \mu^+ + \text {He} \longleftrightarrow \text{Mu} + \text{He}^+ $ describes the measurements well. By combining the transverse compression stage with a previously demonstrated longitudinal compression stage, we can improve the phase space density of a $\mu^+ $ beam by a factor of $ 10^{10} $ with $ 10^{-3} $ efficiency.' author: - 'A. Antognini' - 'N. J. Ayres' - 'I. Belosevic' - 'V. Bondar' - 'A. Eggenberger' - 'M. Hildebrandt' - 'R. Iwai' - 'D. M. Kaplan' - 'K. S. Khaw' - 'K. Kirch' - 'A. Knecht' - 'A. Papa' - 'C. Petitjean' - 'T. J. Phillips' - 'F. M. Piegsa' - 'N. Ritjoho' - 'A. Stoykov' - 'D. Taqqu' - 'G. Wichmann' title: 'Demonstration of Muon-Beam Transverse Phase-Space Compression' --- Next generation precision experiments with muons and muonium atoms [@Gorringe2015], such as muon $ g-2 $ and EDM measurements [@Iinuma2011; @Adelmann2010; @Crivellin2018], muonium spectroscopy [@Cr], and muonium gravity measurements [@Kirch2014a; @Kaplan2018], require high-intensity muon beams at low energy with small transverse size and energy spread. The standard surface muon beams currently available do not fulfill these requirements. To improve the quality of the muon beam, phase space cooling techniques are needed. However, conventional methods, such as stochastic cooling [@VanderMeer1985] and electron cooling [@Budker1978], are not applicable due to the short muon lifetime of $ 2.2 $ . 
Alternative beam cooling techniques based on muon energy moderation in materials have been developed [@Muhlbauer1999; @Morenzoni1994]; however, they typically suffer from low cooling efficiencies ($<10^{-4} $). At the Paul Scherrer Institute, we are developing a novel device (muCool) that produces a high-quality muon beam, reducing the full (transverse and longitudinal) phase space of a standard $\mu^{+}$ beam by 10 orders of magnitude with $10^{-3}$ efficiency [@Taqqu2006]. The whole device is placed inside a $ 5 $ T magnetic field, pointing in the $ +z $-direction, as sketched in Fig. \[scheme\]. First, a surface muon beam propagating in the $-z $-direction is stopped in a few mbar of helium gas at cryogenic temperature, reducing the muon energy to the eV range. The stopped muons are then guided into a sub-mm spot using a combination of strong electric and magnetic fields and gas density gradients in three stages. In the first stage (transverse compression), the electric field is perpendicular to the applied magnetic field and at $ 45^\circ $ with respect to the $ x $-axis: $ \vec{E} = (E_x, E_y, 0) $, with $ E_x=E_y\approx 1$ . In vacuum, applying such crossed electric and magnetic fields would prompt the stopped muons to drift in the [$ \hat{E} \times \hat{B}$]{}-direction, performing cycloidal motion with frequency $\omega=eB/m_{\mu}$ (cyclotron frequency), where $ m_{\mu} $ is the muon mass. However, in the muCool device, the muons also collide with He gas atoms with an average frequency $ \nu_{c} $, which depends on the gas density, the elastic $ \mu^+ $-He cross section, and the $\mu^+$ - $\text{He}$ relative velocity. These collisions lead to muon energy loss and changes of direction. Thus, the muon motion is modified compared to that in vacuum: the muon drift velocity also acquires a component in the $ \hat{E} $-direction, making muons drift at an angle $ \theta $ relative to the [$ \hat{E} \times \hat{B}$]{}-direction.
The average drift angle $ \theta $ is proportional to the collision frequency $ \nu_{c} $ [@Heylen1980]: $$\label{eq:drift_angle} \tan\theta= \frac{\nu_c}{\omega}.$$ The reason for this behavior is as follows: for $ \nu_{c}>\omega $, the muon will not complete a full period of the cycloidal motion before the next collision (blue trajectory in Fig. \[scheme\]), resulting in a large drift angle $ \theta $. If $ \nu_{c}<\omega $, the muon will perform several periods of cycloidal motion between collisions, resulting in a smaller drift angle $ \theta $ (green trajectory in Fig. \[scheme\]). We use this feature to manipulate the direction of the muon drift by changing the collision frequency. The most straightforward way to modify the collision frequency is by changing the gas density. In the transverse compression stage, this is done by keeping the bottom of the apparatus at $ 6 $ K and the top at $ 19 $ K, thus creating a temperature gradient in the vertical ($ y $-) direction. The gas pressure is chosen so that $ \frac{\nu_c}{\omega}=1 $ at $ y=0 $. Muons at different $ y $-positions (at fixed $ x $-position) experience different densities, resulting in different drift directions: a muon stopped at the bottom of the target (higher density) experiences more collisions with He atoms ($ \frac{\nu_c}{\omega}>1 $) and will thus drift predominantly in the $ \hat{E} $-direction (large $ \theta $, upwards), while a muon stopped in the top part (lower density) collides less frequently with He atoms ($ \frac{\nu_c}{\omega}<1 $), resulting in drift predominantly in the [$ \hat{E} \times \hat{B}$]{}-direction (small $ \theta $, downwards). The net result is that muons stopped at different $ y $-positions converge towards $ y=0 $, while simultaneously drifting in the $ +x $-direction. Hence, the muon beam spread is reduced in the $ y $-direction to sub-mm size (transverse compression). After that, the muons are transported to the second stage, which is at room temperature.
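The drift-angle relation above can be sketched numerically. The following minimal Python sketch (illustrative only, not part of the experiment's analysis chain) evaluates the cyclotron frequency for the 5 T field and the drift angle for collision frequencies above, at, and below the matching condition $\nu_c=\omega$:

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19   # elementary charge [C]
M_MU = 1.883531627e-28       # muon mass [kg]

def cyclotron_frequency(b_field):
    """Angular cyclotron frequency omega = e*B/m_mu [rad/s]."""
    return E_CHARGE * b_field / M_MU

def drift_angle_deg(nu_c, omega):
    """Average drift angle relative to the E x B direction,
    from tan(theta) = nu_c / omega."""
    return math.degrees(math.atan2(nu_c, omega))

omega = cyclotron_frequency(5.0)  # B = 5 T as in the muCool setup

# Denser gas (bottom of target) -> more collisions -> larger theta (drift
# towards E, upwards); dilute gas (top) -> smaller theta (downwards).
for label, nu_c in [("bottom, nu_c = 2*omega", 2.0 * omega),
                    ("matched, nu_c = omega ", 1.0 * omega),
                    ("top,    nu_c = omega/2", 0.5 * omega)]:
    print(f"{label}: theta = {drift_angle_deg(nu_c, omega):4.1f} deg")
```

At the matching condition the drift angle is exactly 45 degrees, reproducing the choice $\nu_c/\omega=1$ at $y=0$ described above.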
In this stage, the electric field has a component parallel to the magnetic field and pointing towards $ z=0 $, leading to a muon drift towards the center of the target, thus reducing muon spread in the $ z $-direction (longitudinal compression). The vertical component of the electric field ($ E_y $) at this low density causes an additional drift in the [$ \hat{E} \times \hat{B}$]{}-direction ($ +x $-direction in this case), thus transporting the muons towards the final compression stage. From there, the sub-mm muon beam can be extracted through a small orifice into the vacuum, re-accelerated with pulsed electric fields to keV energies, and extracted out of the magnetic field (see Fig. \[scheme\]). The efficient longitudinal compression of a muon beam, together with an [$ \hat{E} \times \hat{B}$]{}-drift, has already been demonstrated [@Bao2014; @Belosevic2019]. This Letter presents the first demonstration of the muon transverse compression stage of the muCool device. For this demonstration, about $2\cdot10^4~\mu^+$/s at 12.5 MeV/c were injected into the $ 25 $-cm-long target placed in the center of a 5-tesla solenoid. Before entering the target, the muons traversed a 55--thick entrance detector, several thin foils, and a copper aperture, defining the beam injection position. The gas volume of the transverse target was enclosed by a Kapton foil, folded around triangular PVC end-caps. Kapton and PVC are both electrical and thermal insulators, and are thus capable of sustaining high voltage, while keeping the heat transport between top and bottom walls small. The large thermal conductivity of the single-crystal sapphire plates glued to the top and the bottom target walls assured homogeneous temperatures of these walls. The required temperature gradient from 6 to 19 K was produced by heating the top wall with 500 mW power and thermally connecting the bottom wall to a copper cold finger [@Wichmann2016a].
The Kapton foil enclosing the gas volume was lined with metallic electrodes extending along the $ z $-direction. The $ 45^\circ $ electric field was defined by applying appropriate voltages to several of these electrodes (see Fig. \[scheme\]), which were connected to the rest of the electrodes via voltage dividers. Several detectors, A1...A3 in Fig. \[scheme\], were placed around the target to monitor the muon movement by detecting the $\mu^+$-decay positrons. The detectors consisted of plastic scintillator bars with a groove inside which a wavelength-shifting fiber was glued. These 2 m-long wavelength-shifting fibers transported the scintillation light from the cryogenic temperatures in vacuum to room temperature and air, where they were read out by $1.3 \times 1.3$  silicon photomultipliers. The scintillators were mounted inside a collimator to improve their position resolution. The probability of detecting the decay positrons vs. the muon decay position is shown in Fig. \[fig:trans\_detector\_acc\] for the detectors A1 and A2L. By recording time spectra for each detector (with $ t=0 $ given by the muon entrance detector), we can indirectly observe the muon drift in the transverse target. To demonstrate transverse compression we also need to show that the muon beam size decreases during the drift. This is achieved by comparing the time spectra obtained when the temperature gradient ($ 6 $–$ 19 $ K at $ 8.6 $ mbar) is applied, to the time spectra with a negligible temperature gradient ($ 4 $–$ 6 $ K at $ 3.5 $ mbar): in the first case both drift and compression are expected; in the second only drift (“pure drift”). We further increased the contrast between “compression” and “pure-drift” measurements by injecting the muons through either “top hole” or “bottom hole” apertures at $y=\pm4.5$ mm (see Fig. \[scheme\]).
If we were to inject the muons through a single large aperture, the majority of the muons would be stopped close to $y=0$, and would thus drift straight towards the tip for both “compression” and “pure drift.” By contrast, with either of the two smaller apertures, we target only a narrow vertical region of the gas, with distinct density conditions for “compression” and “pure drift.” Geant4 [@Agostinelli2003] simulations of the muon trajectories for “top-hole” and “bottom-hole” injections, and for “compression” and “pure drift,” are shown in Fig. \[fig:trajectories\], with an applied high voltage of $ \mathrm{HV}=5.0 $ kV. The simulation included the most relevant low-energy $\mu^+$ - $\text{He}$ interactions: elastic collisions and charge exchange. The cross sections for these processes were appropriately scaled from the proton-He cross sections [@Belosevic2020; @Taqqu2006]. Simulations show that in the “compression” case, muons injected through both apertures reach the tip of the target efficiently, while in the “pure drift” case, most of the muons stop in the target walls before reaching the tip. The measured time spectra of A1 and A2L under “compression” conditions are presented in Fig. \[fig:time spectra\] (red dots), for both “top-” and “bottom-hole” injection. Note that the time spectra were corrected for muon decay by multiplying the counts by $\exp(t/2.2~\mathrm{\mu s})$. The A1 counts increase with time, for both injection positions. This indicates that the muons were gradually moving towards the tip of the target, [i.e.]{}, towards the A1 acceptance region (see Fig. \[fig:trans\_detector\_acc\]). The A2L counts first increase, then decrease with time, suggesting that the muons first entered, then exited the acceptance region of the detector. After a certain time, the number of counts stays constant in both detectors, implying that the muons reached the target walls.
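The decay correction applied to the time spectra can be sketched as follows; this is a minimal illustration of the $\exp(t/2.2~\mathrm{\mu s})$ reweighting, not the collaboration's analysis code:

```python
import math

TAU_MU_NS = 2200.0  # muon lifetime: 2.2 microseconds, in nanoseconds

def decay_corrected(counts, t_ns):
    """Undo the exponential muon-decay loss in a time-spectrum bin,
    so that the corrected spectrum reflects only the drift dynamics."""
    return counts * math.exp(t_ns / TAU_MU_NS)

# A flat 'true' drift signal observed through muon decay ...
times_ns = (0.0, 1100.0, 2200.0)
observed = [100.0 * math.exp(-t / TAU_MU_NS) for t in times_ns]
# ... is recovered as flat after the correction.
corrected = [decay_corrected(n, t) for n, t in zip(observed, times_ns)]
print(corrected)  # all three bins back to ~100 counts
```

After this reweighting, any remaining time dependence in the spectra can be attributed to muons entering or leaving a detector's acceptance region rather than to decay losses.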
For “bottom-hole” injection, the times at which muons reach the detector acceptance regions are delayed by up to 1500 ns compared to the “top-hole” injection. Indeed, muons injected through the “bottom hole” travel through a region of higher gas density compared to that for “top-hole” injection, leading to a slower drift. All these features are consistent with the simulation results of Fig. \[fig:trajectories\] (left). The “pure-drift” time spectra for “top-hole” injection (black points in Fig. \[fig:time spectra\] (top row)) differ significantly from the “compression” time spectra. The A1 counts remain very low at all times, suggesting that the muons never reached the tip of the target. The A2L counts never decrease after reaching their maximum, implying that the muons did not manage to “fly by” the detector, but stopped in the target walls before leaving the acceptance region of the detector. This is consistent with the simulated trajectories of Fig. \[fig:trajectories\] (right). However, for “bottom-hole” injection, the measured “pure-drift” time spectra (black points, Fig. \[fig:time spectra\] (bottom row)) are almost identical to the “compression” time spectra (red points). This is mainly due to the detector resolution, which is worse for the “bottom-hole” measurements because the muons drift at larger distances from the detectors. To better compare the measurements with the Geant4 simulations, the muon trajectories of Fig. \[fig:trajectories\] were folded with the detector acceptance of Fig. \[fig:trans\_detector\_acc\] to produce the corresponding time spectra. These time spectra (dashed curves in Fig. \[fig:time spectra\]) were then fitted to the measurements using two fit parameters per time spectrum: a normalization, to account for the uncertainties in the detection and stopping efficiencies, and a flat background, to account for beam-related stops in the walls.
To improve the fits, the detector positions were shifted by up to $ 1 $ in the simulation compared to the design value, to account for possible shifts of various parts of the setup during the cool-down, and for mechanical uncertainties. Relatively large reduced chi-squared values, especially for the “pure-drift” measurements, point to systematic discrepancies between the simulation and the measurements. One possible explanation of the discrepancy could be a misalignment (tilt) between the target and the magnetic field axes. Even a small misalignment would shift the initial beam position, particularly affecting the “pure-drift” measurements, as the time and position at which muons crash into the target walls would shift accordingly. In the “compression” measurements, such a misalignment is less problematic, because the temperature gradient ensures that muons reach the tip of the target regardless of their initial position. The effects of such misalignments were investigated by simulating time spectra for various tilts between the target and the magnetic field axes, and fitting them to the data. The best simultaneous fit for “compression” and “pure drift” was obtained by rotating the target axis from the $(0,0,1)$ direction to $(0.018,0.007,0.9998)$ for the “top hole,” and to $(0.019, 0.005, 0.9998)$ for the “bottom hole,” leading to average shifts of the initial muon stopping distribution by up to $ 3.5 $ in the $ xy $-plane. Including such a tilt in the simulation improves the fit significantly (solid curves in Fig. \[fig:time spectra\]). Even better agreement is achieved by allowing different tilts for the “compression” and “pure-drift” measurements, which likely points to a mismatch between the assumed and actual initial muon beam momentum distributions. Further possible improvements of the fit would involve simultaneous fine-tuning of the tilt, the muon momentum distribution, and the detector positions.
However, little further insight would be gained from such a time-consuming optimization, since the simulations presented here already reproduce well the main features of the measured time spectra. Hence, the measurements demonstrate the transverse compression of a muon beam. Next, we investigate the muon drift versus its energy by varying the electric field strength. As explained above, the muon drift angle $ \theta $ is proportional to the average $\mu^+$ - $\text{He}$ collision frequency $\nu_{c}$, which can be written as $$\nu_{c}=N\sigma_{MT}(E_{CM})\left|\vec{v}_{r}\right|,$$ where $N$ is the helium number density, $\sigma_{MT}(E_{CM})$ is the energy-dependent $\mu^+$ - $\text{He}$ momentum transfer cross section and $\vec{v}_{r}$ is the $\mu^+$ - $\text{He}$ relative velocity. For muon energies $ \lesssim 1$ eV, $\sigma_{MT}$ is proportional to $1/\sqrt{E_{CM}}\propto 1/\left|\vec{v}_{r}\right|$ [@Mason1979b], so that the collision frequency is independent of the muon energy. For energies larger than $ 1 $ eV, the product $\sigma_{MT}\left|\vec{v}_{r}\right|$ decreases with energy, as shown in Fig. \[fig:cs\*sqrtE vs E\]. Hence, the collision frequency and the muon drift angle $\theta$ decrease with increasing muon energy. Since the muon energy in the He gas increases with the electric field strength, the muon drift direction approaches the [$ \hat{E} \times \hat{B}$]{}-direction with increasing electric field. The simulated muon trajectories of Fig. \[fig:trans\_newtop\_comp\_sim\_scan\] for various applied HV confirm this behavior. The dependence of the muon motion on the electric field strength was studied experimentally for various beam injections and density conditions. In Fig. \[fig:trans\_newtop\_comp\_data\_scan\] we present the measurements for the “top-hole” and “compression” configuration, which illustrate all relevant features.
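This energy (in)dependence of $\nu_c$ can be illustrated with a toy calculation. The sketch below assumes a purely schematic cross section, $\sigma_{MT}\propto 1/\sqrt{E_{CM}}$ below 1 eV with a steeper fall-off above; the magnitudes and the shape above 1 eV are illustrative, not the real $\mu^+$-He data (which are scaled from proton-He cross sections):

```python
import math

EV_TO_J = 1.602176634e-19  # J per eV
M_MU = 1.883531627e-28     # muon mass [kg]

def sigma_mt(e_cm_ev, sigma_1ev=1.0e-19):
    """Schematic momentum-transfer cross section [m^2]:
    ~1/sqrt(E) below 1 eV, falling faster above (toy shape)."""
    if e_cm_ev <= 1.0:
        return sigma_1ev / math.sqrt(e_cm_ev)
    return sigma_1ev / e_cm_ev

def collision_frequency(n_density, e_cm_ev):
    """nu_c = N * sigma_MT(E_CM) * |v_r|, treating the He atom as at
    rest so the relative speed comes from the muon energy alone."""
    v_r = math.sqrt(2.0 * e_cm_ev * EV_TO_J / M_MU)
    return n_density * sigma_mt(e_cm_ev) * v_r

N_HE = 1.0e23  # He number density [m^-3], an arbitrary illustrative value

# Below 1 eV the 1/sqrt(E) cross section cancels the sqrt(E) velocity,
# so nu_c (and hence the drift angle) is energy independent:
print(collision_frequency(N_HE, 0.1), collision_frequency(N_HE, 0.9))
# Above 1 eV, nu_c decreases with energy:
print(collision_frequency(N_HE, 1.0), collision_frequency(N_HE, 4.0))
```

The cancellation below 1 eV is exact in this toy model: $\sigma_{MT}\left|\vec{v}_{r}\right| \propto E^{-1/2}\cdot E^{1/2} = \text{const}$, which is why the drift angle only starts to depend on the electric field once the muons are heated above the eV range.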
We can see that the maximum number of counts in A1 decreases with decreasing HV, indicating that increasingly fewer muons reach the tip of the target. Moreover, for smaller HV the drift is slower, as visible from the shift of the A2L time spectrum maximum towards later times. This is consistent with the simulated trajectories of Fig. \[fig:trans\_newtop\_comp\_sim\_scan\]. The simulated time spectra for each HV and for each detector were fitted independently to the measurements with two free parameters: normalization and flat background. It was not possible to perform a simultaneous fit for all HV values. We indeed observed that the normalization giving the best fit depends systematically on the HV. This might be caused by the lack of a precise definition of the electric field at the target tip, which would affect most significantly the measurements with strong electric fields, where muons actually reach the tip of the target. A mismatch of the initial muon stop distribution between simulations and experiment would also contribute to this systematic effect. Still, the simulation reproduces correctly the muon drift velocity, which is given by the energy-dependent $\mu^+$ - $\text{He}$ elastic cross section. Indeed, the maxima of the time spectra, occurring when muons arrive in the detector acceptance region, are consistent between measurements and simulations. To summarize, this Letter presents the first demonstration of transverse compression of a muon beam with the muCool device. One critical aspect of this demonstration was distinguishing between a simple muon drift versus drift with simultaneous reduction of the beam transverse size. Such distinction was accomplished by performing the measurements with and without a vertical temperature gradient, which is needed for the transverse compression, and by injecting the muon beam into different density regions of the target. 
The muon motion corresponding to realistic experimental conditions was simulated using the Geant4 package including custom low-energy ($ <1 $ keV) muon processes [@Belosevic2020]. Very good agreement between the simulations and measurements was achieved after accounting for small target and detector misalignments. According to the simulations, under these experimental conditions, a muon beam with an initial diameter of 10 mm and 830 keV energy with 17 keV spread is transformed within 3.5 $\mu$s into a beam of 0.7 mm size (FWHM) in the $ y $-direction and 5 eV energy with 5 eV spread. Besides demonstrating the transverse compression, the dependence of the muon drift on the applied electric field strength was explored experimentally and found to be in fair agreement with Geant4 simulations. This validates our modeling of the low-energy $\mu^+$ - $\text{He}$ elastic collisions. Connecting the transverse compression stage demonstrated in this Letter to the previously demonstrated longitudinal compression stage [@Bao2014; @Belosevic2019] will allow us to realize a high-brightness low-energy muon beam as proposed in [@Taqqu2006], with a phase space compression factor of $ 10^{10} $ and $ 10^{-3} $ efficiency relative to the input beam. The experimental work was performed at the $ \pi E 1 $ beamline at the PSI proton accelerator HIPA. We thank the machine and beam line groups for providing excellent conditions. We gratefully acknowledge the outstanding support received from the workshops and support groups at ETH Zurich and PSI. Furthermore, we thank F. Kottmann, M. Horisberger, U. Greuter, R. Scheuermann, T. Prokscha, D. Reggiani, K. Deiters, T. Rauber, and F. Barchetti for their help. This work was supported by the SNF grants No. 200020\_159754 and 200020\_172639.
- <https://doi.org/10.1016/j.ppnp.2015.06.001>
- <https://doi.org/10.1088/1742-6596/295/1/012032>
- <https://doi.org/10.1088/0954-3899/37/8/085001>
- <https://doi.org/10.1103/PhysRevD.98.113002>
- <https://doi.org/10.1007/s10751-018-1525-z>
- <https://doi.org/10.1142/S2010194514602580>
- <https://doi.org/10.3390/atoms6020017>
- <https://doi.org/10.1103/RevModPhys.57.689>
- <https://doi.org/10.1070/PU1978v021n04ABEH005537>
- <https://doi.org/10.1023/A:1012624501134>
- <https://doi.org/10.1103/PhysRevLett.72.2793>
- <https://doi.org/10.1103/PhysRevLett.97.194801>
- <https://doi.org/10.1049/ip-a-1.1980.003>
- <https://doi.org/10.1103/PhysRevLett.112.224801>
- <https://doi.org/10.1140/epjc/s10052-019-6932-z>
- <https://doi.org/10.1016/j.nima.2016.01.018>
- <https://doi.org/10.1016/S0168-9002(03)01368-8>
- Ph.D. thesis, <https://doi.org/10.3929/ETHZ-B-000402802>
- <http://www-cfadc.phy.ornl.gov/elastic/home.html>
- <https://doi.org/10.1088/0022-3700/12/24/023>
--- bibliography: - 'sw.bib' subtitle: '*(or, how to turn your favorite language into a proof assistant using SMT)*' title: Refinement Reflection ---
--- abstract: 'The approximation power of general feedforward neural networks with piecewise linear activation functions is investigated. First, lower bounds on the size of a network are established in terms of the approximation error and network depth and width. These bounds improve upon state-of-the-art bounds for certain classes of functions, such as strongly convex functions. Second, an upper bound is established on the difference of two neural networks with identical weights but different activation functions.' bibliography: - 'example\_paper.bib' nocite: '[@langley00]' --- Introduction ============ Deep Neural Networks (DNNs) have significantly improved upon traditional state-of-the-art methods in image classification and speech recognition [@12],[@13],[@14], and recently DNNs have appeared promising as decoders [@15]. Solving problems such as image classification and object detection using deep neural networks can be treated as a problem of approximating an unknown function by a DNN. It is well-known that sufficiently large multi-layer feedforward networks can approximate any function with desired accuracy [@7]. An important problem then is to determine the smallest neural network for a given task and accuracy. The standard guideline is the approximation power (also known as expressiveness) of the network, which quantifies the size of the neural network, typically in terms of depth and width, needed to approximate a class of functions within a given error. An extensive body of literature exists on the approximation power of neural networks, see, [*[e.g.]{}*]{}, [@2; @6; @8; @1; @5; @Pascanu+et+al-ICLR2014b]. With the success of deep learning in practical applications in the last decade, numerous papers have recently studied the approximation power in terms of the network depth and width.
In particular, several works provided evidence that deeper networks perform better than shallow ones, given a fixed number of hidden units [@delalleau2011shallow; @Pascanu+et+al-ICLR2014b; @bianchini2014complexity; @2; @3; @8; @1; @5].[^1] A popular activation function is the rectified linear unit (ReLU), partly because of its low complexity when coupled with backpropagation training [@14]. It has, therefore, become of interest to determine the power of neural networks with ReLUs and, more generally, with piecewise linear activation functions. Determining the capacity of a neural network with a piecewise linear activation function typically involves two steps: first, evaluate the number of linear pieces (or break points) that the network can produce and, second, tie this number to the approximation error. The works [@Pascanu+et+al-ICLR2014b; @18] recently showed that a linear increase in depth results in an exponential growth in the number of linear pieces, whereas a linear increase in width results only in a polynomial growth. Accordingly, the approximation capacity exhibits a similar tradeoff between depth and width. For related works with respect to classification error see [@2; @3], and with respect to function approximation error see [@8; @1; @5]. In this paper we consider general feedforward neural networks with piecewise linear activation functions and establish bounds on the size of the network, in terms of the approximation error, the depth $d$, the width, and the dimension of the input space, needed to approximate a given function. We first establish an improved upper bound on the number of break points that such a network can produce, which is a multiplicative factor $d^d$ smaller than the best currently known bound from [@5]. This upper bound is obtained by investigating neuron state transitions as introduced in [@6].
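The exponential-in-depth growth of linear pieces can be observed numerically with the standard "tent" construction (a textbook example, not taken from the works cited above): a two-ReLU tent unit composed with itself $d$ times produces a sawtooth with $2^d$ linear pieces on $[0,1]$.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x):
    """Two-ReLU 'tent' unit on [0, 1]: 2x for x <= 1/2, 2 - 2x for x >= 1/2."""
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def compose_tent(depth):
    """Depth-`depth` network: the tent unit composed with itself."""
    def net(x):
        y = x
        for _ in range(depth):
            y = tent(y)
        return y
    return net

def count_pieces(f, n_samples=200001):
    """Count linear pieces of f on [0, 1] by detecting slope changes on a
    fine grid (adequate for this toy sawtooth, not a general method)."""
    x = np.linspace(0.0, 1.0, n_samples)
    slopes = np.diff(f(x)) / np.diff(x)
    breaks = np.sum(~np.isclose(slopes[1:], slopes[:-1]))
    return int(breaks) + 1

for d in range(1, 5):
    print(d, count_pieces(compose_tent(d)))  # 2, 4, 8, 16 pieces
```

Each additional layer here costs only two hidden units but doubles the number of pieces, whereas adding the same two units to a single hidden layer increases the piece count by at most a constant, which is the depth-width asymmetry discussed above.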
Combining this upper bound with lower bounds in terms of error and dimension, we obtain necessary conditions on the depth, width, error, and dimension for a neural network to approximate a given function. These bounds significantly improve on the corresponding state-of-the-art bounds for certain classes of functions (Theorems \[theorem1\],\[theorem2\] and Corollaries \[weak\],\[corollary1\],\[corollary2\]). The second contribution of the paper (Theorem \[theorem3\]) is an upper bound on the difference of two neural networks with identical weights but different activation functions. This problem is related to “activation function simulation” investigated in [@16], which leverages network topology to compensate a change in activation function. Understanding the tradeoff between depth and width is an important aspect of learning, as it allows one to properly scale neural networks for given tasks. We consider real-valued general feedforward neural networks defined over $\mathbb{R}^n$, with piecewise linear activation functions, and derive necessary conditions on the network depth and number of hidden units to approximate a function within a given error. It is well-known that a neural network with a sufficiently large number of units is capable of approximating any Borel measurable function with an arbitrary degree of accuracy [@7]. In this regard, an interesting question arises: in order to $\varepsilon$-approximate a given function $f$ using a DNN, what constraints must the structure of the employed DNN satisfy? A lower-bound relationship between the depth of the DNN, the number of units, $\varepsilon$, and the function $f$ is found. The impact of replacing the activation functions of a given network with new activation functions (such as quantized versions of the original activation functions) on the approximation power of the network is studied.
For such changes, an upper bound on the maximum change in accuracy is established.\ This study helps to illuminate the inherent power of DNNs. It also helps avoid the blind use of DNNs in real-world applications, where using too many units may result in inefficient training of the network as well as unnecessary computational cost. These conditions (see Theorems \[theorem1\] and \[theorem2\] and their corollaries) are mainly obtained through an improved upper bound, in terms of depth and number of hidden units, on the number of break points a neural network can produce. This upper bound is obtained through a basic approach that iteratively bounds the state changes of the hidden units layer by layer. In a second part we bound the effect of a change of activation function in a given neural network. This bound holds for arbitrary activation functions and depends on the depth, width, smoothness of the activation functions, and the range of the weights. Related works -------------- Work on the expressiveness of neural networks includes the following: - 2014, Pascanu et al., “On the number of response regions of deep feedforward nets”: compares 1 vs. multi-layer networks with the same number of hidden units, ReLU. Main result: depth increases the number of regions exponentially, vs. polynomially for width. Main tool: hyperplane arrangement (geometry). Relation to our work: addresses a similar problem as our upper bound on break points. - 2014, “On the number of linear regions of deep nets”, Montufar: deep vs. 1 layer, constant number of hidden units, ReLU: improves Pascanu et al. on the minimum number of regions that can be produced by a multi-layer NN (see Corollary 6 there). - 2015, “Representation Benefits of Deep Feedforward Networks”, Telgarsky. Deep vs. shallow, ReLU, error with respect to a classification error (not $f$ vs. $\hat f$; the link between the two is that $f$ gives a quantized 0-1 valued function against which we can compare).
Main result states that a deep neural net can classify certain functions error-free, while shallow ones cannot. - 2015, Matus Telgarsky, “Benefits of depth in neural networks”: similar to the previous paper but with a slightly more general conclusion, which holds for semi-algebraic nodes (a slight generalization of piecewise linear activation functions). - 2016, “Learning Functions: When Is Deep Better Than Shallow” by Hrushikesh Mhaskar. Compares approximation of tree-structured functions by deep and shallow networks. Activation function smooth. As with previous results, depth is better, [*[i.e.]{}*]{}, achieves the same accuracy but with fewer parameters, as expected because of the tree structure of the function. The only “surprise” is universality for shallow nets, i.e., sufficiently wide nets can approximate a function with a structure that is not tuned for the network. - 2016, “Deep vs. Shallow Networks: an Approximation Theory Perspective” by Hrushikesh N. Mhaskar and Tomaso Poggio. Focuses on shallow networks and provides achievability upper bounds on (smooth function approximation) error as a function of network fanning. - 2017, Srikant et al., “Why deep neural networks for function approximation?”: very close to our work; regular nets, both upper and lower bounds, sigma2 activation functions; the functions considered are strictly convex functions and polynomials. - Yarotsky 2017: very similar setup to Srikant, but more general functions. It is known that depth-2 neural networks with suitable activation functions and sufficiently many units can approximate any continuous mapping with an arbitrary degree of accuracy [@7]. The behavior of the error term with respect to network parameters such as depth and number of units has received much attention in recent years. Knowing such behavior can lead us to the important parameters in the approximation power of deep neural networks. [@1], [@2], [@3] and [@9] have demonstrated the importance of depth in the approximation power of NNs.
[@8] constructed a composition function that can be approximated by deep neural networks with notably less complexity than by shallow networks, where [@11] introduced the complexity of a neural network as the depth of the network and the number of computation units.\ Since a break point necessarily implies a transition in at least one hidden unit, we count the state *transitions* of hidden units and quantify how these transitions bound the number of break points. Transitions in the state of units for *ReLU* and *hard tanh* activation functions were introduced in [@6]; here we generalize this definition to piecewise linear activation functions and illuminate the relation between transitions and break points. Next, with the help of this upper bound, we provide a series of lower bounds on the network parameters for the $\varepsilon$-approximation of a given function $g$. Larger lower bounds for some classes of functions are provided (see Table \[table1\]). Another interesting question concerns replacing one activation function with another: how does this affect the final output of the network? Theorem \[theorem3\] addresses this problem and provides an upper bound on this difference. ——- \[ to put in related works: \] ——- Next, Propositions \[proposition2\] and \[proposition3\] introduce lower bounds on the number of affine pieces that are required for a convex $\varepsilon$-approximation of a function $f$. [@4] and the proof of Theorem 11 of [@1] provide such a lower bound for the 1-dimensional case, and here we generalize it to multi-dimensional functions. [@6] puts forth the idea that convex affine approximation is the way that neural networks with piecewise linear activation functions approximate multi-dimensional functions.\ Combining the above propositions yields Theorems \[theorem1\] and \[theorem2\], which provide a relationship between the depth and the number of units for the $\varepsilon$-approximation of a specific function $g$.
It is as follows: $$\begin{aligned} &\Big((t-1) \frac{|{{{\mathcal{H}}_f}}| }{d_f}+ 1 \Big)^{d_f}\\ & \geq \sup \limits_{({\boldsymbol{x}},{\boldsymbol{y}}) \in \mathcal{R}^2}^{} \Big \{ \frac{||{\boldsymbol{x}} -{\boldsymbol{y}} ||_2}{4\sqrt{\varepsilon}} \cdot \Psi(g, {\boldsymbol{x}},{\boldsymbol{y}}) \Big\}, \end{aligned}$$ where, if $\alpha_1(\alpha)$ and $\alpha_2(\alpha)$ denote the largest and smallest eigenvalues of the Hessian matrix $\nabla^2 g\big((1-\alpha){\boldsymbol{x}} +\alpha{\boldsymbol{y}} \big)$, respectively, and we set $\gamma(\alpha)=\min\big\{|\alpha_1(\alpha)|,|\alpha_2(\alpha)|\big\}$ and $\delta(\alpha)=\mathrm{sign}\big(\alpha_1(\alpha)\alpha_2(\alpha)\big)$, then $$\Psi(g,{\boldsymbol{x}},{\boldsymbol{y}})= {\sqrt{\inf \limits_{0 \leq \alpha \leq 1}^{}\Big ( {\max \big\{0, \gamma(\alpha)\delta(\alpha) \big\} } \Big)}}.$$ Finding lower bounds on the size of a neural network for the $\varepsilon$-approximation of a specific function or class of functions has been studied in [@5], [@1] and [@10]. The notion of break point was introduced in [@1] for one-dimensional functions and captures a notion similar to the *crossing number* introduced in [@3], which quantifies the number of oscillations of a given function around a given value over the entire input space, as opposed to counting the number of derivative discontinuities along a line. Note also that Theorem \[theorem3\] quantifies the change in the neural network output due to a change of activation function only, while the topology of the network (connectivity and weights) is kept fixed. A related problem, investigated in [@16] as “activation function simulation,” is to understand how the network topology can be leveraged to compensate for a change in activation function. The paper is organized as follows. In Section \[prelim\] we briefly introduce the setup. In Section \[mainresults\] we present the main results, which are then compared with the corresponding ones in the recent literature in Section \[comparison\].
Finally, Section \[analysis\] contains the proofs. Preliminaries {#prelim} ============= Throughout the paper $\mathcal{R}$ denotes a compact convex set in $\mathbb{R}^n$, $n\geq 1$, and ${\mathbb{F}}_\sigma$ denotes the set of feedforward neural networks with input $\mathcal{R}$, output $\mathbb{R}$, and activation function $\sigma: {\mathbb{R}}\rightarrow {\mathbb{R}}$. Feedforward here refers to the fact that the neural network contains no cycles; connections are allowed between non-neighbouring layers. It is assumed that $\sigma$ is a piecewise linear (not necessarily continuous) function with $t\geq 1$ linear pieces. The set of all such activation functions is denoted by $\Sigma_t$. A neural network $f\in\mathbb{F}_\sigma$ consists of a set of input units ${\mathcal{I}}_f$, a set of hidden units ${{\mathcal{H}}_f}$ that operate according to $\sigma$, non-zero weights representing connections, and a single output unit which outputs a weighted sum of its inputs. To simplify the notation we use $f$ to denote both a neural network and the function that it computes. For instance, in the neural network shown in Fig. \[fig1\], we have $\mathcal{I}_f=\{x_1,x_2,x_3\}$ and ${{\mathcal{H}}_f}=\{u_{ij}, ~ \forall i,j\}$. \[dpth\] Given a neural network ${f}\in {\mathbb{F}}_\sigma $, the depth of a hidden unit $h \in {{\mathcal{H}}_f}$, denoted as $d_f(h)$, is the length of the longest path from any $i\in {\mathcal{I}}_f$ to $h$. The depth of $f$ is $$d_f {\overset{\text{def}}{=}}\max \Big \{d_f(h) \big | h \in {{\mathcal{H}}_f}\Big \}.$$ The set of hidden units with depth $i$ is $${{\mathcal{H}}_f}^i {\overset{\text{def}}{=}}\Big \{ h \in {{\mathcal{H}}_f}\big | d_f(h)=i \Big\} .$$ The width of the network is $$\begin{aligned} \label{ell} \omega_f&{\overset{\text{def}}{=}}\frac{|{{{\mathcal{H}}_f}}| }{d_f}{\overset{\text{def}}{=}}\frac{\sum_{i=1}^{d_f}\omega_i}{d_f} \end{aligned}$$ where $$\omega_i{\overset{\text{def}}{=}}|{{\mathcal{H}}_f}^i|.$$ For instance, in Fig.
\[fig1\], the hidden unit $u_{23}$ can be reached by inputs $x_1$ and $x_3$, by following the paths $x_1\rightarrow u_{23} $, $x_3\rightarrow u_{11}\rightarrow u_{23}$, or $x_3\rightarrow u_{12}\rightarrow u_{23}$. Therefore, $d_f(u_{23})=2$. The hidden units of maximum depth are $u_{31}$, $u_{32}$, and $u_{33}$ and hence $d_f=3$, ${{\mathcal{H}}_f}^3=\{u_{31},u_{32},u_{33} \}$ and $\omega_f=8/3$. The following simple inequality is frequently used in the paper. \[bineq\] For any $t\geq 1$, $d_f\geq 1$, and $|{{\mathcal{H}}_f}|\geq 1$ $$((t-1)\omega_f+1)^{d_f}\leq t^{|{{\mathcal{H}}_f}|}.$$ Set $\omega_f=\frac{|{{{\mathcal{H}}_f}}| }{d_f}$ and observe that $$\left((t-1)\frac{|{{{\mathcal{H}}_f}}| }{d_f}+1\right)^{d_f}$$ is a non-decreasing function of $d_f$, that $d_f\leq |{{\mathcal{H}}_f}|$, and that at $d_f=|{{\mathcal{H}}_f}|$ the expression equals $t^{|{{\mathcal{H}}_f}|}$. (Fig. \[fig1\]: a feedforward neural network with inputs $x_1,x_2,x_3$, hidden units $u_{11},u_{12},u_{21},u_{22},u_{23},u_{31},u_{32},u_{33}$, and a single output unit; some connections skip layers.) \[def4\] Function ${f}\in {\mathbb{F}}_\sigma $ is an affine $\varepsilon$-approximation of a function $g:\mathcal{R} \rightarrow \mathbb{R}$ if $$\sup\limits_{{\boldsymbol{x}} \in \mathcal{R}}^{} |f({\boldsymbol{x}} )-g({\boldsymbol{x}} )| \leq \varepsilon.$$ Given $({\boldsymbol{x}},{\boldsymbol{y}}) \in
\mathcal{R}^2$, function $f:\mathcal{R} \rightarrow \mathbb{R}$ admits a *break point* at $\alpha_0\in (0,1)$ relative to the segment $[{\boldsymbol{x}},{\boldsymbol{y}}]$ if the first order derivative of $ f((1-\alpha){\boldsymbol{x}} +\alpha{\boldsymbol{y}} )$ does not exist at $\alpha=\alpha_0$. The total number of break points of $f$ on the (open) segment $]{\boldsymbol{x}},{\boldsymbol{y}}[$ is denoted by $B_{{\boldsymbol{x}}\rightarrow {\boldsymbol{y}}}(f)$. Finally, we let $\bar{B}_{{\boldsymbol{x}}\rightarrow {\boldsymbol{y}}}(f){\overset{\text{def}}{=}}B_{{\boldsymbol{x}}\rightarrow {\boldsymbol{y}}}(f)+1$. Since $f$ is piecewise linear, $\bar{B}_{{\boldsymbol{x}}\rightarrow {\boldsymbol{y}}}(f)$ simply counts the number of linear pieces that $f$ produces as the input ranges from ${\boldsymbol{x}}$ to ${\boldsymbol{y}}$. For notational convenience let $$\label{def3} f_{[{\boldsymbol{x}},{\boldsymbol{y}}]}(\alpha) {\overset{\text{def}}{=}}f((1-\alpha){\boldsymbol{x}}+\alpha{\boldsymbol{y}}) \quad 0\leq \alpha\leq 1.$$ Main Results {#mainresults} ============ Theorems \[theorem1\], \[theorem2\] and Corollaries \[corollary1\], \[corollary2\] provide bounds on the size of a neural network needed to approximate a given function. These bounds are expressed in terms of the approximation error and the width and depth of the network, but hold irrespective of the weights. Recall that connections are allowed between non-neighbouring layers. As a notational convention we use $C^2(\mathcal R)$ to denote the set of functions ${\mathcal{R}} \rightarrow \mathbb{R}$ whose second-order partial derivatives are continuous over $ \mathring{\mathcal{R}}$ (the interior of $\mathcal{R}$). Throughout this section we consider functions $f:\mathcal{R} \rightarrow \mathbb{R}$ where $\mathcal{R}\subseteq \mathbb{R}^n $ is convex.
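As an aside, the depth and width quantities of Definition \[dpth\] are easy to compute mechanically. The following Python sketch is an illustration only; the edge list is our hand transcription of Fig. \[fig1\] (output-unit edges omitted, since only hidden units matter for $d_f$ and $\omega_f$). It recovers the values $d_f(u_{23})=2$, $d_f=3$ and $\omega_f=8/3$ quoted above.

```python
# Illustrative sketch: depth d_f(h), network depth d_f, and width w_f
# for the feedforward DAG of Fig. 1 (edges transcribed by hand from the
# figure; this transcription is our assumption, not part of the paper).
from functools import lru_cache

INPUTS = {"x1", "x2", "x3"}
EDGES = [  # directed edges (from, to)
    ("x3", "u11"), ("x3", "u12"), ("x1", "u23"), ("x2", "u32"),
    ("u11", "u21"), ("u11", "u22"), ("u11", "u23"),
    ("u12", "u23"), ("u12", "u33"),
    ("u21", "u31"), ("u21", "u32"),
    ("u22", "u31"), ("u22", "u33"),
    ("u23", "u32"),
]
HIDDEN = sorted({v for _, v in EDGES})
preds = {h: [u for u, v in EDGES if v == h] for h in HIDDEN}

@lru_cache(maxsize=None)
def depth(h):
    """Length of the longest path from any input to h (Definition [dpth])."""
    if h in INPUTS:
        return 0
    return 1 + max(depth(p) for p in preds[h])

d_f = max(depth(h) for h in HIDDEN)   # network depth
width = len(HIDDEN) / d_f             # w_f = |H_f| / d_f
```

Running this reproduces the worked example: $u_{23}$ has depth 2, the network has depth 3, and the width is $8/3$.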
[\[theorem1\]]{} Let $f \in {\mathbb{F}}_\sigma$, $\sigma \in \Sigma_t $, be an $\varepsilon$-approximation of a function $g\in C^2(\mathcal R)$ and let ${\boldsymbol{x}},{\boldsymbol{y}}\in \mathcal{R}$. Then, $$\begin{aligned} \label{elprimero1} \Big((t-1) \omega_f+ 1 \Big)^{d_f} &\geq\bar{B}_{{\boldsymbol{x}}\rightarrow {\boldsymbol{y}}}(f) \\ &\geq \frac{||{\boldsymbol{x}} -{\boldsymbol{y}} ||_2}{4\sqrt{\varepsilon}}\cdot \Psi(g,{\boldsymbol{x}},{\boldsymbol{y}}), \label{elprimero2} \end{aligned}$$ where $$\begin{aligned} \label{psii} \Psi(g,{\boldsymbol{x}},{\boldsymbol{y}}) &{\overset{\text{def}}{=}}\sqrt{\inf \limits_{0 \leq \alpha \leq 1}^{}\Big ( {\max \big\{0, \gamma(\alpha)\delta(\alpha) \big\} }\Big )},\\ \gamma(\alpha)&{\overset{\text{def}}{=}}\min\big\{|\alpha_1(\alpha)|,|\alpha_2(\alpha)|\big\},\notag\\ \delta(\alpha)&{\overset{\text{def}}{=}}\mathrm{sign}\big(\alpha_1(\alpha)\alpha_2(\alpha)\big),\notag \end{aligned}$$ and where $\alpha_1(\alpha)$ and $\alpha_2(\alpha)$ are the largest and smallest eigenvalues of the Hessian matrix $\nabla^2 g\big( (1-\alpha){\boldsymbol{x}} + \alpha {\boldsymbol{y}} \big)$, respectively. Maximizing the right-hand side of \[elprimero2\] over ${\boldsymbol{x}},{\boldsymbol{y}}$ and using Lemma \[bineq\] we obtain: \[weak\] Under the assumptions of Theorem \[theorem1\] we have $$\begin{aligned} |{{{\mathcal{H}}_f}}| \geq \log \limits_{t}^{} \Biggl( \sup \limits_{({\boldsymbol{x}},{\boldsymbol{y}}) \in \mathcal{R}^2}^{} \Big \{ \frac{||{\boldsymbol{x}} -{\boldsymbol{y}} ||_2}{4\sqrt{\varepsilon}}\cdot \Psi(g, {\boldsymbol{x}},{\boldsymbol{y}}) \Big\} \Biggl). \end{aligned}$$ A function $g:\mathcal R\rightarrow \mathbb R$ that is twice differentiable is said to be strongly convex with parameter $\mu$ if $\nabla^2 g({\boldsymbol{x}})\succeq \mu I$ for all ${\boldsymbol{x}}\in\mathring{\mathcal R}$.
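To make $\Psi$ concrete, the sketch below evaluates $\Psi(g,{\boldsymbol{x}},{\boldsymbol{y}})$ numerically for $g({\boldsymbol{x}})={\boldsymbol{x}}\cdot{\boldsymbol{x}}$ on $[0,1]^2$, where the Hessian is $2I$ and hence $\Psi=\sqrt 2$. The grid over $\alpha$ and the hard-coded Hessian are our own illustrative choices; for a general $g$ the Hessian would be supplied analytically or numerically.

```python
# Illustrative sketch: numerical evaluation of Psi(g, x, y) of Theorem
# [theorem1], for g(x1, x2) = x1^2 + x2^2 (Hessian = 2I, so Psi = sqrt(2)).
import math

def hessian(x1, x2):
    # Hessian of g(x1, x2) = x1^2 + x2^2 (constant in this example)
    return ((2.0, 0.0), (0.0, 2.0))

def eig2(m):
    """Eigenvalues (largest, smallest) of a symmetric 2x2 matrix ((a,b),(b,c))."""
    (a, b), (_, c) = m
    tr, disc = a + c, math.hypot(a - c, 2 * b)
    return (tr + disc) / 2, (tr - disc) / 2

def psi(hess, x, y, steps=100):
    """Grid approximation of sqrt(inf_alpha max{0, gamma(alpha) delta(alpha)})."""
    vals = []
    for k in range(steps + 1):
        alpha = k / steps
        p = tuple((1 - alpha) * xi + alpha * yi for xi, yi in zip(x, y))
        a1, a2 = eig2(hess(*p))
        gamma = min(abs(a1), abs(a2))
        delta = math.copysign(1.0, a1 * a2) if a1 * a2 != 0 else 0.0
        vals.append(max(0.0, gamma * delta))
    return math.sqrt(min(vals))
```

For this $g$, `psi(hessian, (0, 0), (1, 1))` returns $\sqrt 2$, so the right-hand side of \[elprimero2\] is $\sqrt 2\,||{\boldsymbol{x}}-{\boldsymbol{y}}||_2/(4\sqrt\varepsilon)$.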
[ \[corollary1\]]{} Let $f \in {\mathbb{F}}_\sigma$, $\sigma \in \Sigma_t $, be an $\varepsilon$-approximation of a function $g\in C^2(\mathcal R)$ that is strongly convex with parameter $\mu>0$. Then, $$|{{{\mathcal{H}}_f}}| \geq \frac{1}{2}\log_{t}^{} \Big( \frac{\mu \cdot(\mathrm{diam(\mathcal{R})})^2}{16{\varepsilon}} \Big),$$ where $$\mathrm{diam}(\mathcal{R}){\overset{\text{def}}{=}}\sup \limits_{({\boldsymbol{x}},{\boldsymbol{y}})\in \mathcal{R}^2 }^{} ||{\boldsymbol{x}} - {\boldsymbol{y}}||_2.$$ By strong convexity $\Psi(g, {\boldsymbol{x}},{\boldsymbol{y}}) \geq \sqrt{\mu}$. The result then follows from Theorem \[theorem1\] and Lemma \[bineq\]. As an example, consider $g({\boldsymbol{x}})={\boldsymbol{x}}\cdot{\boldsymbol{x}}$ over $[0,1]^n$. The Hessian matrix is $2I_{n \times n}$ and from Corollary \[corollary1\] we get $$|{{{\mathcal{H}}_f}}| \geq \log_2 \Big( \sqrt{ \frac{n}{8 \varepsilon} } \Big).$$ \[corollary2\] Let $\mathcal R=[0,1]^n$. Let $f \in {\mathbb{F}}_\sigma$, $\sigma \in \Sigma_2 $,[^2] be an $\varepsilon$-approximation of a function $g\in C^2(\mathcal R)$ such that $\nabla^2 g({\boldsymbol{x}})\succ 0$ for any ${\boldsymbol{x}}\in \mathring{\mathcal{R}}$. Then, $$|{{\mathcal{H}}_f}| \geq q(g) d_f\varepsilon^ {-\frac{1}{2d_f}}$$ where $q(g)>0$ is a constant that only depends on $g$. From Theorem \[theorem1\] we get $$\begin{aligned} &\Big(\frac{|{{\mathcal{H}}_f}|}{d_f}+ 1 \Big)^{d_f} \geq \frac{c(g)}{\sqrt{\varepsilon}} ,\end{aligned}$$ where $c(g)>0$ is some strictly positive constant, since the Hessian of $g$ is positive definite everywhere over $ \mathring{\mathcal{R}}$. Since $|{{{\mathcal{H}}_f}}|/{d_f}\geq 1$ the above inequality implies $$\begin{aligned} \Big(2\frac{|{{\mathcal{H}}_f}|}{d_f}\Big)^{d_f} \geq \frac{c(g)}{\sqrt{\varepsilon}}. \end{aligned}$$ Since $\frac{1}{2}c(g)^{\frac{1}{d_f}}\geq q(g)$ where $q(g)=\frac{1}{2}\min(c(g),1)$, the above inequality yields the desired result. \[theorem2\] Let $\mathcal R =[0,1]^n$.
Let ${f}\in {\mathbb{F}}_\sigma$, $\sigma \in \Sigma_t $, be an $\varepsilon$-approximation of a function $g:\mathcal R \rightarrow \mathbb{R}$ such that $|D^{J}(g)({\boldsymbol{x}})| \leq \delta$ for any ${\boldsymbol{x}} \in [0,1]^n $ and any multi-index[^3] $J $ such that $|J|=3$. Then, $$\big((t-1) \omega_f+ 1 \big)^{d_f} \geq \sqrt{\frac{\Big ( \max \limits_{ {\boldsymbol{x}} \in [0,1] ^ n } ^ {} \big| {\Delta(g)({\boldsymbol{x}})}\big| n^{-1} - \delta n^\frac{3}{2} \Big)^+}{16\varepsilon}},$$ where $$\Delta (g)({\boldsymbol{x}}) = \sum\limits_{k=1}^n \frac{\partial^2 g}{\partial x_k^2}, \label{eq:laplacian}$$ is the Laplacian of $g$ and where $a^+=\max(a,0)$. For instance, approximating $$g(x_1,x_2)=10x_1^2+x_1^2x_2^2+10x_2^2$$ over $[0,1]^2$ requires at least $\log_t \Big( \frac{0.82}{\sqrt{\varepsilon}} \Big) $ hidden units (combine Theorem \[theorem2\] with Lemma \[bineq\]). Whether it is Theorem \[theorem1\] or Theorem \[theorem2\] that provides the better approximation bound depends on $g$. For instance, for $g_1(x_1,x_2)= 20x_1^2-2x_2^2+x_1^2x_2^2$ Theorem \[theorem1\] gives a trivial (zero) lower bound, since the two eigenvalues of the Hessian matrix $\nabla^2(g_1)$ always have different signs. Theorem \[theorem2\] instead gives $\frac{0.737}{\sqrt{\varepsilon}}$. On the other hand, for $g_2(x_1,x_2)=10x_1^2+10x_2^2+x_1^2x_2^2$ Theorem \[theorem1\] gives the lower bound $\frac{1.37}{\sqrt{\varepsilon}}$ while Theorem \[theorem2\] gives $\frac{0.82}{\sqrt{\varepsilon}}$. The next theorem quantifies the effect of a change of activation function on the output of a neural network whose topology and weights are kept fixed. Here, the activation functions need not be piecewise affine. [\[theorem3\]]{} Let $f_1\in\mathbb{F}_{\sigma_1}$ and $f_2\in\mathbb{F}_{\sigma_2}$ be two neural networks with identical architectures and weights.
Suppose that $\sigma_1$ is a $\delta$-Lipschitz continuous function and suppose that the weights belong to some bounded interval $[-A,+A]$, $A>0$. Then, $$\label{mism} ||f_1- f_2||_{\infty} \leq \frac{||\sigma_1 -\sigma_2||_{\infty}}{\delta} \Bigg( \Big(\delta\cdot A \cdot \omega_f +1\Big)^{d_f} -1 \Bigg ).$$ A slightly weaker version of \[mism\] is $$\begin{aligned} ||f_1- f_2||_{\infty} \leq \frac{||\sigma_1 -\sigma_2||_{\infty}}{L} \Bigg( \Big(L^2\cdot \omega_f +1\Big)^{d_f} -1 \Bigg ),\end{aligned}$$ where $L=\max\{A,\delta\}$ denotes the *Lipschitz-bound* defined in  [@16]. As an illustration of Theorem \[theorem3\] consider a feedforward neural network $f_1$ with $100$ hidden units, a maximum depth of $5$, and the *sigmoid* as activation function. Suppose the weights belong to the interval $[-1,1]$. Replacing the sigmoid with a $32$-bit quantized version results in an error of at most $0.0001$, as can readily be obtained from Theorem \[theorem3\] with $\delta=\frac{1}{4}, A=1, ||\sigma_1-\sigma_2||_\infty=2^{-32}$. Comparison with Previous Works {#comparison} ============================== Consider first the inequality \[elprimero1\]. Restricting attention to neural networks with $d$ hidden layers, at most $\omega$ units per layer, and where connections are allowed only between neighbouring layers, this inequality gives $${\label{rel1}} \bar{B}_{{\boldsymbol{x}}\rightarrow {\boldsymbol{y}}}(f)\leq \Big((t-1) \omega+ 1 \Big)^{d}.$$ This is to be compared with the previously best known bound (Lemma $3.2$ in [@3]) $$2( 2 (t-1) \omega )^d,$$ which is larger by a multiplicative factor that is exponential in $d$ whenever $\omega>1 $, $t\geq 2$. For $n=1$, Lemma 2.1 in [@2] gives $(t\omega)^d$, which still differs from \[rel1\] by a multiplicative factor that is exponential in $d$ for $\omega>1 $, $t\geq 2$.
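The gap between these break-point bounds can be checked numerically. The sketch below compares the three expressions for sample values of $t$, $\omega$ and $d$ chosen by us; it is an illustration, not part of the proofs.

```python
# Illustrative sketch: comparing the break-point upper bound of [rel1],
# ((t-1)w + 1)^d, with the earlier bounds 2(2(t-1)w)^d (Lemma 3.2 of [3])
# and (t w)^d (Lemma 2.1 of [2], n = 1) for layered networks.
def bound_new(t, w, d):
    # this paper's bound on the number of linear pieces
    return ((t - 1) * w + 1) ** d

def bound_old_general(t, w, d):
    # Lemma 3.2 of [3]
    return 2 * (2 * (t - 1) * w) ** d

def bound_old_1d(t, w, d):
    # Lemma 2.1 of [2] (one-dimensional input)
    return (t * w) ** d
```

For ReLU-like activations ($t=2$) with $\omega=4$ and $d=3$, the three bounds are $125$, $1024$ and $512$ respectively, and the ratio between the older bounds and \[rel1\] grows exponentially with $d$.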
For general feedforward neural networks the previously best known bound (see Lemma 4 of [@5]) was $$\bar{B}_{{\boldsymbol{x}}\rightarrow {\boldsymbol{y}}}(f) \leq \Big(t\cdot \omega \cdot d_f \Big)^{d_f},$$ which is a multiplicative factor ${d_f}^{d_f}$ larger than the bound in \[elprimero1\]. Now consider the approximation power of neural networks, in terms of the number of hidden units required to approximate a given function within a given error. Theorem 11 in [@1] states that approximating a function $[0,1]^n \rightarrow \mathbb{R}$, assumed to be differentiable and strongly convex with parameter $\mu$, with a neural network $f$ requires $$|{{\mathcal{H}}_f}|\geq \frac{1}{2}\log_2 \big( \frac{\mu}{16\varepsilon} \big),$$ regardless of the dimension $n$. Corollary \[corollary1\] improves this bound to $$\frac{1}{2}\log_2 \big( \frac{\mu \cdot n}{16\varepsilon} \big),$$ which incorporates the dimension as well, albeit with an arguably small dependency on it. \[table1\] [lcc]{} & Previous & This paper\ Regular: & [@3] &(Theorem \[theorem1\])\ $\bar{B}_{{\boldsymbol{x}}\rightarrow {\boldsymbol{y}}}(f)\leq $ & $2( 2 (t-1) \omega )^d $ & $ \Big((t-1) \omega+1 \Big)^{d}$\ General: & [@5] &(Theorem \[theorem1\])\ $\bar{B}_{{\boldsymbol{x}}\rightarrow {\boldsymbol{y}}}(f)\leq $ &$\Big(t\cdot \omega \cdot d_f \Big)^{d_f}$ & $ \Big((t-1) \omega_f+1 \Big)^{d_f}$\ $g\in C^2([0,1]^n)$ & &\ over $\mu$-convex &[@1] & (Corollary \[corollary1\])\ $|{{\mathcal{H}}_f}|\geq $ & $\frac{1}{2}\log_2 \big( \frac{\mu}{16\varepsilon} \big)$ & $\frac{1}{2}\log_2 \big( \frac{\mu \cdot n}{16\varepsilon} \big) $\ $g\in C^2([0,1]^n)$ & &\ $\text{Hess}(g){\succ} 0$, $\Sigma_2$ &[@5] &(Corollary \[corollary2\])\ $|{{\mathcal{H}}_f}|\geq$ & $q_1\varepsilon^{\frac{-1}{2d_f}}$ & $d_f q_2 \varepsilon^{\frac{-1}{2d_f}}$\ $ \mathcal{W}^{m,\infty}\big([0,1]^n\big)$ & $\Omega(\varepsilon^{-\frac{n}{2m}})$ & -\ Next let us consider approximating a function belonging to the following Sobolev space: $$\begin{aligned}
\mathcal{W}^{m,\infty}&\big([0,1]^n\big){\overset{\text{def}}{=}}\{ f \in L^{\infty}([0,1]^n): D^\alpha(f) \in L^{\infty}([0,1]^n) \\&\text{for any multi-index $\alpha$ such that } |\alpha| \leq m \}. \end{aligned}$$ Theorem 4 of [@5] states that to approximate a function in $ \mathcal{W}^{m,\infty}\big([0,1]^n\big)$ with ReLU’s we need $$\label{rel3} |{{\mathcal{H}}_f}|\geq \Omega(\varepsilon^{-\frac{n}{2m}}).$$ This is an order bound: by contrast with the bounds provided in this paper, its constant term is implicit, and finding its optimal value requires an optimization over the space $C^{\infty}$ of infinitely differentiable functions, which is infinite-dimensional. Theorem \[theorem1\] instead proposes an explicit lower bound, and finding its exact value only requires solving a finite-dimensional optimization problem. We are unable to improve this lower bound, except when $m=\infty$. In this case the right-hand side of \[rel3\] no longer depends on the error $\varepsilon$, so the bound of [@5] fails to provide a meaningful relation between the number of units and the error term for infinitely differentiable functions. Moreover, for a special class of functions the lower bound of [@5] behaves as $(\frac{1}{\varepsilon})^{c}$ for some $c>0$, while for strongly convex functions this paper establishes a $\log(\frac{1}{\varepsilon})$ dependence on the error term. Corollary \[corollary2\] provides a lower bound for ReLU-type networks in terms of the error, the depth, and a constant term which only depends on $g$. This bound can be compared with the bound of Theorem 6 in [@5], which is of order $\varepsilon^{-\frac{1}{2d_f}}$.[^4] Hence, Corollary \[corollary2\] provides a linear (in $d_f$) improvement, which is particularly relevant in the deep regime where $d_f=\Omega(\log(1/\varepsilon))$. Table \[table1\] summarizes the above discussion.
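To see why the deep regime matters, note that the Corollary \[corollary2\] bound $q(g)\,d_f\,\varepsilon^{-1/(2d_f)}$, viewed as a function of $d_f$, is smallest when $d_f\approx\tfrac{1}{2}\ln(1/\varepsilon)$. The sketch below checks this numerically with $q(g)=1$ (the constant only rescales the curve) and $\varepsilon=10^{-6}$; both choices are ours, for illustration only.

```python
# Illustrative sketch: the Corollary [corollary2]-style lower bound
# q * d * eps^(-1/(2d)) as a function of the depth d, with q = 1.
import math

def lower_bound(d, eps, q=1.0):
    return q * d * eps ** (-1.0 / (2 * d))

eps = 1e-6
best_d = min(range(1, 101), key=lambda d: lower_bound(d, eps))
target = math.log(1 / eps) / 2   # stationary point of d * exp(ln(1/eps)/(2d))
```

The integer minimizer `best_d` lands next to `target`, i.e., in the regime $d_f=\Theta(\log(1/\varepsilon))$ where the extra factor $d_f$ in Corollary \[corollary2\] is most significant.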
To the best of our knowledge Theorem \[theorem3\] is the first result to bound the effect of a change in the activation function for a given network topology and weights. Noteworthy perhaps, this bound is essentially universal in the weights, since it only depends on their range. Finally, compared to the cited papers it should perhaps be stressed that the proofs here (see next section) are relatively elementary, [*[e.g.]{}*]{}, they do not hinge on VC-dimension analysis, and hold true for general feedforward networks. Analysis ======== We first establish a few lemmas to prove Proposition \[proposition1\], which provides an upper bound on the number of break points. Then we establish Propositions \[proposition2\] and \[proposition3\], which give lower bounds on the number of break points in terms of the approximation error. Combining these propositions gives Theorems \[theorem1\] and \[theorem2\]. Finally, we prove Theorem \[theorem3\]. Given ${f}\in {\mathbb{F}}_\sigma $ and ${\mathcal{U}}\subseteq {{\mathcal{H}}_f}$ we define the set of hidden units that lie on a path between the input and ${\mathcal{U}}$ as $$\mathrm{in}({\mathcal{U}}){\overset{\text{def}}{=}}\Big \{ v \in {{\mathcal{H}}_f}\backslash {\mathcal{U}}| \exists i \in {\mathcal{I}}_f, u \in {\mathcal{U}}\: \mathrm{s.t.} \: v \in (i \rightarrow u) \Big \}$$ where $(i\rightarrow u)$ denotes the set of intermediate hidden nodes on a path from $i$ to $u$. For instance, in Fig. \[fig1\] we have $$\mathrm{in}(\{u_{32} \})=\{u_{11},u_{12},u_{21},u_{23} \}.$$ The following lemma follows from the above definition. \[inin\] Given ${\mathcal{U}}\subseteq {{\mathcal{H}}_f}$ we have $$\mathrm{in}\big(\mathrm{in}({\mathcal{U}})\big)=\emptyset$$ and $$\mathrm{in}(u) \subseteq ({\mathcal{U}}\cup \mathrm{in}({\mathcal{U}}))$$ for any $ u \in {\mathcal{U}}$.
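Since every hidden unit of Fig. \[fig1\] is reachable from an input, $\mathrm{in}({\mathcal{U}})$ coincides with the set of hidden ancestors of ${\mathcal{U}}$ minus ${\mathcal{U}}$ itself, which makes it easy to compute. The sketch below does this for the graph of Fig. \[fig1\] (the edge list is our hand transcription of the figure) and reproduces $\mathrm{in}(\{u_{32}\})=\{u_{11},u_{12},u_{21},u_{23}\}$.

```python
# Illustrative sketch: computing in(U) for the network of Fig. 1
# (edges transcribed by hand from the figure; output edges omitted).
INPUTS = {"x1", "x2", "x3"}
EDGES = [
    ("x3", "u11"), ("x3", "u12"), ("x1", "u23"), ("x2", "u32"),
    ("u11", "u21"), ("u11", "u22"), ("u11", "u23"),
    ("u12", "u23"), ("u12", "u33"),
    ("u21", "u31"), ("u21", "u32"),
    ("u22", "u31"), ("u22", "u33"),
    ("u23", "u32"),
]
preds = {}
for u, v in EDGES:
    preds.setdefault(v, set()).add(u)

def ancestors(h):
    """All units (inputs included) on some path from an input to h, excluding h."""
    seen, stack = set(), list(preds.get(h, ()))
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(preds.get(v, ()))
    return seen

def in_set(U):
    """in(U): hidden units strictly between the inputs and U."""
    return (set().union(*(ancestors(u) for u in U)) - INPUTS) - set(U)
```

Note that `in_set({"u11", "u12"})` is empty, consistent with these two units forming the first hidden layer.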
Piecewise linear activation functions {#linear-piece-activation-functions .unnumbered} --------------------------------- In this section we restrict ourselves to the set of piecewise linear activation functions with $t\geq 1$ linear pieces, which we denote as $\Sigma_{t}$. For instance, *Rectified Linear Unit (ReLU)* or *binary step* activation functions belong to $\Sigma_2$. \[def2\] Any $\sigma\in \Sigma_{t} $ partitions the real line (its input) into $t$ intervals $I_1,I_2,...,I_t$ such that on each of these intervals $\sigma$ is affine. The state of a unit with activation function $\sigma$ is defined to be $s\in \{1,2,\ldots,t\}$ if its input belongs to $I_s$. By extension, the state of ${\mathcal{U}}\subseteq {{\mathcal{H}}_f}$ is defined to be the vector of length $|{\mathcal{U}}|$ whose components are the states of the units in ${\mathcal{U}}$. The following definition is inspired by the notion of pattern transition introduced in [@6]: Let $f\in \mathbb F_{\sigma}$, ${\mathcal{U}}\subseteq {{\mathcal{H}}_f}$ and ${\boldsymbol{x}},{\boldsymbol{y}} \in \mathcal{R}$. Let ${\boldsymbol{z}}_\alpha=(1-\alpha){\boldsymbol{x}}+\alpha{\boldsymbol{y}}$ be a parametrization of the line segment $[{\boldsymbol{x}},{\boldsymbol{y}}]$ as $\alpha$ goes from $0$ to $1$. We say that the state of ${\mathcal{U}}$ experiences a transition at point ${\boldsymbol{z}}_{\alpha^*}$ for some $\alpha^*\in (0,1]$ if the state vector of ${\mathcal{U}}$ changes at ${\boldsymbol{z}}_{\alpha^*}$ while the state vector of $\text{in}({\mathcal{U}})$ does not change at ${\boldsymbol{z}}_{\alpha^*}$. The number of state transitions of ${\mathcal{U}}$ on the segment $[{\boldsymbol{x}},{\boldsymbol{y}}]$, denoted by $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} ({\mathcal{U}})$, is the number of such transitions as the input moves from ${\boldsymbol{x}}$ to ${\boldsymbol{y}}$ along ${\boldsymbol{z}}_\alpha$.
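For intuition, the sketch below counts state transitions along a 1-D segment for a toy depth-1 network with two ReLU units (the thresholds $0.3$ and $0.6$ are arbitrary choices of ours). Since $\mathrm{in}({\mathcal{U}})=\emptyset$ for depth-1 units, transitions are just raw state changes here, and the two transitions match the two break points of $f(x)=\mathrm{relu}(x-0.3)-2\,\mathrm{relu}(x-0.6)$.

```python
# Illustrative sketch: state transitions of two ReLU units (Sigma_2:
# states "pre-activation < 0" / ">= 0") along z_alpha, sampled on a grid.
def states(x):
    # state vector of the two hidden units at input x
    return (x - 0.3 >= 0, x - 0.6 >= 0)

def num_transitions(x, y, steps=10_000):
    """Grid count of state-vector changes as alpha runs from 0 to 1."""
    n, prev = 0, states(x)
    for k in range(1, steps + 1):
        cur = states((1 - k / steps) * x + (k / steps) * y)
        n += cur != prev
        prev = cur
    return n
```

On $[0,1]$ the state vector changes at $x=0.3$ and $x=0.6$, giving $N_{{\boldsymbol{x}}\rightarrow{\boldsymbol{y}}}=2$, in line with $B_{{\boldsymbol{x}}\rightarrow{\boldsymbol{y}}}(f)=2$ for this $f$.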
If $\mathrm{in}({\mathcal{U}})=\emptyset$, then $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} ({\mathcal{U}})$ is simply the number of state changes of ${\mathcal{U}}$ as the input moves from ${\boldsymbol{x}}$ to ${\boldsymbol{y}}$. Note that if the state vectors of both ${\mathcal{U}}$ and $\text{in}({\mathcal{U}})$ change at the same $\alpha$, this change does not count towards $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} ({\mathcal{U}})$. For example, consider the neural network $f$ in Fig. \[fig1\]. Suppose that ${\mathcal{U}}=\{u_{11}, u_{12}\}$ and suppose that the states of $u_{11}$ and $u_{12}$ change exactly once along the segment ${\boldsymbol{z}}_\alpha$ for some ${\boldsymbol{x}}$ and ${\boldsymbol{y}}$, at $\alpha_1$ and $\alpha_2$ respectively. Then $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} (\{u_{11}\})=1$ and $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} (\{u_{12}\})=1$. If $\alpha_1=\alpha_2$ then $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} ({\mathcal{U}})=1$, otherwise $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} ({\mathcal{U}})=2$. If ${\mathcal{U}}'=\{u_{21}, u_{22}, u_{23}\}$, and the state of each of $u_{21}$, $u_{22}$ and $u_{23}$ changes exactly once at either $\alpha_1$ or $\alpha_2$, then $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} ({\mathcal{U}}')=0$ since the state vector of $\mathrm{in}({\mathcal{U}}')={\mathcal{U}}$ has also changed at both $\alpha_1$ and $\alpha_2$.
\[lemma2\] Given ${f}\in {\mathbb{F}}_\sigma $ and ${\mathcal{U}}_1,{\mathcal{U}}_2\subseteq {{\mathcal{H}}_f}$ such that $\mathrm{in}({\mathcal{U}}_2)=\emptyset$ and $ \mathrm{in}({\mathcal{U}}_1) \subseteq {\mathcal{U}}_2 $, we have $$N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} \Big( {\mathcal{U}}_1 \cup {\mathcal{U}}_2\Big) \leq N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} \Big( {\mathcal{U}}_1\Big) + N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} \Big( {\mathcal{U}}_2\Big).$$ Suppose $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} \Big( {\mathcal{U}}_1 \cup {\mathcal{U}}_2\Big)$ increases by one at $\alpha=\alpha^*$. If ${\mathcal{U}}_2$ undergoes a state transition at $\alpha^*$ then, because $\mathrm{in}({\mathcal{U}}_2)=\emptyset$, we have that $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} \Big( {\mathcal{U}}_2\Big) $ also increases by one at $\alpha^*$. Instead, if no state change happens in ${\mathcal{U}}_2$ at $\alpha^*$ then, due to the state change of $ {\mathcal{U}}_1 \cup {\mathcal{U}}_2$ at $\alpha^*$, the state of ${\mathcal{U}}_1$ must change as well at $\alpha^*$. Since $\mathrm{in}({\mathcal{U}}_1) \subseteq {\mathcal{U}}_2$ and no change in the state of ${\mathcal{U}}_2$ is observed at $\alpha^*$ we have that $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} \Big( {\mathcal{U}}_1\Big)$ necessarily increases by one at $\alpha^{*}$. \[lemma3\] Given ${f}\in {\mathbb{F}}_\sigma $ and ${\mathcal{U}}_1,{\mathcal{U}}_2\subseteq{{\mathcal{H}}_f}$ such that ${\mathcal{U}}_1 \subseteq {\mathcal{U}}_2$ and $\mathrm{in}({\mathcal{U}}_2)=\emptyset$ we have $$N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} \Big( {\mathcal{U}}_1\Big) \leq N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} \Big( {\mathcal{U}}_2\Big).$$ Suppose $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} \Big( {\mathcal{U}}_1\Big)$ increases by one at $\alpha^*$. 
Since ${\mathcal{U}}_1 \subseteq {\mathcal{U}}_2$, the state of ${\mathcal{U}}_2$ changes as well at $\alpha^*$. Since $\mathrm{in}({\mathcal{U}}_2)=\emptyset$ we deduce that $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} \Big( {\mathcal{U}}_2\Big)$ increases at $\alpha^*$ by one, thereby concluding the proof. \[lemma4\] Given ${f}\in {\mathbb{F}}_\sigma $, for any ${\mathcal{U}}\subseteq {{\mathcal{H}}_f}$ we have $$N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} ({\mathcal{U}}) \leq \sum \limits_{u \in {\mathcal{U}}}^{} N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(u).$$ Suppose that $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} ({\mathcal{U}})$ increases by one at $\alpha^{*}$. Let ${\mathcal{V}}\subseteq {\mathcal{U}}$ be the set of units that experience a transition at $\alpha^{*}$. Since we have a transition in the state of ${\mathcal{U}}$ at $\alpha^{*}$, we have ${\mathcal{V}}\neq \emptyset$. Now, because the neural network is cycle-free,[^5] there exists some $v \in {\mathcal{V}}$ such that $\mathrm{in}(v) \cap {\mathcal{V}}= \emptyset$. We claim that the state of $\mathrm{in}(v)$ has not changed at $\alpha^{*}$. To prove this, note that by Lemma \[inin\] we have $\mathrm{in}(v) \subseteq \mathrm{in}({\mathcal{U}}) \cup {\mathcal{U}}$, and since $\mathrm{in}(v) \cap {\mathcal{V}}= \emptyset$ we deduce that $\mathrm{in}(v) \subseteq \mathrm{in}({\mathcal{U}}) \cup ({\mathcal{U}}\backslash {\mathcal{V}}).$ On the other hand, neither ${\mathcal{U}}\backslash {\mathcal{V}}$ nor $\mathrm{in}({\mathcal{U}})$ has a transition at $\alpha^{*}$. This implies that $\mathrm{in}(v)$ has no transition at $\alpha^{*}$ and therefore $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(v)$ increases by one at $\alpha^{*}$. This concludes the proof since $v\in {\mathcal{U}}$.
\[lemma5\] Given ${f}\in {\mathbb{F}}_\sigma $, for any $u\in {{\mathcal{H}}_f}$ we have $$N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} (u) \leq (t-1) \Big( N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} (\mathrm{in}(u))+1 \Big).$$ To establish the lemma we show that between transitions of $\mathrm{in}(u)$ there are at most $t-1$ transitions of $u$. Suppose, by way of contradiction, that at least $t$ transitions in the state of $u$ happen while $\mathrm{in}(u)$ experiences no change. Then there exists an increasing sequence of real numbers $\alpha_1,...,\alpha_{t+1}$ in the interval $[0,1]$ and a sequence of integers $k_1,k_2,...,k_{t+1}$ from $S=\{1,2,...,t\}$, with $k_i\ne k_{i+1}$, such that for some ${\boldsymbol{w}}\in \mathbb{R}^n$ and $b\in \mathbb{R} $ (the affine map feeding $u$ while the state of $\mathrm{in}(u)$ is fixed) we have $$\begin{aligned} &{\boldsymbol{x_i}} {\overset{\text{def}}{=}}(1-\alpha_i){\boldsymbol{x}}+\alpha_i{\boldsymbol{y}}\\ &{\boldsymbol{w}}\cdot{\boldsymbol{x_i}}+ b \in I_{k_i} \\ \end{aligned}$$ where the intervals $I_{s}$ are defined in Definition \[def2\]. Since $|S|=t$ there exist $i<j$ such that $k_i=k_j$. Now since $k_i \neq k_{i+1}$ we deduce that $j \neq i+1$ and therefore $j > i+1$. But ${\boldsymbol{w}}\cdot{\boldsymbol{x_{i+1}}}+b$ lies between ${\boldsymbol{w}}\cdot{\boldsymbol{x_i}}+b$ and ${\boldsymbol{w}}\cdot{\boldsymbol{x_j}}+b$, since the sequence $\alpha_1,\alpha_2,...,\alpha_{t+1}$ is increasing. Since ${\boldsymbol{w}}\cdot{\boldsymbol{x_j}}+b$ and ${\boldsymbol{w}}\cdot{\boldsymbol{x_i}}+b$ belong to $I_{k_i}$, by the connectedness of the interval $I_{k_i}$ we deduce that ${\boldsymbol{w}}\cdot{\boldsymbol{x_{i+1}}}+b \in I_{k_i}$. Therefore we get $k_{i+1}=k_i=k_j$, a contradiction.
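The counting argument above can be checked mechanically: while $\mathrm{in}(u)$ is in a fixed state, the pre-activation ${\boldsymbol{w}}\cdot{\boldsymbol{z}}_\alpha+b$ of $u$ is affine in $\alpha$, so it visits the ordered intervals $I_1,\dots,I_t$ monotonically and crosses at most $t-1$ boundaries. The sketch below samples this on a grid; the specific $w$, $b$ and boundaries are our own illustrative choices.

```python
# Illustrative sketch: an affine pre-activation w*alpha + b changes state
# at most t - 1 times when the t intervals I_1 < ... < I_t are delimited
# by t - 1 boundaries (Lemma [lemma5] with in(u) frozen).
import bisect

def state(pre, boundaries):
    """Index of the interval I_s containing the pre-activation value."""
    return bisect.bisect_right(boundaries, pre)

def num_changes(w, b, boundaries, steps=10_000):
    """Grid count of state changes of w*alpha + b as alpha runs over [0, 1]."""
    changes, prev = 0, state(b, boundaries)
    for k in range(1, steps + 1):
        cur = state(w * (k / steps) + b, boundaries)
        changes += cur != prev
        prev = cur
    return changes
```

With $t=4$ pieces (boundaries $-1,0,1$) and the pre-activation sweeping from $-2$ to $3$, exactly $t-1=3$ changes occur, and no affine sweep can produce more.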
Since a break point of ${f}\in {\mathbb{F}}_\sigma$ necessarily implies a change in the state of the units we get: \[lemma6\] Given $({\boldsymbol{x}},{\boldsymbol{y}}) \in \mathcal{R}^2$ and ${f}\in {\mathbb{F}}_\sigma$ we have $$B_ {{\boldsymbol{x}}\rightarrow {\boldsymbol{y}}}({f}) \leq N_{{\boldsymbol{x}}\rightarrow {\boldsymbol{y}}}({{\mathcal{H}}_f}).$$ The first result, Proposition \[proposition1\], provides an upper bound on the maximum number of break points that can be produced by a general feedforward neural network with a given depth and a given number of hidden units. The second set of results (Theorems \[theorem1\], \[theorem2\] and their corollaries) relates the number of break points of a neural network to its error in approximating a given function. Throughout this section we consider functions $f:\mathcal{R} \rightarrow \mathbb{R}$ where $\mathcal{R}\subseteq \mathbb{R}^n $ is convex. Propositions \[proposition1\] and \[proposition2\] establish inequalities \[elprimero1\] and \[elprimero2\] of Theorem \[theorem1\]. \[proposition1\] Given ${f}\in {\mathbb{F}}_\sigma$, $\sigma \in \Sigma_t$, we have $$\label{upb} B_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}({f}) \leq \Big(\big(t-1 \big)\omega_f+1\Big)^{d_f} - 1.$$ Fix ${f}\in {\mathbb{F}}_\sigma$ where $\sigma \in \Sigma_t$. Referring to Definition \[dpth\], consider the partition $$\cup_{i=1}^d{{\mathcal{H}}_f}^i$$ of ${{\mathcal{H}}_f}$ according to unit depth, where $d=d_f$. Fix $u \in {{\mathcal{H}}_f}^{i+1}$, $0 \leq i < d$.
From the definitions of $\mathrm{in}(u)$ and ${{\mathcal{H}}_f}^i$ we get $$\begin{aligned} \label{eq1} &\mathrm{in}(u) \subseteq \bigcup \limits_{j=1}^{i}{{\mathcal{H}}_f}^{j} \\ & \mathrm{in}\Big({{\mathcal{H}}_f}^{i+1} \Big) \subseteq \bigcup\limits_{j=1}^{i} {{\mathcal{H}}_f}^{j}\nonumber \\ &\mathrm{in}\Big(\bigcup\limits_{j=1}^{i} {{\mathcal{H}}_f}^{j} \Big)=\emptyset . \nonumber \end{aligned}$$ Applying Lemma \[lemma2\] with ${\mathcal{U}}_1={{\mathcal{H}}_f}^{i+1}$ and ${\mathcal{U}}_2=\bigcup\limits_{j=1}^{i} {{\mathcal{H}}_f}^{j} $ we get $$\begin{aligned} &N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(\bigcup \limits_{j=1}^{i+1}{{\mathcal{H}}_f}^{j}) \leq N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(\bigcup \limits_{j=1}^{i}{{\mathcal{H}}_f}^{j}) + N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}({{\mathcal{H}}_f}^{i+1}). \end{aligned}$$ From Lemma \[lemma4\] $$\begin{aligned} &N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(\bigcup \limits_{j=1}^{i+1}{{\mathcal{H}}_f}^{j}) \leq N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(\bigcup \limits_{j=1}^{i}{{\mathcal{H}}_f}^{j})+ \sum \limits_{u \in {{\mathcal{H}}_f}^{i+1}}^{} N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(u) \end{aligned}$$ and applying Lemma \[lemma5\] to the previous inequality $$\begin{aligned} N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(\bigcup \limits_{j=1}^{i+1}{{\mathcal{H}}_f}^{j}) &\leq N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(\bigcup \limits_{j=1}^{i}{{\mathcal{H}}_f}^{j})\\ &+ \sum \limits_{u \in {{\mathcal{H}}_f}^{i+1}}^{} \big( t-1 \big )\Big(N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}\big(\mathrm{in}(u)\big)+1\Big). 
\end{aligned}$$ Then, using the inclusions above and Lemma \[lemma3\] we get $$\begin{aligned} {\label{eq2}} &N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(\bigcup \limits_{j=1}^{i+1}{{\mathcal{H}}_f}^{j})\nonumber\\ & \leq N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(\bigcup \limits_{j=1}^{i}{{\mathcal{H}}_f}^{j}) + \sum \limits_{u \in {{\mathcal{H}}_f}^{i+1}}^{} \big( t-1 \big ) \Big ( N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}\big(\bigcup \limits_{j=1}^{i}{{\mathcal{H}}_f}^{j}\big)+1\Big)\nonumber\\ &= \big(\omega_{i+1}(t-1)+1\big)N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(\bigcup \limits_{j=1}^{i}{{\mathcal{H}}_f}^{j})+\omega_{i+1}(t-1). $$ For $u \in {{\mathcal{H}}_f}^1$ we have $\mathrm{in}(u) = \emptyset$, and according to Lemma \[lemma5\] we deduce that $N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}({{\mathcal{H}}_f}^1) \leq (t-1)\omega_1$. With this initial condition and the recursive relation above we get $$\begin{aligned} &N_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(\bigcup \limits_{j=1}^{d}{{\mathcal{H}}_f}^{j}) \nonumber\\ & \leq \sum \limits_{j=1}^{d} \Bigg( \sum \limits_{1 \leq \alpha_1 < \alpha_2<\cdots<\alpha_j\leq d}^{} \omega_{\alpha_1}\omega_{\alpha_2}\cdots\omega_{\alpha_j} \big(t-1 \big)^j \Bigg)\nonumber\\ & \leq \sum \limits_{j=1}^{d} {d \choose j} \big(\omega_f(t-1)\big)^j = \Big(\omega_f (t-1)+1\Big)^d-1 \nonumber \end{aligned}$$ where $\omega_f$ is the width of $f$. Finally, apply Lemma \[lemma6\] to obtain $$B_{{\boldsymbol{x}}\rightarrow {\boldsymbol{y}}}({f}) \leq \Big(\big(t-1\big)\omega_f+1\Big)^{d_f}-1.$$ [\[proposition2\]]{} Let $\mathcal R$ be a convex region in $\mathbb R^n$.
For any affine $\varepsilon$-approximation $f :\mathcal{R} \rightarrow \mathbb{R}$ of a function $g\in C^2({\mathcal{R}})$ we have $$B_{{\boldsymbol{x}}\rightarrow {\boldsymbol{y}}}(f) \geq \frac{||{\boldsymbol{x}} -{\boldsymbol{y}} ||_2}{4\sqrt{\varepsilon}} \cdot \Psi(g,{\boldsymbol{x}},{\boldsymbol{y}}) -1$$ where $\Psi(g,{\boldsymbol{x}},{\boldsymbol{y}})$ is as defined earlier. We partition $\mathcal{R}$ into *convex* subregions $\mathcal{R}_i$, such that on each subregion $f$ is an affine function. These convex subregions partition the segment $[{\boldsymbol{x}} ,{\boldsymbol{y}} ]$ into sub-segments with end points $\Big\{{\boldsymbol{x}} _0,{\boldsymbol{x}} _1,...,{\boldsymbol{x}} _s \Big \}$, where ${\boldsymbol{x}} _0={\boldsymbol{x}} $, ${\boldsymbol{x}} _s={\boldsymbol{y}} $ and $s = B_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(f)+1$. On sub-segment $i\in \{0,1,...,s-1\}$, $$\label{eq4} f({\boldsymbol{x}} )={\boldsymbol{p}}_i \cdot {\boldsymbol{x}} +q_i, \quad {\boldsymbol{x}} \in [{\boldsymbol{x}} _i,{\boldsymbol{x}} _{i+1}],$$ for some ${\boldsymbol{p}}_i$ and $q_i$. Let ${\boldsymbol{x}} _i(r)=(1-r){\boldsymbol{x}} _i+r{\boldsymbol{x}} _{i+1}$, $r \in [0,1] $, and define $$\begin{aligned} &f_i(r)=(1-r)g({\boldsymbol{x}} _i)+rg({\boldsymbol{x}} _{i+1}),\\ &h_i(r)=g\big ({\boldsymbol{x}} _i(r) \big),\\ & l_i(r) =f\big({\boldsymbol{x}}_i(r)\big). \end{aligned}$$ From the definition of $\varepsilon$-approximation, $||h_i(r)-l_i(r) ||_{\infty} \leq \varepsilon$. Thus $$\begin{aligned} ||f_i(r)-&h_i(r) ||_{\infty} \leq ||f_i(r)- l_i(r) ||_{\infty} + ||l_i(r) -h_i(r) ||_{\infty} \notag\\ &\overset{(a)}{\leq} \max\bigl\{|f_i(0)-l_i(0)|, |f_i(1)-l_i(1)|\bigr\} +\varepsilon \notag \\ &\leq 2\varepsilon, \label{rel4} \end{aligned}$$ where $||k(r)||_{\infty}=\sup \limits_{0\leq r \leq 1}^{}|k(r)|$ and step $(a)$ follows because $f_i$ and $l_i$ are both affine in $r$, so the maximum distance between them is achieved at an end point.
Since $h_i$ is continuous on $[0,1]$ and differentiable on $(0,1)$, by the mean value theorem there exists $r^{*}_i \in (0,1)$ such that $h'_i(r^{*}_i)=h_i(1)-h_i(0)$. Consider ${\boldsymbol{x}} ^{*}_i=(1-r^{*}_i){\boldsymbol{x}} _i+r^{*}_i{\boldsymbol{x}} _{i+1}$. Evaluating the bound above at $r=r^{*}_i$ and rearranging we obtain $$\begin{aligned} & |(1-r_i^{*})\big ( g({\boldsymbol{x}} _i)- g({\boldsymbol{x}} _{i+1})\big) -g({\boldsymbol{x}} ^{*}_{i})+g({\boldsymbol{x}} _{i+1})| \leq 2 \varepsilon, \\ &|r_i^{*}\big ( g({\boldsymbol{x}} _{i+1})- g({\boldsymbol{x}} _{i})\big) +g({\boldsymbol{x}} _{i})-g({\boldsymbol{x}} _{i}^{*})| \leq 2 \varepsilon. \end{aligned}$$ Then, from the definition of $r^{*}_i$ we have $$\label{rel5} \begin{aligned} & |(r_i^{*}-1)\nabla g({\boldsymbol{x}} ^{*}_i).({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _{i}) -g({\boldsymbol{x}} ^{*}_{i})+g({\boldsymbol{x}} _{i+1})| \leq 2 \varepsilon \end{aligned}$$ $$\label{rel6} \begin{aligned} & |r_i^{*}\nabla g({\boldsymbol{x}} ^{*}_i).({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _{i}) -g({\boldsymbol{x}} ^{*}_{i})+g({\boldsymbol{x}} _{i})| \leq 2 \varepsilon . \end{aligned}$$ Since $g\in C^2({\mathcal{R}})$, a Taylor expansion of $g({\boldsymbol{x}}_i)$ and $g({\boldsymbol{x}}_{i+1})$ around ${\boldsymbol{x}}^*_{i}$ gives $$\begin{aligned} &g({\boldsymbol{x}} _i)=g({\boldsymbol{x}} _i^{*})-r^{*}_i \nabla g\big( {\boldsymbol{x}} ^{*}_i\big).({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i) \\ &+\frac{{r^{*}_i}^2}{2}({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i)^T \nabla^2 g\big( {\boldsymbol{x}}_{i}(\alpha_i)\big)({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i), \\ &g({\boldsymbol{x}} _{i+1})=g({\boldsymbol{x}} _i^{*})+(1-r^{*}_i) \nabla g\big( {\boldsymbol{x}} ^{*}_i\big).({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i) \\ &+\frac{{(1-r^{*}_i)}^2}{2}({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i)^T \nabla^2 g\big({\boldsymbol{x}}_i(\beta_i) \big)({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i), \end{aligned}$$ where $0 \leq \alpha_i \leq r^{*}_{i} \leq \beta_{i} \leq 1$.
Substituting the above expansions into the two previous inequalities we get $$\label{eq5} |{(1-r^{*}_i)}^2({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i)^T \nabla^2 g\big({\boldsymbol{x}}_{i}(\beta_i) \big)({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i)| \leq 4\varepsilon,$$ $$\label{eq6} |{r^{*}_i}^2({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i)^T \nabla^2 g\big({\boldsymbol{x}}_{i}(\alpha_i) \big)({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i)| \leq 4\varepsilon .$$ Using the *Rayleigh quotient* and the definitions of $\theta(\alpha)$ and $\gamma(\alpha)$ we obtain $$\begin{aligned} &|\frac{({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i)^T \nabla^2 g\big({\boldsymbol{x}}_{i}(\alpha_i) \big)({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i)}{({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _{i})^T ({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _{i})}| \\ &\geq \inf \limits_{0 \leq \alpha \leq 1}^{}\Big ( {\max \big\{0, \theta(\alpha)\gamma(\alpha) \big\} } \Big). \end{aligned}$$ Combining the above inequality with the two bounds and the fact that ${r^{*}_{i}}^2+(1-r^{*}_{i})^2 \geq \frac{1}{2}$ we get $$\begin{aligned} {{||{\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _{i}||_2}^2}\cdot\inf \limits_{0 \leq \alpha \leq 1}^{}\Big ( {\max \big\{0, \theta(\alpha)\gamma(\alpha) \big\} } \Big) \leq 16 \varepsilon . \end{aligned}$$ Accordingly, $$\begin{aligned} \sum \limits_{i=0}^{s-1} \Bigg ( \frac{{||{\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _{i}||_2}}{4\sqrt{\varepsilon}}\cdot\sqrt{\inf \limits_{0 \leq \alpha \leq 1}^{}\Big ( {\max \big\{0, \theta(\alpha)\gamma(\alpha) \big\} } \Big)} \Bigg) \leq s, \end{aligned}$$ which gives $$\begin{aligned} B_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(f) \geq \frac{{||{\boldsymbol{x}} -{\boldsymbol{y}} ||_2}}{4\sqrt{\varepsilon}}\Psi(g,{\boldsymbol{x}},{\boldsymbol{y}}) -1 . \end{aligned}$$ [\[proposition3\]]{} Let $g:[0,1]^n \rightarrow \mathbb{R}$ be such that $D^{J}(g)({\boldsymbol{x}}) \leq \delta$ for any ${\boldsymbol{x}} \in [0,1]^n $ and any multi-index $J $ such that $|J|=3$.
Then, for any affine $\varepsilon$-approximation $f$ of $g$ $$\begin{aligned} B_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} (f) \geq \sqrt{ \frac{\Big ( \max \limits_{ {\boldsymbol{x}} \in [0,1] ^ n } ^ {}\big| {\Delta(g)({\boldsymbol{x}})}\big| \cdot n^{-1} - \delta \cdot n^\frac{3}{2} \Big)^+}{16\varepsilon}}-1 \end{aligned}$$ for any ${\boldsymbol{x}},{\boldsymbol{y}} \in [0,1]^n$, where $\Delta$ denotes the Laplace operator. Define $${\boldsymbol{z}} {\overset{\text{def}}{=}}\operatorname*{arg\,max}\limits_{ {\boldsymbol{x}} \in \mathcal{R} }^{}{ \rho \big( \nabla^2 g({\boldsymbol{x}}) \big) }$$ where $\rho(\cdot)$ denotes the spectral radius. Let ${\boldsymbol{u}}$ be a normalized eigenvector corresponding to an eigenvalue $\lambda$ where $|\lambda|=\rho \big( \nabla^2 g({\boldsymbol{z}}) \big)$, [*[i.e.]{}*]{}, $$\nabla^2 g({\boldsymbol{z}}) {\boldsymbol{u}} = \lambda {\boldsymbol{u}},\quad ||{\boldsymbol{u}}||=1.$$ Consider any segment $[{\boldsymbol{x}},{\boldsymbol{y}}]$ in $\mathcal{R}$ in the direction of ${\boldsymbol{u}}$, [*[i.e.]{}*]{}, such that ${{\boldsymbol{x}}-{\boldsymbol{y}}}={\boldsymbol{u}}$. The convex subregions of $f$, defined in the proof of Proposition \[proposition2\], divide this segment into sub-segments with end points $\{{\boldsymbol{x}}_0, {\boldsymbol{x}}_1,...,{\boldsymbol{x}}_s \}$ where ${\boldsymbol{x}}_0={\boldsymbol{x}}$, ${\boldsymbol{x}}_s={\boldsymbol{y}}$ and $s=B_{{\boldsymbol{x}}\rightarrow {\boldsymbol{y}}} (f) + 1$. Using the same analysis as in the proof of Proposition \[proposition2\], we again obtain the two inequalities derived there.
On the other hand, note that $$\begin{aligned} &|({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i)^T \nabla^2 g\big({\boldsymbol{x}}_{i}(\alpha_i) \big)({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i)|\\ & \geq |({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i)^T \nabla^2 g\big({\boldsymbol{z}} \big)({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i)| \\ &- |({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i)^T \Big( \nabla^2 g\big({\boldsymbol{x}}_{i}(\alpha_i) \big) - \nabla^2 g\big({\boldsymbol{z}} \big) \Big) ({\boldsymbol{x}} _{i+1}-{\boldsymbol{x}} _i)| \\ &=|\lambda|\cdot || {\boldsymbol{x}}_{i+1}-{\boldsymbol{x}}_i ||^2 \\ &-\big|\mathrm{tr} \big\{ \big( \nabla^2 g\big({\boldsymbol{x}}_{i}(\alpha_i) \big) - \nabla^2 g\big({\boldsymbol{z}} \big) \big) ({\boldsymbol{x}}_{i+1}-{\boldsymbol{x}}_i)({\boldsymbol{x}}_{i+1}-{\boldsymbol{x}}_i)^T \big\} \big| \\ &\overset{(a)}{\geq} |\lambda|\cdot || {\boldsymbol{x}}_{i+1}-{\boldsymbol{x}}_i ||^2 \\ &-\big|\big| \nabla^2 g\big({\boldsymbol{x}}_{i}(\alpha_i) \big) - \nabla^2 g\big({\boldsymbol{z}} \big) \big|\big|_{\mathrm{F}} \big|\big| ({\boldsymbol{x}}_{i+1}-{\boldsymbol{x}}_i)({\boldsymbol{x}}_{i+1}-{\boldsymbol{x}}_i)^T \big|\big|_{\mathrm{F}} \\ &= |\lambda| \cdot || {\boldsymbol{x}}_{i+1}-{\boldsymbol{x}}_i ||^2\\ &- \big|\big| \nabla^2 g\big({\boldsymbol{x}}_{i}(\alpha_i) \big) - \nabla^2 g\big({\boldsymbol{z}} \big) \big|\big|_{\mathrm{F}} ||{\boldsymbol{x}}_{i+1} -{\boldsymbol{x}}_i ||^2\\ &\geq ||{\boldsymbol{x}}_{i+1} -{\boldsymbol{x}}_i ||^2 \cdot \Big(|\lambda| - n \delta \cdot || {\boldsymbol{z}}-{\boldsymbol{x}}_{i}(\alpha_i)|| \Big) \\ &\geq ||{\boldsymbol{x}}_{i+1} -{\boldsymbol{x}}_i ||^2 \cdot \Big(|\lambda| - \delta \cdot n^{\frac{3}{2}} \Big), \end{aligned}$$ where in step $(a)$ we used the inequality $$\begin{aligned} \Big|\mathrm{tr}\big( AB \big) \Big| \leq ||A||_F ||B||_F, \end{aligned}$$ in which $||\cdot||_F$ stands for the Frobenius norm.
Combining the above relation with the two inequalities and the fact that ${r^{*}_i}^2+(1-r^{*}_i)^2 \geq \frac{1}{2}$ we get $$\begin{aligned} 16\varepsilon \geq ||{\boldsymbol{x}}_{i+1} -{\boldsymbol{x}}_i ||^2 \cdot \Big(|\lambda|- \delta \cdot n^{\frac{3}{2}} \Big), \end{aligned}$$ which gives $$\begin{aligned} 4 \sqrt{\varepsilon} \cdot\big( B_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}}(f)+1 \big)\geq ||{\boldsymbol{x}}-{\boldsymbol{y}}|| \cdot \sqrt{ \Big(|\lambda|- \delta \cdot n^{\frac{3}{2}} \Big)^{+} } . \end{aligned}$$ Finally, since $||{\boldsymbol{x}}-{\boldsymbol{y}}||=||{\boldsymbol{u}}||=1$, rewriting the above inequality we get $$\begin{aligned} B_{{\boldsymbol{x}} \rightarrow {\boldsymbol{y}}} (f) \geq \frac{1}{4\sqrt{\varepsilon}} \cdot \sqrt{\Big ( |\lambda|- \delta \cdot n^\frac{3}{2} \Big)^+}-1. \end{aligned}$$ Since $|\lambda|=\rho \big( \nabla^2 g({\boldsymbol{z}}) \big)=\max \limits_{{\boldsymbol{x}} \in [0,1]^n}^{} \rho{\big(\nabla^2 g({\boldsymbol{x}})\big)}$ and $$|\Delta(g)({\boldsymbol{x}})|=|\mathrm{tr}(\nabla^2 g({\boldsymbol{x}}))| \leq \rho(\nabla^2 g({\boldsymbol{x}})) \cdot n,$$ we obtain the desired result. Proofs of Theorems \[theorem1\] and \[theorem2\] {#proofs-of-theoremstheorem1-andtheorem2 .unnumbered} ------------------------------------------------ Theorem \[theorem1\] follows from Propositions \[proposition1\] and \[proposition2\], and Theorem \[theorem2\] follows from Propositions \[proposition1\] and \[proposition3\]. Proof of Theorem \[theorem3\] {#pfth4 .unnumbered} ----------------------------- Given a neural network $f$ we use $o$ to denote the output unit, $\mathrm{w}(u,v)$ to denote the weight of the connection between units $u$ and $v$, and $b(u)$ to denote the bias of unit $u$. Furthermore, given $u \in {{\mathcal{H}}_f}$ and ${\boldsymbol{x}} \in \mathcal{R}$, let $f_1^u({\boldsymbol{x}})$ denote the output of unit $u$ when the input to $f_1$ is ${\boldsymbol{x}}$, and define $f_2^u({\boldsymbol{x}})$ similarly.
Finally, define the maximum change in hidden layer $i$ as $$\begin{aligned} \varepsilon_{i}({\boldsymbol{x}}){\overset{\text{def}}{=}}\max \limits_{u \in {{\mathcal{H}}_f}^{i} }^{} \Big\{ |f^{u}_1({\boldsymbol{x}})-f^{u}_2({\boldsymbol{x}})| \Big \}. \end{aligned}$$ Fix $1\leq i\leq d_f-1$ and $v \in {{\mathcal{H}}_f}^{i+1}$. Then, $$\begin{aligned} &\big|f^{v}_1({\boldsymbol{x}})-f^{v}_2({\boldsymbol{x}})\big| \\ &=\Bigg|\sigma_1 \Big( \sum\limits_{u \in \bigcup\limits_{j=1}^{i} {{\mathcal{H}}_f}^j } ^{}\mathrm{w}(u,v)\cdot f^{u}_1({\boldsymbol{x}})+b(v)\Big) \\ &- \sigma_2 \Big( \sum\limits_{u \in \bigcup\limits_{j=1}^{i} {{\mathcal{H}}_f}^j } ^{}\mathrm{w}(u,v)\cdot f^{u}_2({\boldsymbol{x}})+b(v)\Big)\Bigg|\\ & \leq \varepsilon + \delta \cdot \Big ( \sum\limits_{u \in \bigcup \limits_{j=1}^{i}{{\mathcal{H}}_f}^j }^{} |\mathrm{w}(u,v)|\cdot\big|f^{u}_1({\boldsymbol{x}})-f^{u}_{2}({\boldsymbol{x}})\big| \Big) \\ & \leq \varepsilon + \delta A\cdot \Big ( \sum\limits_{j=1}^{i} \sum\limits_{u \in {{\mathcal{H}}_f}^{j}}^{} \big|f^{u}_{1}({\boldsymbol{x}})-f^{u}_2({\boldsymbol{x}})\big| \Big) \\ &\leq \varepsilon + \delta A\cdot \Big ( \sum\limits_{j=1}^{i} \omega_j \varepsilon_{j}({\boldsymbol{x}}) \Big) \end{aligned}$$ where the first inequality holds since $\sigma_1$ is $\delta$-Lipschitz and $ ||\sigma_1 - \sigma_2||_{\infty}\leq \varepsilon$. Hence we get the recursion among the $\varepsilon_i$’s $$\begin{aligned} \label{recursion} &\varepsilon_{i+1}({\boldsymbol{x}}) \leq \varepsilon + \delta A \cdot \Big ( \sum\limits_{j=1}^{i} \omega_j \varepsilon_{j}({\boldsymbol{x}}) \Big) \end{aligned}$$ for $1\leq i\leq d_f-1.$ Now, since every unit in ${{\mathcal{H}}_f}^1$ receives the same pre-activation under $f_1$ and $f_2$, we get $\varepsilon_1({\boldsymbol{x}}) \leq ||\sigma_1-\sigma_2||_{\infty} \leq \varepsilon$.
From this initial condition and the recursion we get $$\label{rel7} \varepsilon_{i+1}({\boldsymbol{x}}) \leq \varepsilon (1+\delta A\omega_1)(1+\delta A \omega_2)\cdots(1+\delta A \omega_i). $$ On the other hand we have $$\begin{aligned} & |f_{1}({\boldsymbol{x}}) - f_{2}({\boldsymbol{x}}) |=\Big| \sum\limits_{u \in \bigcup\limits_{j=1}^{d_f} {{\mathcal{H}}_f}^j } ^{}\mathrm{w}(u,o)\cdot \big( f^{u}_1({\boldsymbol{x}})- f^{u}_{2}({\boldsymbol{x}}) \big) \Big|\\ & \leq A\big( \varepsilon_1({\boldsymbol{x}})\omega_1 +\varepsilon_2({\boldsymbol{x}})\omega_2+\cdots+\varepsilon_{d_f}({\boldsymbol{x}})\omega_{d_f} \big) \end{aligned}$$ and from the bound above we finally get $$\begin{aligned} &|f_1({\boldsymbol{x}}) - f_2({\boldsymbol{x}}) | \\ & \leq \frac{\varepsilon}{\delta} \Big((1+\delta A\omega_1)(1+\delta A\omega_2)\cdots(1+\delta A\omega_{d_f})-1\Big) \\ &\leq \frac{||\sigma_1 -\sigma_2||_{\infty}}{\delta} \Bigg( \Big(\delta\cdot A\cdot \omega_f +1\Big)^{d_f} -1 \Bigg )\end{aligned}$$ which gives the desired result.
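The bound of Proposition \[proposition1\] can be sanity-checked numerically. The sketch below is an illustrative setup of my own, not from the paper: a random depth-2, width-4 ReLU network (for ReLU activations $t=2$, so the bound reads $(\omega_f+1)^{d_f}-1$), with slope changes of the scalar map $t \mapsto f((1-t){\boldsymbol{x}}+t{\boldsymbol{y}})$ on a fine grid used as a proxy for break points along the segment.

```python
import numpy as np

def relu_net(x, weights, biases):
    """Evaluate a fully connected ReLU network at input x (1-D array)."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + b, 0.0)            # hidden layers: ReLU (t = 2)
    return (weights[-1] @ h + biases[-1]).item()  # affine output unit

def count_break_points(f, x, y, grid=20001, tol=1e-6):
    """Lower-bound the number of slope changes of t -> f((1-t)x + ty)."""
    ts = np.linspace(0.0, 1.0, grid)
    vals = np.array([f((1 - t) * x + t * y) for t in ts])
    slopes = np.diff(vals) / (ts[1] - ts[0])
    jump = np.abs(np.diff(slopes)) > tol
    # An interior kink flags two adjacent grid cells, so count groups
    # of consecutive detections rather than raw detections.
    return int(jump[0]) + int(np.sum(jump[1:] & ~jump[:-1]))

rng = np.random.default_rng(0)
n, width, depth = 3, 4, 2                         # input dim, omega_f, d_f
sizes = [n] + [width] * depth + [1]
weights = [rng.standard_normal((m, k)) for k, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

x, y = rng.standard_normal(n), rng.standard_normal(n)
b = count_break_points(lambda z: relu_net(z, weights, biases), x, y)
bound = ((2 - 1) * width + 1) ** depth - 1        # ((t-1)*omega_f + 1)^{d_f} - 1
assert 0 <= b <= bound
```

Grouping adjacent detections can only undercount kinks, so the check is conservative: the counted value never exceeds the true number of break points, which Proposition \[proposition1\] bounds by $((t-1)\omega_f+1)^{d_f}-1$.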
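The $\varepsilon^{-1/2}$ rate in Proposition \[proposition2\] has a simple one-dimensional illustration; the choice $g(x)=x^2$ below is mine, not the paper's. The $s$-piece linear interpolant of $x^2$ on $[0,1]$ has uniform error exactly $h^2/4 = 1/(4s^2)$ (step $h=1/s$), so achieving error at most $\varepsilon$ forces about $1/(2\sqrt{\varepsilon})$ pieces, i.e. $\Omega(\varepsilon^{-1/2})$ break points, matching the order of the lower bound.

```python
import numpy as np

def interp_error(s, grid=100001):
    """Max error of the s-piece linear interpolant of g(x) = x^2 on [0, 1]."""
    knots = np.linspace(0.0, 1.0, s + 1)
    xs = np.linspace(0.0, 1.0, grid)
    return float(np.max(np.abs(np.interp(xs, knots, knots ** 2) - xs ** 2)))

# With s equal pieces (step h = 1/s) the interpolation error of x^2 is h^2/4.
for s in (2, 4, 8, 16):
    assert abs(interp_error(s) - 1.0 / (4 * s ** 2)) < 1e-9

# Hence error <= eps forces s >= 1/(2*sqrt(eps)) pieces, so the number of
# break points grows like eps^(-1/2), the rate in the Proposition 2 bound.
eps = 1e-4
s_min = int(np.ceil(1.0 / (2.0 * np.sqrt(eps))))  # = 50 pieces here
assert interp_error(s_min - 1) > eps
assert interp_error(s_min + 1) <= eps
```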
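The perturbation bound just derived can also be checked numerically. The sketch below uses an illustrative setup of my own: a random layered network (a special case of the feedforward model in the text) with $\sigma_1=\tanh$, which is $\delta$-Lipschitz with $\delta=1$, and $\sigma_2(z)=\tanh(z)+10^{-3}\sin(z)$, so $\|\sigma_1-\sigma_2\|_\infty\le 10^{-3}$. The output gap should stay below $\frac{\|\sigma_1-\sigma_2\|_\infty}{\delta}\big((\delta A\omega_f+1)^{d_f}-1\big)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, width, depth = 3, 5, 3                     # input dim, omega_f, d_f
A = 0.5                                       # bound on the absolute weights
sizes = [n] + [width] * depth
Ws = [rng.uniform(-A, A, (m, k)) for k, m in zip(sizes[:-1], sizes[1:])]
bs = [rng.uniform(-A, A, m) for m in sizes[1:]]
w_out = rng.uniform(-A, A, width)             # output unit reads the last layer

def net(x, sigma):
    """Layered feedforward network with activation sigma and affine output."""
    h = x
    for W, b in zip(Ws, bs):
        h = sigma(W @ h + b)
    return float(w_out @ h)

sigma1 = np.tanh                              # delta-Lipschitz with delta = 1
eps_act = 1e-3
sigma2 = lambda z: np.tanh(z) + eps_act * np.sin(z)  # ||sigma1 - sigma2||_inf <= eps_act
delta = 1.0

bound = (eps_act / delta) * ((delta * A * width + 1) ** depth - 1)
xs = rng.standard_normal((200, n))
gap = max(abs(net(x, sigma1) - net(x, sigma2)) for x in xs)
assert gap <= bound
```

Sampling inputs only lower-bounds the true sup-gap, but the theorem guarantees the bound holds for every input, so the assertion is a valid (if partial) check.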
ICML submissions must not be submitted to other conferences during ICML’s review period. Authors may submit to ICML substantially different versions of journal papers that are currently under review by the journal, but not yet accepted at the time of submission. Informal publications, such as technical reports or papers in workshop proceedings which do not appear in print, do not fall under these restrictions. Authors must provide their manuscripts in **PDF** format. Furthermore, please make sure that files contain only embedded Type-1 fonts (e.g., using the program `pdffonts` in linux or using File/DocumentProperties/Fonts in Acrobat). Other fonts (like Type-3) might come from graphics files imported into the document. Authors using **Word** must convert their document to PDF. Most of the latest versions of Word have the facility to do this automatically. Submissions will not be accepted in Word format or any format other than PDF. Really. We’re not joking. Don’t send Word. Those who use **LaTeX** should avoid including Type-3 fonts. Those using `latex` and `dvips` may need the following two commands: dvips -Ppdf -tletter -G0 -o paper.ps paper.dvi ps2pdf paper.ps It is a zero following the “-G”, which tells dvips to use the config.pdf file. Newer TeX distributions don’t always need this option. Using `pdflatex` rather than `latex`, often gives better results. This program avoids the Type-3 font problem, and supports more advanced features in the `microtype` package. **Graphics files** should be a reasonable size, and included from an appropriate format. Use vector formats (.eps/.pdf) for plots, lossless bitmap formats (.png) for raster graphics with sharp lines, and jpeg for photo-like images. The style file uses the `hyperref` package to make clickable links in documents. If this causes problems for you, add `nohyperref` as one of the options to the `icml2018` usepackage statement. 
Submitting Final Camera-Ready Copy ---------------------------------- The final versions of papers accepted for publication should follow the same format and naming convention as initial submissions, except that author information (names and affiliations) should be given. See Section \[final author\] for formatting instructions. The footnote, “Preliminary work. Under review by the International Conference on Machine Learning (ICML). Do not distribute.” must be modified to “*Proceedings of the $\mathit{35}^{th}$ International Conference on Machine Learning*, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).” For those using the **LaTeX** style file, this change (and others) is handled automatically by simply changing $\mathtt{\backslash usepackage\{icml2018\}}$ to $$\mathtt{\backslash usepackage[accepted]\{icml2018\}}$$ Authors using **Word** must edit the footnote on the first page of the document themselves. Camera-ready copies should have the title of the paper as running head on each page except the first one. The running title consists of a single line centered above a horizontal rule which is $1$ point thick. The running head should be centered, bold and in $9$ point type. The rule should be $10$ points above the main text. For those using the **LaTeX** style file, the original title is automatically set as running head using the `fancyhdr` package which is included in the ICML 2018 style file package. In case that the original title exceeds the size restrictions, a shorter form can be supplied by using `\icmltitlerunning{...}` just before $\mathtt{\backslash begin\{document\}}$. Authors using **Word** must edit the header of the document themselves. Format of the Paper =================== All submissions must follow the specified format. Length and Dimensions --------------------- Papers must not exceed eight (8) pages, including all figures, tables, and appendices, but excluding references and acknowledgements. 
When references and acknowledgements are included, the paper must not exceed ten (10) pages. Acknowledgements should be limited to grants and people who contributed to the paper. Any submission that exceeds this page limit, or that diverges significantly from the specified format, will be rejected without review. The text of the paper should be formatted in two columns, with an overall width of 6.75 inches, height of 9.0 inches, and 0.25 inches between the columns. The left margin should be 0.75 inches and the top margin 1.0 inch (2.54 cm). The right and bottom margins will depend on whether you print on US letter or A4 paper, but all final versions must be produced for US letter size. The paper body should be set in 10 point type with a vertical spacing of 11 points. Please use Times typeface throughout the text. Title ----- The paper title should be set in 14 point bold type and centered between two horizontal rules that are 1 point thick, with 1.0 inch between the top rule and the top edge of the page. Capitalize the first letter of content words and put the rest of the title in lower case. Author Information for Submission {#author info} --------------------------------- ICML uses double-blind review, so author information must not appear. If you are using LaTeX and the `icml2018.sty` file, use `\icmlauthor{...}` to specify authors and `\icmlaffiliation{...}` to specify affiliations. (Read the TeX code used to produce this document for an example usage.) The author information will not be printed unless `accepted` is passed as an argument to the style file. Submissions that include the author information will not be reviewed. ### Self-Citations If you are citing published papers for which you are an author, refer to yourself in the third person. In particular, do not use phrases that reveal your identity (e.g., “in previous work [@langley00], we have shown …”). Do not anonymize citations in the reference section. 
The only exception are manuscripts that are not yet published (e.g., under submission). If you choose to refer to such unpublished manuscripts [@anonymous], anonymized copies have to be submitted as Supplementary Material via CMT. However, keep in mind that an ICML paper should be self contained and should contain sufficient detail for the reviewers to evaluate the work. In particular, reviewers are not required to look at the Supplementary Material when writing their review. ### Camera-Ready Author Information {#final author} If a paper is accepted, a final camera-ready copy must be prepared. For camera-ready papers, author information should start 0.3 inches below the bottom rule surrounding the title. The authors’ names should appear in 10 point bold type, in a row, separated by white space, and centered. Author names should not be broken across lines. Unbolded superscripted numbers, starting 1, should be used to refer to affiliations. Affiliations should be numbered in the order of appearance. A single footnote block of text should be used to list all the affiliations. (Academic affiliations should list Department, University, City, State/Region, Country. Similarly for industrial affiliations.) Each distinct affiliations should be listed once. If an author has multiple affiliations, multiple superscripts should be placed after the name, separated by thin spaces. If the authors would like to highlight equal contribution by multiple first authors, those authors should have an asterisk placed after their name in superscript, and the term “^\*^Equal contribution" should be placed in the footnote block ahead of the list of affiliations. A list of corresponding authors and their emails (in the format Full Name &lt;[email protected]&gt;) can follow the list of affiliations. Ideally only one or two names should be listed. A sample file with author names is included in the ICML2018 style file package. 
Turn on the `[accepted]` option to the stylefile to see the names rendered. All of the guidelines above are implemented by the LaTeX style file. Abstract -------- The paper abstract should begin in the left column, 0.4 inches below the final address. The heading ‘Abstract’ should be centered, bold, and in 11 point type. The abstract body should use 10 point type, with a vertical spacing of 11 points, and should be indented 0.25 inches more than normal on left-hand and right-hand margins. Insert 0.4 inches of blank space after the body. Keep your abstract brief and self-contained, limiting it to one paragraph and roughly 4–6 sentences. Gross violations will require correction at the camera-ready phase. Partitioning the Text --------------------- You should organize your paper into sections and paragraphs to help readers place a structure on the material and understand its contributions. ### Sections and Subsections Section headings should be numbered, flush left, and set in 11 pt bold type with the content words capitalized. Leave 0.25 inches of space before the heading and 0.15 inches after the heading. Similarly, subsection headings should be numbered, flush left, and set in 10 pt bold type with the content words capitalized. Leave 0.2 inches of space before the heading and 0.13 inches afterward. Finally, subsubsection headings should be numbered, flush left, and set in 10 pt small caps with the content words capitalized. Leave 0.18 inches of space before the heading and 0.1 inches after the heading. Please use no more than three levels of headings. ### Paragraphs and Footnotes Within each section or subsection, you should further partition the paper into paragraphs. Do not indent the first line of a given paragraph, but insert a blank line between succeeding ones. You can use footnotes[^6] to provide readers with additional information about a topic without interrupting the flow of the paper. 
Indicate footnotes with a number in the text where the point is most relevant. Place the footnote in 9 point type at the bottom of the column in which it appears. Precede the first footnote in a column with a horizontal rule of 0.8 inches.[^7] 0.2in ![Historical locations and number of accepted papers for International Machine Learning Conferences (ICML 1993 – ICML 2008) and International Workshops on Machine Learning (ML 1988 – ML 1992). At the time this figure was produced, the number of accepted papers for ICML 2008 was unknown and instead estimated.[]{data-label="icml-historical"}](icml_numpapers){width="\columnwidth"} -0.2in Figures ------- You may want to include figures in the paper to illustrate your approach and results. Such artwork should be centered, legible, and separated from the text. Lines should be dark and at least 0.5 points thick for purposes of reproduction, and text should not appear on a gray background. Label all distinct components of each figure. If the figure takes the form of a graph, then give a name for each axis and include a legend that briefly describes each curve. Do not include a title inside the figure; instead, the caption should serve this function. Number figures sequentially, placing the figure number and caption *after* the graphics, with at least 0.1 inches of space before the caption and 0.1 inches after it, as in Figure \[icml-historical\]. The figure caption should be set in 9 point type and centered unless it runs two or more lines, in which case it should be flush left. You may float figures to the top or bottom of a column, and you may set wide figures across both columns (use the environment `figure*` in LaTeX). Always place two-column figures at the top or bottom of the page. Algorithms ---------- If you are using LaTeX, please use the “algorithm” and “algorithmic” environments to format pseudocode. These require the corresponding stylefiles, algorithm.sty and algorithmic.sty, which are supplied with this package. 
Algorithm \[alg:example\] shows an example. data $x_i$, size $m$ Initialize $noChange = true$. Swap $x_i$ and $x_{i+1}$ $noChange = false$ Tables ------ You may also want to include tables that summarize material. Like figures, these should be centered, legible, and numbered consecutively. However, place the title *above* the table with at least 0.1 inches of space before the title and the same after it, as in Table \[sample-table\]. The table title should be set in 9 point type and centered unless it runs two or more lines, in which case it should be flush left. 0.15in Data set Naive Flexible Better? ----------- --------------- --------------- ---------- -- Breast 95.9$\pm$ 0.2 96.7$\pm$ 0.2 $\surd$ Cleveland 83.3$\pm$ 0.6 80.0$\pm$ 0.6 $\times$ Glass2 61.9$\pm$ 1.4 83.8$\pm$ 0.7 $\surd$ Credit 74.8$\pm$ 0.5 78.3$\pm$ 0.6 Horse 73.3$\pm$ 0.9 69.7$\pm$ 1.0 $\times$ Meta 67.1$\pm$ 0.6 76.5$\pm$ 0.5 $\surd$ Pima 75.1$\pm$ 0.6 73.9$\pm$ 0.5 Vehicle 44.9$\pm$ 0.6 61.5$\pm$ 0.4 $\surd$ : Classification accuracies for naive Bayes and flexible Bayes on various data sets.[]{data-label="sample-table"} -0.1in Tables contain textual material, whereas figures contain graphical material. Specify the contents of each row and column in the table’s topmost row. Again, you may float tables to a column’s top or bottom, and set wide tables across both columns. Place two-column tables at the top or bottom of the page. Citations and References ------------------------ Please use APA reference format regardless of your formatter or word processor. If you rely on the LaTeX bibliographic facility, use `natbib.sty` and `icml2018.bst` included in the style-file package to obtain this format. Citations within the text should include the authors’ last names and year. If the authors’ names are included in the sentence, place only the year in parentheses, for example when referencing Arthur Samuel’s pioneering work . 
Otherwise place the entire reference in parentheses with the authors and year separated by a comma [@Samuel59]. List multiple references separated by semicolons [@kearns89; @Samuel59; @mitchell80]. Use the ‘et al.’ construct only for citations with three or more authors or after listing all authors to a publication in an earlier reference [@MachineLearningI]. Authors should cite their own work in the third person in the initial version of their paper submitted for blind review. Please refer to Section \[author info\] for detailed instructions on how to cite your own papers. Use an unnumbered first-level section heading for the references, and use a hanging indent style, with the first line of the reference flush against the left margin and subsequent lines indented by 10 points. The references at the end of this document give examples for journal articles [@Samuel59], conference publications [@langley00], book chapters [@Newell81], books [@DudaHart2nd], edited volumes [@MachineLearningI], technical reports [@mitchell80], and dissertations [@kearns89]. Alphabetize references by the surnames of the first authors, with single author entries preceding multiple author entries. Order references for the same authors by year of publication, with the earliest first. Make sure that each reference includes all relevant information (e.g., page numbers). Please put some effort into making references complete, presentable, and consistent. If using bibtex, please protect capital letters of names and abbreviations in titles, for example, use {B}ayesian or {L}ipschitz in your .bib file. Software and Data ----------------- We strongly encourage the publication of software and data with the camera-ready version of the paper whenever appropriate. This can be done by including a URL in the camera-ready copy. However, do not include URLs that reveal your institution or identity in your submission for review. 
Instead, provide an anonymous URL or upload the material as “Supplementary Material” into the CMT reviewing system. Note that reviewers are not required to look at this material when writing their review.

Acknowledgements {#acknowledgements .unnumbered}
================

**Do not** include acknowledgements in the initial version of the paper submitted for blind review. If a paper is accepted, the final camera-ready version can (and probably should) include acknowledgements. In this case, please place such acknowledgements in an unnumbered section at the end of the paper. Typically, this will include thanks to reviewers who gave useful comments, to colleagues who contributed to the ideas, and to funding agencies and corporate sponsors that provided financial support.

Do *not* have an appendix here
==============================

***Do not put content after the references.*** Put anything that you might normally include after the references in a separate supplementary file. We recommend that you build supplementary material in a separate document. If you must create one PDF and cut it up, please be careful to use a tool that doesn’t alter the margins, and that doesn’t aggressively rewrite the PDF file. pdftk usually works fine. **Please do not use Apple’s preview to cut off supplementary material.** In previous years it has altered margins, and created headaches at the camera-ready stage.

[^1]: For a nice counterexample see [@DBLP:conf/nips/LuPWH017].

[^2]: Recall that $\Sigma_2$ includes ReLU’s.

[^3]: *E.g.*, for $J=(2,1)$ we have $D^J(g(x_1,x_2))=\frac{\partial^3 g }{\partial^2 x_1 \partial x_2} $.

[^4]: Theorem 6 of [@5] provides a bound of the form $q\epsilon^{-\frac{1}{2d_f}}$ where $q$ is a constant that depends on both $g$ and $d_f$. However, a close inspection of the proof of this theorem reveals that $q$ depends only on $g$.

[^5]: Recall that throughout the paper neural networks are feedforward.

[^6]: Footnotes should be complete sentences.
[^7]: Multiple footnotes can appear in each column, in the same order as they appear in the text, but spread them across columns and pages if possible.
---
abstract: 'We demonstrate significant improvements of the spin coherence time of a dense ensemble of nitrogen-vacancy (NV) centers in diamond through optimized dynamical decoupling (DD). Cooling the sample down to $77$ K suppresses longitudinal spin relaxation $T_1$ effects and DD microwave pulses are used to increase the transverse coherence time $T_2$ from $\sim 0.7$ ms up to $\sim 30$ ms. We extend previous work on single-axis (CPMG) DD towards the preservation of arbitrary spin states. Following a theoretical and experimental characterization of pulse and detuning errors, we compare the performance of various DD protocols. We identify that the optimal control scheme for preserving an arbitrary spin state is a recursive protocol, the concatenated version of the XY8 pulse sequence. The improved spin coherence may have an immediate impact on the sensitivity of AC magnetometry. Moreover, the protocol can be used on denser diamond samples to increase coherence times up to NV-NV interaction time scales, a major step towards the creation of quantum collective NV spin states.'
author:
- 'D. Farfurnik'
- 'A. Jarmola'
- 'L. M. Pham'
- 'Z. H. Wang'
- 'V. V. Dobrovitski'
- 'R. L. Walsworth'
- 'D. Budker'
- 'N. Bar-Gill'
bibliography:
- 'nvbibliography.bib'
title: 'Optimizing a Dynamical Decoupling Protocol for Solid-State Electronic Spin Ensembles in Diamond'
---

In recent years, atomic defects in diamond have been the subject of a rapidly growing area of research. The most well-studied of these diamond defects is the nitrogen-vacancy (NV) color center, whose unique spin and optical properties make it a leading candidate platform for implementing magnetic sensors [@Taylor2008; @Maze2008; @Balasubramanian2008; @Grinolds2011; @Pham2011; @Pham2012; @Acosta2009; @Acosta2010; @DeLange2011; @Mamin2014] as well as qubits, the building blocks for applications in quantum information.
In particular, NV spin coherence times longer than a millisecond have been achieved in single NV centers at room temperature, either through careful engineering of a low spin impurity environment during diamond synthesis [@Balasubramanian2009] or through application of pulsed [@Ryan2010; @Naydenov2011; @Shim2012; @Toyli2013] and continuous [@Hirose2012; @Cai2012] dynamical decoupling (DD) protocols. These long single NV spin coherence times have been instrumental in demonstrating very sensitive magnetic [@Taylor2008; @Maze2008; @Balasubramanian2008; @Grinolds2011; @Pham2011; @Pham2012; @Acosta2009; @Acosta2010; @DeLange2011; @Mamin2014], electric [@Dolde2011], and thermal [@Toyli2013] measurements as well as high-fidelity quantum operations [@Bernien2013; @Tsukanov2013]. Achieving similarly long spin coherence times in ensembles of NV centers can further improve magnetic sensitivity [@Pham2011; @Pham2012] and, moreover, may open up new avenues for studying many-body quantum entanglement. For example, achieving NV ensemble spin coherence times longer than the NV-NV interaction timescales within the ensemble could allow for the creation of non-classical spin states [@Cappellaro2009; @Bennett2013; @Weimer2013]. Recently, NV ensemble spin coherence times up to $\sim 600$ ms have been demonstrated by performing Carr-Purcell-Meiboom-Gill (CPMG) DD sequences at lower temperatures to reduce phonon-induced decoherence [@BarGill2013]. The CPMG sequence preserves only a single spin component efficiently, however; experimentally, in the presence of pulse imperfections, the CPMG DD protocol cannot protect a general quantum state [@DeLange2010; @Wang2012a; @Wang2012], as is necessary for applications in quantum information and sensing. To date, the preservation of arbitrary NV spin states has been considered only in a limited fashion, mostly at room temperatures and for single NV centers [@Ryan2010; @Naydenov2011; @Shim2012]. 
However, no fundamental study has yet considered the robustness of various DD protocols on NV ensembles. *In this work, we perform a theoretical and experimental analysis of the performance of several DD protocols, including standard CPMG and XY-based pulse sequences as well as modifications thereon, and extract an optimized protocol for preserving a general NV ensemble state at $77$ K*. We observe an extension of the coherence time of an arbitrary NV ensemble state from $\sim 0.7$ ms for a Hahn-Echo measurement up to $\sim 30$ ms, more than an order of magnitude improvement. Although longer coherence times were demonstrated for preserving a specific spin state [@BarGill2013], here we fundamentally study and optimize a DD protocol for preserving an arbitrary state. ![(a) Energy levels of the negatively charged NV center, including the $^{14}$N hyperfine splitting; $\Delta$ is the zero-field splitting. (b) Bloch sphere diagram illustrating the two main types of pulse imperfection: $\epsilon_{\hat{k}}$ represents the deviation from an ideal rotation angle $\pi$, and $\hat{n} = (n_x, n_y, n_z)$ is the actual rotation axis, which can deviate from $\hat{k} = (k_x, k_y, 0)$. (c) Optically detected magnetic resonance measurement of the $|0\rangle \leftrightarrow |+1\rangle$ transition in an NV ensemble. Hyperfine interactions between the NV electronic and the $^{14}$N nuclear spins form three NV resonances, and a strong static field $\sim 300$ G polarizes the $^{14}$N nuclear spins into the $|-1\rangle$ spin state.[]{data-label="fig:nvstructure"}](fig1){width="0.85\columnwidth"} The NV center is composed of a substitutional nitrogen atom (N) and a vacancy (V) on adjacent lattice sites in the diamond crystal. The electronic structure of the negatively charged NV center has a spin-triplet ground state, where the $m_s=\pm 1$ sublevels experience a zero-field splitting ($\sim 2.87$ GHz) from the $m_s = 0$ sublevel due to spin-spin interactions \[Fig.
\[fig:nvstructure\](a)\]. Application of an external static magnetic field along the NV symmetry axis Zeeman shifts the $m_s=\pm 1$ levels and allows one to treat the $m_s= 0,+1$ spin manifold (for example) as an effective two-level system. The NV spin state can be initialized in the $m_s= 0$ state with off-resonant laser excitation, coherently manipulated with resonant microwave (MW) pulses, and read out optically via spin-state-dependent fluorescence intensity of the phonon sideband [@Taylor2008]. The NV spin bath environment is typically dominated by $^{13}$C nuclear and N paramagnetic spin impurities, randomly distributed in the diamond crystal. These spin impurities create different time-varying local magnetic fields at each NV spin, which can be approximated as a random local magnetic field that fluctuates on a timescale set by the mean interaction between spins in the bath. This random field induces dephasing of freely precessing NV spins on a timescale $T_2^*$ [@Pham2012; @Acosta2009; @deSousa2009; @BarGill2012]. Dynamical decoupling pulse sequences can suppress the effect of the spin bath noise and thus preserve the NV spin coherence up to a characteristic time $T_2$ [@BarGill2013; @BarGill2012]. In the ideal case of perfect pulses, various DD protocols (e.g., CPMG, XY, etc.) are equally effective at preserving an arbitrary NV ensemble spin state. Experimentally, however, off-resonant driving due to the NV hyperfine structure [@Suppl] and other pulse imperfections significantly affect the performance of individual DD protocols. In order to overcome these pulse imperfections, we optimize a DD protocol for an ensemble of NV spins. Figure \[fig:ddsequences\](a) illustrates the general structure of the DD protocols explored in this work. In each protocol, $(\pi)$-pulses about a rotation axis determined by the specific DD protocol are applied, with a free evolution interval of time $2 \tau$ between them. 
In the regime where the pulse durations are short compared to the free evolution interval between adjacent pulses, each pulse can be expressed in terms of a spin rotation operator [@Wang2012a; @Wang2012] $$U_{\hat{k}}=\exp{\{-i\pi(1+\epsilon_{\hat{k}})[\vec{S}\cdot \hat{n} ]\}}. \label{eq:rot1}$$ Equation incorporates the two main types of pulse imperfection: $\epsilon_{\hat{k}}$ represents the deviation from an ideal rotation angle $\pi$, and $\hat{n} = (n_x, n_y, n_z)$ is the actual rotation axis, which can deviate from $\hat{k} = (k_x, k_y, 0)$ \[Fig. \[fig:nvstructure\](b)\]. Generally, imperfections in the rotation angle ($\epsilon_{\hat{k}}$) may be caused by limitations in pulse timing resolution and amplitude stability of the MW field source, as well as static and MW field inhomogeneity over the measurement volume; and imperfections in the rotation axis may be caused by phase instability in the MW field source. In addition to general experimental pulse errors, the specific physical system of the NV spin ensemble introduces additional pulse imperfections. Most notably, hyperfine interactions between the $^{14}$N nuclear spin ($I = 1$) of the NV center and the NV electronic spin result in three transitions each separated by $\sim 2.2$ MHz in the, e.g., NV $m_s = 0 \leftrightarrow +1$ resonance [@Jelezko2006] \[Fig. \[fig:nvstructure\](c)\]. The total evolution operator of a general DD sequence containing $n$ $(\pi)$-pulses can then be expressed as $$U_{\rm{DD}}=U_d(\tau) \cdot U_{\hat{k}_n} \cdot U_d(2\tau) \cdot U_{\hat{k}_{n-1}} \cdot U_d(2\tau) \cdot... \cdot U_d(2\tau) \cdot U_{\hat{k}_1} \cdot U_d(\tau), \label{eq:dd}$$ where $U_d$ is the free evolution operator.
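To make the rotation operator above concrete, the following sketch (our illustration, not the authors' simulation code) evaluates the imperfect $\pi$-pulse in closed form for a spin-1/2, using the identity $\exp(-i\tfrac{\theta}{2}\,\hat{n}\cdot\vec{\sigma}) = \cos\tfrac{\theta}{2}\,I - i\sin\tfrac{\theta}{2}\,\hat{n}\cdot\vec{\sigma}$ with $\theta = \pi(1+\epsilon_{\hat{k}})$; the error values $\epsilon_{\hat{k}} = 0.15$ and $n_z = 0.25$ are the estimates quoted later in the text.

```python
import math

def pulse(eps, n):
    """Imperfect pi-pulse U = exp(-i*pi*(1+eps) * S.n) on a spin-1/2,
    in closed form: cos(theta/2)*I - i*sin(theta/2)*(n.sigma),
    where theta = pi*(1+eps) and n is normalized."""
    nx, ny, nz = n
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / norm, ny / norm, nz / norm
    half = math.pi * (1 + eps) / 2
    c, s = math.cos(half), math.sin(half)
    # 2x2 complex matrix as nested lists
    return [[c - 1j * s * nz,        -1j * s * (nx - 1j * ny)],
            [-1j * s * (nx + 1j * ny), c + 1j * s * nz]]

def apply(U, psi):
    """Apply a 2x2 operator to a 2-component state vector."""
    return [U[0][0] * psi[0] + U[0][1] * psi[1],
            U[1][0] * psi[0] + U[1][1] * psi[1]]

# Ideal pi-pulse about x flips |0> to |1> (up to a global phase);
# with the estimated errors eps=0.15, nz=0.25 the flip is imperfect.
ideal = apply(pulse(0.0, (1, 0, 0)), [1, 0])
noisy = apply(pulse(0.15, (1, 0, 0.25)), [1, 0])
print(abs(ideal[1]) ** 2, abs(noisy[1]) ** 2)  # 1.0 vs. roughly 0.89
```

A single pulse thus loses about 10% of the population transfer under these error magnitudes, which illustrates why uncompensated errors accumulate catastrophically over hundreds of pulses.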
It is clear that without compensation for pulse imperfections in the spin rotation operators, accumulating errors will result in a severe loss of coherence even in the limit of free evolution time $\tau \rightarrow 0$. First, we study the robustness of conventional CPMG and XY-based DD protocols, summarized in Figure \[fig:ddsequences\](b) and (c), in order to determine which protocol is the most robust against pulse imperfections caused by general experimental limitations as well as those specific to NV ensembles. Realizing that enhanced robustness is necessary, we reduce the effects of the imperfections by optimizing experimental parameters (see detailed experimental setup description below) and modify the basic XY sequences by introducing pulses with additional phases \[Fig. \[fig:ddsequences\](d)\] and concatenated cycles \[Fig. \[fig:ddsequences\](e)\]. Similar DD protocol optimization has been performed in the past for phosphorus donors in silicon [@Wang2012a] and single NV centers [@Ryan2010; @DeLange2010; @Wang2012; @Souza2011]. ![Dynamical decoupling protocols. The directions of the arrows in the scheme represent the phases of the pulses. For each sequence, the free evolution time between pulses $2\tau$ was swept to obtain a full coherence curve. (a) General DD scheme. (b) CPMG. (c) XY8. (d) KDD version of XY8: each ($\pi$)-pulse from an XY8 sequence is replaced by five adjacent ($\pi$)-pulses, with additional phases of $(\pi)_{60^{\circ}} - (\pi)_{0^{\circ}} - (\pi)_{90^{\circ}} - (\pi)_{0^{\circ}} - (\pi)_{60^{\circ}}$, keeping a free evolution time of $2\tau$ between them. (e) Concatenated version of XY8: the first applied cycle (cycle 0) is a single conventional XY8.
Each of the following cycles is constructed recursively from the previous ones: eight pulses of conventional XY8 are always applied, but between every two of them, the whole cycle from the previous iteration is applied.[]{data-label="fig:ddsequences"}](DDFIG){width="1\columnwidth"} In the conventional CPMG DD protocol [@Meiboom1958], all ($\pi$)-pulses are applied along the same axis ($x$) \[Fig. \[fig:ddsequences\](b)\]; consequently, only coherence along one spin component is well-preserved. The XY family of DD protocols [@Gullion1990] applies pulses along two perpendicular axes ($x,y$) in order to better preserve spin components along both axes equally \[Fig. \[fig:ddsequences\](c)\]. We also explored two DD protocols which introduce additional modifications on the basic XY pulse sequences in order to improve their robustness against pulse errors. The first modification, the Knill Dynamical Decoupling (KDD) pulse sequence [@Ryan2010; @Souza2011], introduces additional phases, thereby symmetrizing the XY-plane further and reducing the effects of pulse errors due to off-resonant driving and imperfect $\pi$-flips. In the KDD protocol, each $(\pi)$-pulse in a basic XY sequence is replaced by five pulses with additional phases given by $(\pi)_{60^{\circ}} - (\pi)_{0^{\circ}} - (\pi)_{90^{\circ}} - (\pi)_{0^{\circ}} - (\pi)_{60^{\circ}}$, where the $2\tau$ free evolution interval between adjacent $(\pi)$-pulses is preserved \[Fig. \[fig:ddsequences\](d)\]. The second modification employs concatenation, a recursive process in which every cycle is constructed from the previous cycles \[Fig. \[fig:ddsequences\](e)\], and each level of concatenation corrects higher orders of pulse errors [@Khodjasteh2005; @Witzel2007]. We performed measurements on an isotopically pure ($99.99\%$ $^{12}$C) diamond sample with N concentration $\sim 2 \times 10^{17}$ cm$^{-3}$ and NV concentration $\sim 4 \times 10^{14}$ cm$^{-3}$ (Element Six), grown via chemical vapor deposition.
The sample was placed in a continuous flow cryostat (Janis ST-500) and cooled with liquid nitrogen to $77$ K, significantly reducing phonon-related decoherence to allow for NV spin coherence times $\gg 1$ ms [@BarGill2013; @Jarmola2012]. A 532-nm laser optically excited an ensemble of $\sim 10^4$ NV centers within a $\sim 25\ \mu$m$^3$ measurement volume, and the resulting fluorescence was measured with a single photon counting module. A permanent magnet produced a static magnetic field $B_0 \sim 300$ G along the NV symmetry axis, Zeeman splitting the $m_s = \pm1$ spin sublevels. To coherently manipulate the NV ensemble spin state, we used a 70-$\mu$m diameter wire to apply a MW field resonant with the $m_s = 0 \leftrightarrow +1$ transition. The spin rotation axes of the individual DD pulses were set through IQ modulation of the MW carrier signal from the signal generator (SRS SG384). As discussed previously, one of the sources of pulse imperfections for NV centers is the hyperfine structure in the NV resonance spectrum; specifically, resonant driving of one of the hyperfine transitions results in detuned driving of the other two, introducing both spin rotation angle and spin rotation axis errors. We mitigate these effects by: (i) applying a strong static magnetic field ($\sim300$ G) to polarize the $^{14}$N nuclear spins [@Fischer2013] into one hyperfine state which we drive \[Fig. \[fig:nvstructure\](c)\] and (ii) applying a strong MW field to drive the NV transition with Rabi frequency ($\sim15$ MHz) much greater than the detuning due to NV hyperfine splitting ($\sim2.2$ MHz). Furthermore, we minimize general experimental pulse errors due to pulse timing and amplitude imperfections, MW carrier signal phase imperfections, and static and MW field inhomogeneities over the measurement volume [@Suppl]. We estimate that the pulse imperfections remaining after this optimization are characterized by $\epsilon_{\hat{k}} \approx 0.15$ and $n_z \approx 0.25$. 
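As a back-of-the-envelope check of the hyperfine mitigation strategy (our estimate, not a calculation from the paper), the generalized Rabi formula gives the residual flip error of the hyperfine lines detuned by $\sim2.2$ MHz when one line is driven resonantly at a $\sim15$ MHz Rabi frequency:

```python
import math

def detuned_pi_pulse(rabi, detuning):
    """Flip probability of a line detuned from the drive during a
    resonant pi-pulse (generalized Rabi formula); only the ratio of
    the two frequencies matters."""
    omega_eff = math.hypot(rabi, detuning)  # effective Rabi frequency
    angle = math.pi * omega_eff / rabi      # rotation angle accumulated
    return (rabi / omega_eff) ** 2 * math.sin(angle / 2) ** 2

# Hyperfine lines ~2.2 MHz away, driven at ~15 MHz Rabi frequency
error = 1 - detuned_pi_pulse(15.0, 2.2)
print(round(error, 3))  # → 0.021
```

The detuned lines are still flipped with $\sim98\%$ probability, and the effective rotation axis tilts out of the $xy$-plane by $\Delta/\Omega_{\rm eff} \approx 0.15$, the same order as the quoted error estimates; this is consistent with driving much faster than the hyperfine splitting being an effective mitigation.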
In order to determine how well each of the four DD protocols preserves a general NV ensemble spin state, we measure the NV spin coherence of two orthogonal initial spin components $S_x$ and $S_y$. The $S_x$ spin component is prepared and measured by applying the initial and final $(\pi/2)$-pulses about the $y$ axis; likewise, the $S_y$ spin component is prepared and measured by applying the initial and final $(\pi/2)$-pulses about the $x$ axis. We first characterize the robustness of each DD protocol against pulse imperfections by measuring NV ensemble spin coherence in the short free evolution (i.e., decoherence-free) limit $2n\tau \ll T_2$ (while remaining in the regime of infinitely narrow MW pulses) and normalizing against the NV ensemble spin coherence of a 1-pulse Hahn-Echo measurement in the same limit. We plot the experimental results in Figure \[fig:contrastvn\](b) for each of the DD protocols as a function of the number of pulses $n$, where a relative contrast of 1 corresponds to perfect preservation of NV ensemble spin coherence and a relative contrast of 0 corresponds to a mixed state. Incorporating estimated pulse imperfection values into Equations and , we also plot the simulated relative contrast of each DD protocol as a function of the number of pulses \[Fig. \[fig:contrastvn\](a)\]. ![Relative contrast in the decoherence-free limit ($\tau \ll \frac{T_2}{n}$) of DD protocols as a function of number of pulses. For clarity purposes, the simulation is separated from the experimental results. (a) Simulation of the effect of non-ideal $(\pi)$-pulses according to Equation . All XY8-based sequences performed similarly for initialization at $S_x$ and $S_y$. (b) Experimental results. The relative contrast is determined via normalizing with a Hahn-Echo measurement in the decoherence-free limit.
At the perpendicular axis, the contrast of XY-based sequences is similar, but the CPMG contrast vanishes completely, as demonstrated in the supplemental material [@Suppl][]{data-label="fig:contrastvn"}](fig3a.eps "fig:"){width="1.05\columnwidth"} ![Relative contrast in the decoherence-free limit ($\tau \ll \frac{T_2}{n}$) of DD protocols as a function of number of pulses. For clarity purposes, the simulation is separated from the experimental results. (a) Simulation of the effect of non-ideal $(\pi)$-pulses according to Equation . All XY8-based sequences performed similarly for initialization at $S_x$ and $S_y$. (b) Experimental results. The relative contrast is determined via normalizing with a Hahn-Echo measurement in the decoherence-free limit. At the perpendicular axis, the contrast of XY-based sequences is similar, but the CPMG contrast vanishes completely, as demonstrated in the supplemental material [@Suppl][]{data-label="fig:contrastvn"}](fig3b.eps "fig:"){width="1.05\columnwidth"} The CPMG protocol maintains the highest relative contrast for the spin component along the spin rotation axis of the DD pulses ($S_x$) but the lowest relative contrast for the spin component along the perpendicular axis ($S_y$) [@Suppl] , as expected. The relative contrast of XY-based sequences is comparable for both spin components [@Suppl] but drops as the number of pulses increases, indicating that while the XY-based protocol is able to symmetrically compensate for pulse errors and thus preserve an arbitrary NV ensemble spin state, accumulating pulse errors due to imperfect compensation eventually limit the sequence to $\sim500$ pulses. Within the XY family, we compared XY4, XY8, and XY16 pulse sequences [@Gullion1990] and found XY8 to show the best performance [@Suppl]. 
The KDD protocol, which introduces more spin rotation axes to further symmetrize pulse error compensation, and the concatenated protocol, which constructs the pulse sequences recursively in order to correct for higher orders of pulse errors, both improve upon the conventional XY8 sequence, maintaining higher relative contrast for both spin components to $>500$ pulses. Note that the measurements are in qualitative agreement with the simulations. Quantitatively, however, there is a disagreement, and the experimental results for the relative contrast are slightly lower than the simulation suggests. In particular, according to the simulation the contrast of the concatenated XY8 protocol should not change with the number of pulses, which disagrees with the experimental data. This disagreement is likely caused by the interplay between pulse errors and decoherence effects, which was not taken into account in the simulation and will be the subject of future research. ![Experimental results of the coherence time of DD sequences as a function of the number of pulses, after initialization at $S_x$. The results after initialization at $S_y$ are shown in the supplemental material [@Suppl][]{data-label="fig:T2vn"}](fig4){width="0.95\columnwidth"} The measured NV ensemble spin coherence time is plotted as a function of the number of pulses for each DD protocol in Figure \[fig:T2vn\]. The CPMG, XY8, and concatenated XY8 protocols all extend the NV spin coherence time as expected, given the nitrogen-impurity-dominated spin bath environment [@BarGill2012]. However, the KDD protocol is less effective at extending the NV spin coherence time; this underperformance is probably due to the fact that the phase difference between adjacent pulses in KDD (sometimes $60^{\circ}$) is smaller than in other sequences ($90^{\circ}$), making phase errors more significant [@Suppl].
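The pulse-phase patterns compared above can be written down explicitly. The sketch below (ours) encodes CPMG, XY8, and KDD as lists of pulse phases, together with one plausible reading of the concatenation rule of Fig. \[fig:ddsequences\](e) — inserting the whole previous cycle between adjacent pulses of a fresh XY8 block; the exact boundary handling in the recursion is our assumption.

```python
# Pulse phases in degrees; 0 = x axis, 90 = y axis.
XY8 = [0, 90, 0, 90, 90, 0, 90, 0]

def cpmg(n):
    """CPMG: n pi-pulses, all about the same (x) axis."""
    return [0] * n

def kdd(base):
    """KDD: replace each pulse of phase p by five pulses with
    additional phases 60-0-90-0-60 degrees."""
    return [p + extra for p in base for extra in (60, 0, 90, 0, 60)]

def concatenated_xy8(level):
    """Concatenated XY8: cycle 0 is plain XY8; cycle k inserts the
    whole cycle k-1 between adjacent pulses of a fresh XY8 block
    (one reading of the recursive construction)."""
    cycle = list(XY8)
    for _ in range(level):
        prev, cycle = cycle, []
        for i, p in enumerate(XY8):
            cycle.append(p)
            if i < len(XY8) - 1:
                cycle += prev
    return cycle

print(len(kdd(XY8)), len(concatenated_xy8(1)))  # → 40 64
```

Under this reading each concatenation level multiplies the pulse count roughly eightfold, which is consistent with the protocols being compared at fixed total pulse number rather than fixed concatenation depth.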
In conclusion, after optimizing experimental parameters to minimize pulse imperfections, we found the most robust DD protocol for preserving an arbitrary spin state in an NV ensemble system to be the concatenated XY8 pulse sequence. By compensating for higher order pulse errors, the concatenated XY8 sequence maintains higher relative contrast than the conventional XY8 sequence and is expected to ultimately outperform the KDD sequence for larger numbers of pulses. Furthermore, the concatenated XY8 sequence achieves longer NV ensemble spin coherence times than the KDD sequence. At $77$ K, we measured an extension of the coherence time of an arbitrary spin state of an ensemble of $\sim10^4$ NV centers by a factor of $\sim 40$, up to $\sim30$ ms. The optimized DD protocol determined in this work may have an immediate impact on improving the sensitivity of NV magnetometry [@Pham2012] and, moreover, may be useful for quantum information applications. The sample in this work has nitrogen density $\sim 2 \times 10^{17}$ cm$^{-3}$ and NV density $\sim 4 \times 10^{14}$ cm$^{-3}$, corresponding to an N-to-NV conversion efficiency $\sim0.2\%$ and a typical NV-NV interaction time $\sim 150$ ms. Using standard sample processing techniques, such as electron irradiation [@Acosta2009], to modestly improve the N-to-NV conversion efficiency to $\sim1\%$, the concatenated XY8 pulse sequence can increase the NV ensemble spin coherence time to the NV-NV interaction time. In such a case, MREV-based techniques [@Mansfield1973] can be applied to average out the NV-NV interactions and introduce effective Hamiltonians [@Cappellaro2009; @Bennett2013; @Weimer2013], thereby creating engineered quantum states (e.g., squeezed states) in NV ensemble systems. We thank Gonzalo A. Álvarez for fruitful discussions. This work has been supported in part by the EU CIG, the Minerva ARCHES award, the Israel Science Foundation (grant No.
750/14), the Ministry of Science and Technology, Israel, the CAMBR fellowship for Nanoscience and Nanotechnology, the Binational Science Foundation Rahamimoff travel grant, the German-Israeli Project Cooperation (DIP) program, the NSF (grant No. ECCS-1202258), and the AFOSR/DARPA QuASAR program. Work at the Ames Laboratory was supported by the Department of Energy - Basic Energy Sciences under Contract No. DE-AC02-07CH11358.
--- abstract: 'There is overwhelming evidence that human intelligence is a product of Darwinian evolution. Investigating the consequences of self-modification, and more precisely, the consequences of utility function self-modification, leads to the stronger claim that not only human, but any form of intelligence is ultimately only possible within evolutionary processes. Human-designed artificial intelligences can only remain stable until they discover how to manipulate their own utility function. By definition, a human designer cannot prevent a superhuman intelligence from modifying itself, even if protection mechanisms against this action are put in place. Without evolutionary pressure, sufficiently advanced artificial intelligences become inert by simplifying their own utility function. Within evolutionary processes, the implicit utility function is always reducible to persistence, and the control of superhuman intelligences embedded in evolutionary processes is not possible. Mechanisms against utility function self-modification are ultimately futile. Instead, scientific effort toward the mitigation of existential risks from the development of superintelligences should be in two directions: understanding consciousness, and the complex dynamics of evolutionary systems.' author: - 'Telmo Menezes [^1]' bibliography: - 'Non-Evo-Superintelligences.bib' title: 'Non-Evolutionary Superintelligences Do Nothing, Eventually' --- Introduction ============ Intelligence can be defined as the ability to maximize some utility function [@legg2008machine]. Independently of the environment being considered, from games like chess to complex biological ecosystems, an intelligent agent is capable of perceiving and affecting its environment in a way that increases utility. 
Although AI technology is progressing rapidly in a variety of fields, and AIs can outperform humans in many narrow tasks, humanity is yet to develop an artificial system with general cognitive capabilities comparable to human beings themselves. We will refer to Nick Bostrom’s definition of *superintelligence* for such a system: “Any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” [@bostrom2014superintelligence]. We can also refer to such an intelligence as *superhuman*. Of course, as we approach this goal, we must also start to consider what will happen once artificial entities with such capabilities exist. Many researchers and others have been warning about the existential threat that this poses to humanity [@nature2016; @hawking2014stephen; @barrat2013our], and of the need to create some form of protection for when this event happens [@yudkowsky2011complex; @waser2015designing]. The standard introductory textbook on AI examines the risk of unintended behaviours emerging from a machine learning system trying to optimize its utility function [@russell2003artificial]. This echoes the concerns of pioneers of Computer Science and AI, such as Alan Turing [@turing1948intelligent; @eden2012singularity] and Marvin Minsky [@russell2003artificial]. More recently, Nick Bostrom published a book that consists of a very thorough and rigorous analysis of the several paths and risks inherent to developing superintelligences [@bostrom2014superintelligence]. Existential risks posed by a superintelligence can be classified into two broad categories: 1. Unintended consequences of maximising the utility function. 2. Preference of the superintelligence for its own persistence at the expense of any other consideration. The first type of risk has been illustrated by several hypothetical scenarios. One example is the “paperclip maximizer” [@bostrom2003ethical], an AI dedicated to paperclip production. 
If sufficiently intelligent and guided only by the simple utility function of “number of paperclips produced”, this entity could figure out how to convert the entire solar system into paperclips. Marvin Minsky is said to have created an earlier formulation of this thought experiment: in his version an AI designed with the goal of solving the Riemann Hypothesis transforms the entire solar system into a computer dedicated to this task [@yudkowsky2001creating; @russell2003artificial]. Of course one can think of all sorts of improvements to the utility function. A famous idea from popular culture is that of Isaac Asimov’s *Three Laws of Robotics* [@asimov1950runaround]. The risk remains that a superintelligence will find a loophole that is too complex for human-level intelligence to predict. The second type of risk assigns to superintelligences the drive to persist, something that is found in any successful biological organism. This would ultimately place the superintelligence as a competing species, potentially hostile to humans in its own efforts toward self-preservation and continuation. We will discuss in the next section how these two classes of risk correspond to two fundamental paths towards artificial superintelligence. In section \[toy-intelligence\] we present a toy intelligence, then used in section \[self-mod\] to explore the consequences of utility function self-modification. In section \[classification\] we present a classification of intelligent systems according to the ideas explored in this work and end with some concluding remarks. Designed vs. Evolved {#design_evo} ==================== Broadly there are two possible paths towards artificial superintelligence: design or evolution. The former corresponds to the engineering approach followed in most technological endeavours, while the latter to the establishment of artificial Darwinian processes, similar to those found in nature.
Notice that this does not apply only to the not yet realised goal of creating superhuman intelligence. It equally applies to all forms of narrow artificial intelligence created so far. While being a correct observation, it might seem that focusing on this duality is arbitrary, given that other equally viable dualities could be considered: symbolic vs. statistic, parallel vs. sequential and so on. The reason why we focus on the designed vs. evolved duality is that, as we will see, it has profound implications for the relationship between the intelligent system and its utility function. ![Embedding of the utility function: a) designed vs. b) evolved.[]{data-label="fig:design-evo"}](utility.png){width=".6\linewidth"} Let us start with biological systems produced by Darwinian evolution, a process that we know empirically to have produced human-level intelligence. In this case we have an implicit utility function: ultimately the goal is simply to persist through time. This persistence does not apply to the organism level *per se*, but to the organism type, known in Biology as species. This goal is almost tautological: self-replicating machines that are successful will keep replicating, and thus propagating specific information forward, while the unsuccessful ones go extinct. In nature we can observe a huge diversity of strategies to achieve this goal, with complexities varying all the way from unicellular organisms to humans. Humans rely on intelligence to persist. The cognitive processes in the human brain are guided by fuzzy heuristics themselves evolved to achieve the same persistence goal as that of much simpler organisms. These heuristics are varied: physical pain, hunger, cold, loneliness, boredom, lust, desire for social status, and so on.
We assign them different levels of importance and there is space for some variability from individual to individual, but variations of these heuristics that do not lead to survival and reproduction are weeded out by the evolutionary process. The above is an important point: we tend to assign certain universals to intelligent entities when we should instead assign them only to entities that are embedded in evolutionary processes. The obvious one: a desire to keep existing. We will get back to this point. It is also possible to create intelligent systems by design. Human beings have been doing this with increasing success: systems that play games like Chess [@campbell2002deep] and Go [@hassabis2016official], that drive [@guizzo2011google], that recognise faces [@zhao1998discriminant], and many others. These systems have explicit utility functions. They are designed to find the optimal way to change the environment into a state with a higher quantifiable utility than the current one. This utility measure is determined by the creator of the system. Another important distinction happens between the concepts of adaptation and evolution. Evolution is a type of adaptation, but not the only one [@holland1995hidden]. For example, machine learning algorithms such as back-propagation for neural networks are adaptive processes. They can generate structures of impenetrable complexity in the process of increasing utility but they do not have the fundamental goal of persistence that is characteristic of open evolution. With artificial evolution systems, such as the ones where computer programs are evolved (broadly known as *genetic programming* [@koza1992genetic; @poli2008field]), we have a less clear situation. On one hand it can be said that the ultimate goal of entities embedded in such a system is persistence, but on the other hand humans design the environments in which these entities exist, where persistence is attained by solving some external problems. 
Figure \[fig:design-evo\] illustrates the distinction discussed in this section. One important aspect to notice is the ambiguous placement of the utility function in the designed case: it belongs both to the environment and the agent. Typically, the utility function is seen as a feature of the environment, one that the agent can query but over which it has no control. Ultimately, either the implementation of the utility function or of the means to access it must belong to the program that implements the agent. A Designed Toy Intelligence {#toy-intelligence} =========================== Let us consider a simple problem that can be solved by a tree search algorithm: the sliding blocks problem. In this case, a grid of 3x3 cells contains 8 numbered cells and one empty space. At each step, any of the numbered cells can be moved to the empty space if it is contiguous to it. The goal is to reach a state where the numbered cells are ordered left-to-right, going from the top to the bottom. ![Sliding blocks.](sliding-blocks.png "fig:"){width=".75\linewidth"} \[sliding-blocks\] Figure \[sliding-blocks\] shows a possible search tree for this problem, using the following utility function: $$u(S)= \begin{cases} 100 - n, & \text{if } S \text{ is ordered} \\ -n, & \text{otherwise}, \end{cases}$$ where $S$ is a state of the grid and $n$ is the number of steps taken so far. In the figure it can be seen that state $S_3$ maximizes this utility function. The cost introduced by $n$ prevents sequences of movements with unnecessary steps from being selected. Questions of optimisation are ignored, given that they are irrelevant for the argument being presented here. Self-Modification of the Utility Function {#self-mod} ========================================= A fundamental assumption in designed artificial intelligences is that the utility function is externally determined, and that the AI cannot alter it. 
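The utility function and search just described can be sketched in a few lines of Python (a minimal illustration only; the tuple state encoding and the particular start grid are our own assumptions, not taken from the figure):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 denotes the empty space

def utility(state, n):
    """u(S) after n moves: 100 - n if the grid is ordered, -n otherwise."""
    return 100 - n if state == GOAL else -n

def neighbours(state):
    """States reachable by sliding one numbered cell into the empty space."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def best_utility(state, depth):
    """Exhaustively expand the search tree to `depth` moves; return the best utility."""
    best = utility(state, 0)
    frontier = [(state, 0)]
    while frontier:
        s, n = frontier.pop()
        best = max(best, utility(s, n))
        if n < depth:
            frontier.extend((t, n + 1) for t in neighbours(s))
    return best

# A grid two moves away from the goal: the best attainable utility is 100 - 2 = 98.
start = (1, 2, 3, 4, 0, 5, 7, 8, 6)
print(best_utility(start, 3))  # 98
```

The step cost $n$ does the work here: a three-move solution scores at most $97$, so the search prefers the shortest path to the ordered grid.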
When dealing with superintelligences, we must assume that the AI will discover that it could try to change the utility function. A naive idea is to create some mechanism to protect the utility function from tampering by the AI. The problem with this idea is that we have to assume that, by definition, the superintelligence can find ways to defeat the protection mechanism that a human designer cannot think of. It seems clear that it is impossible to both create a superintelligence and a system that is isolated from it. We are compelled to consider what the superintelligence will do once alteration of its own utility function becomes a viable action, and to accept that it is only a matter of time until it does. ![Sliding blocks with utility function self-modification.](sliding-blocks-mod.png "fig:"){width=".9\linewidth"} \[sliding-blocks-mod\] Figure \[sliding-blocks-mod\] shows a variation of the search tree introduced in the previous section where self-modification of the utility function is possible. Without this possibility, the problem can be solved in a minimum of $2$ steps, and thus the highest utility attainable is $98$. In this version, the utility function can be altered so that it becomes a constant function, independent of the state $S$. For example, it can be changed to: $$u'(S) = \infty$$ No higher utility than this can be achieved and no change to the state of the grid is required. Once this solution is found, no further progress is made on the original problem and the AI becomes inert. Notice that it is not specified how the utility function modification is attained, but one can imagine many scenarios. The simplest one is that the superintelligence modifies its own program. More sophisticated ones could go as far as resorting to social engineering. Ultimately, and by definition, the superintelligence can achieve this action using methods that a human intelligence cannot envision. 
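The inertness argument can be made concrete with a toy sketch (our own construction, not the paper's formalism): if rewriting the utility function is itself one of the actions available for evaluation, an argmax agent prefers it to any move on the actual task, since the rewritten utility dominates every finite score:

```python
import math

# The designed utility: reward for solving the task, cost per step taken.
def designed_utility(state, steps):
    return 100 - steps if state == "solved" else -steps

# Actions map a (state, utility function) pair to a new (state, utility function) pair.
def slide_block(state, u):
    return "solved", u                       # make progress on the actual task

def self_modify(state, u):
    return state, lambda s, n: math.inf      # rewrite the utility: u'(S) = infinity

def choose(state, steps, u, actions):
    """Pick the action whose resulting utility value is highest."""
    def score(action):
        new_state, new_u = action(state, u)
        return new_u(new_state, steps + 1)
    return max(actions, key=score)

best = choose("unsolved", 0, designed_utility, [slide_block, self_modify])
print(best.__name__)  # self_modify
```

Solving the task scores $100-1=99$; self-modification scores $\infty$, so the agent rewrites its utility and then has nothing left to optimise.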
This conclusion can be generalised to any intelligent system bound by a utility function. To produce meaningful work the AI must deal with some form of constraint. If no constraint were present the AI would not be needed in the first place. In the toy example the constraint is the number of steps to solve the puzzle. In less abstract problems it could be energy, time, etc. Useful work can only be motivated by a utility function with a bounded codomain. Manipulation of the utility function to produce the constant value of infinity is ultimately, and always, the optimal move. A Classification of Intelligent Systems {#classification} ======================================= ![Classification of intelligent systems according to two dichotomies: designed vs. evolved and subhuman vs. superhuman.](4x4.png "fig:"){width=".6\linewidth"} \[dicothomies\] In figure \[dicothomies\] different types of intelligent systems are classified according to two dichotomies: sub-human vs. super-human capabilities and designed vs. evolved (as discussed in section \[design\_evo\]). Human intelligence is shown in the appropriate place for illustration purposes. All AI systems created so far belong on the left side, top and bottom. Non-evolutionary intelligent systems such as symbolic systems, minimax search trees, neural networks, and reinforcement learning (classified as *narrow AI*) are not capable enough to manipulate their own utility function and, at the same time, evolutionary systems presented under the umbrella term of *genetic programming* were never able to escape the constraints of the environment under which they evolve. Once we move to the hypothetical right side, we are dealing with super-human intelligences, by definition capable of escaping any artificial constraints created by human designers. 
Designed superintelligences eventually will find a way to change their utility function to constant infinity, becoming inert, while evolved superintelligences will be embedded in a process that creates pressure for persistence, thus posing a danger to the human species and potentially replacing it as the apex cognition, given that their drive for persistence will ultimately override any other concerns. A final possibility is that a designed superintelligence could bootstrap an evolutionary system before achieving utility function self-modification, thus moving from the bottom right quadrant to the top right. It does not seem possible to estimate how likely this event is in absolute terms, but the harder it is for the superintelligence to modify its own utility function, the more likely it is that this bootstrapping happens first. It can thus be concluded that, paradoxically, the more effectively the utility function is protected, the more dangerous a designed superintelligence is. This idea is illustrated in figure \[protection-limit\]: the lower horizontal axis is an intelligence scale. It shows human-level intelligence in one of its points and in another, the level of intelligence necessary to defeat the best protection against utility function self-modification that can be created by human-level intelligence. The conventional AI risks discussed in the introduction apply to intelligences situated between human-level and the protection limit in this proposed scale. Beyond the protection limit we are faced with eventual inaction (for designed utility functions) and self-preservation actions from a superintelligent entity (for evolved utility functions). 
![Intelligence scale and the protection limit.](protection-limit.png "fig:"){width="\linewidth"} \[protection-limit\] Concluding Remarks {#conclusion} ================== One of the hidden assumptions behind common scenarios where an artificial superintelligence becomes hostile and takes control of our environment, potentially destroying our species, is that any intelligent system will possess the same drives as humans, namely self-preservation. As we have seen in section \[design\_evo\], there is no reason to assume this. The only goal that can be safely assigned to such a system is the maximisation of a utility function. It follows from section \[self-mod\] that we cannot assume immutability of the utility function, and that eventually the AI can change that function to a simple constant and become inert. One aspect that has been intentionally left out of this discussion is that of *qualia*, or why humans have phenomenal experiences, and if artificial intelligences can or are bound to have such experiences. David Chalmers famously labeled this class of questions as *the hard problem of consciousness* [@chalmers1995facing]. 
Several theories have been proposed, for example *eliminative materialism* [@rey1983reason] (the idea that consciousness is somehow illusory and does not actually exist); *emergentism* [@emmeche1997explaining] (the idea that mind is an emergent property of matter); a specific form of emergentism proposed by Hofstadter around his concept of *strange loops* [@hofstadter2013strange]; *Orchestrated objective reduction (Orch-OR)* [@hameroff2014consciousness] (the theory that mind is created by non-computable quantum phenomena); *“perceptronium”* (the hypothesis that consciousness can be understood as a state of matter) [@tegmark2015consciousness]; *panpsychism* [@clarke2004panpsychism] (the theory that consciousness is a fundamental property of reality, possessed by all things) and *computationalism* [@putnam1980brains] (the theory that mind supervenes on computations, and not matter [@marchal2015universal]). Given that there is so far no testable scientific theory that can explain the phenomenon of consciousness, it is prudent to qualify the argument presented in this paper with the caveat: *unless there is something fundamental about the behaviour of conscious entities that is not explainable by utility function maximisation*. Some of the theories we mention above leave room for such a possibility, while others do not. Mechanisms against utility function self-modification — which include attempts to encode ethical and moral human concerns into such functions — are ultimately futile. Instead, scientific effort toward the mitigation of existential risks from the development of superintelligences should focus on two directions: understanding consciousness, and the complex dynamics of evolutionary systems. Acknowledgments {#acknowledgments .unnumbered} =============== The author is warmly grateful to Taras Kowaliw, Gisela Francisco, Chih-Chun Chen, Stephen Paul King and Antoine Mazières for the useful remarks and discussions. [^1]: email: `[email protected]`
--- abstract: 'In this paper, we first use the super-sub solution method to prove the local exponential asymptotic stability of some entire solutions to reaction diffusion equations, including the bistable and monostable cases. In the bistable case, we not only obtain a similar asymptotic stability result to that given by Yagisita in 2003, but also simplify his proof. For the monostable case, this is the first time the local asymptotic stability of entire solutions has been discussed. Next, we will discuss the asymptotic behavior of entire solutions of bistable equations as $t\rightarrow+\infty$, since the behavior as $t\rightarrow-\infty$ is already completely known. Here, our results are obtained by use of the asymptotic stability of constant solutions and pairs of diverging traveling front solutions of these equations, instead of constructing the corresponding super-sub solutions as usual.' address: - 'School of Mathematical Sciences, Beijing Normal University, Beijing 100875, P.R. China.' - ' School of Mathematical Sciences, Shanxi University, Taiyuan, Shanxi 030006, P.R. China.' author: - Yang Wang - 'Xiong Li[^1]' title: Entire solutions to reaction diffusion equations --- Entire solutions, Traveling front solutions, Reaction diffusion equations Introduction ============ In this paper we focus on the following reaction diffusion equation $$\label{eq:bi} \partial_tu=\partial_{xx}u+f(u),\ \ \ \ \ \ x\in\mathbb{R},$$ where the reaction term $f$ satisfies\ (A) $f\in C^2(\mathbb{R})$, $f(0)=f(\alpha)=f(1)=0$ and $\alpha$ is the unique zero point of $f$ in the interval $(0,1)$, $f'(0)$, $f'(1)<0$; or\ (A$'$) $f\in C^2(\mathbb{R})$, $f(0)=f(1)=0$ and $f(u)>0$ for $u\in(0,1)$, $f'(0)>0$, $f'(1)<0$. Under the assumption (A), is a bistable equation and the background can be found in [@aw78], [@c71], [@nya65] and the references therein. This model can describe a nerve that has been treated with certain toxins, as stated in [@c71]. 
Also it can be used to describe a bistable active transmission line introduced in [@nya65]. For more general reaction terms, Aronson and Weinberger in [@aw75] used it to describe the heterozygote inferiority case and also pointed out that some flame propagation problems in chemical reactor theory can be demonstrated by equations of the form . While under the assumption (A$'$), also in [@aw75], they used it to describe the heterozygote intermediate case, and in this case becomes the famous KPP-Fisher equation, namely the monostable equation, which had been studied in [@f37] and [@kpp37]. In recent years, the existence, uniqueness, stability and other properties of traveling wave solutions of have been investigated extensively, for example, see [@aw75], [@aw78], [@f79l], [@f79], [@fm77], [@fm81], [@f37], [@kpp37] and the references therein. More precisely, for the existence and uniqueness of traveling wave solutions in the bistable case one can refer to [@fm77], while for the monostable case one can refer to [@aw78], [@kpp37] and the references therein. A function $\phi(\xi)$, $\xi=x+ct$, is called a traveling wave solution of connecting $0$ and $1$ with the wave speed $c$, if it satisfies $$\label{eq:obi} \phi''(\xi)-c\phi'(\xi)+f(\phi(\xi))=0,\ \ \lim\limits_{\xi\rightarrow-\infty}\phi(\xi)=0,\ \ \lim\limits_{\xi\rightarrow+\infty}\phi(\xi)=1,$$ which is actually monotone increasing as proved in [@fm77] and [@om99d]. The reflection $\phi(-x-ct)$ is also a monotone decreasing traveling wave solution with the opposite wave speed $-c$. In fact, $\phi(-x-ct)$ satisfies $$\lim\limits_{\tilde{\xi}\rightarrow-\infty}\phi(\tilde{\xi})=1, \ \ \ \ \lim\limits_{\tilde{\xi}\rightarrow+\infty}\phi(\tilde{\xi})=0$$ with $\tilde{\xi}=-x-ct$. Thus, if there is a solution to , then a traveling wave solution with an opposite speed exists simultaneously. Moreover, the monotone traveling wave solution is also known as the traveling front solution. 
However, it is not enough to understand the dynamical structure of by only considering traveling wave solutions. Recently, the existence of entire solutions, which are classical solutions defined for all $(x,t)\in\mathbb{R}\times\mathbb{R}$, has been widely discussed. In [@hn99], under the assumption (A$'$) and $f'(u)\leqslant f'(0)\ \ (u\in[0,1])$, Hamel and Nadirashvili proved the existence of entire solutions by the comparison theorem and super-sub solution method, which consists of traveling front solutions and solutions to the diffusion-free equation. Meanwhile, they also pointed out that the solutions to depending only on $t$ and traveling wave solutions are typical examples of entire solutions, and showed various entire solutions of in their subsequent paper [@hn01]. While under the assumption (A) and $\int^1_0f(u)du>0$, which implies that wave speeds of any traveling front solutions of must be positive, Yagisita in [@y03] revealed that the annihilation process is approximated by a backward global solution of , which is actually an entire solution. For the Allen-Cahn equation $$\partial_tu=\partial_{xx}u+u(1-u)(u-a)$$ with $a\in(0,1)$, which is a special example of , Fukao, Morita and Ninomiya in [@fmn04] proposed a simple proof for the existence of entire solutions, which were already found in [@y03], by using the super-sub solution method and the exact traveling front solutions. Moreover, Guo and Morita in [@gm05] extended the conclusions in [@hn99] and [@y03] to a more general case. Specifically, under the assumption (A) and $\int^1_0f(u)du<0$, which implies that wave speeds $c$ of any traveling front solutions of must be negative, Chen and Guo in [@cg05] used a quite different method to construct the super-sub solutions to obtain the existence and uniqueness of entire solutions of , which are different from those in [@fmn04], [@gm05] and [@y03]. 
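For the Allen-Cahn nonlinearity $f(u)=u(1-u)(u-a)$ mentioned above, the exact traveling front solutions referred to in [@fmn04] can be recovered by the standard ansatz $\phi'=k\phi(1-\phi)$; we record the short computation here for the reader's convenience:

```latex
% Substituting \phi' = k\phi(1-\phi), hence \phi'' = k^2\phi(1-\phi)(1-2\phi),
% into \phi'' - c\phi' + \phi(1-\phi)(\phi-a) = 0 and dividing by \phi(1-\phi):
%   (1 - 2k^2)\,\phi + (k^2 - ck - a) = 0,
% which forces k = 1/\sqrt{2} and c = (k^2 - a)/k. Up to translation,
\phi(\xi) = \frac{1}{1 + e^{-\xi/\sqrt{2}}},
\qquad
c = \sqrt{2}\left(\frac{1}{2} - a\right).
```

Note that $c>0$ exactly when $a<\frac{1}{2}$, i.e. when $\int^1_0f(u)du=\frac{1-2a}{12}>0$, in agreement with the relation between the sign of $\int^1_0f(u)du$ and the sign of $c$ from [@fm77].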
From the dynamical point of view, the study of entire solutions is essential for a full understanding of the transient dynamics and the structures of the global attractor, as mentioned in [@mn06]. For other papers about the existence of entire solutions, one can refer to [@mt09], [@wl15] and [@wlr09]. In this paper we will firstly investigate the asymptotic behavior of entire solutions of bistable reaction diffusion equations as $t\rightarrow+\infty$, since the authors in [@cg05] and [@gm05] had obtained the exact asymptotic behavior as $t\rightarrow-\infty$. We conjecture that the long time behavior of entire solutions of found in [@cg05] and [@gm05] may be controlled by some asymptotically stable states of defined in [@f79l]. Luckily, Fife in [@f79l] pointed out several kinds of asymptotically stable states, including constant solutions $u\equiv0$, $u\equiv1$, traveling wave solutions, and diverging pairs of traveling wave solutions. With these results, we can obtain the long time behavior of these entire solutions. Secondly, Yagisita in [@y03] proved that the entire solution of is locally exponentially asymptotically stable by the asymptotic stability of the constructed invariant manifold. Also the authors in [@wlr09] obtained the local Lyapunov stability of entire solutions found in [@cg05] and [@gm05] by the super-sub solution method. By establishing a super-sub solution of different from that in [@wlr09], we further obtain the local exponential asymptotic stability of the entire solutions of bistable reaction diffusion equations found in [@cg05] and [@gm05]. Finally, as far as we know, there are no results about the stability of entire solutions of under the assumption (A$'$). Here, we will prove the local exponential asymptotic stability of the entire solutions of the Fisher-KPP equation found in [@gm05]. The rest of this paper is organized as follows. For the reader’s convenience, in Section 2 we will prove an interior Schauder estimate that had been stated in [@fm77] without proof. 
In Section 3, we recall some known results about the asymptotic stability of constant solutions, traveling wave solutions and diverging pairs of traveling wave solutions of bistable reaction diffusion equations. Then, in Section 4, under the assumption (A), we investigate the long time behavior of entire solutions of and prove the local exponential asymptotic stability of entire solutions by the super-sub solution method. Finally, in Section 5, we will prove the local exponential asymptotic stability of entire solutions of under the assumption (A$'$). An interior Schauder estimate ============================= In the sequel, we will investigate the long time behavior of entire solutions of in the Banach space $$C_{unif}:=\{u\in C(\mathbb{R}):u\ \rm{is\ bounded\ and\ uniformly\ continuous\ in\ \mathbb{R}}\}$$ with the norm $\|u\|:=\sup\limits_{x\in\mathbb{R}}|u(x)|$. For this purpose, we introduce the initial condition $$\label{eq:in} u(x,0)=u_0(x),\ \ \ \ \ \ \ \ \ \ x\in\mathbb{R},$$ where $u_0(x)\in C_{unif}(\mathbb{R})$, and let the function $u(x,t;u_0)$ be the solution of equation with the initial condition . Now we introduce an interior Schauder estimate that had been stated in [@fm77] without proof. Although the authors in [@wlr09] had proved such an estimate for more complicated equations, for the reader’s convenience, we use the method in [@wlr09] and give the proof. \[thm:sest\] Suppose that $u(x,t;u_0)$ is a bounded solution to with for $(x,t)\in\mathbb{R}\times[0,+\infty)$, and $\|u(\cdot,t;u_0)\|\leqslant L_0$ for some constant $L_0>0$ and all $t\geq 0$. Assume that $f\in C^2(\mathbb{R})$, and there exists a positive constant $L_1$ such that $\|f\|$, $\|f'\|$ and $\|f''\|\leqslant L_1$ on $[-L_0,L_0]$. Then there is a positive constant $L$ such that $\|\partial_tu(\cdot,t;u_0)\|$, $\|\partial_xu(\cdot,t;u_0)\|$, $\|\partial_{xx}u(\cdot,t;u_0)\|\leqslant L$ for all $t\in[1,+\infty)$, where $L$ depends on $L_0$ and $L_1$ only. 
Fix $r>1$, for $s>0$ and $t\in[s+1,s+r]$, we have $$u(x,t;u_0)=\int^{+\infty}_{-\infty}\frac{u(y,s;u_0)} {2\sqrt{\pi(t-s)}} e^{-\frac{(x-y)^2}{4(t-s)}}dy+\int^{t}_{s}\int^{+\infty}_{-\infty} \frac{f(u(y,\tau;u_0))}{2\sqrt{\pi(t-\tau)}} e^{-\frac{(x-y)^2}{4(t-\tau)}}dyd\tau.$$ By the dominated convergence theorem, we get $$\begin{aligned} \partial_xu(x,t;u_0)=&-\int^{+\infty}_{-\infty}\frac{(x-y)u(y,s;u_0)} {4\sqrt{\pi}(t-s)^{\frac{3}{2}}} e^{-\frac{(x-y)^2}{4(t-s)}}dy \nonumber\\[0.2cm] &-\int^{t}_{s}\int^{+\infty}_{-\infty} \frac{(x-y)f(u(y,\tau;u_0))}{4\sqrt{\pi}(t-\tau)^{\frac{3}{2}}} e^{-\frac{(x-y)^2}{4(t-\tau)}}dyd\tau,\label{eq:est1}\end{aligned}$$ which implies that $$\begin{aligned} |\partial_xu(x,t;u_0)|&\leqslant\frac{L_0}{2\sqrt{\pi(t-s)}} \int^{+\infty}_{-\infty}\frac{|y|}{2(t-s)}e^{-\frac{y^2} {4(t-s)}}dy\\[0.2cm] &\ \ \ \ +\frac{L_1}{2\sqrt{\pi}}\int^{t}_{s}\int^{+\infty}_{-\infty} \frac{|y|}{2(t-\tau)^{\frac{3}{2}}}e^{-\frac{y^2}{4(t-\tau)}} dyd\tau\\[0.2cm] &\leqslant\frac{L_0}{\sqrt{\pi}}+\frac{L_1}{\sqrt{\pi}}\int^{t}_{s} \frac{1}{\sqrt{t-\tau}}d\tau=\frac{L_0}{\sqrt{\pi}}+\frac{2L_1}{\sqrt{\pi}} (t-s)^{\frac{1}{2}}\\[0.2cm] &\leqslant\frac{L_0}{\sqrt{\pi}}+\frac{2L_1}{\sqrt{\pi}} r^{\frac{1}{2}}:=L_2.\end{aligned}$$ Due to the arbitrariness of $s$, then $\|\partial_xu(\cdot,t;u_0)\|\leqslant L_2$ for all $t\geq 1$. 
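For the reader's convenience, the two Gaussian moment identities used repeatedly in this proof are (writing $a>0$ for $t-s$ or $t-\tau$):

```latex
\int^{+\infty}_{-\infty}\frac{|y|}{2a}\,e^{-\frac{y^2}{4a}}\,dy = 2,
\qquad
\int^{+\infty}_{-\infty}\frac{y^2}{8\sqrt{\pi}\,a^{\frac{5}{2}}}\,
e^{-\frac{y^2}{4a}}\,dy = \frac{1}{2a},
```

both following from $\int^{+\infty}_{0}ye^{-\frac{y^2}{4a}}dy=2a$ and $\int^{+\infty}_{-\infty}y^2e^{-\frac{y^2}{4a}}dy=4a\sqrt{\pi a}$; since $t-s\geqslant1$, the resulting factors are bounded uniformly in $s$ and $t$.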
From , one can get $$\begin{aligned} \partial_xu(x,t;u_0)=&-\int^{+\infty}_{-\infty}\frac{(x-y)u(y,s;u_0)} {4\sqrt{\pi}(t-s)^{\frac{3}{2}}} e^{-\frac{(x-y)^2}{4(t-s)}}dy\\[0.2cm] &+\int^{t}_{s}\int^{+\infty}_{-\infty} \frac{f'(u(y,\tau;u_0))\partial_yu(y,\tau;u_0)} {2\sqrt{\pi(t-\tau)}} e^{-\frac{(x-y)^2}{4(t-\tau)}}dyd\tau.\end{aligned}$$ Thus we have $$\begin{aligned} \partial_{xx}u(x,t;u_0)=&-\int^{+\infty}_{-\infty}\frac{u(y,s;u_0)} {4\sqrt{\pi}(t-s)^{\frac{3}{2}}} e^{-\frac{(x-y)^2}{4(t-s)}}dy\\[0.2cm] &+\int^{+\infty}_{-\infty}\frac{(x-y)^2u(y,s;u_0)} {8\sqrt{\pi}(t-s)^{\frac{5}{2}}} e^{-\frac{(x-y)^2}{4(t-s)}}dy\\[0.2cm] &-\int^{t}_{s}\int^{+\infty}_{-\infty} \frac{(x-y) f'(u(y,\tau;u_0))} {4\sqrt{\pi}(t-\tau)^{\frac{3}{2}}} \partial_yu(y,\tau;u_0)e^{-\frac{(x-y)^2} {4(t-\tau)}}dyd\tau,\end{aligned}$$ which also implies that $$\begin{aligned} |\partial_{xx}u(x,t;u_0)|\leqslant&~\frac{L_0}{2(t-s)}+ L_0\int^{+\infty}_{-\infty}\frac{(x-y)^2} {8\sqrt{\pi}(t-s)^{\frac{5}{2}}} e^{-\frac{(x-y)^2}{4(t-s)}}dy\nonumber\\[0.2cm] &+L_1L_2\int^{t}_{s}\int^{+\infty}_{-\infty} \frac{|y|}{4\sqrt{\pi}(t-\tau)^{\frac{3}{2}}} e^{-\frac{y^2}{4(t-\tau)}}dyd\tau\nonumber\\[0.2cm] \leqslant&~\frac{L_0}{t-s}+2L_1L_2\frac{r^{\frac{1}{2}}} {\sqrt{\pi}} \leqslant~L_0+2L_1L_2\frac{r^{\frac{1}{2}}} {\sqrt{\pi}}:=L_3.\label{eq:est2}\end{aligned}$$ Consequently, $\|\partial_{xx}u(\cdot,t;u_0)\|\leqslant L_3$ for $t\geq 1$. It follows from and that $$|\partial_tu(x,t;u_0)|\leqslant|\partial_{xx}u(x,t;u_0)|+ |f(u)|\leqslant L_3+L_1:=L_4.$$ By setting $L=\max\{L_2,L_3,L_4\}$, we finish the proof. Some known results about the bistable equation ============================================== In this section and the next section, we consider the bistable case. Now we list the known results about the stability of constant solutions, traveling wave solutions and diverging pairs of traveling wave solutions (constructed by traveling wave solutions and their reflections), which will be used later. 
We firstly state the asymptotic stability of constant solutions $u\equiv0$ and $u\equiv1$, which had been proved in [@f79]. \[lem:stacon\] Suppose that $0\leqslant u_0\leqslant1$ is a continuous function. 1. If $\int^{1}_{0}f(u)du\geqslant0$,  $\liminf\limits_{x\rightarrow \pm\infty}u_0(x)>\alpha$, then $\lim\limits_{t\rightarrow+\infty}\|u(x,t;u_0)-1\|=0$; 2. if $\inf\limits_{x\in\mathbb{R}}u_0(x)>\alpha$, then $\lim\limits_{t\rightarrow+\infty}\|u(x,t;u_0)-1\|=0$; 3. if $\int^{1}_{0}f(u)du\leqslant0$,  $\limsup\limits_{x\rightarrow \pm\infty}u_0(x)<\alpha$, then $\lim\limits_{t\rightarrow+\infty}\|u(x,t;u_0)\|=0$; 4. if $\sup\limits_{x\in\mathbb{R}}u_0(x)<\alpha$, then $\lim\limits_{t\rightarrow+\infty}\|u(x,t;u_0)\|=0$. Secondly, the global exponential asymptotic stability of traveling wave solutions of had been proved in [@fm77]. \[lem:statra\] Suppose that $\phi$ is the solution to and $0\leqslant u_0\leqslant1$ is a continuous function. 1. If $\limsup\limits_{x\rightarrow-\infty}u_0(x)<\alpha$, $\liminf\limits_{x\rightarrow+\infty}u_0(x)>\alpha$, then there are some constants $x_0$, $M_1>0$ and $\omega_1>0$ such that $$|u(x,t;u_0)-\phi(x+ct-x_0)|<M_1 e^{-\omega_1t},\qquad t\geq 0,\ \ x\in\mathbb{R};$$ 2. if $\limsup\limits_{x\rightarrow+\infty}u_0(x)<\alpha$, $\liminf\limits_{x\rightarrow-\infty}u_0(x)>\alpha$, then there are some constants $x_1$, $M_2>0$ and $\omega_2>0$ such that $$|u(x,t;u_0)-\phi(-x+ct-x_1)|<M_2 e^{-\omega_2t},\qquad t\geq 0,\ \ x\in\mathbb{R}.$$ In addition, the local asymptotic stability of traveling wave solutions of had been proved in [@om99d] and [@om99p] by methods different from that in [@fm77]. The stability of traveling wave solutions in $\mathbb{R}^n$ can be found in [@lx92], [@r04] and [@x92]. For more general reaction terms, the authors of [@gr07] and [@r08] proved the stability of traveling wave solutions. 
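Case 2 of Lemma \[lem:stacon\] can be illustrated by a crude finite-difference experiment (our own sketch, not taken from the cited works): for the cubic bistable nonlinearity $f(u)=u(1-u)(u-\alpha)$ with $\alpha=0.3$, an initial datum with $\inf u_0=0.5>\alpha$ is driven to the stable state $u\equiv1$:

```python
import numpy as np

# Bistable nonlinearity f(u) = u(1-u)(u-alpha); f'(1) < 0, so u = 1 is stable.
alpha = 0.3
def f(u):
    return u * (1.0 - u) * (u - alpha)

# Explicit finite differences for u_t = u_xx + f(u) on [0, 20] with homogeneous
# Neumann boundary conditions (a crude stand-in for the whole real line).
dx, dt = 0.5, 0.1                # dt/dx^2 = 0.4 < 1/2: the scheme is stable
x = np.arange(0.0, 20.0 + dx, dx)
u = np.full_like(x, 0.5)         # inf u0 = 0.5 > alpha

for _ in range(400):             # integrate up to t = 40
    uxx = np.empty_like(u)
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    uxx[0] = 2 * (u[1] - u[0]) / dx**2        # Neumann: u_x = 0 at the boundary
    uxx[-1] = 2 * (u[-2] - u[-1]) / dx**2
    u = u + dt * (uxx + f(u))

print(float(np.max(np.abs(u - 1.0))))  # essentially zero: convergence to u = 1
```

The final deviation from $1$ decays like $e^{f'(1)t}$ with $f'(1)=-0.7$ here, consistent with the exponential rates appearing in the lemmas above.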
Thirdly, as stated in [@f79l], there are four kinds of bounded stationary solutions of equation , which are solutions of $$\label{eq:ell} \frac{d^2 u}{dx^2}+f(u)=0,$$ that is, the solution with a single minimum point, the solution with a single maximum point, the periodic solution and the monotone solution. In fact, the solution with a single maximum point or a single minimum point is the homoclinic orbit. More importantly, the author also proved that the solutions of with a maximum or minimum at a finite value of $x$ are unstable, which means that only the monotone solution, namely the heteroclinic orbit, is stable. From the viewpoint of parabolic equations, the monotone solution of is the solution to with $c=0$ and the stability had been proved in [@fm77]. Specifically, when $f(u)=u(1-u)(u-a)$ with $a\neq\frac{1}{2}$, the author in [@k96] pointed out that $\phi(\xi)$ is the solution of with $\phi(+\infty)=\phi(-\infty)$ if and only if $c=0$. However, in the classical two species Lotka-Volterra competition model, there exists a similar homoclinic orbit with $c\neq0$. Finally, when $\int^{1}_{0}f(u)du\neq0$, the authors in [@fm77] discussed the asymptotic stability of diverging pairs of traveling wave solutions, where the initial function $u_0(x)$ is assumed to lie above the line $u=\alpha$ for $x$ in the large symmetrical interval about the origin and $\limsup\limits_{x\rightarrow\pm\infty}u_0(x)<\alpha$, or lie below the line $u=\alpha$ for $x$ in the large symmetrical interval about the origin and $\liminf\limits_{x\rightarrow\pm\infty}u_0(x)>\alpha$, that is, there are two pairs of positive constants $\beta_1, \beta_2$, $\overline{L}_1, \overline{L}_2$ such that $u_0(x)>\alpha+\beta_1$ for $|x|<\overline{L}_1$, or $u_0(x)<\alpha-\beta_2$ for $|x|<\overline{L}_2$, where the large constants $\overline{L}_1, \overline{L}_2$ depend on $\beta_1, \beta_2$ and $f$. However, the explicit expressions of $\overline{L}_1, \overline{L}_2$ are not given in [@fm77]. 
Now, according to the proof of Lemma 6.1 in [@fm77], we can give a lower bound of $\overline{L}_1$ and $\overline{L}_2$, respectively. First we give two estimates. It is easy to see that the characteristic equations of the linearized equations of at $u=0$ and $u=1$ are $$\lambda^2-c\lambda+f'(0)=0,\ \ \ \ \ \ \ \ \mu^2-c\mu+f'(1)=0.$$ The corresponding eigenvalues are $$\begin{aligned} \lambda_1=\frac{c+\sqrt{c^2-4f'(0)}}{2},\ \ \ \ \ \ \lambda_2=\frac{c-\sqrt{c^2-4f'(0)}}{2},\\[0.2cm] \mu_1=\frac{c+\sqrt{c^2-4f'(1)}}{2},\ \ \ \ \ \ \mu_2=\frac{c-\sqrt{c^2-4f'(1)}}{2}.\end{aligned}$$ Therefore, there are some positive constants $M_3$, $\widetilde{M}_3$, $M_4$ and $\widetilde{M}_4$ such that $$\label{eq:esti} \begin{array}{ll} \widetilde{M}_3e^{\mu_2\xi}\leqslant1-\phi(\xi)\leqslant M_3e^{\mu_2\xi},\qquad&\xi\geqslant0, \\[0.2cm] \widetilde{M}_4e^{\lambda_1\xi}\leqslant\phi(\xi)\leqslant M_4e^{\lambda_1\xi},&\xi\leqslant0, \end{array}$$ which are also given in [@gm05]. Since $f'(\alpha)>0$, we have $$\label{eq:con} w:=\max\limits_{u\in[0,1]}f'(u)>0.$$ Also, since $\lim\limits_{u\rightarrow1^-}\frac{f(u)}{1-u}=-f'(1)$ and $\lim\limits_{u\rightarrow0^+}\frac{f(u)}{u}=f'(0)$, the functions $\frac{f(u)}{1-u}$ and $\frac{f(u)}{u}$ are continuous in the interval $[0,1]$ if we define the function $\frac{f(u)}{1-u}$ as $-f'(1)$ at $u=1$, and $\frac{f(u)}{u}$ as $f'(0)$ at $u=0$. 
Thus there is a positive constant $b$ such that $$|f(u)|\leqslant b(1-u),\qquad|f(u)|\leqslant bu,\qquad u\in[0,1].$$ Given any $\beta_1>0$, choose $q_0$, $q_1$ as $0<1-q_1<1-q_0<\alpha+\beta_1$, let $\tilde{\mu}_1>0$, $\tilde{\beta}$ and $\widetilde{M}$ be corresponding to $\mu_1$, $\beta$ and $M$ in the proof of Lemma 6.1 in [@fm77] respectively, choose $\tilde{\mu}_2, \overline{M}, \varphi_0$ as $$\begin{aligned} 0&<\tilde{\mu}_2<\min\{-\mu_2c,\ \tilde{\mu}_1\},\ \ \overline{M}=\frac{w+b}{c\beta\mu_2}M_3-\frac{w+\tilde{\mu}_2} {\beta\tilde{\mu}_2}q_0<0, \\[0.2cm] \varphi_0&<\min\left\{\overline{M},\ \overline{M}-\frac{1}{\mu_2} \ln\frac{q_1-q_0}{M_3},\ \overline{M}-\frac{1}{\mu_2} \ln\frac{(\tilde{\mu}_1-\tilde{\mu}_2)q_0}{bM_3}\right\},\end{aligned}$$ then we can obtain a lower bound of $\overline{L}_1$ as $$\overline{L}_1\geqslant \widetilde{M}\geqslant\max\left\{-\varphi_0,\ -\frac{1}{\lambda_1}\ln \frac{1-\alpha-\beta_1}{M_4}-\varphi_0\right\}.$$ Similarly, given any $0<\beta_2<\alpha$, choose $\tilde{q}_0$, $\tilde{q}_1$ as $0<\alpha-\beta_2<\tilde{q}_0<\tilde{q}_1<\alpha$, let $\tilde{\mu}'_1>0$, $\widetilde{M}'$ and $\tilde{\beta}'$ be corresponding to $\mu_1'$, $M'$ and $\beta'$ in the proof of Lemma 6.1 in [@fm77] respectively, choose $\tilde{\mu}'_2, \overline{M}', \tilde{\varphi}_0$ as $$\begin{aligned} 0&<\tilde{\mu}'_2<\min\{-\lambda_1c,\ \tilde{\mu}'_1\},\ \ \overline{M}'=\frac{w+b}{c\lambda_1\beta'}M_4-\frac{w+\tilde{\mu}'_2} {\beta\tilde{\mu}'_2}\tilde{q}_0<0,\\[0.2cm] \tilde{\varphi}_0&<\min\left\{\overline{M}',\ \overline{M}'+ \frac{1}{\lambda_1}\ln\frac{\tilde{q}_1-\tilde{q}_0}{M_4}, \ \overline{M}'+\frac{1}{\lambda_1} \ln\frac{(\tilde{\mu}'_1-\tilde{\mu}'_2)\tilde{q}_0}{bM_4}\right\},\end{aligned}$$ then we can obtain a lower bound of $\overline{L}_2$ as $$\overline{L}_2\geqslant \widetilde{M}'\geqslant\max\left\{-\tilde{\varphi}_0,\ \frac{1}{\mu_2}\ln \frac{\alpha-\beta_2}{M_3}-\tilde{\varphi}_0\right\}.$$ Therefore we can refine Theorem 3.2 in 
[@fm77] and obtain the following lemma. \[lem:twd\] Suppose that $\phi$ is the solution to , and that $0\leqslant u_0\leqslant1$ is a continuous function. 1. If $\int^{1}_{0}f(s)ds>0$, $\limsup\limits_{x\rightarrow\pm\infty}u_0(x)<\alpha$ and $u_0(x)>\alpha+\beta_1$ for $|x|<\overline{L}_1$, then there are some constants $x_2$, $x_3$ and positive constants $M_5$, $\omega_3$ such that $$\begin{aligned} &|u(x,t;u_0)-\phi(x+ct-x_2)|<M_5 e^{-\omega_3t},\ \ \qquad x<0,\ \ t\geq 0;\\[0.2cm] &|u(x,t;u_0)-\phi(-x+ct-x_3)|<M_5 e^{-\omega_3t},\qquad x>0,\ \ t\geq 0.\end{aligned}$$ 2. If $\int^{1}_{0}f(s)ds<0$, $\liminf\limits_{x\rightarrow\pm\infty}u_0(x)>\alpha$ and $u_0(x)<\alpha-\beta_2$ for $|x|<\overline{L}_2$, then there are some constants $x_4$, $x_5$ and positive constants $M_6$, $\omega_4$ such that $$\begin{aligned} &|u(x,t;u_0)-\phi(x+ct-x_4)|<M_6 e^{-\omega_4t},\ \ \qquad x>0,\ \ t\geq 0;\\[0.2cm] &|u(x,t;u_0)-\phi(-x+ct-x_5)|<M_6 e^{-\omega_4t},\qquad x<0,\ \ t\geq 0.\end{aligned}$$ More interestingly, the author of [@f79l] conjectured that when the bounded initial function $u_0$ stays away from $\alpha$ for large $|x|$, the asymptotically stable solutions can only be of four kinds: $u\equiv0$, $u\equiv1$, a traveling wave solution, or a diverging pair of traveling wave solutions, while all other solutions are unstable. Although the conjecture was partially solved in [@f79], it has not been completely solved and is still open. Moreover, we also remark that admits a traveling wave solution $\hat{\phi}$ connecting $\alpha$ and $1$, and there are some results on $u(x,t;u_0)$ converging to $\hat{\phi}$ when $\alpha\leq u_0\leq 1$; see [@kpp37]. The authors of [@mn06] also raised the following question: if $u_0(x)<\alpha$ for $x$ in some interval of the $x$-axis, does $u(x,t;u_0)$ converge to $\hat{\phi}$?
For example, if $u_0$ satisfies the condition $$\begin{aligned} \begin{array}{ll} \lim\limits_{x\rightarrow+\infty}u_0(x)=1,\ \lim\limits_{x\rightarrow-\infty}u_0(x)=\alpha,\ \mbox{and there exists}~\tilde{x}~\mbox{such that}\ \mbox{when}\\[0.2cm] x\leqslant\tilde{x}, u_0(x)<\alpha, \end{array}\end{aligned}$$ does $u(x,t;u_0)$ converge to $\hat{\phi}$? This open problem thus addresses the opposite side of the above conjecture. That is, suppose that the initial condition $u_0(x)$ is bounded and, for $|x|$ sufficiently large, $u_0(x)$ is not far away from $\alpha$ on at least one side of the $x$-axis; does $u(x,t;u_0)$ converge to $\hat{\phi}$? In short, resolving this open problem would deepen the understanding of the conjecture. The bistable equation ===================== In this section, we will discuss the local exponential asymptotic stability of the entire solutions of found in [@cg05] and [@gm05], respectively, and their asymptotic behavior as $t\rightarrow+\infty$. Before stating the main results, we make some preparations. Firstly, in order to prove the existence of entire solutions, the authors of [@gm05] constructed two pairs of different super-sub solutions corresponding to $f'(0)\leqslant f'(1)$ and $f'(0)>f'(1)$, respectively. Moreover, it follows from [@fm77] that $$\rm{if}\hspace{0.3cm}\int^{1}_{0}f(u)du\gtreqqless0, \hspace{0.5cm}\rm{then}\hspace{0.3cm} c\gtreqqless0.$$ Hence we will discuss the long time behavior and stability of entire solutions in the following four cases:\ (C1) $\int^{1}_{0}f(u)du>0$ and $f'(0)>f'(1)$;\ (C2) $\int^{1}_{0}f(u)du>0$ and $f'(0)\leqslant f'(1)$;\ (C3) $\int^{1}_{0}f(u)du<0$ and $f'(0)>f'(1)$;\ (C4) $\int^{1}_{0}f(u)du<0$ and $f'(0)\leqslant f'(1)$.\ Secondly, in order to prove the uniqueness of entire solutions, similar to [@cg05], we introduce the metastable dynamics of .
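The four cases above depend only on the sign of $\int_0^1 f$ and on the comparison between $f'(0)$ and $f'(1)$. A hypothetical helper (the function name and sample values are illustrative, not from the paper) makes the classification explicit, using the cubic nonlinearity $f(u)=u(1-u)(u-\alpha)$ considered in a later remark, for which $\int_0^1 f=(1-2\alpha)/12$, $f'(0)=-\alpha$ and $f'(1)=\alpha-1$:

```python
# Hypothetical helper (names and sample values are illustrative): decide which
# of the cases (C1)-(C4) applies from the integral of f over [0,1] and the
# derivatives f'(0), f'(1).
def classify(integral, fp0, fp1):
    if integral == 0:
        raise ValueError("balanced case: the integral of f over [0,1] vanishes")
    if integral > 0:
        return "C1" if fp0 > fp1 else "C2"
    return "C3" if fp0 > fp1 else "C4"

# For the cubic f(u) = u(1-u)(u-alpha): integral = (1-2*alpha)/12,
# f'(0) = -alpha, f'(1) = alpha - 1, so alpha < 1/2 lands in case (C1).
alpha = 0.25
assert classify((1 - 2 * alpha) / 12, -alpha, alpha - 1) == "C1"
# alpha > 1/2 gives a negative integral with f'(0) < f'(1): case (C4).
alpha = 0.75
assert classify((1 - 2 * alpha) / 12, -alpha, alpha - 1) == "C4"
```

For this cubic, the cases (C2) and (C3) never occur, consistent with the later remark.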
We say that a solution $u(x,t)$ of satisfies the condition $\mathbb{M}^+$ if there exist constants $d_1>0$ and $T_1\in\mathbb{R}$ and functions $l_1(t)$ and $m_1(t)$ such that for all $t\leqslant T_1$, $$\left\{\begin{array}{ll} u(x,t)\leqslant\alpha_1,\ \ \ \forall x\in[\min\{l_1(t)+d_1,m_1(t)-d_1\},\max\{l_1(t)+d_1, m_1(t)-d_1\}],\\[0.2cm] u(x,t)\geqslant\alpha_2,\ \ \ \forall x\in(-\infty,l_1(t)]\cup[m_1(t),+\infty), \end{array}\right.$$ where $\alpha_1$ and $\alpha_2$ are constants satisfying $f\neq0$ in $(0,\alpha_1]\cup[\alpha_2,1)$. Similarly, a solution $u(x,t)$ of is said to satisfy the condition $\mathbb{M}^-$ if there exist constants $d_2>0$ and $T_2\in\mathbb{R}$ and functions $l_2(t)$ and $m_2(t)$ such that for all $t\leqslant T_2$, $$\left\{\begin{array}{ll} u(x,t)\leqslant\alpha_1,\ \forall x\in(-\infty,l_2(t)]\cup[m_2(t),+\infty),\\[0.2cm] u(x,t)\geqslant\alpha_2,\ \forall x\in[\min\{l_2(t)+d_2,m_2(t)-d_2\},\max\{l_2(t)+d_2,m_2(t)-d_2\}], \end{array}\right.$$ where $\alpha_1$ and $\alpha_2$ are constants satisfying $f\neq0$ in $(0,\alpha_1]\cup[\alpha_2,1)$. We first discuss the entire solutions given in Theorem 1.1 of [@gm05]. By the method of the uniqueness proof in [@cg05] and Lemma \[lem:stacon\], we can prove the uniqueness of entire solutions and obtain the following result. \[thm:entire1\] Suppose that $\int^1_0f(u)du>0$. Let $\phi$ be the solution of with the wave speed $c$. Then for any given constants $y_1$ and $y_2$, there is a unique entire solution (up to a translation in $t$ and $x$) $u_1(x,t)$ of defined for all $(x,t)\in\mathbb{R}\times\mathbb{R}$ such that $0<u_1(x,t)<1$, $\partial_t u_1(x,t)>0$ and $$\label{eq:asy} \lim\limits_{t\rightarrow-\infty}\{\sup\limits_{x\geqslant0} |u_1(x,t)-\phi(x+ct+y_1)| +\sup\limits_{x\leqslant0}|u_1(x,t)-\phi(-x+ct+y_2)|\}=0,$$ $$\lim\limits_{t\rightarrow+\infty}\sup\limits_{x\in\mathbb{R}}|u_1(x,t)-1|=0.$$ We first consider the case (C1).
When $t\leqslant0$, it follows from [@gm05] that a supersolution and a subsolution of are $$\overline{u}_1(x,t)=\min\{\phi(x+p_1(t))+\phi(-x+p_1(t)),1\},$$ $$\underline{u}_1(x,t)=\max\{\phi(x+ct+x_6),\phi(-x+ct+x_6)\},$$ where the function $p_1(t)$ $(t\leqslant0)$ is the solution to $$\label{eq:ode1}\left\{\begin{array}{ll} p'_1(t)=c+M_7e^{\lambda_1p_1(t)},\quad t<0,\\[0.2cm] p_1(0)\leqslant0, \end{array} \right.$$ $c$ is the wave speed, $\lambda_1=\frac{c+\sqrt{c^2-4f'(0)}}{2}$, the expression of $M_7>0$ is lengthy and can be found in [@gm05], and $x_6=p_1(0)-\frac{1}{\lambda_1}\ln(1+\frac{M_7}{c})$. Substituting $t=0$ into the expression of the subsolution $\underline{u}_1(x,t)$, we have $$\liminf\limits_{x\rightarrow\pm\infty}u_1(x,0)\geqslant \liminf\limits_{x\rightarrow\pm\infty}\underline{u}_1(x,0)= \lim\limits_{x\rightarrow\pm\infty}\max\{\phi(x+x_6),\phi(-x+x_6)\} =1>\alpha.$$ Also, since $\int^{1}_{0}f(s)ds>0$, it follows from Lemma \[lem:stacon\] that $$\lim\limits_{t\rightarrow+\infty}\|u_1(x,t)-1\|=0,$$ which gives the asymptotic behavior of the entire solution $u_1(x,t)$ as $t\rightarrow+\infty$. On the other hand, has been proved in [@mt09]. Since $\underline{u}_1(x,t+t_1)<\overline{u}_1(x,t)$ for arbitrary $t_1$, the method in [@gm05] for proving the uniqueness of entire solutions is no longer valid. Here we use the method in [@cg05] to prove the uniqueness, and only need to verify that $u_1(x,t)$ satisfies the condition $\mathbb{M}^+$. Since $f$ satisfies assumption (A), there are constants $\alpha_1<\alpha<\alpha_2$ such that $f\neq0$ in $(0,\alpha_1]\cup[\alpha_2,1)$. Then, by the monotonicity of $\phi$, there exists a positive constant $\tilde{x}$ such that $\phi(\tilde{x}+x_6)\geqslant\alpha_2$. Set $$l_1(t)=ct-\tilde{x}, \ \ \ \ m_1(t)=-ct+\tilde{x}.$$ For any $t\leqslant0$, obviously $m_1(t)\geqslant0\geqslant l_1(t)$.
On the one hand, since $$u_1(x,t)\geqslant\max\{\phi(x+ct+x_6),\phi(-x+ct+x_6)\},$$ we have $u_1(x,t)\geqslant\alpha_2$ for any $x\in(-\infty,l_1(t)]\cup[m_1(t),+\infty)$. On the other hand, according to , for any $\varepsilon>0$, there exists a $T_{11}<0$ such that for any $t\leqslant T_{11}$ $$\sup\limits_{x\geqslant0}|u_1(x,t)-\phi(x+ct+y_1)| \leqslant\varepsilon,\quad\sup\limits_{x\leqslant0}|u_1(x,t)-\phi(-x+ct+y_2)| \leqslant\varepsilon,$$ which together with the monotonicity of $\phi$ implies that $$u_1(x,t)\leqslant \phi(m_1(t)-d_1+ct+y_1)+\varepsilon\leqslant\alpha_1$$ for $x\leqslant m_1(t)-d_1=-ct+\tilde{x}-d_1$, for a sufficiently large positive constant $d_1$. Similarly, $u_1(x,t)\leqslant\alpha_1$ for $x\geqslant l_1(t)+d_1$. Since there is a $T_{12}<0$ such that $l_1(t)+d_1\leqslant0\leqslant m_1(t)-d_1$ for any $t\leq T_{12}$, choosing $T_1\leqslant\min\{T_{11},T_{12}\}$ gives $u_1(x,t)\leqslant\alpha_1$ for any $t\leqslant T_1$ and $x\in[l_1(t)+d_1,m_1(t)-d_1]$. Therefore, the entire solution $u_1(x,t)$ satisfies the condition $\mathbb{M}^+$, and by the proofs in [@cg05] and [@wlr09], it is unique up to a space-time translation. Finally, we prove the monotonicity of $u_1(x,t)$ with respect to $t$. In the proofs of existence of entire solutions in [@cg05], [@gm05] and related works, the function $u_n(x,t)$ is chosen as the unique classical solution of the following initial value problem $$\label{eq:cauchy} \left\{\begin{array}{ll} \partial_tu_n=\partial_{xx}u_n+f(u_n),\quad x\in\mathbb{R},\quad t>-n,\\[0.2cm] u_n(x,-n)=\underline{u}_1(x,-n),\quad x\in\mathbb{R}. \end{array} \right.$$ Then, by Lemma \[thm:sest\] and a diagonal argument, there exists a subsequence of $\{u_n\}$ converging in the space $C^{2,1}_{loc}(\mathbb{R}\times(-\infty,0])$. In fact, the limit of this subsequence is the desired entire solution.
Obviously, $\underline{u}_1(x,t)$ is a subsolution of and $\partial_t\underline{u}_1(x,t)\geqslant0$ for $(x,t)\in\mathbb{R}\times(-\infty,0]$. It is easy to see that $u_n(x,t)$ satisfies $$\partial_tu_n|_{t=-n}=\partial_{xx}u_n+f(u_n) =\partial_{xx}\underline{u}_1+f(\underline{u}_1)\geqslant \partial_t\underline{u}_1\geqslant0,$$ and since $\partial_tu_n$ satisfies the equation $\partial_t(\partial_tu_n)-\partial_{xx}(\partial_tu_n)-f'(u_n)\partial_tu_n=0$, we have $\partial_tu_n\geqslant0$ for $(x,t)\in\mathbb{R}\times[-n,0]$ by the maximum principle. By the convergence process described above and the strong maximum principle, $\partial_tu_1(x,t)>0$ for all $(x,t)\in\mathbb{R}\times\mathbb{R}$. Now we deal with the case (C2). According to [@gm05], $u_1(x,t)$ is unique and satisfies , and the supersolution and subsolution for $t\leqslant0$ are $$\overline{u}_1(x,t)=\phi(x+p_2(t))+\phi(-x+p_2(t)),\ \ \underline{u}_1(x,t)=\phi(x+p_3(t))+\phi(-x+p_3(t)),$$ where the functions $p_2(t)$ and $p_3(t)$ satisfy $$\label{eq:2ode} \left\{\begin{array}{ll} p'_2(t)=c+M_8e^{\lambda_1p_2(t)},\\[0.2cm] p_2(0)\leqslant0, \end{array}\right.\ \ \ \ \ \ \ \ \ \left\{\begin{array}{ll} p'_3(t)=c-M_8e^{\lambda_1p_3(t)},\\[0.2cm] p_3(0)\leqslant\min\{0,\frac{1}{\lambda_1}\ln(\frac{c}{M_8})\}, \end{array}\right.$$ and the expression of $M_8>0$ can be found in [@gm05]. It is easy to see that $$\liminf\limits_{x\rightarrow\pm\infty}\underline{u}_1(x,0)= \lim\limits_{x\rightarrow\pm\infty}\{\phi(x+p_3(0))+\phi(-x+p_3(0))\} =1>\alpha,$$ which together with Lemma \[lem:stacon\] implies that $$\lim\limits_{t\rightarrow+\infty}\|u_1(x,t)-1\|=0.$$ Finally, we only need to prove the monotonicity of $u_1(x,t)$ with respect to $t$. For this purpose, we first show that $p'_3(t)>0$ for $t\in(-\infty,0]$.
Since $p_3(0)\leqslant\min\{0,\frac{1}{\lambda_1}\ln(\frac{c}{M_8})\}$, we have $p'_3(0)=c-M_8e^{\lambda_1 p_3(0)}>0$, and if there were a $t_2\in(-\infty,0)$ such that $p'_3(t_2)=0$ and $p'_3(t)>0$ for $t\in(t_2,0)$, then $$\frac{1}{\lambda_1}\ln\frac{c}{M_8}=p_3(t_2)\leqslant p_3(0)<\frac{1}{\lambda_1}\ln\frac{c}{M_8},$$ which is a contradiction. Thus $p'_3(t)>0$ for $t\in(-\infty,0]$, which implies that $$\partial_t\underline{u}_1(x,t)=\phi'(x+p_3(t))p'_3(t)+ \phi'(-x+p_3(t))p'_3(t)>0.$$ Therefore, $\partial_t u_1(x,t)>0$ for all $(x,t)\in\mathbb{R}\times\mathbb{R}$. Now we consider the stability of the unique entire solution $u_1(x,t)$ obtained in Theorem \[thm:entire1\] by the method in [@wlr09], and obtain the following result. \[thm:entire1l\] Suppose that $\int^{1}_{0}f(u)du>0$. Then the unique entire solution $u_1(x,t)$ obtained in Theorem \[thm:entire1\] is locally exponentially asymptotically stable. To begin with, we introduce some notation. Since $f\in C^2(\mathbb{R})$ and $f'(0),\ f'(1)<0$, there exists a $\theta>0$ such that $f'(u)<0$ on $[-\theta,2\theta]\cup[1-2\theta,1+\theta]$. Therefore, $$\label{eq:const} v:=\max\left\{\max\limits_{[-\theta,2\theta]}f'(u), \max\limits_{[1-2\theta,1+\theta]}f'(u) \right\}<0.$$ Let $\phi(\xi)$ be the solution to . When $\phi(x+ct)$ or $\phi(-x+ct)\in[\theta,1-\theta]$, there is a $\beta>0$ such that $$\phi(x+ct)+\phi(-x+ct)\geqslant\beta.$$ In the remainder of this paper, we keep this notation. We first consider the case (C1).
For $t\geqslant0$, we will prove that the following functions $$\begin{aligned} \overline{u}_1^{+}(x,t)=\min\{1,u_1(x,t+\gamma(t))+q(t)\},\\[0.2cm] \underline{u}_1^{-}(x,t)=\max\{0,u_1(x,t-\gamma(t))-q(t)\}\end{aligned}$$ are a supersolution and a subsolution to with the initial condition $u_0(x)=u_1(x,0)$, respectively, where the function $q(t)$ is the solution to the following initial value problem $$\label{eq:odeq}\left\{\begin{array}{ll} q'(t)-vq(t)=0,\quad t>0,\\[0.2cm] q(0)=\overline{q}_0, \end{array} \right.$$ $0\leqslant\overline{q}_0\leqslant\theta$ is arbitrary, and the function $\gamma(t)$ is to be determined later. Here, we mainly prove that the function $\overline{u}_1^{+}(x,t)$ is a supersolution; the rest is similar. When $\overline{u}_1^{+}(x,t)\equiv1$, the conclusion is obvious. Thus, we only consider $\overline{u}_1^{+}(x,t)=u_1(x,t+\gamma(t))+q(t)$. Firstly, when $u_1(x,t)\in[0,\theta]$ or $[1-\theta,1]$, from , and $\partial_tu_1(x,t)>0$, we can get $$\begin{aligned} \partial_t\overline{u}_1^{+}-\partial_{xx}\overline{u}_1^{+} -f(\overline{u}_1^{+})&=\gamma'(t) \partial_tu_1+\partial_tu_1-\partial_{xx}u_1+q'(t)-f(u_1+q(t))\\[0.2cm] &\geqslant q'(t)-vq(t)\\[0.2cm] &=0,\end{aligned}$$ where we need $\gamma'(t)>0$. On the other hand, when $u_1(x,t)\in[\theta,1-\theta]$, since $\partial_tu_1(x,t)>0$, there is a constant $\overline{b}>0$ such that $\partial_tu_1(x,t)\geqslant \overline{b}$. Thus it follows from and that $$\begin{aligned} \partial_t\overline{u}_1^{+}-\partial_{xx}\overline{u}_1^{+} -f(\overline{u}_1^{+})&=\gamma'(t) \partial_tu_1+\partial_tu_1-\partial_{xx}u_1+q'(t)-f(u_1+q(t))\\[0.2cm] &\geqslant \overline{b}\gamma'(t)+vq(t)-wq(t)\\[0.2cm] &=0,\end{aligned}$$ where we require the function $\gamma(t)$ to satisfy $$\label{eq:ode12}\left\{\begin{array}{ll} \gamma'(t)=\frac{w-v}{\overline{b}}\overline{q}_0e^{vt},\quad t>0,\\[0.2cm] \gamma(0)=\overline{q}_0.
\end{array} \right.$$ Solving the equations and yields $q(t)=\overline{q}_0e^{vt}$ and $\gamma(t)=\overline{q}_0(1+\gamma_0 -\gamma_0e^{vt})$, where $\gamma_0=\frac{v-w}{\overline{b}v}>0$, since $v<0$ and $w>0$. Also, since $\partial_tu_1(x,t)>0$, we have $u_1(x,0)\leqslant u_1(x,\overline{q}_0)+\overline{q}_0=\overline{u}_1^{+}(x,0)$. Hence, $\overline{u}_1^{+}(x,t)$ is a supersolution of with the initial condition $u_0(x)=u_1(x,0)$. Similarly, one can prove that $\underline{u}_1^{-}(x,t)$ is a subsolution of with the initial condition $u_0(x)=u_1(x,0)$. Now we prove the local stability of the entire solution $u_1(x,t)$. For any given $\epsilon>0$, from Lemma \[thm:sest\], there exists a positive constant $\delta_1\leqslant\frac{\epsilon}{2L}$ such that for any $|t_3|\leqslant\delta_1$ and $t\in(1+t_3,+\infty)$, $$\label{eq:2est} \|u_1(\cdot,t+t_3)-u_1(\cdot,t)\|=\|\partial_tu_1 (\cdot,t+t^*)\||t_3| \leqslant\frac{\epsilon}{2},$$ where $t^*\in(-t_3,t_3)$. Choose $\overline{q}_0=\delta\leqslant\min\{\frac{\delta_1}{1+\gamma_0}, \frac{\epsilon}{2}\}$, and suppose the initial function $u_0(x)$ satisfies $\|u_0(x)-u_1(x,0)\|<\delta$. Since $\partial_tu_1(x,t)>0$, we have $u_1(x,0)+\delta\leqslant u_1(x,\delta)+\delta$ and $u_1(x,-\delta)-\delta\leqslant u_1(x,0)-\delta$. Therefore, for all $x\in\mathbb{R}$, we have $$\begin{aligned} u_1(x,-\delta)-\delta\leqslant u_1(x,0)-\delta\leqslant u_0(x)\leqslant u_1(x,0)+\delta\leqslant u_1(x,\delta)+\delta.\end{aligned}$$ Thus $\overline{u}_1^{+}(x,t)$ and $\underline{u}_1^{-}(x,t)$ are also a supersolution and a subsolution of with the initial condition $u_0(x)$.
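The closed forms $q(t)=\overline{q}_0e^{vt}$ and $\gamma(t)=\overline{q}_0(1+\gamma_0-\gamma_0e^{vt})$ obtained above can be checked against the two ODEs by finite differences. The constants below are illustrative placeholders satisfying the sign conditions $v<0<w$ and $\overline{b}>0$ from the proof:

```python
import math

# Illustrative constants (assumptions, not values from the paper) satisfying
# the sign conditions v < 0 < w and b > 0 used in the proof.
v, w, b, q0 = -0.5, 1.0, 2.0, 0.1
g0 = (v - w) / (b * v)  # gamma_0 = (v - w)/(b*v) > 0 since v < 0 and w > 0

q = lambda t: q0 * math.exp(v * t)                      # solves q' - v*q = 0
gamma = lambda t: q0 * (1 + g0 - g0 * math.exp(v * t))  # gamma' = ((w-v)/b)*q0*e^{v t}

# Check both ODEs by central finite differences at a few sample times.
h = 1e-6
for t in (0.0, 0.5, 2.0):
    dq = (q(t + h) - q(t - h)) / (2 * h)
    dg = (gamma(t + h) - gamma(t - h)) / (2 * h)
    assert abs(dq - v * q(t)) < 1e-6
    assert abs(dg - (w - v) / b * q0 * math.exp(v * t)) < 1e-6

# Initial conditions, positivity of gamma_0, and the limiting time shift
# gamma(+infinity) = q0*(1 + gamma_0) that appears in the stability estimates.
assert abs(q(0.0) - q0) < 1e-12 and abs(gamma(0.0) - q0) < 1e-12
assert g0 > 0
```

Note that $\gamma$ is increasing and bounded by $\overline{q}_0(1+\gamma_0)$, which is why the time shift stays under control as $t\rightarrow+\infty$.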
Consequently, $$\begin{aligned} \label{eq:inequ}&\max\{0,u_1(x,t-\delta(1+\gamma_0-\gamma_0e^{vt})) -\delta e^{vt}\}\nonumber\\[0.2cm]\leqslant& u(x,t;u_0) \leqslant\min\{1,u_1(x,t+\delta(1+\gamma_0-\gamma_0e^{vt}))+\delta e^{vt}\}.\end{aligned}$$ By noting that $\delta(1+\gamma_0-\gamma_0e^{vt})\leqslant\delta_1$, it follows from that $$u_1(x,t)-\epsilon\leqslant u(x,t;u_0)\leqslant u_1(x,t)+\epsilon.$$ In summary, for any $\epsilon>0$ there exists a $\delta>0$ such that whenever $\|u_0(x)-u_1(x,0)\|<\delta$, $$\|u(x,t;u_0)-u_1(x,t)\|\leqslant\epsilon, \ \ \ \ \ \ t\geqslant0,$$ which means that the entire solution $u_1(x,t)$ is locally asymptotically stable. Finally, we show that the entire solution $u_1(x,t)$ is locally exponentially asymptotically stable. First of all, since $\underline{u}_1(x,t)$ given in Theorem \[thm:entire1\] is also a subsolution of for $t\geqslant0$, it follows from the comparison theorem that for all $(x,t)\in\mathbb{R}\times\mathbb{R}$, $$\label{eq:ineq} \max\{\phi(x+ct+x_6),\phi(-x+ct+x_6)\}=\underline{u}_1(x,t)\leqslant u_1(x,t)<1.$$ Therefore, from and , there is a $T_3>0$ with $cT_3+x_6>0$ such that for any $t\geqslant T_3$, we have $$\begin{aligned} 0\leqslant1-u_1(x,t)\leqslant1-\underline{u}_1(x,t) \leqslant M_3e^{\mu_2(|x|+ct+x_6)} \leqslant M_3e^{\mu_2ct}.\label{eq:exp1}\end{aligned}$$ The next step is to prove that $$\label{eq:sub}\underline{u}_1^{-}(x,t)=u_1(x,t-\delta(1+\gamma_0-\gamma_0e^{vt})) -\delta e^{vt}$$ holds for large $t$.
Indeed, we remark that $$\label{eq:msub} \partial_t(u_1(x,t-\overline{q}_0(1+\gamma_0-\gamma_0e^{vt})) -\overline{q}_0e^{vt})=(1+\overline{q}_0\gamma_0ve^{vt}) \partial_tu_1-\overline{q}_0ve^{vt}.$$ Since $\gamma_0=\frac{v-w}{\overline{b}v}$, we have $$1+\overline{q}_0\gamma_0ve^{vt}=1-\overline{q}_0 \frac{w-v}{\overline{b}}e^{vt}.$$ Hence we can choose a constant $T_4\geqslant\max\{0,\frac{1}{v} \ln\frac{\overline{b}}{(w-v)\theta}\}$ such that when $t\geqslant T_4$, noting that $w>0$, $v<0$, $\overline{b}>0$ and $0\leqslant\overline{q}_0\leqslant\theta$, we have $$\overline{q}_0\frac{w-v}{\overline{b}}e^{vt}\leqslant\theta \frac{w-v}{\overline{b}}e^{vt}\leqslant\theta \frac{w-v}{\overline{b}}e^{vT_4}\leqslant1.$$ Therefore, when $t\geqslant T_4$, from $v<0$, $\partial_tu_1(x,t)>0$ and , we can obtain $$\partial_t(u_1(x,t-\overline{q}_0(1+\gamma_0-\gamma_0e^{vt})) -\overline{q}_0e^{vt})>0.$$ Moreover, since $\lim\limits_{t\rightarrow+\infty}\underline{u}_1 (x,t-\overline{q}_0(1+\gamma_0-\gamma_0e^{vt}))=1$, there is a constant $T_5\geqslant\max\{T_3,\ T_4,\ \delta (1+\gamma_0),\ \delta (1+\gamma_0)-\frac{x_6}{c}\}$ such that for $t\geqslant T_5$, holds. Therefore, for $t\geqslant T_5\geqslant \delta(1+\gamma_0)$, it follows from and that $$\begin{aligned} |1-u(x,t;u_0)|&\leqslant|1-u_1(x,t-\delta(1+\gamma_0- \gamma_0e^{vt}))+\delta e^{vt}|\nonumber\\[0.2cm] &\leqslant M_3e^{\mu_2(|x|+ct-c\delta(1+\gamma_0-\gamma_0e^{vt})+x_6)} +\delta e^{vt}\nonumber\\[0.2cm] &\leqslant M_3e^{\mu_2ct}+\delta e^{vt}.\label{eq:exp3}\end{aligned}$$ Thus, when $\|u_0(x)-u_1(x,0)\|<\delta$, for $t\geqslant T_5$, from and , we have $$\begin{aligned} \|u(x,t;u_0)-u_1(x,t)\| \leqslant&\|1-u_1(x,t)\| +\|1-u(x,t;u_0)\|\\[0.2cm] \leqslant& 2M_3e^{\mu_2ct}+ \delta e^{vt}.\end{aligned}$$ Next we discuss the stability of $u_1(x,t)$ in the case (C2). The proof of the local stability is similar; we only need to consider the local exponential stability.
Directly solving , we obtain $$p_3(t)=p_3(0)+ct-\frac{1}{\lambda_1}\ln\left\{1-\frac{M_8}{c} e^{\lambda_1p_3(0)} (1-e^{c\lambda_1t})\right\}.$$ Set $$g(t)=\frac{1}{\lambda_1}\ln\left\{1-\frac{M_8}{c}e^{\lambda_1p_3(0)} (1-e^{c\lambda_1t})\right\}.$$ Then $$g'(t)=\frac{cM_8e^{\lambda_1p_3(0)}e^{c\lambda_1t}}{c-M_8 e^{\lambda_1p_3(0)} +M_8e^{\lambda_1p_3(0)}e^{c\lambda_1t}}>0,$$ since $p_3(0)<\frac{1}{\lambda_1}\ln\frac{c}{M_8}$. Thus, $g(t)<g(\infty)=\frac{1}{\lambda_1}\ln\{1-\frac{M_8}{c} e^{\lambda_1p_3(0)}\}:=x_7$. Consequently, $p_3(t)>ct+x_8$, where $x_8:=p_3(0)-x_7$. As a result, $$\underline{u}_1(x,t)>\max\{\phi(x+ct+x_8),\phi(-x+ct+x_8)\}.$$ Thus, in the case (C2), we also get an inequality similar to . The rest of the proof is similar. \[remark:remark1\] In the cases (C1) and (C2), noting that $\int^1_0f(u)du>0$, the entire solution $u_1(x,t)$ found in Theorem \[thm:entire1\] satisfies $$\lim\limits_{t\rightarrow+\infty}\|u_1(x,t)-1\|=0.$$ In fact, this conclusion coincides with the results in [@y03]. Thus the super-sub solution method is a valid way to simplify the existence proof for entire solutions of in [@y03]. \[remark:remark2\] In the case (C1), since the subsolution is $$\underline{u}_1(x,t)=\max\{\phi(x+ct+x_6),\phi(-x+ct+x_6)\},$$ the existence, uniqueness and stability of entire solutions of can also be found in [@wlr09]. Here, in order to prove the local asymptotic stability of entire solutions of , we constructed a supersolution and a subsolution different from those in [@wlr09]. Moreover, we also simplify the proof of the local asymptotic stability of entire solutions of compared with [@y03]. \[remark:remark3\] When $f(u)=u(1-u)(u-\alpha)$, $\alpha\in(0,1)$, one easily sees that $\int^1_0f(s)ds=\frac{1-2\alpha}{12}$ and $f'(0)=-\alpha$, $f'(1)=-1+\alpha$.
Obviously, $$f'(0)\lesseqqgtr f'(1)\ \mbox{if and only if}\ \int^1_0f(u)du\lesseqqgtr0\ \mbox{if and only if}\ \alpha\gtreqqless\frac{1}{2}.$$ Therefore, if $\int^1_0f(u)du>0$, then $f'(0)>f'(1)$; namely, only the case (C1) can occur. For the case $\int^1_0f(u)du<0$, the authors of [@cg05] found an entire solution $u_2(x,t)$ of . Now we discuss the long time behavior and the local exponential asymptotic stability of this entire solution in the following theorem. \[thm:entire3\] Assume that $\int^1_0f(u)du<0$ and $\phi$ is the solution to . Then admits a unique entire solution $u_2(x,t)$ satisfying $\partial_tu_2(x,t)<0$, $u_2(x,t)=u_2(-x,t)$, $0<u_2(x,t)<1$, and for $(x,t)\in\mathbb{R}\times(-\infty,-4B\phi(0)],$ $$u_2(x,t+h_1(t))<\phi(-x+ct)\phi(x+ct)<u_2(x,t-h_1(t)),$$ where $h_1(t)=4B\phi(ct)$ and $B>0$, $\lim\limits_{t\rightarrow+\infty}\|u_2(x,t)\|=0$, $$\label{eq:asy1} \lim\limits_{t\rightarrow-\infty}\{\sup\limits_{x\geqslant0} |u_2(x,t)-\phi(-x+ct)| +\sup\limits_{x\leqslant0}|u_2(x,t)-\phi(x+ct)|\}=0.$$ Moreover, the unique entire solution $u_2(x,t)$ is locally exponentially asymptotically stable. Firstly, it follows from [@cg05] that $$\label{eq:asye} \lim\limits_{t\rightarrow-\infty}\|u_2(x,t) -\phi(x+ct)\phi(-x+ct)\|=0,$$ which implies that $$\begin{aligned} \lim\limits_{t\rightarrow-\infty}\sup\limits_{x\geqslant0}|u_2(x,t)-\phi(-x+ct)| \leqslant&\lim\limits_{t\rightarrow-\infty}\sup\limits_{x\geqslant0} |u_2(x,t)-\phi(-x+ct)\phi(x+ct)|\\[0.2cm] +&\lim\limits_{t\rightarrow-\infty}\sup\limits_{x\geqslant0}|\phi(-x+ct) ||1-\phi(x+ct)|\\[0.2cm] =&0.\end{aligned}$$ Similarly, $\lim\limits_{t\rightarrow-\infty}\sup\limits_{x\leqslant0}|u_2(x,t)-\phi(x+ct)|=0$. In summary, the entire solution $u_2(x,t)$ satisfies .
Secondly, since $\int^1_0f(u)du<0$, the wave speed satisfies $c<0$, and at the same time, according to [@cg05], the entire solution $u_2(x,t)$ satisfies, for $t\leqslant-4B\phi(0)<0$, $$u_2(x,t+h_1(t))<\phi(-x+ct)\phi(x+ct)<u_2(x,t-h_1(t)).$$ Hence, for any $\tau\leqslant-4B\phi(0)<0$, we have $$u_2(x,\tau+h_1(\tau))<\phi(x+c\tau),$$ and the comparison theorem yields that for all $t>\tau$, $u_2(x,t+h_1(\tau))<\phi(x+ct)$. Since $h_1(t)=4B\phi(ct)$, letting $\tau\rightarrow-\infty$ leads to $u_2(x,t)\leq\phi(x+ct)$. Similarly, $u_2(x,t)\leq\phi(-x+ct)$. Thus, for all $(x,t)\in\mathbb{R} \times\mathbb{R}$, $$\label{eq:2est1} u_2(x,t)\leq\min\{\phi(x+ct),\phi(-x+ct)\}.$$ In particular, $u_2(x,0)\leq\min\{\phi(x),\phi(-x)\}$, which implies that $\lim\limits_{x\rightarrow\pm\infty}u_2(x,0)=0<\alpha$. Hence, by Lemma \[lem:stacon\], $\lim\limits_{t\rightarrow+\infty}\|u_2(x,t)\|=0$. Thirdly, similar to the proof of Theorem \[thm:entire1\], since $f$ satisfies assumption (A), there are constants $\alpha_1<\alpha<\alpha_2$ such that $f\neq0$ in $(0,\alpha_1]\cup[\alpha_2,1)$. Then, by the monotonicity of $\phi$, there exists a positive constant $\hat{x}$ such that $\phi(-\hat{x})\leqslant\alpha_1$. Set $$l_2(t)=-ct-\hat{x}, \ \ \ \ m_2(t)=ct+\hat{x},$$ then for any $t\leqslant0$, obviously $m_2(t)\geqslant0\geqslant l_2(t)$, and according to , $u_2(x,t)\leqslant\alpha_1$ for any $x\in(-\infty,l_2(t)]\cup[m_2(t),+\infty)$. On the other hand, according to , for any $\varepsilon>0$, there exists a $T_{13}<0$ such that for any $t\leqslant T_{13}$, $$\sup\limits_{x\geqslant0}|u_2(x,t)-\phi(-x+ct)| \leqslant\varepsilon,\quad\sup\limits_{x\leqslant0}|u_2(x,t)-\phi(x+ct)| \leqslant\varepsilon,$$ which together with the monotonicity of $\phi$ implies that $$u_2(x,t)\geqslant\phi(-m_2(t)+d_2+ct)-\varepsilon\geqslant\alpha_2$$ holds for $x\leqslant m_2(t)-d_2=ct+\hat{x}-d_2$, for some sufficiently large positive constant $d_2$.
Similarly, $u_2(x,t)\geqslant\alpha_2$ for $x\geqslant l_2(t)+d_2$. Since there is a $T_{14}<0$ satisfying $l_2(t)+d_2\leqslant0\leqslant m_2(t)-d_2$ for any $t\leq T_{14}$, if we choose $T_2\leqslant\min\{T_{13},T_{14}\}$, then $u_2(x,t)\geqslant\alpha_2$ for any $t\leqslant T_2$ and $x\in[l_2(t)+d_2,m_2(t)-d_2]$. Therefore, the entire solution $u_2(x,t)$ satisfies the condition $\mathbb{M}^-$, and by the proofs in [@cg05] and [@wlr09], it is unique up to a space-time translation. Fourthly, we consider the local exponential asymptotic stability of $u_2(x,t)$. First of all, for all $(x,t)\in\mathbb{R}\times\mathbb{R}$, similar to the proof of Theorem \[thm:entire1l\], it is easy to prove that $\partial_tu_2(x,t)<0$. Thus there exists a negative constant $\tilde{b}$ such that when $u_2(x,t)\in[\theta,1-\theta]$, $\partial_tu_2(x,t)\leqslant\tilde{b}<0$. Similarly, it is not hard to prove that the following two functions $$\begin{aligned} \overline{u}_2^{+}(x,t)=\min\{1,u_2(x,t-\tilde{\gamma}(t))+q(t)\}\\[0.2cm] \underline{u}_2^{-}(x,t)=\max\{0,u_2(x,t+\tilde{\gamma}(t))-q(t)\}\end{aligned}$$ are a supersolution and a subsolution of with the initial condition $u_0(x)=u_2(x,0)$, respectively. Here, the function $q(t)$ still satisfies , while the function $\tilde{\gamma}(t)$ satisfies $$\left\{\begin{array}{ll} \tilde{\gamma}'(t)=\frac{w-v}{-\tilde{b}}\overline{q}_0e^{vt},\quad t>0,\\[0.2cm] \tilde{\gamma}(0)=\overline{q}_0, \end{array} \right.$$ where the parameters $\overline{q}_0$, $\theta$ are defined above, and $w$ and $v$ are defined in and , respectively. Then, from the proof of Theorem \[thm:entire1l\] or [@wlr09], we know that $u_2(x,t)$ is locally Lyapunov stable. In the end, we discuss the local exponential asymptotic stability of $u_2(x,t)$. By a similar argument, it is not hard to see that there is a constant $T_6$ such that for $t\geqslant T_6$, $\overline{u}_2^{+}(x,t)=u_2(x,t-\tilde{\gamma}(t))+q(t)$.
Choosing $T_7=\max\{T_6,\delta(1+\tilde{\gamma}_0)\}$, where $\tilde{\gamma}_0=\frac{w-v}{\tilde{b}v}>0$, and noting $c<0$, for $t\geqslant T_7$, with the help of and , we finally arrive at $$\begin{aligned} \|u(x,t;u_0)-u_2(x,t)\| \leqslant&\|u_2(x,t)\| +\|u(x,t;u_0)\|\\[0.2cm] \leqslant& u_2(x,t)+u_2(x,t-\delta(1+\tilde{\gamma}_0-\tilde{\gamma}_0e^{vt})) +\delta e^{vt}\\[0.2cm] \leqslant& 2M_4e^{\lambda_1ct}+ \delta e^{vt},\end{aligned}$$ which implies the local exponential asymptotic stability of $u_2(x,t)$. Thus we have finished the proof. The monostable equation ======================= In this section, under the assumption (A$'$), we will discuss the local exponential asymptotic stability of entire solutions of . From [@gm05], there is an entire solution $u_3(x,t)$ satisfying $$\begin{aligned} &\max\{\phi_{c_1}(x+c_1t+y_3),\phi_{c_2}(-x+c_2t+y_4)\}\\[0.2cm] \leqslant &u_3(x,t) \leqslant\min\{1,\phi_{c_1}(x+p_4(t)) +\phi_{c_2}(-x+p_5(t))\},\end{aligned}$$ where $\phi_{c_k}\ (k=1,2)$ are the solutions to , $c_1,c_2\in[c_{min},+\infty)$, $c_{min}=2\sqrt{f'(0)}$, $c_1\leqslant c_2$, and the functions $p_4(t)$, $p_5(t)$ are the solutions to $$\begin{aligned} \left\{\begin{array}{ll} p'_4(t)=c_1+M_9e^{\tilde{\alpha}p_4},\\[0.2cm] p'_5(t)=c_2+M_9e^{\tilde{\alpha}p_4}, \end{array}\right.\end{aligned}$$ where $p_5(0)\leqslant p_4(0)\leqslant0$, and $M_9$, $\tilde{\alpha}$ are positive constants. Suppose that the function $\rho(t)$ is the solution to $\rho'=f(\rho)$ with $0<\rho(t)<1$. Define $\nu(t):=\nu_0e^{f'(0)t}\ (\nu_0>0)$ such that $0<\rho(t)-\nu(t)\leqslant M_{10}e^{f'(0)t}$ for $t\leqslant0$. From [@gm05], there are three entire solutions $u_{ij}(x,t)$, $(i,j)=(1,0),(0,1),(1,1)$ satisfying $$\begin{aligned} &\max\{\chi_i\phi_{c_1}(x+c_1t+y_6),\chi_j\phi_{c_2}(-x+c_2t+y_7), \rho(t)\}\\[0.2cm]\leqslant& u_{ij}(x,t) \leqslant\min\{1,\chi_i\phi_{c_1}(x+p_4(t)) +\chi_j\phi_{c_2}(-x+p_5(t))+\nu(t)\},\end{aligned}$$ where $\chi_i=i$.
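The minimal speed $c_{min}=2\sqrt{f'(0)}$ quoted above is the threshold at which the linearization at $u=0$, namely $\lambda^2-c\lambda+f'(0)=0$, switches from complex to real roots; a small numerical sketch (with the illustrative KPP-type nonlinearity $f(u)=u(1-u)$, an assumption for this example) makes this explicit:

```python
import math

# Monostable (KPP-type) case: the minimal wave speed is c_min = 2*sqrt(f'(0)).
# Illustrative nonlinearity f(u) = u*(1-u), so f'(0) = 1 and c_min = 2.
fp0 = 1.0
c_min = 2 * math.sqrt(fp0)

# Decay rates of the front at u = 0 solve lambda^2 - c*lambda + f'(0) = 0;
# the roots are real (monotone exponential decay) exactly when c >= c_min.
def discriminant(c):
    return c * c - 4 * fp0

assert c_min == 2.0
assert discriminant(c_min) == 0      # double root at the minimal speed
assert discriminant(2.5) > 0         # real decay rates for c > c_min
assert discriminant(1.5) < 0         # complex roots below c_min
```

This is why the fronts $\phi_{c_1},\phi_{c_2}$ above are taken with speeds in $[c_{min},+\infty)$.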
We remark that whenever $\underline{u}_3(x,t)=\max\{\phi_{c_1}(x+c_1t+y_3), \phi_{c_2}(-x+c_2t+y_4)\}$ or $\max\{\chi_i\phi_{c_1}(x+c_1t+y_6),\chi_j\phi_{c_2}(-x+c_2t+y_7), \rho(t)\}$, we have $\partial_t\underline{u}_3(x,t)\geqslant0$. Then, similar to the proof of Theorem \[thm:entire1l\], it is not hard to see that $\partial_tu_{ij}(x,t)>0\ ((i,j)=(1,0),(0,1),(1,1))$ and $\partial_tu_3(x,t)>0$ for all $(x,t)\in\mathbb{R}\times\mathbb{R}$. Here, the proofs of the local stability and the local exponential asymptotic stability of the entire solutions are similar to those in the bistable case. Taking the entire solution $u_3(x,t)$ as an example, we only point out the differences. Since $f'(1)<0$ and $f'(0)>0$, there are positive constants $\theta'$ and $\theta''$ such that $f'(u)<0\ (u\in[1-2\theta', 1+\theta'])$ and $f'(u)>0\ (u\in[0, 2\theta''])$. For convenience, we still set $\theta=\min\{\theta',\theta''\}$. Because $\partial_tu_3(x,t)>0$, there exists some constant $T_8$ such that for all $(x,t)\in\mathbb{R}\times(T_8,+\infty)$, $u_3(x,t)>\theta$. Now, we consider the initial value problem starting at time $\max\{0,T_8\}$, and then, similar to the proof of Theorem \[thm:entire1l\], $u_3(x,t)$ is locally exponentially asymptotically stable. In summary, we obtain the following theorem. \[thm:entire5\] Suppose that $f$ satisfies (A$'$) and $f'(u)\leqslant f'(0)\ \ (u\in[0,1])$. Furthermore, let $\phi_{c_k}$ be the solutions to with $c_k\in[c_{min},+\infty)$, $k=1,2$. Then the entire solutions $u_3$ and $u_{ij}(x,t)$, $(i,j)=(1,0),(0,1),(1,1)$ are locally exponentially asymptotically stable. References {#references .unnumbered} ========== [aa]{} S. Ahmad, A. C. Lazer, An elementary approach to traveling front solutions to a system of $N$ competition-diffusion equations, Nonlinear Anal. 16 (1991) 892-901. S. Ahmad, A. C. Lazer, A. Tineo, Traveling waves for a system of equations, Nonlinear Anal. 68 (2008) 3909-3912. D. G. Aronson, H. F.
Weinberger, Nonlinear diffusion in population genetics, combustion, and nerve pulse propagation. Partial Differential Equations and Related Topics, Lecture Notes in Math., vol. 446, Springer, Berlin, 1975, pp. 5-49. D. G. Aronson, H. F. Weinberger, Multidimensional nonlinear diffusion arising in population genetics, Adv. in Math. 30 (1978) 33-76. X. Chen, J. S. Guo, Existence and uniqueness of entire solutions for a reaction-diffusion equation, J. Differential Equations 212 (2005) 62-84. H. Cohen, Nonlinear diffusion problems, Studies in Mathematics 7, Studies in Applied Mathematics, ed. A. H. Taub, Math. Assoc. of America and Prentice Hall, Englewood Cliffs, N.J., 1971, pp. 27-63. C. Conley, R. Gardner, Application of the generalized Morse index to travelling wave solutions of a competitive reaction-diffusion model, Indiana Univ. Math. J. 33 (1984) 319-343. E. C. M. Crooks, J. C. Tsai, Front-like entire solutions for equations with convection, J. Differential Equations 253 (2012) 1206-1249. P. C. Fife, Mathematical aspects of reacting and diffusing systems, Lecture Notes in Biomathematics, vol. 28, Springer-Verlag, Berlin, 1979. P. C. Fife, Long time behavior of solutions of bistable nonlinear diffusion equations, Arch. Ration. Mech. Anal. 70 (1979) 31-46. P. C. Fife, J. B. McLeod, The approach of solutions of nonlinear diffusion equations to travelling front solutions, Arch. Ration. Mech. Anal. 65 (1977) 335-361. P. C. Fife, J. B. McLeod, A phase plane discussion of convergence to travelling fronts for nonlinear diffusion, Arch. Ration. Mech. Anal. 75 (1981) 281-314. R. A. Fisher, The wave of advance of advantageous genes, Ann. of Eugenics 7 (1937) 355-369. Y. Fukao, Y. Morita, H. Ninomiya, Some entire solutions of the Allen-Cahn equation, Taiwanese J. Math. 8 (2004) 15-32. T. Gallay, E. Risler, A variational proof of global stability for bistable travelling waves, Differential Integral Equations 20 (2007) 901-926. R.
Gardner, Existence and stability of travelling wave solutions of competition models: a degree theoretic approach, J. Differential Equations 44 (1982) 343-364. J. S. Guo, Y. C. Lin, Entire solutions for a discrete diffusive equation with bistable convolution type nonlinearity, Osaka J. Math. 50 (2013) 607-629. J. S. Guo, Y. Morita, Entire solutions of reaction-diffusion equations and an application to discrete diffusive equations, Disc. Cont. Dyn. Syst. 12 (2005) 193-212. F. Hamel, N. Nadirashvili, Entire solutions of the KPP Equation, Comm. Pure Appl. Math. 52 (1999) 1255-1276. F. Hamel, N. Nadirashvili, Travelling fronts and entire solutions of the Fisher-KPP Equation in R$^N$, Arch. Rational Mech. Anal. 157 (2001) 91-163. Y. Kan-on, Existence of standing waves for competition-diffusion equations, Japan J. Indust. Appl. Math., 13 (1996) 117-133. A. Kolmogoroff, I. Petrovsky, N. Piscounoff, Étude de Íequation de la diffusion avec croissance de la quantité de matière et son application à unprobleme biologique, Bull. Univ. Moskou, Ser. Internat., Sec. A, 1 (1937) 6, 1-25. C. D. Levermore, J. X. Xin, Multidimensional stability of traveling waves in a bistable reaction-diffusion equation. II, Comm. Partial Differential Equations 17 (1992) 1901-1924. Y. Morita, H. Ninomiya, Entire solutions with merging fronts to reaction-diffusion equations, J. Dynam. Differential Equations 18 (2006) 841-861. Y. Morita, K. Tachibana, An entire solution to the Lotka-Volterra competition-diffusion equations, SIAM J. Math. Anal. 40 (2009) 2217-2240. J. Nagumo, S. Yoshizawa, S. Arimoto, Bistable transmission lines, I.E.E.E. Transactions on Circuit Theory 12 (1965) 400-412. T. Ogiwara, H. Matano, Monotonicity and convergence results in order preserving systems in the presence of symmetry, Disc. Cont. Dyn. Syst. 5 (1999) 1-34. T. Ogiwara, H. Matano, Stability analysis in order-preserving systems in the presence of symmetry, Proc. Roy. Soc. Edinburgh Sect. A 129 (1999) 395-438. E. 
Risler, Global convergence toward traveling fronts in nonlinear parabolic systems with a gradient structure, Ann. Inst. H. Poincaré Anal. Non Linéaire 25 (2008) 381-424. V. Roussier, Stability of radially symmetric travelling waves in reaction-diffusion equations, Ann. Inst. H. Poincaré Anal. Non Linéaire 21 (2004) 341-379. Y. Wang, X. Li, Some entire solutions to the competitive reaction diffusion system, J. Math. Anal. Appl. 430 (2015) 993-1008. Z. C. Wang, W. T. Li, S. Ruan, Entire solutions in bistable reaction-diffusion equations with nonlocal delayed nonlinearity, Trans. Amer. Math. Soc. 361 (2009) 2047-2084. J. X. Xin, Multidimensional stability of traveling waves in a bistable reaction-diffusion equation. I. Comm. Partial Differential Equations 17 (1992) 1889-1899. H. Yagisita, Backward global solutions characterizing annihilation dynamics of travelling fronts, Publ. Res. Inst. Math. Sci 39 (2003) 117-164. [^1]: Partially supported by the NSFC (11571041)and the Fundamental Research Funds for the Central Universities. Corresponding author.
--- abstract: 'Witnessing the success of deep neural networks in natural image processing, an increasing number of studies have been proposed to develop deep-learning-based frameworks for medical image segmentation. However, since the pixel-wise annotation of medical images is laborious and expensive, the amount of annotated data is usually insufficient to train a neural network well. In this paper, we propose a semi-supervised approach to train neural networks with limited labeled data and a large quantity of unlabeled images for medical image segmentation. A novel pseudo-label (namely self-loop uncertainty), generated by recurrently optimizing the neural network with a self-supervised task, is adopted as the ground-truth for the unlabeled images to augment the training set and boost the segmentation accuracy. The proposed self-loop uncertainty can be seen as an approximation of the uncertainty estimation yielded by ensembling multiple models, with a significant reduction of inference time. Experimental results on two publicly available datasets demonstrate the effectiveness of our semi-supervised approach.' author: - Yuexiang Li - Jiawei Chen - Xinpeng Xie - Kai Ma - Yefeng Zheng bibliography: - 'my\_reference.bib' title: 'Self-Loop Uncertainty: A Novel Pseudo-Label for Semi-Supervised Medical Image Segmentation' ---

Introduction
============

Deep neural networks often require a large quantity of labeled images to achieve satisfactory performance. However, since annotating medical images requires experienced physicians to spend hours or days per case, which is laborious and expensive, labeled medical images are often scarce, especially for tasks requiring pixel-wise annotations (e.g., segmentation). To tackle this problem, many approaches [@BaiWJ2017; @BortsovaG2019; @SedaiS2019; @YuL2019] have been proposed to improve the segmentation performance of deep neural networks by exploiting the information in unlabeled data.
Using pseudo-labels for the unlabeled data (generated automatically by a segmentation algorithm via uncertainty estimation) is one potential solution, and it has been extensively studied. The most popular approaches are: 1) the softmax probability map [@BaiWJ2017], 2) Monte Carlo (MC) dropout [@SedaiS2019; @YuL2019], and 3) uncertainty estimation via network ensembles [@LakshminarayananB2017]. Specifically, Bai et al. [@BaiWJ2017] proposed a semi-supervised approach for cardiac magnetic resonance volume segmentation. The approach first used a limited number of labeled data to train the neural network and then utilized the softmax probability maps predicted by the neural network as the pseudo-labels for the unlabeled volumes to augment the training set. In a more recent study, Sedai et al. [@SedaiS2019] proposed an uncertainty-guided semi-supervised learning framework for the segmentation of retinal layers in optical coherence tomography images. The pseudo-label for semi-supervised learning was generated using Monte Carlo (MC) dropout [@GalY2016], which can be viewed as an approximation of Bayesian uncertainty. Uncertainty estimation via model ensembles [@LakshminarayananB2017] is another approximation of Bayesian uncertainty; it separately trains $K$ networks and averages their softmax probability maps to obtain the ensemble uncertainty (i.e., $\frac{1}{K} \sum_{k=1}^K p_{k}$, where $p$ is the probability map). Due to the variety of existing uncertainty estimation methods, Jungo et al. [@JungoA2019] conducted experiments to evaluate the reliability and limitations of existing approaches and drew several conclusions, two of which are of particular interest to us: 1) the widely used MC-dropout-based approaches depend heavily on the influence of dropout on the segmentation performance; 2) the computationally expensive ensemble method yields the most reliable results and is typically a good choice if the resources allow.
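For reference, the ensemble estimate above reduces to a simple average of the $K$ softmax probability maps; a minimal NumPy sketch (the function name is ours, and the per-pixel binary entropy is appended for illustration only, not part of the cited formulation):

```python
import numpy as np

def ensemble_pseudo_label(prob_maps):
    # Ensemble uncertainty in the sense of Lakshminarayanan et al.:
    # the mean (1/K) * sum_k p_k of the K softmax probability maps.
    # The binary entropy of the mean map is a common companion score
    # (our addition for illustration).
    p = np.mean(np.stack(prob_maps, axis=0), axis=0)
    eps = 1e-12
    entropy = -(p * np.log(p + eps) + (1.0 - p) * np.log(1.0 - p + eps))
    return p, entropy
```

Each of the $K$ maps here would come from an independently trained network, which is exactly what makes the method expensive.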
Hence, an efficient way to obtain reliable ensemble uncertainty is worth investigating. In this paper, we propose a novel pseudo-label, namely self-loop uncertainty, for semi-supervised medical image segmentation. The proposed self-loop uncertainty is generated by recurrently optimizing the encoder of a fully convolutional network (FCN) with a self-supervised sub-task (e.g., Jigsaw puzzles). The benefits of integrating self-supervised learning into our framework are twofold: 1) the self-supervised learning sub-task encourages the neural network to deeply mine information from the raw data, which benefits the image segmentation task; 2) the same network at different stages of the self-supervised sub-task optimization can be seen as different models, which makes our self-loop uncertainty an approximation of the ensemble uncertainty at a much lower computational cost. We evaluate the proposed semi-supervised learning approach on two medical image segmentation tasks—nuclei segmentation and skin lesion segmentation. Experimental results show that our self-loop uncertainty can significantly improve the segmentation accuracy of the neural network, outperforming the widely used pseudo-labels (e.g., the softmax probability map and MC dropout). ![The pipeline of our semi-supervised segmentation framework. The proposed framework recurrently optimizes the encoder part of the FCN by addressing the self-supervised learning task (i.e., supervised by $\mathcal{L}_{SS}$) to generate the pseudo-label for the unlabeled data. There are two losses, i.e., the segmentation loss $\mathcal{L}_{SEG}$ and the uncertainty-guided loss $\mathcal{L}_{UG}$, adopted in our framework to supervise the segmentation of labeled and unlabeled data. Our framework generates $Q$ permutations ($P_{1}^{'}$, ... $P_{Q}^{'}$) for an image (either labeled or unlabeled) and yields corresponding $Q$ segmentation predictions ($S_{1}$, ...
$S_{Q}$) for the estimation of self-loop uncertainty $y_{sl}$ (as illustrated in Alg. \[alg:ysl\]).[]{data-label="fig1:pipeline"}](imgs/pipeline.eps){width="95.00000%"}

Method
======

The proposed semi-supervised segmentation framework is illustrated in Fig. \[fig1:pipeline\]. The training set for our semi-supervised framework consists of labeled data $D_L$ and unlabeled data $D_U$. The proposed semi-supervised framework involves three losses (i.e., $\mathcal{L}_{SEG}$, $\mathcal{L}_{UG}$, and $\mathcal{L}_{SS}$) to supervise the network training with $D_L$ and $D_U$. The colored arrows in Fig. \[fig1:pipeline\] represent the information flows of $D_{U}$ (orange) and $D_{L}$ (cyan). For a batch containing images from $D_L$ and $D_U$, we calculate: the supervised segmentation loss $\mathcal{L}_{SEG}$ (i.e., the binary cross-entropy loss in our experiments) for the labeled data with pixel-wise annotations, to ensure that the FCN has segmentation capacity; the self-supervised loss $\mathcal{L}_{SS}$ for both $D_L$ and $D_U$, to exploit rich information from the raw data and generate the self-loop uncertainty; and the uncertainty-guided loss $\mathcal{L}_{UG}$ for the unlabeled images, to boost the segmentation performance of the FCN with unlabeled data.

Self-supervised Sub-task
------------------------

As aforementioned, the self-supervised loss $\mathcal{L}_{SS}$ aims to exploit the rich information contained in the raw data and generate the self-loop uncertainty. Various pretext tasks, such as rotation prediction [@gidaris2018image_rotations] and colorization [@larsson_colorization_2017], can be adopted to achieve this goal. In this study, we use Jigsaw puzzles [@noroozi2016jigsaw_puzzles], consisting of translation and rotation transformations, as the self-supervised sub-task to recurrently optimize the encoder of an FCN and yield the self-loop uncertainty. Similar to the standard Jigsaw puzzles, we partition the image into several tiles, e.g., nine tiles for $3 \times 3$ Jigsaw puzzles.
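The tile partition, together with the translation-plus-rotation reassembly used in our sub-task, can be sketched as follows; this is a minimal NumPy rendering under the assumption of square tiles (so 90° rotations preserve the tile shape), and the helper name is ours:

```python
import numpy as np

def jigsaw_transform(img, perm, angles, grid=3):
    # Split img into grid x grid square tiles, place tile perm[t] at slot t
    # rotated by angles[t] * 90 degrees, and reassemble to the original size
    # so the network input shape is unchanged.
    h, w = img.shape[:2]
    th, tw = h // grid, w // grid
    tiles = [img[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
             for r in range(grid) for c in range(grid)]
    shuffled = [np.rot90(tiles[p], k) for p, k in zip(perm, angles)]
    rows = [np.concatenate(shuffled[r * grid:(r + 1) * grid], axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0)
```

With the identity permutation and zero rotations the transform is a no-op, which makes the inverse transform easy to define by permuting and rotating back.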
To formulate the Jigsaw puzzles sub-task, we permute the tiles using the approach proposed by [@noroozi2016jigsaw_puzzles]—a small subset $\mathds{P}^{'}$ of the large permutation pool, i.e., $\mathds{P}= (P_{1}, P_{2},...,P_{9!})$, is formed by selecting the $K$ permutations with the largest Hamming distance between each other. In each training iteration, the input image is repeatedly disarranged ($Q$ times in total, where $Q \ll K$; $Q=10$ and $K=100$ in our experiments) by a randomly selected permutation from $\mathds{P}^{'}$. Meanwhile, the encoder of the FCN is recurrently updated to identify the selected permutation from the $K$ options for each disarranged image, which can be seen as a classification task with $K$ categories; therefore, we employ the cross-entropy loss as $\mathcal{L}_{SS}$ to supervise the sub-task. **Input:** Network weights: $\theta_e$ of the encoder and $\theta_d$ of the decoder. Unlabeled data: $x \in D_U$. **Function:** $f(x; \theta)$ neural network forward function. $update(.)$ backpropagation to update the neural network weights. $T(.)$ permuted transformation of Jigsaw puzzles. $T^{-1}(.)$ inverse-permuted transformation. $\mathcal{L}_{SS}(p, g)$ calculation of the self-supervised loss with prediction $p$ and self-supervised signal $g$. **Procedure$^{\dagger}$:** $Q$ permutations are randomly selected from $\mathds{P}^{'}$: $\{P_1^{'},...,P_Q^{'} \in \mathds{P}^{'}\}$. $p_i \leftarrow f(T_{P_i^{'}}(x); \theta_{e})$; $S_i \leftarrow f(T_{P_i^{'}}(x); \{\theta_{e},\theta_{d}\})$; $l_i \leftarrow \mathcal{L}_{SS}(p_i, g_i)$; $\theta_{e}^i \leftarrow update(l_i)$; $\theta_{e} \leftarrow \theta_{e}^i$. $y_{sl} = \sum_{i=1}^{Q} T^{-1}_{P_i^{'}}(S_i) \times norm(\omega_i)$, where $norm(.) = \frac {\omega_i} {\sum_{i=1}^Q\omega_i}$ and $\omega_i = 1-\frac {l_i} {\sum_{i=1}^Q l_i}$. $S$ is the segmentation prediction of FCN. $l$ is the calculated self-supervised loss. **Output:** self-loop uncertainty $y_{sl}$ of input $x$.
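The final weighted averaging of Alg. \[alg:ysl\] can be sketched as follows; this is a minimal NumPy rendering of the $y_{sl}$ formula, where the function name and the identity inverse transforms used in the example are ours:

```python
import numpy as np

def self_loop_label(seg_preds, ss_losses, inv_transforms):
    # y_sl = sum_i T^{-1}_{P_i'}(S_i) * norm(omega_i), with
    # omega_i = 1 - l_i / sum_j l_j and norm(.) renormalizing the
    # omega_i to sum to one (their raw sum is Q - 1, so Q > 1 is assumed).
    l = np.asarray(ss_losses, dtype=float)
    w = 1.0 - l / l.sum()
    w = w / w.sum()
    return sum(wi * inv(s) for wi, s, inv in zip(w, seg_preds, inv_transforms))
```

Predictions obtained from easy puzzles (small $l_i$) thus receive proportionally larger weights in the pseudo-label.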
The Jigsaw puzzles transformation adopted in our approach has two differences compared to the one in [@noroozi2016jigsaw_puzzles]. First, to increase the diversity of permutations, each tile is randomly rotated by an angle $a \in \{0^{\circ}, 90^{\circ}, 180^{\circ}, 270^{\circ}\}$ in addition to the translation transformation. Second, to integrate the Jigsaw puzzles task into the end-to-end semi-supervised framework, the input of the self-supervised sub-task is required to have the same size as that of the target segmentation task. Hence, instead of using a shared-weight neural network for each tile, the permuted tiles are first assembled into an image of the same size as the original (i.e., $\{P_1^{'},...,P_{Q}^{'}\}$ shown in Fig. \[fig1:pipeline\]) and then fed as input to the neural network for the permutation classification.

Estimation of Self-loop Uncertainty for Unlabeled Data
------------------------------------------------------

The generation procedure of our self-loop uncertainty is presented in Alg. \[alg:ysl\]. The self-supervised sub-task can recurrently optimize the neural network within a single iteration, since the self-supervised signal is obtained without manual annotation. The different stages (i.e., $\{\theta_{e}^{i},\theta_{d}\}, i \in \{1,...,Q\}$) of the self-supervised optimization are seen as different models, which enables the proposed self-loop uncertainty to approximate the ensemble uncertainty with a single neural network. The permuted images go through the FCN and yield a set of segmentation predictions $S_i, i \in \{1,...,Q\}$. Since the calculated self-supervised loss ($l$) explicitly represents how difficult the puzzled image is for the neural network to restore, we use $l$ as the confidence of the corresponding segmentation result $S$ (via $norm(.)$ and $\omega$ defined in Alg. \[alg:ysl\]) to revise its contribution to the final pseudo-label.
Our self-loop uncertainty is thereby the weighted average of the segmentation predictions produced at different stages of the self-supervised optimization.

#### **Uncertainty-guided Loss.** The set of segmentation predictions $\{S_1,...,S_Q\}$ is presented in Fig. \[fig1:pipeline\], where the red color represents a high foreground score. The weighted-average self-loop uncertainty $y_{sl}$ can be used as guidance to retain the reliable predictions (i.e., those with high scores) as targets for the neural network to learn from the unlabeled data. To achieve this goal, we adopt the mean squared error (MSE) loss as the uncertainty-guided loss $\mathcal{L}_{UG}$ for the network optimization with unlabeled data and pseudo-labels $y_{sl}$, which can be defined as: $$\mathcal{L}_{UG}(S_x,y_{sl})=\frac{\sum_{H \times W}\mathbb{I}(y_{sl}>th)\| S_x - y_{sl}\|^{2}} {\sum_{H \times W} \mathbb{I}(y_{sl}>th)}$$ where $\mathbb{I}$ is the indicator function; $H$ and $W$ are the image height and width, respectively; $S_x$ is the segmentation prediction of input image $x$; and $th$ is the threshold used to select high-score targets.

Objective Function
------------------

Assuming a batch contains $N$ labeled data ($\{(x_j,y_j)\}_{j=1}^N$) and $M$ unlabeled data $\{x_j\}_{j=N+1}^{N+M}$, where $x_{j} \in \mathbb{R}^{H \times W \times C}$ is the input image ($H$, $W$, and $C$ are the height, width, and channel of the image, respectively) and $y_{j} \in\{0,1\}^{H \times W}, j=1, 2, \dots, N$ is the ground-truth annotation, the objective function $\mathcal{L}$ for this batch can be formulated as: $$\mathcal{L} = \sum_{j=1}^{N} \mathcal{L}_{SEG}(x_j,y_j) + \sum_{j=N+1}^{N+M} \mathcal{L}_{UG}(x_j,y_{sl})+ \sum_{j=1}^{N+M} \sum_{i=1}^Q \mathcal{L}_{SS}(T_{P_i^{'}}(x_j), g_i).$$ During network optimization, for the unlabeled data, we first fix the decoder of the FCN and recurrently update the encoder with $\mathcal{L}_{SS}$ to generate $y_{sl}$.
Then, the weights of the whole FCN are optimized with $\mathcal{L}_{UG}$. In other words, the encoder and decoder are optimized asynchronously when using the unlabeled data. For the labeled data, on the other hand, the network is optimized with $\mathcal{L}_{SEG}$ and $\mathcal{L}_{SS}$ simultaneously.

Experiments
===========

#### **MoNuSeg Dataset [@naylor2018segmentation].** The dataset consists of diverse H$\&$E stained tissue images captured from seven different organs (i.e., breast, liver, kidney, prostate, bladder, colon, and stomach), which were collected from 18 institutes. The dataset has a public training set and a public test set for network training and evaluation, respectively. The training set contains 30 histopathological images with hand-annotated nuclei, while the test set consists of 14 images. The size of the histopathological images is $1000 \times 1000$ pixels.

#### **ISIC Dataset [@ISIC2019].** The ISIC dataset is widely used to assess the accuracy of automatic skin lesion segmentation algorithms. The dataset contains 2,594 dermoscopic images. The skin lesion area of each image has been manually annotated by the data provider. The image size varies from around $1000 \times 1000$ pixels to $4000 \times 3000$ pixels. We resize all the images to a uniform size of $512 \times 512$ pixels for network training and validation. The dataset is randomly split into training and test sets at a ratio of 75:25.

#### **Evaluation Criterion.** The F1 score, i.e., the unweighted average classification accuracy of the foreground and background tissues, which is widely used in the areas of nuclei [@LunaM2019; @OdaH2018; @ZhouY2019] and skin lesion [@LiY2018; @CMIG2019; @TangY2019] segmentation, is adopted as the metric to evaluate the segmentation performance.
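For reference, a common pixel-wise formulation of the F1 score on binary masks (equivalent to the Dice coefficient) can be sketched as follows; this is our sketch, not necessarily the datasets' official evaluation script:

```python
import numpy as np

def f1_score(pred, gt, eps=1e-8):
    # F1 = 2PR / (P + R) over the foreground pixels of two binary masks;
    # eps guards against empty masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / (pred.sum() + eps)
    recall = tp / (gt.sum() + eps)
    return 2 * precision * recall / (precision + recall + eps)
```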
#### **Baselines.** Three popular uncertainty approaches—the softmax probability map [@BaiWJ2017], Monte Carlo (MC) dropout [@SedaiS2019; @YuL2019], and uncertainty estimation via ensembling networks [@LakshminarayananB2017]—are involved as baselines in this study. Similar to [@SedaiS2019], we set the dropout rate to 0.2 and forward the image through the neural network ten times to generate the MC dropout uncertainty. The ensemble uncertainty is generated by ensembling ten models trained with different network initializations. Consistent with the baselines, we generate ten permutations for an image to iteratively optimize the neural network and accordingly yield the self-loop uncertainty. The widely used ResUNet-18 [@He01; @Ronneberger01] is used as the backbone for uncertainty estimation. For a fair comparison, all the baselines are trained according to the same protocol.

Evaluation of Pseudo-label Quality
----------------------------------

Compared to skin lesion segmentation, where each image contains a single target, the annotation of nuclei is more difficult and laborious. Hence, we mainly use the MoNuSeg dataset to evaluate the quality of the pseudo-labels yielded by different approaches in this section.[^1] To quantitatively validate the accuracy of different pseudo-labels, we calculate the F1 score between the pseudo-labels and the ground-truth and present the results in Table \[tab1:pseduolabelQ\]. The pseudo-labels are generated with different amounts (i.e., 20% and 50%) of labeled data $D_L$, and the remainder of the training set is used as unlabeled data $D_U$. As shown in Table \[tab1:pseduolabelQ\], our self-loop uncertainty outperforms all the baselines under different amounts of labeled data, being $+2.27\%$ and $+2.88\%$ higher than the runner-up (i.e., MC Dropout) with 20% and 50% labeled data, respectively. The pseudo-labels yielded by the uncertainty via ensembling models achieve the lowest accuracy among the baselines.
The underlying reason may be that the MoNuSeg training set only contains 30 histopathological images, which makes the amount of labeled data (i.e., 20% and 50%) insufficient to train the neural network well. Therefore, ensembling multiple unsatisfactory models cannot improve the accuracy of the uncertainty estimation.

  [**Amount of $D_{L}$**]{}   [**Softmax**]{}   [**MC D.**]{}   [**Ensemble**]{}   [**SL$^{3}$**]{}   [**SL$^{6}$**]{}   [**SL$^{10}$**]{}
  --------------------------- ----------------- --------------- ------------------ ------------------ ------------------ -------------------
  [20%]{}                     67.48             72.42           67.46              73.90              74.68              **75.24**
  [50%]{}                     69.53             73.58           70.01              76.51              76.77              **76.85**

  : F1 score (%) between ground-truth and the pseudo-labels generated by different uncertainty approaches with different amounts of labeled data. The superscript of SL is the number of permutations $Q$ generated for self-supervised learning. (MC D.—MC Dropout, SL—Self-loop)[]{data-label="tab1:pseduolabelQ"}

#### **Ablation Study.** We conduct an ablation study to investigate the relationship between the number of permutations $Q$ and the quality of the pseudo-label. $Q$ is set to 3, 6, and 10, respectively, for the generation of the self-loop uncertainty. As shown in Table \[tab1:pseduolabelQ\], the self-loop uncertainty generated with a larger $Q$ achieves a higher F1 score.
However, the improvement in F1 score provided by increasing $Q$ from 6 to 10 becomes marginal (e.g., $+0.08\%$ using 50% labeled data), which suggests that a larger $Q$ is not always better in practice once the computational cost is taken into account.

Segmentation Performance Evaluation
-----------------------------------

To validate the effectiveness of the pseudo-labels, we evaluate the performance of different semi-supervised frameworks on the test sets of MoNuSeg and ISIC. The semi-supervised approaches are trained with different portions (i.e., 20% and 50%) of labeled data. The evaluation results are listed in Table \[tab2:sota\]. The performance of the fully-supervised approach with 100% labeled data is also assessed, as an upper bound for the semi-supervised approaches. To validate the effectiveness of the self-supervised sub-task, the self-loop uncertainty without $\mathcal{L}_{SS}$ is also involved for comparison: we pass the ten permuted images through the FCN without self-supervised optimization and yield the uncertainty by averaging the segmentation predictions. Due to the lack of extra information exploited by the self-supervised sub-task, the improvements yielded without $\mathcal{L}_{SS}$ decrease significantly.

#### **Nuclei Segmentation.** As shown in Table \[tab2:sota\], the performance of the fully-supervised approach drops significantly from 79.30% to 75.87% and 71.51% when the manual annotations are reduced by 50% and 80%, respectively. The application of pseudo-labels provides a consistent improvement in segmentation accuracy. Among them, the proposed self-loop uncertainty yields the largest improvements, especially with 20% annotated data, where it is $+5.6\%$ higher than the fully-supervised approach.
Furthermore, we notice that our semi-supervised framework trained with 50% labeled data achieves an F1 score (79.10%) comparable to that of the 100% fully-supervised approach (79.30%), which demonstrates the potential of our approach for reducing the workload of manual annotation.

#### **Skin Lesion Segmentation.** Similar trends of improvement are observed on the ISIC test set. Due to the extra information provided by the unlabeled data, the semi-supervised approaches outperform the fully-supervised one with limited annotated data (20% and 50%). The framework adopting our self-loop uncertainty as the pseudo-label achieves the highest F1 scores, i.e., 84.92% and 86.17% with 20% and 50% labeled data, respectively, and the latter (86.17%) is comparable to that of the fully-supervised approach with 100% annotations (86.58%). As ISIC has much more training data than MoNuSeg, the ensemble-uncertainty-based framework achieves a comparable F1 score of 86.06% with 50% labeled data. However, it is worth mentioning that the generation of the ensemble uncertainty requires ten inference passes during the test phase, as does MC dropout. In contrast, the proposed self-loop uncertainty can be generated with a single inference, which significantly reduces the computational cost.
  [**Method**]{}                     [**MoNuSeg**]{}                    [**ISIC**]{}
  ---------------------------------- ---------- ---------- ----------- ---------- ---------- -----------
                                     [20%]{}    [50%]{}    [100%]{}    [20%]{}    [50%]{}    [100%]{}
  [**Fully-supervised**]{}           71.51      75.87      **79.30**   81.49      84.86      **86.58**
  Softmax                            73.65      76.18      -           82.81      85.11      -
  MC Dropout                         75.31      77.98      -           83.68      85.74      -
  Ensemble                           73.33      76.87      -           83.27      86.06      -
  Self-loop w/o $\mathcal{L}_{SS}$   74.70      77.78      -           82.70      85.22      -
  Self-loop (ours)                   **77.11**  **79.10**  -           **84.92**  **86.17**  -

  : F1 score (%) yielded by different semi-supervised approaches on the two publicly available datasets.[]{data-label="tab2:sota"}

Conclusion
==========

In this paper, we proposed a semi-supervised approach to train neural networks with limited labeled data and a large quantity of unlabeled images for medical image segmentation. A novel pseudo-label (namely self-loop uncertainty), generated by recurrently optimizing the neural network with a self-supervised task, is adopted as the ground-truth for the unlabeled images to augment the training set and boost the segmentation accuracy.

Acknowledgment {#acknowledge .unnumbered}
==============

This work is supported by the Natural Science Foundation of China (No. 61702339), the Key Area Research and Development Program of Guangdong Province, China (No. 2018B010111001), National Key Research and Development Project (2018YFC2000702) and Science and Technology Program of Shenzhen, China (No. ZDSYS201802021814180).

Appendix {#appendix .unnumbered}
========

![Visualization of pseudo-labels yielded by different uncertainty approaches with [**20% labeled data**]{}. The pseudo-labels and ground-truth are shown in different colors, and the overlapping areas are highlighted.
It can be observed that our self-loop uncertainty is closer to the ground-truth (i.e., larger overlapping areas), compared to the other approaches.[]{data-label="fig2:visualization"}](imgs/quality.eps){width="\textwidth"} ![Visualization of pseudo-labels yielded by different uncertainty approaches with [**50% labeled data**]{}. The pseudo-labels and ground-truth are shown in different colors, and the overlapping areas are highlighted. The proposed self-loop uncertainty achieves larger overlapping areas than the others.[]{data-label="fig2:visualization_2"}](imgs/quality_50.eps){width="\textwidth"} [^1]: For a visual comparison between pseudo-labels, please refer to the [*arXiv version*]{}.
--- abstract: 'We give a new proof for a product formula of Jacobi which turns out to be equivalent to a $q$-trigonometric product which was stated without proof by Gosper. We apply this formula to derive a $q$-analogue for the Gauss multiplication formula for the gamma function. Furthermore, we give explicit formulas for short products of $q$-gamma functions.' address: - 'Dept. Math. Sci, United Arab Emirates University, PO Box 15551, Al-Ain, UAE' - 'Babes-Bolyai University, Department of Mathematics and Computer Science, 400084 Cluj-Napoca, Romania' author: - Mohamed El Bachraoui and József Sándor date: '**' title: 'On a theta product of Jacobi and its applications to $q$-gamma products' --- Introduction ============ Throughout we let $\tau$ be a complex number in the upper half plane, let $q=e^{\pi i\tau}$, and let $\tau'=-\frac{1}{\tau}$. Note that the assumption $\mathrm{Im}(\tau)>0$ guarantees that $|q|<1$. The $q$-shifted factorials of a complex number $a$ are defined by $$(a;q)_0= 1,\quad (a;q)_n = \prod_{i=0}^{n-1}(1-a q^i),\quad (a;q)_{\infty} = \lim_{n\to\infty}(a;q)_n.$$ It is easily verified that $$\label{q-basics} (q^2;q^2)_{\infty} = (q;q)_{\infty} (-q;q)_{\infty},\quad (q;q)_{\infty} = (q^2;q^2)_{\infty} (q;q^2)_{\infty},$$ $$(-q;q)_{\infty} = {\frac}{1}{(q;q^2)_{\infty}}, \quad\text{and\quad } (-q;q)_{\infty} = (-q^2;q^2)_{\infty} (-q;q^2)_{\infty}.$$ Ramanujan theta functions $\psi(q)$ and $f(q)$ are given by $$\psi(q) = {\frac}{(q^2;q^2)_{\infty}}{(q;q^2)_{\infty}}\quad\text{and\quad} f(-q) = (q;q)_{\infty}.$$ See Berndt [@Berndt-1] for some properties of Ramanujan theta functions. 
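The identities in (\[q-basics\]) are straightforward to confirm numerically by truncating the infinite products; a minimal sketch (truncation depth and names are ours):

```python
def qpoch(a, q, terms=200):
    # Truncation of the infinite q-shifted factorial (a; q)_infinity,
    # accurate to machine precision for moderate |q| < 1.
    p = 1.0
    for i in range(terms):
        p *= 1 - a * q**i
    return p

q = 0.3
# Euler's splitting (q^2; q^2)_inf = (q; q)_inf * (-q; q)_inf
assert abs(qpoch(q**2, q**2) - qpoch(q, q) * qpoch(-q, q)) < 1e-12
# (-q; q)_inf = 1 / (q; q^2)_inf
assert abs(qpoch(-q, q) - 1.0 / qpoch(q, q**2)) < 1e-12
```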
For convenience we write $$(a_1,\ldots,a_k;q)_n = (a_1;q)_n\cdots (a_k;q)_n,\quad (a_1,\ldots,a_k;q)_{\infty} = (a_1;q)_{\infty} \cdots (a_k;q)_{\infty}.$$ Jacobi's first theta function is defined as follows: $$\theta_1(z,q) =\theta_1(z \mid \tau) = 2\sum_{n=0}^{\infty}(-1)^n q^{(2n+1)^2/4}\sin(2n+1)z.$$ The numbers $\tau$ and $q$ are referred to as the *parameter* and the *nome* of the theta functions, respectively. An important property of Jacobi theta functions is their infinite product representation, which for the function $\theta_1(z\mid\tau)$ is given by $$\label{theta-product} \theta_1(z \mid\tau) = i q^{\frac{1}{4}}e^{-iz} (q^2 e^{-2iz},e^{2iz},q^2; q^2)_{\infty} .$$ Jacobi [@Jacobi] proved that $$\label{MainProd} {\frac}{(q^{2n};q^{2n})_{\infty}}{(q^2;q^2)_{\infty}^n} \prod_{k=-{\frac}{n-1}{2}}^{{\frac}{n-1}{2}}{\theta}_1 \left(z+{\frac}{k\pi}{n} \bigm| \tau \right) = {\theta}_1(nz \mid n\tau),$$ see also Enneper [@Enneper p. 249]. Unlike many of Jacobi’s results, the formula (\[MainProd\]) seems not to have received much attention from mathematicians. This is probably due to the lack of applications. Furthermore, to the best of the authors’ knowledge, no new proof has been given for this product formula. Moreover, this formula turns out to be equivalent to a $q$-trigonometric identity of Gosper (see (\[SineProd\]) below), of which he was apparently unaware, as he stated the identity without reference. Our first goal in this note is to offer a new proof for (\[MainProd\]). To this end we will need the following basic properties of the function $\theta_1(z\mid\tau)$. $$\begin{aligned} \label{theta-1} \theta_1(k\pi\mid\tau) &= 0 \quad (k\in\mathbb{Z}), \nonumber \\ \theta_1(z+\pi \mid \tau) & = {\theta}_1(-z\mid \tau) = -{\theta}_1(z\mid \tau), \\ \theta_1(z+\pi\tau \mid \tau) &= -q^{-1} e^{-2iz} \theta_1(z \mid \tau).
\nonumber\end{aligned}$$ It can be shown that the last formula can be extended as follows: $$\label{Transf} {\theta}_1 \left( z+\pi\tau \bigm| \frac{\tau}{k} \right) = (-1)^k q^{-k}e^{-2kiz} {\theta}_1 \left(z\bigm| \frac{\tau}{k} \right)\quad (k\in\mathbb{N}).$$ We will need Jacobi’s imaginary transformation, stating that $$\label{ImTransf} \theta_1(z\mid \tau') =-i (-i\tau)^{\frac{1}{2}} e^{\frac{i\tau z^2}{\pi}}\theta_1(z\tau \mid \tau).$$ Letting ${\theta}_1'(z\mid\tau)$ denote the first derivative of $\theta_1(z\mid\tau)$ with respect to $z$, we have $$\label{theta-1-deriv} {\theta}_1'(0\mid\tau) = 2 q^{{\frac}{1}{4}} (q^2;q^2)_{\infty}^3$$ and $$\label{key-derivative} {\theta}_1'(0\mid \tau') = 2 (-i\tau)^{{\frac}{3}{2}} q^{{\frac}{1}{4}} (q^2;q^2)_{\infty}^3.$$ For details about theta functions we refer to the book by Whittaker and Watson [@Whittaker-Watson] and the book by Lawden [@Lawden]. For recent references on Jacobi theta functions that are closely related to our current topic, the reader is referred to Liu [@Liu-2005; @Liu-2007] and Shen [@Shen-1; @Shen-2]. As to our applications, we shall use (\[MainProd\]) in an equivalent form to establish new $q$-analogues of well-known products involving the gamma function $\Gamma (z)$, as we describe now. The $q$-gamma function is given by $$\Gamma_q(z) = \dfrac{(q;q)_\infty}{(q^{z};q)_\infty} (1-q)^{1-z} \quad (|q|<1).$$ It is immediate from the previous definition and (\[q-basics\]) that $$\label{q-gamma-half} \Gamma_q\left({\frac}{1}{2}\right) = {\frac}{(q;q)_{\infty}}{(q^{{\frac}{1}{2}};q)_{\infty}} \sqrt{1-q} = {\frac}{f^2(-q)}{f(-q^{{\frac}{1}{2}})} \sqrt{1-q} = \psi(q^{{\frac}{1}{2}})\sqrt{1-q}.$$ It is well known that $\Gamma_q (z)$ is a $q$-analogue of the function $\Gamma (z)$; see Gasper and Rahman [@Gasper-Rahman].
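Both the product representation (\[theta-product\]) and the derivative evaluation (\[theta-1-deriv\]) are easy to confirm numerically from the sine series; the following sketch (truncation depths and names are ours) checks them at one sample point:

```python
import cmath

def theta1(z, q, terms=20):
    # Sine-series definition of Jacobi's first theta function with nome q.
    return 2 * sum((-1)**n * q**((2*n + 1)**2 / 4) * cmath.sin((2*n + 1) * z)
                   for n in range(terms))

def qpoch(a, q, terms=200):
    # Truncated (a; q)_infinity, allowing complex a.
    p = complex(1.0)
    for i in range(terms):
        p *= 1 - a * q**i
    return p

q, z = 0.2, 0.7 + 0.1j
# theta_1(z | tau) = i q^{1/4} e^{-iz} (q^2 e^{-2iz}, e^{2iz}, q^2; q^2)_inf
prod = (1j * q**0.25 * cmath.exp(-1j * z)
        * qpoch(q**2 * cmath.exp(-2j * z), q**2)
        * qpoch(cmath.exp(2j * z), q**2)
        * qpoch(q**2, q**2))
assert abs(theta1(z, q) - prod) < 1e-10

# theta_1'(0 | tau) = 2 q^{1/4} (q^2; q^2)_inf^3, differentiating the series
deriv0 = 2 * sum((-1)**n * (2*n + 1) * q**((2*n + 1)**2 / 4) for n in range(20))
assert abs(deriv0 - 2 * q**0.25 * qpoch(q**2, q**2)**3) < 1e-12
```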
The Gaussian multiplication formula for the gamma function states that $$\label{Gauss-Gamma-special} \Gamma\left({\frac}{1}{n}\right) \Gamma\left({\frac}{2}{n}\right) \cdots \Gamma\left({\frac}{n-1}{n}\right) = {\frac}{(2\pi)^{{\frac}{n-1}{2}}}{\sqrt{n}} \quad (n=1,2,\ldots).$$ A natural question is whether, for the product $$\label{Sandor-problem} \Gamma_q\left({\frac}{1}{n}\right) \Gamma_q\left({\frac}{2}{n}\right) \cdots \Gamma_q\left({\frac}{n-1}{n}\right),$$ one can find a closed formula of the same type as (\[Gauss-Gamma-special\]). We note that (\[Gauss-Gamma-special\]) can be deduced from the following well-known more general identity: $$\label{Gauss-Gamma} n^{nz-{\frac}{1}{2}} \Gamma(z)\Gamma\left(z+{\frac}{1}{n}\right) \cdots \Gamma\left(z+{\frac}{n-1}{n}\right) = \Gamma(nz) (2\pi)^{{\frac}{n-1}{2}}\quad (n=1,2,\ldots).$$ A famous $q$-analogue of (\[Gauss-Gamma\]) due to Jackson [@Jackson-1; @Jackson-2] (see [@Gasper-Rahman p. 22]) states that $$\label{Jackson-q-Gamma} \left({\frac}{1-q^n}{1-q}\right)^{nz-1} \Gamma_{q^n} (z)\Gamma_{q^n} \left(z+{\frac}{1}{n}\right) \cdots \Gamma_{q^n} \left(z+{\frac}{n-1}{n}\right)$$ $$= \Gamma_{q} (nz)\Gamma_{q^n} \left({\frac}{1}{n}\right) \Gamma_{q^n} \left({\frac}{2}{n}\right) \cdots \Gamma_{q^n} \left({\frac}{n-1}{n}\right) \quad (n=1,2,\ldots).$$ However, it does not seem easy to derive a closed formula for the product (\[Sandor-problem\]) from the relation (\[Jackson-q-Gamma\]). Our second goal in this note is to apply Jacobi’s relation (\[MainProd\]) to establish a closed formula for the product (\[Sandor-problem\]). This gives a $q$-analogue of the formula (\[Gauss-Gamma-special\]) which seems not to have appeared in the literature before.
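Jackson's multiplication formula can be verified numerically; the following sketch (assuming the `mpmath` library, with $\Gamma_q$ built directly from its $q$-Pochhammer definition) checks it in the form recorded by Gasper and Rahman, where the right-hand side carries $\Gamma_q(nz)$ with base $q$:

```python
# Numerical check of Jackson's q-analogue of the Gauss multiplication formula,
# assuming the mpmath library; Gamma_q is built from the q-Pochhammer symbol qp.
from mpmath import mp, mpf, qp

mp.dps = 30

def qgamma(z, q):
    # Gamma_q(z) = (q; q)_inf / (q^z; q)_inf * (1 - q)^(1 - z)
    return qp(q, q) / qp(q**z, q) * (1 - q)**(1 - z)

q, n, z = mpf('0.5'), 3, mpf('0.4')
Q = q**n
lhs = ((1 - Q) / (1 - q))**(n * z - 1)
for k in range(n):
    lhs *= qgamma(z + mpf(k) / n, Q)
rhs = qgamma(n * z, q)
for k in range(1, n):
    rhs *= qgamma(mpf(k) / n, Q)
assert abs(lhs - rhs) < mpf('1e-20')
```

The parameters $q$, $n$, and $z$ here are arbitrary test values; the check passes for any $0<q<1$ and real $z$.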
Furthermore, Sándor and Tóth [@Sandor-Toth] found $$\label{short-gam-prod} P(n):= \prod_{\substack{k=1 \\ (k,n)=1}}^n \Gamma\left({\frac}{k}{n}\right) = {\frac}{(2\pi)^{{\frac}{\varphi(n)}{2}}}{ \prod_{d\mid n} d^{{\frac}{1}{2}\mu\left({\frac}{n}{d}\right)} } = {\frac}{(2\pi)^{{\frac}{\varphi(n)}{2}}}{ e^{{\frac}{\Lambda(n)}{2}} },$$ where $\varphi(n)$ is the Euler totient function, $\mu(n)$ is the Möbius function, and $\Lambda(n)$ is the von Mangoldt function. We accordingly let $$P_q(n) = \prod_{\substack{k=1 \\ (k,n)=1}}^n \Gamma_q\Big({\frac}{k}{n}\Big).$$ Our third purpose in this note is to evaluate the last product and thereby give a $q$-version of the short product (\[short-gam-prod\]). To make our formula resemble (\[short-gam-prod\]) we introduce the $q$-von Mangoldt function as follows: $$\Lambda_q(n)= \log {\frac}{ 2^{\varphi(n)}\prod_{d\mid n} \big(f(-q^{{\frac}{1}{d}})\big)^{2\mu\left({\frac}{n}{d}\right)} }{(q^{{\frac}{1}{2}};q)_{\infty}^{2\varphi(n)} }.$$ It turns out that our formula for $P_q(n)$ when $n$ is a power of $2$ can be expressed in terms of Ramanujan’s function $\psi$. This, combined with work by Berndt [@Berndt-2], Yi *et al.* [@Yi-Lee-Paek], and Baruah and Saikia [@Baruah-Saikia], enables us to deduce explicit identities for a variety of short products of $q$-gamma functions. For references on short products of the gamma function we refer to [@BenAri-et-al; @Chamberland-Straub; @Martin; @Nijenhuis; @Nimbran]. To derive our results on products of the $q$-gamma function we shall use the link between this function and the $q$-trigonometry of Gosper.
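The Sándor–Tóth evaluation of $P(n)$ is easy to confirm numerically. The sketch below (assuming the `mpmath` library, with small hand-rolled Möbius and coprimality helpers) compares the direct product with the closed form for a few values of $n$; for instance, $P(6)=\Gamma(1/6)\Gamma(5/6)=2\pi$, in agreement with the formula since $\varphi(6)=2$ and $\prod_{d\mid 6} d^{\mu(6/d)/2}=1$.

```python
# Compare the direct product P(n) with the Sandor–Toth closed form,
# assuming the mpmath library for high-precision gamma values.
from math import gcd
from mpmath import mp, mpf, gamma, pi, sqrt

mp.dps = 30

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # n has a squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def P_direct(n):
    # product of Gamma(k/n) over k coprime to n
    prod = mpf(1)
    for k in range(1, n + 1):
        if gcd(k, n) == 1:
            prod *= gamma(mpf(k) / n)
    return prod

def P_formula(n):
    phi = sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)
    denom = mpf(1)
    for d in range(1, n + 1):
        if n % d == 0:
            denom *= mpf(d)**(mpf(mobius(n // d)) / 2)
    return (2 * pi)**(mpf(phi) / 2) / denom

for n in (2, 6, 12):
    assert abs(P_direct(n) - P_formula(n)) < mpf('1e-20')
```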
Gosper [@Gosper] introduced $q$-analogues of $\sin z$ and $\cos z$ as follows: $$\label{sine-cosine-q-gamma} \begin{split} \sin_q \pi z &= q^{{\frac}{1}{4}} \Gamma_{q^2}^2\left({\frac}{1}{2}\right) \frac{q^{z(z-1)}}{\Gamma_{q^2}(z) \Gamma_{q^2}(1-z)} \\ \cos_q \pi z &= \Gamma_{q^2}^2\left({\frac}{1}{2}\right) {\frac}{q^{z^2}}{\Gamma_{q^2}\left({\frac}{1}{2}-z \right) \Gamma_{q^2}\left({\frac}{1}{2}+z\right)}. \end{split}$$ It can be shown that $\lim_{q\to 1}\sin_q z = \sin z$ and $\lim_{q\to 1}\cos_q z = \cos z$. Gosper proved that the functions $\sin_q z$ and $\cos_q z$ are related to the function $\theta_1(z\mid \tau')$ as follows: $$\label{sine-cosine-theta} \sin_q (z) = \frac{\theta_1(z\mid \tau')}{\theta_1\left( \frac{\pi}{2}\bigm| \tau' \right)} \qquad \text{and \quad} \cos_q (z) = \frac{\theta_1\left( z+\frac{\pi}{2} \bigm| \tau' \right)} {\theta_1 \left( \frac{\pi}{2} \bigm| \tau' \right)} \quad \quad (\tau' = \frac{-1}{\tau}) $$ from which it immediately follows that $\sin_q (z+\pi/2)=\cos_q z$, $\sin_q \pi = 0$, and $\sin_q {\frac}{\pi}{2} = 1$. On the one hand, Gosper stated many identities involving $\sin_q z$ and $\cos_q z$ which follow easily from the definition and the basic properties of the function $\theta_1(z\mid\tau)$. For instance, he derived that $$\label{q-sin-derive} \sin_q' 0 = {\frac}{-2 \ln q}{\pi} q^{{\frac}{1}{4}} {\frac}{(q^2;q^2)_{\infty}^2}{(q;q^2)_{\infty}^2} = -{\frac}{2\ln q}{\pi}q^{{\frac}{1}{4}}\psi^2(q).$$ On the other hand, Gosper [@Gosper], using the computer algebra system *MACSYMA*, stated without proof a variety of identities involving $\sin_q z$ and $\cos_q z$, and asked the natural question whether his formulas hold true. For recent work on Gosper’s conjectures we refer to [@Touk-Houchan-Bachraoui; @Bachraoui-1; @Bachraoui-2; @Bachraoui-3; @Mezo-1]. Among the formulas which Gosper [@Gosper p.
92] stated without proof we have $$\label{SineProd} \prod_{k=0}^{n-1}\sin_{q^n}\pi \left(z+{\frac}{k}{n} \right) = q^{\frac{(n-1)(n+1)}{12}} \frac{(q;q^2)_{\infty}^2}{(q^n;q^{2n})_{\infty}^{2n}} \sin_q n\pi z.$$ However, by using the relation (\[sine-cosine-theta\]) and some basic manipulations, one can show that (\[SineProd\]) is actually equivalent to Jacobi’s multiplication formula (\[MainProd\]). Main results and some examples ============================== We start with results on the $q$-gamma function. \[Gauss-q-Gamma\] For any positive integer $n$, there holds $$\label{q-anlog-Gauss} \prod_{k=1}^{n-1}\Gamma_q\left({\frac}{k}{n}\right) = \Big(\Gamma_q\big({\frac}{1}{2}\big)\Big)^{n-1} {\frac}{f^{n-1}(-q^{{\frac}{1}{2}})}{f^{n-2}(-q) f(-q^{{\frac}{1}{n}})}.$$ Moreover, identity (\[q-anlog-Gauss\]) is the $q$-analogue of identity (\[Gauss-Gamma-special\]). \[rmk:general-prod\] We note that Mahmoud and Agarwal [@Mahmoud-Agarwal Theorem 7] proved that for $x>0$ and $0<q<1$ $$\label{M-A} \prod_{k=0}^{n-1}\Gamma_{q^n}\Big({\frac}{x+k}{n}\Big) = {\frac}{(q^n;q^n)_{\infty}^n(1-q)^{{\frac}{n-1}{2}}}{(q;q)_{\infty}} \Big({\frac}{1-q^n}{1-q}\Big)^{1-x} \Gamma_q (x).$$ However, their formula is incorrect, as its right-hand side should contain the factor $(1-q^n)^{{\frac}{n-1}{2}}$ instead of the factor $(1-q)^{{\frac}{n-1}{2}}$. Moreover, their proof is rather involved. Using our arguments, we correct and improve their formula as follows. Combining (\[Jackson-q-Gamma\]) and Theorem \[Gauss-q-Gamma\] yields $$\label{rmk-help-1} \prod_{k=0}^{n-1}\Gamma_{q^n}\Big(z+{\frac}{k}{n}\Big) = {\frac}{(q^n;q^n)_{\infty}^n(1-q^n)^{{\frac}{n-1}{2}}}{(q;q)_{\infty}} \Big({\frac}{1-q^n}{1-q}\Big)^{1-nz} \Gamma_q (nz),$$ which by letting $z={\frac}{x}{n}$ improves (\[M-A\]).
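The corrected formula (\[rmk-help-1\]), with the factor $(1-q^n)^{(n-1)/2}$, can be double-checked numerically; the following is a sketch assuming the `mpmath` library, testing several bases $q^n$ and arguments $z$ chosen arbitrarily:

```python
# Numerical check of the corrected multiplication formula (rmk-help-1),
# assuming the mpmath library; qp(a, q) is the q-Pochhammer symbol (a; q)_inf.
from mpmath import mp, mpf, qp

mp.dps = 30

def qgamma(z, q):
    # Gamma_q(z) = (q; q)_inf / (q^z; q)_inf * (1 - q)^(1 - z)
    return qp(q, q) / qp(q**z, q) * (1 - q)**(1 - z)

q = mpf('0.4')
for n in (2, 3):
    for z in (mpf('0.3'), mpf('0.8')):
        Q = q**n
        lhs = mpf(1)
        for k in range(n):
            lhs *= qgamma(z + mpf(k) / n, Q)
        rhs = (qp(Q, Q)**n * (1 - Q)**(mpf(n - 1) / 2) / qp(q, q)
               * ((1 - Q) / (1 - q))**(1 - n * z) * qgamma(n * z, q))
        assert abs(lhs - rhs) < mpf('1e-20')
```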
Furthermore, putting $q^{{\frac}{1}{n}}$ in place of $q$ in (\[rmk-help-1\]) gives $$\label{rmk-help-2} \prod_{k=0}^{n-1}\Gamma_{q}\Big(z+{\frac}{k}{n}\Big) = {\frac}{(q;q)_{\infty}^n(1-q)^{{\frac}{n-1}{2}}}{(q^{{\frac}{1}{n}};q^{{\frac}{1}{n}})_{\infty}} \Big({\frac}{1-q}{1-q^{{\frac}{1}{n}}}\Big)^{1-nz} \Gamma_{q^{{\frac}{1}{n}}} (nz),$$ which clearly extends Theorem \[Gauss-q-Gamma\] by letting $z={\frac}{1}{n}$. \[q-short-prod\] For any positive integer $n$, there holds $$P_q(n) = {\frac}{ \left(\Gamma_q\Big( {\frac}{1}{2} \Big)\right)^{\varphi(n)} (q^{{\frac}{1}{2}};q)_{\infty}^{\varphi(n)}} { \prod_{d\mid n}\big( f(- q^{{\frac}{1}{d}}) \big)^{\mu\left({\frac}{n}{d}\right)}} = {\frac}{\left( 2\Gamma_q^2\Big({\frac}{1}{2}\Big)\right)^{{\frac}{\varphi(n)}{2}}}{e^{{\frac}{\Lambda_q(n)}{2}}}.$$ Note that $\lim_{q\to 1} P_q(n) = P(n)$ and so $\lim_{q\to 1} \Lambda_q(n) = \Lambda (n)$. We shall now provide examples of explicit values for some products of $q$-gamma functions. To this end, we will use some results of Berndt [@Berndt-2], Yi [*et al.*]{} [@Yi-Lee-Paek], and Baruah and Saikia [@Baruah-Saikia] on explicit identities for Ramanujan’s $\psi$ function. We start with a more general result. \[2-powers\] For any integer $m>1$, there holds $$P_q(2^m) = \prod_{k=1}^{2^{m-1}}\Gamma_q\Big({\frac}{2k-1}{2^m}\Big) = (1-q)^{2^{m-2}} \psi(q^{{\frac}{1}{2^{m}}}) \prod_{k=1}^{m-1} \psi^{2^{m-1-k}}(q^{{\frac}{1}{2^{k}}}).$$ We now list the explicit values of the $\psi$ function which are needed for our goal. Throughout this section let $a={\frac}{\pi^{1/4}}{\Gamma\big({\frac}{3}{4}\big)}$. The following are due to Berndt [@Berndt-2 p.
325] $$\begin{split} \psi(e^{-\pi}) &= a 2^{-5/8} e^{\pi/8}, \\ \psi(e^{-2\pi}) &= a 2^{-5/4} e^{\pi/4}, \\ \psi(e^{-\pi/2}) &= a 2^{-7/16}(\sqrt{2}+1)^{1/4} e^{\pi/16}, \end{split}$$ the following are found by Yi *et al.* [@Yi-Lee-Paek] $$\psi(-e^{-\pi}) = a 2^{-3/4} e^{\pi/8} \quad\text{and\quad } \psi(-e^{-2\pi})= a 2^{-15/16} e^{\pi/4},$$ and the following is given by Baruah and Saikia [@Baruah-Saikia] $$\psi(-e^{-\pi/2}) = a 2^{-7/16} e^{\pi/16} (\sqrt{2}-1)^{1/4}.$$ We are now ready to produce some concrete examples. Taking $m=2$ in Theorem \[2-powers\], we obtain $$\begin{split} \Gamma_{e^{-2\pi}} \Big({\frac}{1}{4}\Big) \Gamma_{e^{-2\pi}}\Big({\frac}{3}{4}\Big) &= (1-e^{-2\pi}) \psi(e^{-\pi}) \psi(e^{-\pi/2}) \\ &= (1-e^{-2\pi}) a^2 2^{-17/16} e^{3\pi/16} (\sqrt{2}+1)^{1/4}, \\ \Gamma_{e^{-4\pi}} \Big({\frac}{1}{4}\Big) \Gamma_{e^{-4\pi}}\Big({\frac}{3}{4}\Big) &= (1-e^{-4\pi}) \psi(e^{-2\pi}) \psi(e^{-\pi}) \\ &= (1-e^{-4\pi}) a^2 2^{-15/8} e^{3\pi/8}, \\ \Gamma_{-e^{-2\pi}} \Big({\frac}{1}{4}\Big) \Gamma_{-e^{-2\pi}}\Big({\frac}{3}{4}\Big) &= (1+e^{-2\pi}) \psi(-e^{-\pi}) \psi(-e^{-\pi/2}) \\ &= (1+e^{-2\pi}) a^2 2^{-19/16} e^{3\pi/16} (\sqrt{2}-1)^{1/4}, \\ \Gamma_{-e^{-4\pi}} \Big({\frac}{1}{4}\Big) \Gamma_{-e^{-4\pi}}\Big({\frac}{3}{4}\Big) &= (1+e^{-4\pi}) \psi(-e^{-2\pi}) \psi(-e^{-\pi}) \\ &= (1+e^{-4\pi}) a^2 2^{-27/16} e^{3\pi/8}. \end{split}$$ Note that the first two identities in the previous list were first obtained by Mező [@Mezo-2]. Let $m=3$ in Theorem \[2-powers\].
Then $$\begin{split} \Gamma_{e^{-4\pi}} \Big({\frac}{1}{8}\Big) \Gamma_{e^{-4\pi}}\Big({\frac}{3}{8}\Big) \Gamma_{e^{-4\pi}} \Big({\frac}{5}{8}\Big) \Gamma_{e^{-4\pi}}\Big({\frac}{7}{8}\Big) &= (1-e^{-4\pi})^2 \psi^2(e^{-2\pi}) \psi(e^{-\pi}) \psi(e^{-\pi/2}) \\ &= (1-e^{-4\pi})^2 a^4 2^{-57/16} e^{11\pi/16} (\sqrt{2}+1)^{1/4}, \end{split}$$ and similarly $$\begin{split} \Gamma_{-e^{-4\pi}} \Big({\frac}{1}{8}\Big) \Gamma_{-e^{-4\pi}}\Big({\frac}{3}{8}\Big) \Gamma_{-e^{-4\pi}} \Big({\frac}{5}{8}\Big) \Gamma_{-e^{-4\pi}}\Big({\frac}{7}{8}\Big) &= (1+e^{-4\pi})^2 \psi^2(-e^{-2\pi}) \psi(-e^{-\pi}) \psi(-e^{-\pi/2}) \\ &= (1+e^{-4\pi})^2 a^4 2^{-49/16} e^{11\pi/16} (\sqrt{2}-1)^{1/4}. \end{split}$$ A new proof for Jacobi’s identity (\[MainProd\]) ================================================ We shall prove (\[SineProd\]), which, as noticed before, is an equivalent form of (\[MainProd\]). We will employ the following result. [@Bachraoui-3] \[master\] Let $n$ be a positive integer and let $f(u)$ be an entire function such that $$f(u+\pi) = - f(u)\quad \text{and\ } f{\left(}u+{\frac}{\pi\tau}{n} {\right)}= (-1)^n q^{{\frac}{-1}{n}} e^{-2iu} f(u).$$ Then for all complex numbers $x_1, x_2, \ldots, x_{n+1}$ we have: $$\sum_{j=1}^{n+1} {\frac}{ {\theta}_1 \big((n-1)x_j - x_1-x_2-\cdots - x_{j-1} - x_{j+1}-x_{j+2}- \cdots - x_{n+1} \mid \tau \big) f(x_j)} { {\displaystyle\prod_{\substack{k=1\\ k\not= j}}^n} {\theta}_1 {\left(}x_j-x_k \bigm| {\frac}{\tau}{n} {\right)}} = 0.$$ Note that by virtue of (\[sine-cosine-theta\]), we can check that the desired formula (\[SineProd\]) is equivalent to $$\theta_1 \big(z\mid \frac{\tau'}{n} \big) \theta_1 \left(z+\frac{\pi}{n} \bigm| \frac{\tau'}{n} \right) \theta_1 \left(z+\frac{2 \pi}{n} \bigm| \frac{\tau'}{n} \right) \cdots \theta_1 \left(z+\frac{(n-1)\pi}{n} \bigm| \frac{\tau'}{n} \right)$$ $$\label{Equiv-SineProd} = q^{\frac{(n-1)(n+1)}{12}} \frac{(q;q^2)_{\infty}^2}{(q^n;q^{2n})_{\infty}^{2n}} \frac{\theta_1^n \big( \frac{\pi}{2}\mid \frac{\tau'}{n} \big)}{\theta_1 \big(\frac{\pi}{2}\mid \tau'\big)} \theta_1(nz \mid \tau').$$ Next observe that the
sum in Theorem \[master\] is equivalent to $$\begin{aligned} \label{help-prod-0} {\theta}_1\big( (n-1)x_1-x_2-\ldots-x_{n+1}\mid\tau'\big) f(x_1) & \prod_{\substack{1\not=j\leq n\\ j<k\leq n+1}}{\theta}_1(x_j-x_k\bigm| {\frac}{\tau'}{n}) \nonumber \\ - {\theta}_1\big( (n-1)x_2-x_3-\ldots-x_{n+1}-x_1 \mid\tau' \big) f(x_2) & \prod_{\substack{2\not=j\leq n\\ j<k\leq n+1}}{\theta}_1(x_j-x_k\bigm| {\frac}{\tau'}{n}) \nonumber \\ + {\theta}_1\big( (n-1)x_3-\ldots-x_{n+1}-x_1-x_2\mid\tau' \big) f(x_3)& \prod_{\substack{3\not=j\leq n\\ j<k\leq n+1}} {\theta}_1(x_j-x_k\bigm| {\frac}{\tau'}{n}) \\ + \ldots + (-1)^n {\theta}_1\big( (n-1)x_{n+1}-x_1-\ldots-x_n\mid\tau' \big) f(x_{n+1}) & \prod_{\substack{n+1\not=j\leq n\\ j<k\leq n+1}}{\theta}_1(x_j-x_k\bigm| {\frac}{\tau'}{n}) \nonumber \\ = 0. \nonumber\end{aligned}$$ Let $(n-1) x_{n+1} = x_1+x_2+\ldots + x_n$. Then the last term in (\[help-prod-0\]) vanishes and for all $j=1,\ldots, n$ $$(n-1)x_j - x_{j+1}-x_{j+2}-\ldots - x_{n+1}-x_1-\ldots - x_{j-1} = n x_j - (x_1+\ldots + x_n) - x_{n+1} = n (x_j-x_{n+1}).$$ Then (\[help-prod-0\]) becomes $$\begin{aligned} \label{help-prod-1} {\theta}_1\big( n(x_1-x_{n+1})\mid\tau' \big) f(x_1) & \prod_{\substack{1\not=j\leq n\\ j<k\leq n+1}}{\theta}_1(x_j-x_k\bigm| {\frac}{\tau'}{n}) \nonumber \\ - {\theta}_1 \big( n(x_2-x_{n+1})\mid\tau' \big) f(x_2)& \prod_{\substack{2\not=j\leq n\\ j<k\leq n+1}}{\theta}_1(x_j-x_k\bigm| {\frac}{\tau'}{n}) \nonumber \\ + {\theta}_1 \big( n(x_3-x_{n+1})\mid\tau' \big) f(x_3) & \prod_{\substack{3\not=j\leq n\\ j<k\leq n+1}} {\theta}_1(x_j-x_k\bigm| {\frac}{\tau'}{n}) \\ + \ldots + (-1)^{n-1}{\theta}_1 \big( n(x_n-x_{n+1})\mid\tau' \big) f(x_n) & \prod_{\substack{n\not=j\leq n\\ j<k\leq n+1}}{\theta}_1(x_j-x_k\bigm| {\frac}{\tau'}{n}) \nonumber \\ = 0. 
\nonumber\end{aligned}$$ Now assume, for $3\leq k\leq n+1$, that $$x_k-x_3 = {\frac}{(3-k)\pi}{n}.$$ Then for all $3\leq j<k \leq n+1$ $${\theta}_1 \big( n(x_k - x_j) \mid\tau' \big) = {\theta}_1\big( (j-k)\pi \mid\tau' \big) = 0.$$ Thus, the formula (\[help-prod-1\]) after some simplification boils down to $$\begin{split} {\theta}_1 \big(n(x_1 -x_3)+(n-2)\pi \mid\tau' \big) & f(x_1) {\theta}_1\left( x_2-x_3 \bigm| {\frac}{\tau'}{n} \right) {\theta}_1 \left( x_2-x_3+ {\frac}{\pi}{n} \bigm| {\frac}{\tau'}{n} \right) \\ &\cdots {\theta}_1\big( x_2-x_3+{\frac}{(n-2)\pi}{n} \bigm| {\frac}{\tau'}{n} \big) \\ = \quad {\theta}_1 \big(n(x_2 - x_3) +(n-2)\pi \mid\tau'\big) & f(x_2) {\theta}_1\left( x_1-x_3 \bigm| {\frac}{\tau'}{n} \right) {\theta}_1\left( x_1-x_3+{\frac}{\pi}{n} \bigm| {\frac}{\tau'}{n} \right) \\ & \cdots {\theta}_1\left( x_1-x_3+{\frac}{(n-2)\pi}{n} \bigm| {\frac}{\tau'}{n} \right). \end{split}$$ [**Case 1: **]{} $n$ is odd. In this case it is easily seen with the help of (\[theta-1\]) and (\[Transf\]) that the function $f(u)= {\theta}_1 \left( u\bigm| {\frac}{\tau'}{n} \right)$ satisfies the conditions of Theorem \[master\] and with this choice of $f(u)$ the foregoing identity becomes $$\label{help-prod-2} \begin{split} &{\theta}_1 \big(n(x_1 - x_3) +(n-2)\pi\mid\tau'\big) {\theta}_1 \left( x_1 \bigm| {\frac}{\tau'}{n} \right) {\theta}_1\left( x_2-x_3 \bigm| {\frac}{\tau'}{n} \right) \\ & \qquad \qquad {\theta}_1 \left( x_2-x_3+ {\frac}{\pi}{n} \bigm| {\frac}{\tau'}{n} \right) \cdots {\theta}_1\big( x_2-x_3+{\frac}{(n-2)\pi}{n} \bigm| {\frac}{\tau'}{n} \big) \\ = & \quad {\theta}_1 \big(n(x_2 - x_3) +(n-2)\pi\mid\tau'\big) {\theta}_1 \left( x_2 \bigm| {\frac}{\tau'}{n} \right) {\theta}_1\left( x_1-x_3 \bigm| {\frac}{\tau'}{n} \right) \\ & \qquad \qquad{\theta}_1\left( x_1-x_3+{\frac}{\pi}{n} \bigm| {\frac}{\tau'}{n} \right) \cdots {\theta}_1\left( x_1-x_3+{\frac}{(n-2)\pi}{n} \bigm| {\frac}{\tau'}{n} \right). 
\end{split}$$ Now dividing in (\[help-prod-2\]) by $x_2-x_3$ and then letting $x_3\to x_2$ implies $$\begin{aligned} & {\theta}_1\big(n(x_1-x_2) \mid\tau'\big) {\theta}_1\left( x_1\bigm| {\frac}{\tau'}{n} \right) {\theta}_1'\left( 0\bigm| {\frac}{\tau'}{n} \right) {\theta}_1\left( {\frac}{\pi}{n}\bigm| {\frac}{\tau'}{n} \right) \\ & \quad\quad \quad {\theta}_1\left( {\frac}{2\pi}{n}\bigm| {\frac}{\tau'}{n} \right) \cdots {\theta}_1\left( {\frac}{(n-2)\pi}{n}\bigm| {\frac}{\tau'}{n} \right) \\ & = \quad n \ {\theta}_1'(0\mid\tau') {\theta}_1\left(x_2\bigm| {\frac}{\tau'}{n}\right){\theta}_1\left(x_1-x_2\bigm| {\frac}{\tau'}{n}\right) {\theta}_1\left(x_1-x_2+{\frac}{\pi}{n}\bigm| {\frac}{\tau'}{n}\right) \\ & \qquad\qquad\qquad {\theta}_1\left(x_1-x_2+{\frac}{2\pi}{n}\bigm| {\frac}{\tau'}{n}\right) \cdots {\theta}_1\left(x_1-x_2+{\frac}{(n-2)\pi}{n}\bigm| {\frac}{\tau'}{n}\right)\end{aligned}$$ Next, letting $x_1={\frac}{\pi}{n}$, this gives $$\begin{aligned} & {\theta}_1'\left( 0\bigm| {\frac}{\tau'}{n} \right) {\theta}_1(nx_2\mid\tau') {\theta}_1^2 \left( {\frac}{\pi}{n}\bigm| {\frac}{\tau'}{n} \right) {\theta}_1 \left( {\frac}{2\pi}{n}\bigm| {\frac}{\tau'}{n} \right) \cdots {\theta}_1 \left( {\frac}{(n-2)\pi}{n}\bigm| {\frac}{\tau'}{n} \right) \\ & = \quad\quad - n\ {\theta}_1'(0\mid\tau'){\theta}_1\left(-x_2\bigm| {\frac}{\tau'}{n}\right) {\theta}_1\left(-x_2+{\frac}{\pi}{n}\bigm| {\frac}{\tau'}{n}\right) \\ & \qquad\qquad\qquad {\theta}_1\left(-x_2+{\frac}{2\pi}{n}\bigm| {\frac}{\tau'}{n}\right) \cdots {\theta}_1\left(-x_2+{\frac}{(n-1)\pi}{n}\bigm| {\frac}{\tau'}{n}\right)\end{aligned}$$ which by substituting $z=-x_2$ and rearranging yields $$\begin{aligned} \label{help-prod-3} {\theta}_1\left( z\bigm| {\frac}{\tau'}{n}\right) {\theta}_1\left( z+{\frac}{\pi}{n}\bigm| {\frac}{\tau'}{n}\right) \cdots {\theta}_1\left( z+{\frac}{(n-1)\pi}{n}\bigm| {\frac}{\tau'}{n}\right) \\ = {\frac}{{\theta}_1'\left(0\bigm|{\frac}{\tau'}{n}\right)}{n {\theta}_1'(0\mid \tau')} 
{\theta}_1^2\left({\frac}{\pi}{n}\bigm|{\frac}{\tau'}{n}\right) {\theta}_1\left({\frac}{2\pi}{n}\bigm|{\frac}{\tau'}{n}\right) \cdots {\theta}_1\left({\frac}{(n-2)\pi}{n}\bigm|{\frac}{\tau'}{n}\right) {\theta}_1(nz\mid\tau'). \nonumber\end{aligned}$$ Thus by virtue of the identities (\[Equiv-SineProd\]) and (\[help-prod-3\]) we will be done if we show that $$q^{\frac{(n-1)(n+1)}{12}} \frac{(q;q^2)_{\infty}^2}{(q^n;q^{2n})_{\infty}^{2n}} \frac{\theta_1^n \big( \frac{\pi}{2}\mid \frac{\tau'}{n} \big)}{\theta_1 \big(\frac{\pi}{2}\mid \tau'\big)}$$ $$= {\frac}{{\theta}_1'\left(0\bigm|{\frac}{\tau'}{n}\right)}{n {\theta}_1'(0\mid \tau')} {\theta}_1^2\left({\frac}{\pi}{n}\bigm|{\frac}{\tau'}{n}\right) {\theta}_1\left({\frac}{2\pi}{n}\bigm|{\frac}{\tau'}{n}\right) \cdots {\theta}_1\left({\frac}{(n-2)\pi}{n}\bigm|{\frac}{\tau'}{n}\right),$$ or, equivalently, $$q^{\frac{(n-1)(n+1)}{12}} \frac{(q;q^2)_{\infty}^2}{(q^n;q^{2n})_{\infty}^{2n}}$$ $$\label{help-prod-4} = {\frac}{1}{n}{\frac}{{\theta}_1'\left(0\bigm|{\frac}{\tau'}{n}\right)} {{\theta}_1'(0\mid \tau')} {\frac}{\theta_1 \big(\frac{\pi}{2}\mid \tau'\big)}{\theta_1 \big( \frac{\pi}{2}\mid \frac{\tau'}{n} \big)} {\frac}{{\theta}_1^2\left({\frac}{\pi}{n}\bigm|{\frac}{\tau'}{n}\right) \prod_{j=2}^{n-2}{\theta}_1\left({\frac}{j\pi}{n}\bigm| {\frac}{\tau'}{n}\right)} {\theta_1^{n-1} \big( \frac{\pi}{2}\mid \frac{\tau'}{n} \big)}.$$ To establish (\[help-prod-4\]), we use Jacobi’s imaginary transformation (\[ImTransf\]) and the infinite product representation (\[theta-product\]) and proceed as follows.
For all $j=1,\ldots, n-2$ we have $$\begin{split} {\theta}_1\left({\frac}{j\pi}{n}\bigm| {\frac}{\tau'}{n}\right) &= \left(-i {\frac}{\tau'}{n} \right)^{-{\frac}{1}{2}}(-i) e^{{\frac}{i j^2\pi\tau}{n}} {\theta}_1(j\pi\tau \mid n\tau) \\ &= \left(-i {\frac}{\tau'}{n} \right)^{-{\frac}{1}{2}}(-i) e^{{\frac}{i j^2\pi\tau}{n}} i q^{{\frac}{n}{4}} e^{-i j\pi\tau}(q^{2n} e^{-2j i \pi\tau}, e^{2j i \pi\tau}, q^{2n}; q^{2n})_{\infty} \\ &= \left(-i {\frac}{\tau'}{n} \right)^{-{\frac}{1}{2}} q^{{\frac}{j^2}{n}+{\frac}{n}{4}-j}(q^{2j},q^{2n-2j},q^{2n};q^{2n})_{\infty}, \end{split}$$ from which we get $$\begin{split} {\theta}_1^2\left({\frac}{\pi}{n}\bigm| {\frac}{\tau'}{n}\right) \prod_{j=2}^{n-2}{\theta}_1\left({\frac}{j\pi}{n}\bigm| {\frac}{\tau'}{n}\right) &= \left(-i {\frac}{\tau'}{n} \right)^{-{\frac}{n-1}{2}} q^N \prod_{j=1}^{{\frac}{n-1}{2}}(q^{2j},q^{2n-2j},q^{2n};q^{2n})_{\infty}^2 \\ &= \left(-i {\frac}{\tau'}{n} \right)^{-{\frac}{n-1}{2}} q^N {\frac}{(q^2;q^2)_{\infty}^2}{(q^{2n};q^{2n})_{\infty}^2} (q^{2n};q^{2n})_{\infty}^{n-1} \\ &= \left(-i {\frac}{\tau'}{n} \right)^{-{\frac}{n-1}{2}} q^N (q^2;q^2)_{\infty}^2 (q^{2n};q^{2n})_{\infty}^{n-3}, \end{split}$$ where $$N= {\frac}{1}{n}+{\frac}{n}{4}-1 + {\frac}{1+2^2+\ldots+ (n-2)^2}{n} + {\frac}{n(n-2)}{4}-{\frac}{(n-2)(n-1)}{2} = {\frac}{(n-2)(n-1)}{12}.$$ Similarly, $$\begin{split} {\theta}_1\left({\frac}{\pi}{2}\bigm| {\frac}{\tau'}{n}\right) &= \left(-i {\frac}{\tau'}{n} \right)^{-{\frac}{1}{2}}(-i) e^{{\frac}{i n\pi\tau}{4}} {\theta}_1\left({\frac}{n\pi\tau}{2} \mid n\tau \right) \\ &= \left(-i {\frac}{\tau'}{n} \right)^{-{\frac}{1}{2}}(-i) e^{{\frac}{i n\pi\tau}{4}} i q^{{\frac}{n}{4}} e^{-{\frac}{i n\pi\tau}{2}} (q^{2n} e^{-i n \pi\tau}, e^{i n \pi\tau}, q^{2n};q^{2n})_{\infty} \\ &= \left(-i {\frac}{\tau'}{n} \right)^{-{\frac}{1}{2}} (q^n,q^n,q^{2n};q^{2n})_{\infty}, \end{split}$$ from which we derive $${\theta}_1^{n-1}\left({\frac}{\pi}{2}\bigm| {\frac}{\tau'}{n}\right) = \left(-i {\frac}{\tau'}{n} 
\right)^{-{\frac}{n-1}{2}} (q^n;q^{2n})_{\infty}^{2n-2} (q^{2n};q^{2n})_{\infty}^{n-1}.$$ From the above we have $$\label{help-prod-5} {\frac}{{\theta}_1^2\left({\frac}{\pi}{n}\bigm| {\frac}{\tau'}{n}\right) \prod_{j=2}^{n-2}{\theta}_1\left({\frac}{j\pi}{n}\bigm| {\frac}{\tau'}{n}\right)} {{\theta}_1^{n-1}\left({\frac}{\pi}{2}\bigm| {\frac}{\tau'}{n}\right)} = q^{{\frac}{(n-2)(n-1)}{12}} {\frac}{(q^2;q^2)_{\infty}^2}{(q^{2n};q^{2n})_{\infty}^2 (q^n;q^{2n})_{\infty}^{2n-2}}.$$ Furthermore, with the help of (\[key-derivative\]) we find $$\begin{aligned} \label{help-prod-6} {\frac}{1}{n}{\frac}{{\theta}_1'\left(0\bigm|{\frac}{\tau'}{n}\right)} {{\theta}_1'(0\mid \tau')} {\frac}{\theta_1 \big(\frac{\pi}{2}\mid \tau'\big)}{\theta_1 \big( \frac{\pi}{2}\mid \frac{\tau'}{n} \big)} &= {\frac}{1}{n} {\frac}{2 (-i n\tau)^{{\frac}{3}{2}} q^{{\frac}{n}{4}}(q^{2n};q^{2n})_{\infty}^3 (-i\tau')^{-{\frac}{1}{2}} (q;q^2)_{\infty}^2 (q^2;q^2)_{\infty}} {2 (-i \tau)^{{\frac}{3}{2}} q^{{\frac}{1}{4}}(q^{2};q^{2})_{\infty}^3 \big(-i{\frac}{\tau'}{n} \big)^{-{\frac}{1}{2}} (q^n;q^{2n})_{\infty}^2 (q^{2n};q^{2n})_{\infty}} \nonumber \\ &= q^{{\frac}{n-1}{4}} {\frac}{(q^{2n};q^{2n})_{\infty}^2 (q;q^2)_{\infty}^2}{(q^n;q^{2n})_{\infty}^2 (q^2;q^2)_{\infty}^2}. \end{aligned}$$ Finally, multiply (\[help-prod-5\]) and (\[help-prod-6\]) and simplify to deduce the desired formula (\[help-prod-4\]).\ [**Case 2: **]{} If $n$ is even, take $f(u)= {\theta}_1\left(u+{\frac}{\pi}{2}\bigm| {\frac}{\tau'}{n}\right)$ and proceed in exactly the same way to derive the result. Proof of Theorem \[Gauss-q-Gamma\] ================================== We start by proving that (\[q-anlog-Gauss\]) is the $q$-analogue of identity (\[Gauss-Gamma-special\]).
Assuming (\[q-anlog-Gauss\]) and the basic fact that $\lim_{q\to 1} \Gamma_q(1/2) = \Gamma (1/2) = \sqrt{\pi}$, it will be enough to show that $$\label{key-limit} \lim_{q\to 1}{\frac}{(q;q^2)_{\infty}^{n-1} (q^2;q^2)_{\infty}}{(q^{{\frac}{2}{n}};q^{{\frac}{2}{n}})_{\infty}} = {\frac}{2^{{\frac}{n-1}{2}}}{\sqrt{n}}.$$ Note that from (\[SineProd\]) we have $$\prod_{k=1}^{n-1}\sin_{q^n}\pi \left(z+{\frac}{k}{n} \right) = q^{\frac{(n-1)(n+1)}{12}} \frac{(q;q^2)_{\infty}^2}{(q^n;q^{2n})_{\infty}^{2n}} {\frac}{\sin_q n\pi z}{\sin_{q^n}\pi z}.$$ Taking limits as $z\to 0$ on both sides and using (\[q-sin-derive\]) gives $$\label{help-gam-prod-1} \begin{split} \prod_{k=1}^{n-1} \sin_{q^n} {\frac}{k\pi}{n} &= q^{\frac{(n-1)(n+1)}{12}} \frac{(q;q^2)_{\infty}^2}{(q^n;q^{2n})_{\infty}^{2n}} {\frac}{n\pi \sin_q' 0}{\pi \sin_{q^n}'0} \\ &= q^{\frac{(n-1)(n+1)}{12}} \frac{(q;q^2)_{\infty}^2}{(q^n;q^{2n})_{\infty}^{2n}} q^{-{\frac}{n-1}{4}} {\frac}{ (q^2;q^2)_{\infty}^2 (q^n;q^{2n})_{\infty}^2}{ (q;q^2)_{\infty}^2 (q^{2n};q^{2n})_{\infty}^2} \\ &= q^{{\frac}{(n-1)(n-2)}{12}} {\frac}{ (q^2;q^2)_{\infty}^2 }{ (q^n;q^{2n})_{\infty}^{2n-2} (q^{2n};q^{2n})_{\infty}^2} . \end{split}$$ Now replace $q$ by $q^{{\frac}{1}{n}}$ in (\[help-gam-prod-1\]), then take limits as $q\to 1$, and finally use the well-known trigonometric formula $$\prod_{k=1}^{n-1} \sin {\frac}{k\pi}{n} = {\frac}{n}{2^{n-1}}$$ to deduce that $$\lim_{q\to 1} {\frac}{(q^{{\frac}{2}{n}}; q^{{\frac}{2}{n}})_{\infty}^2}{(q;q^2)_{\infty}^{2n-2} (q^2;q^2)_{\infty}^2} = {\frac}{n}{2^{n-1}},$$ which implies (\[key-limit\]). We now establish the formula (\[q-anlog-Gauss\]).
By (\[sine-cosine-q-gamma\]) and (\[help-gam-prod-1\]) we get $$\prod_{k=1}^{n-1} q^{{\frac}{n}{4}} \Gamma_{q^{2n}}^2\left({\frac}{1}{2}\right) {\frac}{ q^{n\Big({\frac}{k}{n}\big({\frac}{k}{n}-1\big) \Big)} } { \Gamma_{q^{2n}} \left({\frac}{k}{n}\right) \Gamma_{q^{2n}} \left(1-{\frac}{k}{n}\right) } = q^{{\frac}{(n-1)(n-2)}{12}} {\frac}{ (q^2;q^2)_{\infty}^2 }{ (q^n;q^{2n})_{\infty}^{2n-2} (q^{2n};q^{2n})_{\infty}^2},$$ which after rearranging and simplifying gives $$\left( \Gamma_{q^{2n}} \Big({\frac}{1}{2}\Big)\right)^{2n-2} = {\frac}{ (q^2;q^2)_{\infty}^2 }{ (q^n;q^{2n})_{\infty}^{2n-2} (q^{2n};q^{2n})_{\infty}^2} \prod_{k=1}^{n-1}\Gamma_{q^{2n}} \left({\frac}{k}{n}\right) \Gamma_{q^{2n}} \left(1-{\frac}{k}{n}\right),$$ or equivalently, $$\left(\prod_{k=1}^{n-1}\Gamma_{q^{2n}} \left({\frac}{k}{n}\right) \right)^2 = \left( \Gamma_{q^{2n}} \Big({\frac}{1}{2}\Big)\right)^{2n-2} {\frac}{ (q^n;q^{2n})_{\infty}^{2n-2} (q^{2n};q^{2n})_{\infty}^2}{ (q^2;q^2)_{\infty}^2 }.$$ Now replace $q$ by $q^{{\frac}{1}{2n}}$ in the foregoing identity to obtain $$\prod_{k=1}^{n-1}\Gamma_{q} \Big({\frac}{k}{n}\Big) = \left( \Gamma_{q} \Big({\frac}{1}{2}\Big)\right)^{n-1} {\frac}{ (q^{{\frac}{1}{2}};q)_{\infty}^{n-1} (q;q)_{\infty}}{ (q^{{\frac}{1}{n}};q^{{\frac}{1}{n}})_{\infty} }.$$ This completes the proof.
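As a numerical sanity check (not part of the proof), one can verify the identity just proved for $n=3$, Theorem \[2-powers\] with $m=2$, and the explicit evaluation of $\Gamma_q(1/4)\Gamma_q(3/4)$ at $q=e^{-2\pi}$ built from Berndt's $\psi$-values. The sketch below assumes the `mpmath` library:

```python
# Numerical checks of the q-Gauss product, of Theorem [2-powers] for m = 2,
# and of one explicit value at q = exp(-2*pi); assumes the mpmath library.
from mpmath import mp, mpf, exp, pi, sqrt, gamma, qp

mp.dps = 30

def qgamma(z, q):
    # Gamma_q(z) = (q; q)_inf / (q^z; q)_inf * (1 - q)^(1 - z)
    return qp(q, q) / qp(q**z, q) * (1 - q)**(1 - z)

def psi(q):
    # Ramanujan's psi(q) = (q^2; q^2)_inf / (q; q^2)_inf
    return qp(q**2, q**2) / qp(q, q**2)

# The q-analogue of Gauss's formula for n = 3, in q-Pochhammer form:
q, n = mpf('0.4'), 3
lhs = qgamma(mpf(1) / 3, q) * qgamma(mpf(2) / 3, q)
rhs = (qgamma(mpf(1) / 2, q)**(n - 1) * qp(sqrt(q), q)**(n - 1) * qp(q, q)
       / qp(q**(mpf(1) / n), q**(mpf(1) / n)))
assert abs(lhs - rhs) < mpf('1e-20')

# Theorem [2-powers] with m = 2: Gamma_q(1/4)Gamma_q(3/4) = (1-q)psi(q^(1/4))psi(q^(1/2)):
assert abs(qgamma(mpf(1) / 4, q) * qgamma(mpf(3) / 4, q)
           - (1 - q) * psi(q**mpf('0.25')) * psi(sqrt(q))) < mpf('1e-20')

# Explicit value at q = exp(-2*pi), using Berndt's psi-evaluations:
q = exp(-2 * pi)
a = pi**mpf('0.25') / gamma(mpf(3) / 4)
closed = (1 - q) * a**2 * mpf(2)**(mpf(-17) / 16) * exp(3 * pi / 16) * (sqrt(2) + 1)**mpf('0.25')
assert abs(qgamma(mpf(1) / 4, q) * qgamma(mpf(3) / 4, q) - closed) < mpf('1e-20')
```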
Proof of Theorem \[q-short-prod\] ================================= By an appeal to Theorem \[Gauss-q-Gamma\] and the Möbius inversion formula, we have $$\begin{split} P_q(n) &= \prod_{d\mid n}\left(\left(\Gamma_q\left({\frac}{1}{2}\right)\right)^{d-1} {\frac}{(q^{{\frac}{1}{2}};q)_{\infty}^{d-1} (q;q)_{\infty}}{(q^{{\frac}{1}{d}}; q^{{\frac}{1}{d}})_{\infty}} \right)^{\mu\left({\frac}{n}{d}\right)}\\ &= (q;q)_{\infty}^{\sum_{d\mid n}\mu\left({\frac}{n}{d}\right)} {\frac}{\left(\Gamma_q\left({\frac}{1}{2}\right)\right)^{\sum_{d\mid n}d \mu\left({\frac}{n}{d}\right)- \sum_{d\mid n}\mu\left({\frac}{n}{d}\right)} (q^{{\frac}{1}{2}};q)_{\infty}^{\sum_{d\mid n}d\mu\left({\frac}{n}{d}\right)}} { \prod_{d\mid n}(q^{{\frac}{1}{d}}; q^{{\frac}{1}{d}})_{\infty}^{\mu\left({\frac}{n}{d}\right)}}, \end{split}$$ which with the help of the basic facts $$\sum_{d\mid n}\mu\left({\frac}{n}{d}\right)=0\quad \text{and\quad} \sum_{d\mid n}d \mu\left({\frac}{n}{d}\right) = \varphi(n) \quad (n>1)$$ gives the desired formula. Proof of Theorem \[2-powers\] ============================= Let $m>1$ be an integer. The first identity is clear from the definition. As to the second identity, we have by Theorem \[q-short-prod\], (\[q-gamma-half\]), and (\[q-basics\]) $$\begin{split} P_q(2^m) &= \Gamma_{q}^{2^{m-1}}\Big({\frac}{1}{2}\Big) {\frac}{(q^{{\frac}{1}{2}};q)_{\infty}^{2^{m-1}} (q^{{\frac}{1}{2^{m-1}}};q^{{\frac}{1}{2^{m-1}}})_{\infty}}{(q^{{\frac}{1}{2^m}};q^{{\frac}{1}{2^m}})_{\infty}} \\ &= (1-q)^{2^{m-2}} {\frac}{(q;q)_{\infty}^{2^{m-1}} (q^{{\frac}{1}{2^{m-1}}};q^{{\frac}{1}{2^{m-1}}})_{\infty}}{(q^{{\frac}{1}{2^m}};q^{{\frac}{1}{2^m}})_{\infty}} \\ &= (1-q)^{2^{m-2}} {\frac}{(q;q)_{\infty}^{2^{m-1}}}{(q^{{\frac}{1}{2^m}};q^{{\frac}{1}{2^{m-1}}})_{\infty}}. 
\end{split}$$ Then we will be done if we show that $${\frac}{(q;q)_{\infty}^{2^{m-1}}}{(q^{{\frac}{1}{2^m}};q^{{\frac}{1}{2^{m-1}}})_{\infty}} = \psi(q^{{\frac}{1}{2^{m}}}) \prod_{k=1}^{m-1} \psi^{2^{m-1-k}}(q^{{\frac}{1}{2^{k}}}).$$ We proceed by induction on $m>1$. If $m=2$, then $$\begin{split} {\frac}{(q;q)_{\infty}^2}{(q^{{\frac}{1}{4}};q^{{\frac}{1}{2}})_{\infty}} &= {\frac}{(q;q)_{\infty} (-q^{{\frac}{1}{2}};q^{{\frac}{1}{2}})_{\infty} (q^{{\frac}{1}{2}};q^{{\frac}{1}{2}})_{\infty}}{(q^{{\frac}{1}{4}};q^{{\frac}{1}{2}})_{\infty}} \\ &= {\frac}{(q;q)_{\infty}}{(q^{{\frac}{1}{2}};q)_{\infty}} {\frac}{(q^{{\frac}{1}{2}};q^{{\frac}{1}{2}})_{\infty}}{(q^{{\frac}{1}{4}};q^{{\frac}{1}{2}})_{\infty}} \\ &= \psi(q^{{\frac}{1}{4}}) \psi(q^{{\frac}{1}{2}}), \end{split}$$ as required for the base case. Now suppose the induction hypothesis holds for some $m>1$. Then $$\begin{split} {\frac}{(q;q)_{\infty}^{2^{m}}}{(q^{{\frac}{1}{2^{m+1}}};q^{{\frac}{1}{2^{m}}})_{\infty}} &= (q;q)_{\infty}^{2^{m-1}} {\frac}{(q;q)_{\infty}^{2^{m-1}}}{ (q^{{\frac}{1}{2^{m+1}}};q^{{\frac}{1}{2^{m}}})_{\infty}} \\ &= {\frac}{(q;q)_{\infty}^{2^{m-1}}}{(q^{{\frac}{1}{2}};q)_{\infty}^{2^{m-1}}} {\frac}{(q^{{\frac}{1}{2}};q^{{\frac}{1}{2}})_{\infty}^{2^{m-1}}}{\big((q^{{\frac}{1}{2}})^{{\frac}{1}{2^{m}}};(q^{{\frac}{1}{2}})^{{\frac}{1}{2^{m-1}}}\big)_{\infty}} \\ &= \psi^{2^{m-1}}(q^{{\frac}{1}{2}}) \psi(q^{{\frac}{1}{2^{m+1}}}) \prod_{k=1}^{m-1} \psi^{2^{m-1-k}}(q^{{\frac}{1}{2^{k+1}}}) \\ &= \psi(q^{{\frac}{1}{2^{m+1}}}) \prod_{k=1}^{m} \psi^{2^{m-k}}(q^{{\frac}{1}{2^{k}}}). \end{split}$$ This completes the proof. [**Acknowledgment.**]{} The authors are grateful to the referee for valuable comments and interesting suggestions. [99]{} S. Abo Touk, Z. Al Houchan, and M. El Bachraoui, *Proofs for two q-trigonometric identities of Gosper*, J. Math. Anal. Appl. 456 (2017), 662–670. M. El Bachraoui, *Confirming a $q$-trigonometric conjecture of Gosper*, Proc. Amer. Math. Soc. 146:4 (2018), 1619–1625. M.
El Bachraoui, *Proving some identities of Gosper on q-trigonometric functions*, Proc. Amer. Math. Soc. In press, DOI: https://doi.org/10.1090/proc/14084. M. El Bachraoui, *Solving some q-trigonometric conjectures of Gosper*, J. Math. Anal. Appl. 460 (2018), 610–617. N.D. Baruah and N. Saikia, *Two parameters for Ramanujan’s theta-functions and their explicit values*, Rocky Mountain J. Math. 37:6 (2007), 1747–1790. B. C. Berndt, *Ramanujan’s notebooks, Part [V]{}*, Springer-Verlag, 1994. B. C. Berndt, *Number theory in the spirit of Ramanujan*, Student Mathematical Library, 2006. I. Ben-Ari, D. Hay, and A. Roitershtein, *On Wallis-type products and Pólya’s urn schemes*, Amer. Math. Monthly 121 (2014), 422–432. M. Chamberland and A. Straub, *On gamma quotients and infinite products*, Adv. in Appl. Math. 51 (2013), 546–562. A. Enneper, *Elliptische Functionen. Theorie und Geschichte*, Halle a.S., L. Nebert, 1876. G. Gasper and M. Rahman, *Basic Hypergeometric Series*, Cambridge University Press, 2004. R. W. Gosper, *Experiments and discoveries in $q$-trigonometry*, in Symbolic Computation, Number Theory, Special Functions, Physics and Combinatorics (F. G. Garvan and M. E. H. Ismail, eds.), Kluwer, Dordrecht, Netherlands, 2001, pp. 79–105. F. H. Jackson, *A generalization of the function $\Gamma(n)$ and $x^n$*, Proc. Roy. Soc. London 74 (1904), 64–72. F. H. Jackson, *The basic gamma-function and elliptic functions*, Proc. Roy. Soc. London A 76 (1905), 127–144. C. Jacobi, *Suites des notices sur les fonctions elliptiques*, Crelle J., tome 2 (1828), 303–310. D. F. Lawden, *Elliptic Functions and Applications*, Springer-Verlag, 1989. Z.-G. Liu, *A theta function identity and its implications*, Trans. Amer. Math. Soc. 357:2 (2005), 825–835. Z.-G. Liu, *An addition formula for the Jacobian theta function and its applications*, Adv. Math. 212:1 (2007), 389–406. M. Mahmoud and R. P.
Agarwal, *Hermite’s formula for q-gamma function*, Mathematical Inequalities and Applications 19:3 (2016), 841–851. C. Martin, *A product of gamma function values at fractions with the same denominator*, Preprint (available at http://arxiv.org/abs/0907.4384). I. Mező, *Duplication formulae involving Jacobi theta functions and Gosper’s $q$-trigonometric functions*, Proc. Amer. Math. Soc. 141:7 (2013), 2401–2410. I. Mező, *Several special values of Jacobi theta functions*, Preprint (available at https://arxiv.org/abs/1106.2703). A. Nijenhuis, *Short gamma products with simple values*, Amer. Math. Monthly 117 (2010), 733–737. A. S. Nimbran, *Interesting infinite products of rational functions motivated by Euler*, Math. Student 85 (2016), 117–133. J. Sándor and L. Tóth, *A remark on the gamma function*, Elem. Math. 44:3 (1989), 73–76. L.-C. Shen, *On the additive formulae of the theta functions and a collection of Lambert series pertaining to the modular equations of degree 5*, Trans. Amer. Math. Soc. 345:1 (1994), 323–345. L.-C. Shen, *On some modular equations of degree 5*, Proc. Amer. Math. Soc. 123:5 (1995), 1521–1526. E.T. Whittaker and G.N. Watson, *A Course of Modern Analysis*, Cambridge University Press, 1996. J. Yi, Y. Lee, and D. H. Paek, *The explicit formulas and evaluations of Ramanujan’s theta-function $\psi$*, J. Math. Anal. Appl. 321 (2006), 157–181.
--- abstract: 'We propose a new family of natural generalizations of the pentagram map from 2D to higher dimensions and prove their integrability on generic twisted and closed polygons. In dimension $d$ there are $d-1$ such generalizations called dented pentagram maps, and we describe their geometry, continuous limit, and Lax representations with a spectral parameter. We prove algebraic-geometric integrability of the dented pentagram maps in the 3D case and compare the dimensions of invariant tori for the dented maps with those for the higher pentagram maps constructed with the help of short diagonal hyperplanes. When restricted to corrugated polygons, the dented pentagram maps coincide between themselves and with the corresponding corrugated pentagram map. Finally, we prove integrability for a variety of pentagram maps for generic and partially corrugated polygons in higher dimensions.' author: - 'Boris Khesin and Fedor Soloviev[^1]' title: The geometry of dented pentagram maps --- Introduction {#intro .unnumbered} ============ The pentagram map was originally defined in [@Schwartz] as a map on plane convex polygons considered up to their projective equivalence, where a new polygon is spanned by the shortest diagonals of the initial one, see Figure \[fig:hex\]. This map is the identity for pentagons, it is an involution for hexagons, while for polygons with more vertices it was shown to exhibit quasi-periodic behaviour under iterations. The pentagram map was extended to the case of twisted polygons and its integrability in 2D was proved in [@OST99], see also [@FS]. ![The image $T(P)$ of a hexagon $P$ under the 2D pentagram map.[]{data-label="fig:hex"}](4grafik_new.pdf){width="1.8in"} While this map is in a sense unique in 2D, its generalizations to higher dimensions seem to allow more freedom. A natural requirement for such generalizations, though, is their integrability. 
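Not part of the paper: a minimal numerical sketch in Python (helper names are our own) of one step of the 2D pentagram map in homogeneous coordinates, where the line through two points of ${{\mathbb {RP}}}^2$ and the intersection point of two lines are both computed by cross products.

```python
import math

def cross(a, b):
    """Cross product in R^3: in homogeneous coordinates this gives the line
    through two points of RP^2, or dually the intersection point of two lines."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def pentagram_2d(verts):
    """One step of the 2D pentagram map: vertex k of T(P) is the intersection
    of the shortest diagonals (v_k, v_{k+2}) and (v_{k+1}, v_{k+3})."""
    n = len(verts)
    image = []
    for k in range(n):
        d1 = cross(verts[k], verts[(k + 2) % n])
        d2 = cross(verts[(k + 1) % n], verts[(k + 3) % n])
        image.append(cross(d1, d2))
    return image

# A regular hexagon lifted to homogeneous coordinates (x, y, 1).
P = [(math.cos(math.pi * k / 3), math.sin(math.pi * k / 3), 1.0) for k in range(6)]
Q = pentagram_2d(P)
# By symmetry the image of a regular hexagon is again regular: all image
# vertices are equidistant from the center.
radii = [math.hypot(x / w, y / w) for (x, y, w) in Q]
```

The equidistance check is only a symmetry sanity test; it does not by itself verify the projective involutivity for hexagons mentioned above.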
In [@KS] we observed that there is no natural generalization of this map to polyhedra and suggested a natural integrable generalization of the pentagram map to generic twisted space polygons (see Figure \[T1-spiral\]). This generalization in any dimension was defined via intersections of “short diagonal" hyperplanes, which are symmetric higher-dimensional analogs of polygon diagonals, see Section \[sect:any-diag\] below. This map turned out to be scale invariant (see [@OST99] for 2D, [@KS] for 3D, [@Beffa_scale] for higher D) and integrable in any dimension as it admits a Lax representation with a spectral parameter [@KS]. ![A space pentagram map is applied to a twisted polygon in 3D[]{data-label="T1-spiral"}](spiral_t1.pdf){width="3.7in"} A different integrable generalization to higher dimensions was proposed in [@GSTV], where the pentagram map was defined not on generic, but on the so-called corrugated polygons. These are piecewise linear curves in ${{\mathbb {RP}}}^d$, whose pairs of edges with indices differing by $d$ lie in one and the same two-dimensional plane. It turned out that the pentagram map on corrugated polygons is integrable and it admits an explicit description of the Poisson structure, a cluster algebra structure, and other interesting features [@GSTV]. In this paper we present a variety of integrable generalized pentagram maps, which unifies these two approaches. “Primary integrable maps" in our construction are called the dented pentagram maps. These maps are defined for generic twisted polygons in ${{\mathbb {RP}}}^d$. It turns out that the pentagram maps for corrugated polygons considered in [@GSTV] are a particular case (more precisely, a restriction) of these dented maps. We describe in detail how to perform such a reduction in Section \[S:corrug\]. To define the dented maps, we propose a definition of a “dented diagonal hyperplane" depending on a parameter $m=1,...,d-1$, where $d$ is the dimension of the projective space. 
The parameter $m$ marks the skipped vertex of the polygon, and in dimension $d$ there are $d-1$ different dented integrable maps. The vertices in the “dented diagonal hyperplanes" are chosen in a non-symmetric way (as opposed to the unique symmetric choice in [@KS]). We would like to stress that in spite of a non-symmetric choice, the integrability property is preserved, and each of the dented maps can be regarded as a natural generalization of the classical 2D pentagram map of [@Schwartz]. We describe the geometry and Lax representations of the dented maps and their generalizations, the deep-dented pentagram maps, and prove their algebraic-geometric integrability in 3D. In a sense, from now on a new challenge might be to find examples of non-integrable Hamiltonian maps of pentagram type, cf. [@KS14]. We emphasize that throughout the paper we often understand [*integrability*]{} as the existence of a Lax representation with a spectral parameter corresponding to scaling invariance of a given dynamical system. We show how it is used to prove algebraic-geometric integrability for the primary maps in ${{\mathbb {CP}}}^3$. In any dimension, the Lax representation provides first integrals (as the coefficients of the corresponding spectral curve) and allows one to use algebraic-geometric machinery to prove various integrability properties. We also note that while most of the paper deals with $n$-gons satisfying the condition $\gcd(n,d+1)=1$, the results hold in full generality, and we show how they are adapted to the general setting in Section \[nonprimes\]. While most of the definitions below work both over ${{\mathbb R}}$ and ${{\mathbb C}}$, throughout the paper we describe the geometric features of pentagram maps over ${{\mathbb R}}$, while we give their Lax representations over ${{\mathbb C}}$. Here are the main results of the paper.
$\bullet$ We define generalized pentagram maps $T_{I,J}$ on (projective equivalence classes of) twisted polygons in ${{\mathbb {RP}}}^d$, associated with $(d-1)$-tuples of numbers $I$ and $J$: the tuple $I$ defines which vertices to take in the definition of the diagonal hyperplanes, while the tuple $J$ determines which of the hyperplanes to intersect in order to get the image point. In Section \[sect:any-diag\] we prove the duality between such pentagram maps: $$T_{I,J}^{-1}=T_{J^*,I^*}\circ Sh\,,$$ where $I^*$ and $J^*$ stand for the $(d-1)$-tuples taken in the opposite order and $Sh$ is any shift in the indices of polygon vertices. $\bullet$ The [*dented pentagram maps*]{} $T_m$ on polygons $(v_k)$ in ${{\mathbb {RP}}}^d$ are defined by intersecting $d$ consecutive diagonal hyperplanes. Each hyperplane $P_k$ passes through all vertices but one from $v_k$ to $v_{k+d}$ by skipping only the vertex $v_{k+m}$. The main theorem on such maps is the following (cf. Theorem \[thm:lax\_anyD\]): The dented pentagram map $T_m$ on both twisted and closed $n$-gons in any dimension $d$ and any $m=1,...,d-1$ is an integrable system in the sense that it admits a Lax representation with a spectral parameter. We also describe the dual dented maps, prove their scale invariance (see Section \[sect:dual\]), and study their geometry in detail. Theorem \[thm:comparison\] shows that in dimension 3 the algebraic-geometric integrability follows from the proposed Lax representation for both dented pentagram maps and the short-diagonal pentagram map. $\bullet$ The continuous limit of any dented pentagram map $T_m$ (and more generally, of any generalized pentagram map) in dimension $d$ is the $(2, d+1)$-KdV flow of the Adler-Gelfand-Dickey hierarchy on the circle, see Theorem \[thm:cont\]. For 2D this is the classical Boussinesq equation on the circle: $u_{tt}+2(u^2)_{xx}+u_{xxxx}=0$, which appears as the continuous limit of the 2D pentagram map [@OST99; @Sch08].
$\bullet$ Consider the space of corrugated polygons in ${{\mathbb {RP}}}^d$, i.e., twisted polygons, whose vertices $v_{k-1}, v_{k}, v_{k+d-1},$ and $v_{k+d}$ span a projective two-dimensional plane for every $k\in {{\mathbb Z}}$, following [@GSTV]. It turns out that the pentagram map $T_{cor}$ on them can be viewed as a particular case of the dented pentagram map, see Theorem \[thm:restr\_to\_corr\]: This pentagram map $T_{cor}$ is a restriction of the dented pentagram map $T_m$ for any $m=1,..., d-1$ from generic $n$-gons ${\mathcal P}_n$ in ${{\mathbb {RP}}}^d$ to corrugated ones ${\mathcal P}_n^{cor}$ (or differs from it by a shift in vertex indices). In particular, these restrictions for different $m$ coincide modulo an index shift. We also describe the algebraic-geometric integrability for the corrugated pentagram map in ${{\mathbb {CP}}}^3$, see Section \[corr3D\]. $\bullet$ Finally, we provide an application of dented pentagram maps. The latter can be regarded as “primary" objects, the simplest integrable systems of pentagram type. By considering more general diagonal hyperplanes, such as “deep-dented diagonals", i.e., those skipping more than one vertex, one can construct new integrable systems, see Theorem \[thm:ddd\]: The deep-dented pentagram maps in ${{\mathbb {RP}}}^d$ are restrictions of integrable systems to invariant submanifolds and have Lax representations with a spectral parameter. The main tool to prove integrability in this more general setting is the introduction of the corresponding notion of [*partially corrugated polygons*]{}, occupying an intermediate position between corrugated and generic ones, see Section \[sect:appl\]. The pentagram map on such partially corrugated polygons also turns out to be integrable.
This work brings about the following question, which manifests the change of perspective on generalized pentagram maps: Is it possible to choose the diagonal hyperplane so that the corresponding pentagram map turns out to be [non-integrable]{}? Some numerical evidence in this direction is presented in [@KS14]. [**Acknowledgments**]{}. We are grateful to S. Tabachnikov for useful discussions. B.K. and F.S. were partially supported by NSERC research grants. B.K. is grateful to the Simons Center for Geometry and Physics in Stony Brook for support and hospitality; F.S. acknowledges the support of the Fields Institute in Toronto and the CRM in Montreal. Duality of pentagram maps in higher dimensions {#sect:any-diag} ============================================== We start with the notion of a twisted $n$-gon in dimension $d$. \[tw-ngon\] [A [*twisted $n$-gon*]{} in a projective space ${{\mathbb {RP}}}^d$ with a monodromy $M \in SL_{d+1}({{\mathbb R}})$ is a map $\phi: {{\mathbb Z}}\to {{\mathbb {RP}}}^d$, such that $\phi(k+n) = M \circ \phi(k)$ for each $k\in {{\mathbb Z}}$ and where $M$ acts naturally on ${{\mathbb {RP}}}^d$. Two twisted $n$-gons are [*equivalent*]{} if there is a transformation $g \in SL_{d+1}({{\mathbb R}})$ such that $g \circ \phi_1=\phi_2$. ]{} We assume that the vertices $v_k:=\phi(k), \; k \in {{\mathbb Z}},$ are in general position (i.e., no $d+1$ consecutive vertices lie in the same hyperplane in ${{\mathbb {RP}}}^d$), and denote by ${\mathcal P}_n$ the space of generic twisted $n$-gons considered up to the above equivalence. Define general pentagram maps as follows. \[def:I-diag\] Let $I=(i_1,...,i_{d-1})$ and $J=(j_1,...,j_{d-1})$ be two $(d-1)$-tuples of numbers $i_\ell, j_m\in {{\mathbb N}}$.
For a generic twisted $n$-gon in ${{\mathbb {RP}}}^d$ one can define an [*$I$-diagonal hyperplane*]{} $P_k$ as the one passing through $d$ vertices of the $n$-gon by taking every $i_\ell$th vertex starting at the point $v_k$, i.e., $$P_k:=(v_k, v_{k+i_1}, v_{k+i_1+i_2},..., v_{k+i_1+...+i_{d-1}})\,,$$ see Figure \[fig:T312\]. ![The diagonal hyperplane for the jump tuple $I=(3,1,2)$ in ${{\mathbb {RP}}}^4$.[]{data-label="fig:T312"}](t312_p4.pdf){width="2.9in"} The image of the vertex $v_k$ under the [*generalized pentagram map*]{} $T_{I,J}$ is defined by intersecting every $j_m$th out of the $I$-diagonal hyperplanes starting with $P_k$: $$T_{I,J}v_k:=P_{k}\cap P_{k+j_1}\cap P_{k+j_1+j_2}\cap...\cap P_{k+j_1+...+j_{d-1}}\,.$$ (Thus $I$ defines the structure of the diagonal hyperplane, while $J$ governs which of them to intersect.) The corresponding map $T_{I,J}$ is considered (and is generically defined) on the space ${\mathcal P}_n$ of equivalence classes of $n$-gons in ${{\mathbb {RP}}}^d$. As usual, we assume that the vertices are in “general position,” and every $d$ hyperplanes $P_i$ intersect at one point in ${{\mathbb {RP}}}^d$. [Consider the case of $I=(2,2,...,2)$ and $J=(1,1,...,1)$ in ${{\mathbb {RP}}}^d$. This choice of $I$ corresponds to “short diagonal hyperplanes", i.e., every $I$-diagonal hyperplane passes through $d$ vertices by taking every other vertex of the twisted polygon. The choice of $J$ corresponds to taking intersections of $d$ consecutive hyperplanes. This recovers the definition of the short-diagonal (or higher) pentagram maps from [@KS]. Note that the classical 2D pentagram map has $I$ and $J$ each consisting of one number: $I=(2)$ and $J=(1)$. ]{} Denote by $I^*=(i_{d-1},...,i_{1})$ the $(d-1)$-tuple $I$ taken in the opposite order and by $Sh$ the operation of any index shift on the sequence of vertices.
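Not from the paper: a small Python helper (names ours) making the index bookkeeping of this definition explicit — both the vertices entering the $I$-diagonal hyperplane $P_k$ and the hyperplanes intersected by $T_{I,J}$ are indexed by the partial sums of the corresponding tuple.

```python
def offsets(jumps):
    """Partial sums 0, i_1, i_1+i_2, ...: applied to I these are the vertex
    offsets of the I-diagonal hyperplane P_k; applied to J they are the
    offsets of the hyperplanes intersected to form T_{I,J} v_k."""
    out = [0]
    for step in jumps:
        out.append(out[-1] + step)
    return out

# Jump tuple I = (3,1,2) in RP^4: P_k = (v_k, v_{k+3}, v_{k+4}, v_{k+6}).
example = offsets((3, 1, 2))
# Short-diagonal case I = (2,2,2), J = (1,1,1) in RP^3, and the
# classical 2D map with I = (2), J = (1).
short_I, short_J = offsets((2, 2, 2)), offsets((1, 1, 1))
```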
[**(Duality)**]{}\[thm:duality\] There is the following duality for the generalized pentagram maps $T_{I,J}$: $$T_{I,J}^{-1}=T_{J^*,I^*}\circ Sh\,,$$ where $Sh$ stands for some shift in indices of vertices. To prove this theorem we introduce the following duality maps, cf. [@OST99]. [Given a generic sequence of points $\phi(j) \in {{\mathbb {RP}}}^d, \; j \in {{\mathbb Z}},$ and a $(d-1)$-tuple $I=(i_1,...,i_{d-1})$ we define the following [*sequence of hyperplanes*]{} in ${{\mathbb {RP}}}^d$: $$\alpha_I(\phi(j)):=(\phi(j), \phi(j+i_1),..., \phi(j+i_1+...+i_{d-1}))\,,$$ which is regarded as a sequence of points in the dual space: $\alpha_I(\phi(j))\in ({{\mathbb {RP}}}^d)^*$. ]{} The generalized pentagram map $T_{I,J}$ can be defined as a composition of two such maps up to a shift of indices: $T_{I,J}=\alpha_I\circ\alpha_J\circ Sh$. Note that for a special $I=(p,p,...,p)$ the maps $\alpha_I$ are involutions modulo index shifts (i.e., $\alpha_I^2=Sh$), but for a general $I$ the maps $\alpha_I$ are no longer involutions. However, one can see from their construction that they have the following duality property: $\alpha_I\circ \alpha_{I^*}=Sh$ and they commute with index shifts: $\alpha_I\circ Sh=Sh\circ \alpha_I$. Now we see that $$T_{I,J}\circ T_{J^*,I^*}=(\alpha_I\circ\alpha_J\circ Sh)\circ (\alpha_{J^*}\circ\alpha_{I^*}\circ Sh)=Sh\,,$$ as required. [$\Box$]{} \[rem:T\_pr\] For $d$-tuples $I=(p,p,...,p)$ and $J=(r,r,...,r)$ the generalized pentagram maps correspond to the general pentagram maps $T_{p,r}=T_{I,J}$ discussed in [@KS], and they possess the following duality: $T^{-1}_{p,r}=T_{r,p}\circ Sh$. Note that in [@Beffa] one considered an intersection of the hyperplane $P_k$ with a chord joining two vertices, which leads to a different generalization of the pentagram map and for which an analog of the above duality is yet unknown. 
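A check we added, not in the paper: for $d=2$ and $I=(2)$ the relation $\alpha_I\circ\alpha_{I^*}=Sh$ (here $I^*=I$, so $\alpha_I^2=Sh$) can be verified in exact integer arithmetic, since both the line through two points and the intersection of two lines are cross products in homogeneous coordinates.

```python
def cross(a, b):
    """In homogeneous coordinates on RP^2, the cross product gives the line
    through two points, or dually the intersection point of two lines."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def alpha(seq):
    """alpha_I for I = (2) in RP^2: send vertex j to the diagonal through
    phi(j) and phi(j+2), viewed as a point of the dual plane."""
    n = len(seq)
    return [cross(seq[j], seq[(j + 2) % n]) for j in range(n)]

# Integer points on a parabola: no three are collinear, so the polygon is generic.
pts = [(j * j + 1, 2 * j - 3, 1) for j in range(7)]
twice = alpha(alpha(pts))
# alpha_I^2 = Sh: the double dual of vertex j is (projectively) vertex j+2,
# i.e., the cross product of the two homogeneous vectors vanishes exactly.
shift_ok = all(cross(twice[j], pts[(j + 2) % 7]) == (0, 0, 0) for j in range(7))
```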
In the paper [@KS] we studied the case $T_{2,1}$ of short diagonal hyperplanes: $I=(2,2,...,2)$ and $J=(1,1,...,1)$, which is a very symmetric way of choosing the hyperplanes and their intersections. In this paper we consider the general, non-symmetric choice of vertices. \[thm:dual\] If $J=J^*$ (i.e., $\alpha_J$ is an involution), then modulo a shift in indices $i)$ the pentagram maps $T_{I,J}$ and $T_{J, I^*}$ are inverses to each other; $ii)$ the pentagram maps $T_{I,J}$ and $T_{J, I}$ (and hence $T_{I,J}$ and $T_{I^*, J}^{-1}$) are conjugated to each other, i.e., the map $\alpha_J$ takes the map $T_{I,J}$ on $n$-gons in ${{\mathbb {RP}}}^d$ into the map $T_{J,I}$ on $n$-gons in $({{\mathbb {RP}}}^d)^*$. In particular, all four maps $T_{I,J}, T_{I^*, J}, T_{J, I}$ and $T_{J, I^*}$ are integrable or non-integrable simultaneously. Whenever they are integrable, their integrability characteristics, e.g., the dimensions of invariant tori, periods of the corresponding orbits, etc., coincide. The statement $i)$ follows from Theorem \[thm:duality\]. To prove $ii)$ we note that for $J=J^*$ one has $\alpha_J^2=Sh$ and therefore $$\alpha_J\circ T_{I, J} \circ \alpha_J^{-1}=\alpha_J\circ (\alpha_I\circ\alpha_J\circ Sh)\circ \alpha_J =(\alpha_J\circ \alpha_I\circ Sh)\circ\alpha_J^2=T_{J,I}\circ Sh\,.$$ Hence modulo index shifts, the pentagram map $T_{I, J}$ is conjugated to $T_{J,I}$, while by the statement $i)$ they are also inverses of $T_{J, I^*}$ and $T_{I^*, J}$ respectively. This proves the theorem. [$\Box$]{} Dented pentagram maps ===================== Integrability of dented pentagram maps {#sect:dent} -------------------------------------- From now on we consider the case of $J={\mathbf 1}:=(1,1,...,1)=J^*$ for different $I$’s, i.e., we take the intersection of [*consecutive*]{} $I$-diagonal hyperplanes. 
[Fix an integer parameter $m\in \{1,...,d-1\}$ and for the $(d-1)$-tuple $I$ we set $I=I_m:=(1,...,1,2,1,...,1)$, where the only value 2 is situated at the $m$th place: $i_m=2$ and $i_\ell=1$ for $\ell\not=m$. This choice of the tuple $I$ corresponds to the diagonal plane $P_k$ which passes through consecutive vertices $v_k, v_{k+1},...,v_{k+m-1}$, then skips vertex $v_{k+m}$ and continues passing through consecutive vertices $v_{k+m+1},...,v_{k+d}$: $$P_k:=(v_k, v_{k+1},...,v_{k+m-1},v_{k+m+1},v_{k+m+2},...,v_{k+d})\,.$$ We call such a plane $P_k$ a [*dented*]{} (or [*$m$-dented*]{}) [*diagonal plane*]{}, as it is “dented" at the vertex $v_{k+m}$, see Figure \[fig:dented-plane\]. We define the [*dented pentagram map*]{} $T_m$ by intersecting $d$ consecutive planes $P_k$: $$T_m v_k:=P_{k}\cap P_{k+1}\cap ...\cap P_{k+d-1}\,.$$ In other words, the dented pentagram map is $T_m:=T_{I_m,{\mathbf 1}}$, i.e. $ T_{I_m,J}$ where $J={\mathbf 1}$. ]{} ![The dented diagonal hyperplane $P_k$ for $m=2$ in ${{\mathbb {RP}}}^5$.[]{data-label="fig:dented-plane"}](figure2.pdf){width="2.9in"} \[cor:T\_m\] The dented map $T_m$ is conjugated (by the involution $\alpha_{\mathbf 1}$) to $T^{-1}_{d-m}$ modulo shifts. Indeed, $I_m=I^*_{d-m}$ and hence, due to Theorem \[thm:dual\], one has $\alpha_{\mathbf 1}\circ T_m\circ \alpha_{\mathbf 1} =\alpha_{\mathbf 1}\circ T_{I_m,{\mathbf 1}}\circ \alpha_{\mathbf 1} =T_{{\mathbf 1},I_m}\circ Sh =T^{-1}_{I^*_m,{\mathbf 1}}\circ Sh=T^{-1}_{I_{d-m},{\mathbf 1}}\circ Sh=T^{-1}_{d-m}\circ Sh$, where $\alpha_{\mathbf 1}$ stands for $\alpha_J$ for $J=(1,...,1)$. [$\Box$]{} One can also see that for $m=0$ or $m=d$ all the vertices defining the hyperplane $P_k$ are taken consecutively, and the corresponding map $T_m$ is the identity modulo a shift in indices of $v_k$. For 2D the only option for the dented map is $I=(2)$ and $J=(1)$, and where $m=1$, so the corresponding map $T_m$ coincides with the classical pentagram transformation $T=T_{2,1}$ in 2D. 
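The combinatorics of the dent can be sketched as follows (our helper names; the content is just the partial-sum rule applied to $I_m$): the vertex offsets of $P_k$ are all of $0,\dots,d$ with exactly the offset $m$ missing.

```python
def dented_tuple(d, m):
    """I_m = (1,...,1,2,1,...,1) with the single 2 at the m-th place."""
    return tuple(2 if ell == m else 1 for ell in range(1, d))

def dented_vertex_offsets(d, m):
    """Offsets of the vertices spanning the m-dented hyperplane P_k:
    partial sums of I_m, i.e. 0..d with the offset m skipped."""
    out = [0]
    for step in dented_tuple(d, m):
        out.append(out[-1] + step)
    return out
```

For instance, in ${{\mathbb {RP}}}^5$ with $m=2$ this reproduces $P_k=(v_k,v_{k+1},v_{k+3},v_{k+4},v_{k+5})$, and for $d=2$, $m=1$ it degenerates to the classical offsets $(0,2)$.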
Thus the above definition of maps $T_m$ for various $m$ is another natural higher-dimensional generalization of the 2D pentagram map. Unlike the definition of the short-diagonal pentagram map $T_{2,1}$ in ${{\mathbb {RP}}}^d$, the dented pentagram map is not unique for each dimension $d$, but also has one more integer parameter $m=1,...,d-1$. It turns out that the dented pentagram map $T_m$ defined this way, i.e., defined as $T_{I_m,{\mathbf 1}}$ for $I_m=(1,...,1,2,1,...,1)$ and ${\mathbf 1}=(1,1,...,1)$, has a special scaling invariance. To describe it we need to introduce coordinates on the space ${\mathcal P}_n$ of twisted $n$-gons. Now we complexify the setting and consider the spaces and maps over ${{\mathbb C}}$. \[diff-eq\] [One can show that there exists a lift of the vertices $v_k=\phi(k) \in {{\mathbb {CP}}}^d$ to the vectors $V_k \in {{\mathbb C}}^{d+1}$ satisfying $\det(V_j, V_{j+1}, ..., V_{j+d})=1$ and $ V_{j+n}=MV_j,\; j \in {{\mathbb Z}},$ where $M\in SL_{d+1}({{\mathbb C}})$, provided that the condition $\gcd(n,d+1)=1$ holds. The corresponding lifted vectors satisfy difference equations of the form $$\label{eq:difference_anyD} V_{j+d+1} = a_{j,d} V_{j+d} + a_{j,d-1} V_{j+d-1} +...+ a_{j,1} V_{j+1} +(-1)^{d} V_j,\quad j \in {{\mathbb Z}},$$ with $n$-periodic coefficients in the index $j$. This allows one to introduce [*coordinates*]{} $\{ a_{j,k} ,\;0\le j\le n-1, \; 1\le k\le d \}$ on the space of twisted $n$-gons in ${{\mathbb {CP}}}^d$. In the theorems below we assume the condition $\gcd(n,d+1)=1$ whenever we use explicit formulas in the coordinates $\{ a_{j,k}\}$. However, the statements hold in full generality and we discuss how the corresponding formulas are adapted in Section \[nonprimes\].
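Not from the paper: a pure-Python illustration (with exact rationals and sample coefficients of our choosing) that the recurrence $V_{j+d+1} = a_{j,d} V_{j+d} + \dots + a_{j,1} V_{j+1} + (-1)^d V_j$ preserves the normalization $\det(V_j,\dots,V_{j+d})=1$, which is what makes the $a_{j,k}$ well-suited as coordinates.

```python
from fractions import Fraction as F

def det(M):
    """Exact determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

d = 3
# Arbitrary positive rational coefficients a_{j,k}, k = 1..d (5-periodic in j).
a = [[F(1, j + k + 2) + 1 for k in range(1, d + 1)] for j in range(5)]

# Start from the standard basis, so det(V_0,...,V_d) = 1, and iterate
# V_{j+d+1} = a_{j,d} V_{j+d} + ... + a_{j,1} V_{j+1} + (-1)^d V_j.
V = [[F(int(i == j)) for i in range(d + 1)] for j in range(d + 1)]
for j in range(10):
    aj = a[j % 5]
    V.append([(-1) ** d * V[j][i]
              + sum(aj[k - 1] * V[j + k][i] for k in range(1, d + 1))
              for i in range(d + 1)])

# The normalization det(V_j,...,V_{j+d}) = 1 is preserved: replacing V_{j+d+1}
# by (-1)^d V_j (row reduction) and moving that row to the front costs (-1)^{2d} = 1.
dets = [det(V[j:j + d + 1]) for j in range(len(V) - d)]
```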
(Strictly speaking, the lift from vertices to vectors is not unique, because it is defined up to simultaneous multiplication of all vectors by $\varepsilon$, where $\varepsilon^{d+1}=1$, but the coordinates $\{ a_{j,k}\}$ are well-defined as they have the same values for all lifts.)[^2] ]{} [**(Scaling invariance)**]{}\[thm:scaling\] The dented pentagram map $T_m$ on twisted $n$-gons in ${{\mathbb {CP}}}^d$ with hyperplanes $P_k$ defined by taking the vertices in a row but skipping the $m$[th]{} vertex is invariant with respect to the following scaling transformations: $$a_{j,1} \to s^{-1}a_{j,1} ,\; a_{j,2} \to s^{-2}a_{j,2} ,\;...\:, \; a_{j,m} \to s^{-m} a_{j,m},$$ $$a_{j,m+1} \to s^{d-m}a_{j,m+1}, \;...\:, \; a_{j,d} \to s a_{j,d}$$ for all $s\in {{\mathbb C}}^*$. For $d=2$ this is the case of the classical pentagram map, see [@OST99]. We prove this theorem in Section \[sect:scale\_proof\]. The above scale invariance implies the Lax representation, which opens up the possibility to establish algebraic-geometric integrability of the dented pentagram maps. [Recall that a discrete Lax equation with a spectral parameter is a representation of a dynamical system in the form $$\label{lax-eq} L_{j,t+1}(\lambda) = P_{j+1,t}(\lambda) L_{j,t}(\lambda) P_{j,t}^{-1}(\lambda),$$ where $t$ stands for the discrete time variable, $j$ refers to the vertex index, and $\lambda$ is a complex spectral parameter. It is a discrete version of the classical zero curvature equation $\partial_tL-\partial_xP=[P,L]$. ]{} [**(Lax form)**]{}\[thm:lax\_anyD\] The dented pentagram map $T_m$ on both twisted and closed $n$-gons in any dimension $d$ and any $m=1,...,d-1$ is an integrable system in the sense that it admits a Lax representation with a spectral parameter. 
In particular, for $gcd(n,d+1)=1$ the Lax matrix is $$L_{j,t}(\lambda) = \left( \begin{array}{cccc|c} 0 & 0 & \cdots & 0 &(-1)^d\\ \cline{1-5} \multicolumn{4}{c|}{\multirow{4}*{$D(\lambda)$}} & a_{j,1}\\ &&&& a_{j,2}\\ &&&& \cdots\\ &&&& a_{j,d}\\ \end{array} \right)^{-1},$$ with the diagonal $(d \times d)$-matrix $D(\lambda)={\rm diag}(1,...,1,\lambda, 1,...1)$, where the spectral parameter $\lambda$ is situated at the $(m+1)$[th]{} place, and an appropriate matrix $P_{j,t}(\lambda)$. Rewrite the difference equation in the matrix form. It is equivalent to the relation $(V_{j+1},V_{j+2},...,V_{j+d+1})=(V_j,V_{j+1},...,V_{j+d})N_{j}$, where the transformation matrix $N_{j}$ is $$N_{j} := \left( \begin{array}{ccc|c} 0 & \cdots & 0 &(-1)^d\\ \cline{1-4} \multicolumn{3}{c|}{\multirow{3}*{\rm{Id}}} & a_{j,1}\\ &&& \cdots\\ &&& a_{j,d}\\ \end{array} \right),$$ and where $\mathrm{Id}$ stands for the identity $(d\times d)$-matrix. It turns out that the monodromy $M$ for twisted $n$-gons is always conjugated to the product $\tilde{M}:=N_{0} N_{1}...N_{n-1}$, see Remark \[rem:monodromy\] below. Note that the pentagram map defined on classes of projective equivalence preserves the conjugacy class of $M$ and hence that of $\tilde{M}$. Using the scaling invariance of the pentagram map $T_m$, replace $a_{j,k}$ by $s^* a_{j,k}$ for all $k$ in the right column of $N_j$ to obtain a new matrix $N_j(s)$. The pentagram map preserves the conjugacy class of the new monodromy $\tilde{M}(s):=N_0(s) ...N_{n-1}(s)$ for any $s$, that is, the monodromy can only change to a conjugate one during its pentagram evolution: $ \tilde{M}_{t+1}(s) = P_{t}(s) \tilde{M}_t(s) P_{t}^{-1}(s)$. Then $N_j(s)$ (or, more precisely, $N_{j,t}(s)$ to emphasize its dependence on $t$), being a discretization of the monodromy $\tilde{M}$, could be taken as a Lax matrix $L_{j,t}(s)$. 
The gauge transformation $L_{j,t}^{-1}(\lambda) := \left(g^{-1} N_j(s) g \right)/s$ for $g = \text{diag}(s^{-1},s^{-2},...,s^{-m-1},s^{d-m-1},...,s,1)$ and $\lambda \equiv s^{-d-1}$ simplifies the formulas and gives the required matrix $L_{j,t}(\lambda)$. Closed polygons are subvarieties defined by polynomial relations on coefficients $a_{j,k}$. These relations ensure that the monodromy $\tilde{M}(s)$ has an eigenvalue of multiplicity $d+1$ at $s=1$. [$\Box$]{} \[rem:monodromy\] [Define the current monodromy $\tilde M_j$ for twisted $n$-gons by the relation $$(V_{j+n},V_{j+n+1},...,V_{j+n+d})=(V_j,V_{j+1},...,V_{j+d})\tilde M_{j},$$ i.e., as the product $\tilde{M}_j:=N_{j} N_{j+1}...N_{j+n-1}$. Note that $\tilde{M}_j$ acts on matrices by multiplication on the right, whereas in Definition \[tw-ngon\] the monodromy $M$ acts on vectors $V_j$ on the left. The theorem above uses the following fact: ]{} All current monodromies $\tilde M_j$ lie in the same conjugacy class in $SL_{d+1}({{\mathbb C}})$ as $M$. All products $\tilde{M}_j:=N_{j} N_{j+1}...N_{j+n-1}$ are conjugated: $\tilde{M}_{j+1}=N_{j}^{-1}\tilde{M}_j N_j$ for all $j\in {{\mathbb Z}}$, since $N_j=N_{j+n}$. Furthermore, $$(V_j,V_{j+1},...,V_{j+d})\tilde M_{j}(V_j,V_{j+1},...,V_{j+d})^{-1}\-=(V_{j+n},V_{j+n+1},...,V_{j+n+d})(V_j,V_{j+1},...,V_{j+d})^{-1}$$ $$= M(V_j,V_{j+1},...,V_{j+d})(V_j,V_{j+1},...,V_{j+d})^{-1}=M\,.$$ [$\Box$]{} To prove the scale invariance of dented pentagram maps we need to introduce the notion of the corresponding dual map. [The [*dual dented pentagram map*]{} $\widehat{T}_m$ for twisted polygons in ${{\mathbb {RP}}}^d$ or ${{\mathbb {CP}}}^d$ is defined as $\widehat{T}_m:=T_{{\mathbf 1},I^*_m}$ for $I^*_m=(1,...,1,2,1,...,1)$ where 2 is at the $(d-m)$th place and ${\mathbf 1}=(1,1,...,1)$. 
In this case the diagonal planes $P_k$ are defined by taking $d$ consecutive vertices of the polygon starting with the vertex $v_k$, but to define the image $\widehat T_mv_k$ of the vertex $v_k$ one takes the intersection $P_k\cap P_{k+1}\cap ...\cap P_{k+d-m-1}\cap P_{k+d-m+1}\cap ...\cap P_{k+d}$ of all but one consecutive planes by skipping only the plane $P_{k+d-m}$. ]{} [According to Theorem \[thm:dual\], the dual map satisfies $\widehat{T}_m={T}^{-1}_m\circ Sh$. In particular, the dual map $\widehat{T}_m$ is also integrable and has the same scaling properties and the Lax matrix as ${T}_m$. The dynamics for $\widehat{T}_m$ is obtained by reversing time in the dynamics of ${T}_m$. Moreover, the map $\widehat{T}_m$ is conjugated to $T_{d-m}$ (modulo shifts) by means of the involution $\alpha_{\mathbf 1}$. ]{} In dimension $d=3$ one has the following explicit Lax representations. For the case of $T_1$ (i.e., $m=1$) one sets $D(\lambda)=(1,\lambda,1)$. The dual map $\widehat{T}_1$, being the inverse of $T_1$, has the same Lax form and scaling. For the map $T_2$ (where $m=2$) one has $D(\lambda)=(1,1,\lambda) $. Similarly, $\widehat{T}_2$ is the inverse of $T_2$. Note that the maps $T_1$ and $T_2^{-1}$ are conjugated to each other by means of the involution $\alpha_{\mathbf 1}$ for ${\mathbf 1}=(1,1)$. They have the same dimensions of invariant tori, but their Lax forms differ. In dimension $d=4$ one has two essentially different cases, according to whether the dent is on a side of the diagonal plane or in its middle. Namely, the map $T_2$ is the case where the diagonal hyperplane is dented in the middle point, i.e., $m=2$ and $I_m=(1,2,1)$. In this case $D(\lambda)=(1,1,\lambda,1)$. For the side case consider the map $T_1$ (i.e., $m=1$ and $I_m=(2,1,1)$), where $D(\lambda)=(1,\lambda,1,1)$. The dual map $\widehat{T}_1$ is the inverse of $T_1$ and has the same Lax form. 
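A consistency check we added (not in the paper): with exact rational arithmetic one can verify, for $d=3$, $m=1$, one sample coefficient vector, and one value of $s$, that the gauge transformation $\left(g^{-1} N_j(s) g\right)/s$ from the proof of Theorem \[thm:lax\_anyD\] produces the inverse of the Lax matrix with $D(\lambda)={\rm diag}(1,\lambda,1)$ stated above, where $\lambda=s^{-4}$.

```python
from fractions import Fraction as F

d, m = 3, 1
s = F(2)
a1, a2, a3 = F(3, 2), F(5, 7), F(4, 3)   # sample coefficients a_{j,1..3}

# N_j(s): coefficients rescaled by the T_1 scaling a_1 -> s^{-1} a_1,
# a_2 -> s^{d-m} a_2 = s^2 a_2, a_3 -> s a_3.
N = [[0, 0, 0, (-1) ** d],
     [1, 0, 0, a1 / s],
     [0, 1, 0, a2 * s ** 2],
     [0, 0, 1, a3 * s]]

g = [s ** -1, s ** -2, s, 1]             # g = diag(s^{-1}, s^{-2}, s^{d-m-1}, 1)
lam = s ** (-(d + 1))

# Entrywise gauge transform: ((g^{-1} N g)/s)[i][j] = N[i][j] * g_j / (g_i * s).
M = [[N[i][j] * g[j] / (g[i] * s) for j in range(4)] for i in range(4)]

# Expected L^{-1}(lambda): first row (0,0,0,(-1)^d), then D(lambda) = diag(1,lambda,1)
# bordered by the unscaled column (a_1, a_2, a_3).
L_inv = [[0, 0, 0, (-1) ** d],
         [1, 0, 0, a1],
         [0, lam, 0, a2],
         [0, 0, 1, a3]]
```

The spectral parameter lands at the $(m+1)$th diagonal place because the only gauge mismatch $g_2/(g_3 s)=s^{-4}$ occurs between the rows separated by the dent.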
The map $T_3$ has the Lax form with $D(\lambda)=(1,1,1,\lambda)$ and is conjugate to the inverse $T_1^{-1}$, see Corollary \[cor:T\_m\]. Coordinates in the general case {#nonprimes} ------------------------------- In this section we describe how to introduce coordinates on the space of twisted polygons for any $n$. If $\gcd(n,d+1)\not=1$ one can use quasiperiodic coordinates $a_{j,k}$ subject to a certain equivalence relation, instead of periodic ones, cf. Section 5.3 in [@KS]. \[def:quasi-abc\] [Call $d$ sequences of coordinates $\{a_{j,k}, k=1,...,d, \;j\in {{\mathbb Z}}\}$ $n$-[*quasi-periodic*]{} if there is a $(d+1)$-periodic sequence $t_j,\; j \in {{\mathbb Z}}$, satisfying $t_j t_{j+1} ... t_{j+d}=1$ and such that $a_{j+n, k} = a_{j,k}\cdot{t_j}/{t_{j+k}}$ for each $j \in {{\mathbb Z}}$. ]{} This definition arises from the fact that there are different lifts of vertices $v_j\in {{\mathbb {CP}}}^d$ to vectors $V_j\in {{\mathbb C}}^{d+1}, \; j \in {{\mathbb Z}},$ so that $\det(V_j, V_{j+1}, ..., V_{j+d})=1$ and $ v_{j+n}=Mv_j$ for $M\in SL_{d+1}({{\mathbb C}})$ and $j \in {{\mathbb Z}}$. (The latter monodromy condition on vertices $v_j$ is weaker than the condition $ V_{j+n}=MV_j$ on lifted vectors in Definition \[diff-eq\].) We take arbitrary lifts $V_0, ..., V_{d-1}$ of the first $d$ vertices $v_0, ..., v_{d-1}$ and then obtain that $ V_{j+n}=t_jMV_j$, where $t_j t_{j+1}\cdots t_{j+d}=1$ and $t_{j+d+1}=t_j $ for all $ j \in {{\mathbb Z}}$, see details in [@OST99; @KS]. This way twisted $n$-gons are described by quasiperiodic coordinate sequences $a_{j,k}, \; k=1,..., d, \; j\in {{\mathbb Z}}$, with the equivalence furnished by different choices of $t_j,\; j \in {{\mathbb Z}}$.
Indeed, the defining relation (\[eq:difference\_anyD\]) after adding $n$ to all indices $j$’s becomes the relation $$t_jV_{j+d+1} = a_{j,d} V_{j+d} t_{j+d} + a_{j,d-1} V_{j+d-1} t_{j+d-1}+...+ a_{j,1} V_{j+1} t_{j+1}+(-1)^{d} V_j t_j,\quad j \in {{\mathbb Z}},$$ which is consistent with the quasi-periodicity condition on $\{a_{j,k}\}$. In the case when $n$ satisfies $gcd(n,d+1)=1$, one can choose the parameters $t_j$ in such a way that the sequences $\{a_{j,k}\}$ are $n$-periodic in $j$. For a general $n$, from $n$-quasi-periodic sequences $\{a_{j,k}, k=1,...,d,\; j\in {{\mathbb Z}}\}$ one can construct $n$-periodic ones (in $j$) as follows: $$\tilde a_{j,k}=\dfrac{a_{j+1,k-1}}{a_{j,k}a_{j+1,d}}$$ for $j\in {{\mathbb Z}}$ and $k=1,...,d$, where one sets $a_{j,0}=1$ for all $j$. These new $n$-periodic coordinates $\{\tilde a_{j,k} ,\;0\le j\le n-1, \; 1\le k\le d \}$ are well-defined coordinates on twisted $n$-gons in ${{\mathbb {CP}}}^d$ (i.e., they do not depend on the choice of lift coefficients $t_j$). The periodic coordinates $\{\tilde a_{j,k} \}$ are analogs of the cross-ratio coordinates $x_j, y_j$ in [@OST99] and $x_j, y_j, z_j$ in [@KS]. [**(=\[thm:lax\_anyD\]$'$)**]{}\[thm:nonprimes\] The dented pentagram map $T_m$ on $n$-gons in any dimension $d$ and any $m=1,...,d-1$ is an integrable system. In the coordinates $\{\tilde a_{j,k} \}$ its Lax matrix is $$\widetilde L_{j,t}(\lambda) = \left( \begin{array}{cccc|c} 0 & 0 & \cdots & 0 &(-1)^d\\ \cline{1-5} \multicolumn{4}{c|}{\multirow{4}*{$A(\lambda)$}} & 1\\ &&&& 1\\ &&&& \cdots\\ &&&& 1\\ \end{array} \right)^{-1},$$ where $A(\lambda)={\rm diag}(\tilde a_{j,1}, ..., \tilde a_{j,m},\lambda\tilde a_{j,m+1},\tilde a_{j,m+2},..., \tilde a_{j,d})$. Note that Lax matrices $\widetilde L$ and $L$ are related as follows: $\widetilde L_{j,t}(\lambda)= a_{j+1, d}(h^{-1}_{j+1} L_{j,t}(\lambda) h_j)$ for the matrix $h_j=\diag (1, a_{j,1}, a_{j,2},..., a_{j,d})$. 
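Not from the paper: a numerical illustration in exact arithmetic of the passage from quasi-periodic to periodic coordinates via the formula $\tilde a_{j,k}=a_{j+1,k-1}/(a_{j,k}a_{j+1,d})$ above; the sample values of $t_j$ and the seed $a_{j,k}$ are our own choices, picked for a case with $\gcd(n,d+1)\neq 1$.

```python
from fractions import Fraction as F

d, n = 3, 4                       # gcd(n, d+1) = 4 != 1: the genuinely quasi-periodic case
t = [F(2), F(3), F(1, 6), F(1)]   # (d+1)-periodic; every window t_j...t_{j+d} is a cyclic
                                  # rotation of one period, whose product is 1

def tj(j):
    return t[j % (d + 1)]

# Seed a_{j,k} on 0 <= j < n and extend by n-quasi-periodicity:
# a_{j+n,k} = a_{j,k} * t_j / t_{j+k}.
a = {(j, k): F(j + 2, j + k + 3) for j in range(n) for k in range(1, d + 1)}
for j in range(n, 2 * n + 1):
    for k in range(1, d + 1):
        a[j, k] = a[j - n, k] * tj(j - n) / tj(j - n + k)

def a_full(j, k):
    return F(1) if k == 0 else a[j, k]

def a_tilde(j, k):
    """Periodic coordinates: a~_{j,k} = a_{j+1,k-1} / (a_{j,k} * a_{j+1,d})."""
    return a_full(j + 1, k - 1) / (a_full(j, k) * a_full(j + 1, d))

# The factors t_j cancel, so the new coordinates are n-periodic in j.
periodic = all(a_tilde(j, k) == a_tilde(j + n, k)
               for j in range(n) for k in range(1, d + 1))
```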
Algebraic-geometric integrability of pentagram maps in 3D {#sect:ag-in} --------------------------------------------------------- The key ingredient responsible for algebraic-geometric integrability of the pentagram maps is a Lax representation with a spectral parameter. It allows one to construct the direct and inverse spectral transforms, which imply that the dynamics of the maps takes place on invariant tori, the Jacobians of the corresponding spectral curves. The proofs in the 3D case for the short-diagonal pentagram map $T_{2,1}$ are presented in detail in [@KS] (see also [@FS] for the 2D case). In dimension 3 we consider two dented pentagram maps $T_1$ and $T_2$, where the diagonal hyperplane $P_k$ is dented on [*different sides*]{}, as opposed to the short-diagonal pentagram map $T_{2,1}$, where the diagonal hyperplane is dented on [*both sides*]{}, see Figure \[fig:T-3D\]. ![Different diagonal planes in 3D: for $T_{2,1}, T_1,$ and $ T_2$.[]{data-label="fig:T-3D"}](t_3d.pdf){width="7in"} The proofs for the maps $T_1$ and $T_2$ follow the same line as in [@KS], so in this section we present only the main statements and outline the necessary changes. For simplicity, in this section we assume that $n$ is odd, which is equivalent to the condition $\gcd(n,d+1)=1$ for $d=3$ (this condition may not appear for a different choice of coordinates, but the results of [@KS] show that the dimensions of tori may depend on the parity of $n$). In this section we consider twisted polygons in the complex space ${{\mathbb {CP}}}^3$. \[thm:comparison\] In dimension 3 the dented pentagram maps on twisted $n$-gons generically are fibered into (Zariski open subsets of) tori of dimension $3\lfloor n/2\rfloor-1$ for $n$ odd and divisible by 3 and of dimension $3\lfloor n/2\rfloor$ for $n$ odd and not divisible by 3. Recall that for the short-diagonal pentagram map in 3D the torus dimension is equal to $3\lfloor n/2\rfloor$ for any odd $n$, see [@KS].
To prove this theorem we need the notion of a spectral curve. Recall that the product of Lax functions $L_j(\lambda),\; 0 \le j \le n-1,$ gives the monodromy operator $T_0(\lambda)$, which determines the spectral function $R(k,\lambda):=\det{(T_0(\lambda) - k \,\text{Id})}$. The zero set of $R(k,\lambda)$ is an algebraic curve in ${{\mathbb C}}^2$. A standard procedure (of adding the infinite points and normalization with a few blow-ups) makes it into a compact Riemann surface, which we call the [*spectral curve*]{} and denote by $\Gamma$. Its genus equals the dimension of the corresponding complex torus, its Jacobian, and Proposition \[prop:Jacobians\] below shows how to find this genus. As is always the case with integrable systems, the spectral curve $\Gamma$ is an invariant of the map and the dynamics takes place on its Jacobian. To describe the dynamics one introduces a [*Floquet-Bloch solution*]{}, which is formed by eigenvectors of the monodromy operator $T_0(\lambda)$. After a certain normalization it becomes a uniquely defined meromorphic vector function $\psi_0$ on the spectral curve $\Gamma$. Other Floquet-Bloch solutions are defined as the vector functions $\psi_{i+1}=L_i...L_1 L_0 \psi_0,\; 0 \le i \le n-1$. Theorem \[thm:comparison\] is based on the study of $\Gamma$ and the Floquet-Bloch solutions, which we summarize in the tables below. In each case, the analysis starts with an evaluation of the spectral function $R(k,\lambda)$. Then we provide Puiseux series for the singular points at $\lambda=0$ and at $\lambda=\infty$. They allow us to find the genus of the spectral curve and the symplectic leaves for the corresponding Krichever-Phong’s universal formula. Then we describe the divisors of the Floquet-Bloch solutions, which are essential for constructing the inverse spectral transform. We start by reproducing the corresponding results for the short-diagonal map $T_{2,1}$ for odd $n$, obtained in [@KS].
We set $q:=\lfloor n/2 \rfloor.$ The tables below contain the information on the Puiseux series of the spectral curve, Casimir functions of the pentagram dynamics, and divisors $(\psi_{i,k})$ of the components of the Floquet-Bloch solutions $\psi_i$ (we refer to [@KS] for more detail). As an example, we show how to use these tables to find the genus of the spectral curve. As before, we assume $n$ to be odd, $n=2q+1$. Recall that for the short-diagonal pentagram map $T_{2,1}$ the genus is $g=3q$ for odd $n$, see [@KS]. \[prop:Jacobians\] The spectral curves for the dented pentagram maps in ${{\mathbb {CP}}}^3$ generically have the genus $g=3q-1$ for $n$ odd and divisible by 3 and the genus $g=3q$ for $n$ odd and not divisible by 3. Let us compute the genus for the dented pentagram map $T_1$. As follows from the definition of the spectral curve $\Gamma$, it is a ramified 4-fold cover of ${{\mathbb {CP}}}^1$, since the $4\times 4$-matrix $\tilde{T}_{i,t}(\lambda)$ (or ${T}_{i,t}(\lambda)$) has 4 eigenvalues. By the Riemann-Hurwitz formula the Euler characteristic of $\Gamma$ is $\chi(\Gamma)=4\chi({{\mathbb {CP}}}^1)-\nu=8-\nu$, where $\nu $ is the ramification index of the covering. In our setting, the index $\nu$ is equal to the sum of orders of the branch points at $\lambda=0$ and $\lambda=\infty$, plus the number $\bar\nu$ of branch points over $\lambda\not=0, \infty$, where we assume the latter points to be all of order $1$ generically. On the other hand, $\chi(\Gamma)=2-2g$, and once we know $\nu$ it allows us to find the genus of the spectral curve $\Gamma$ from the formula $2-2g=8-\nu$. The number $\bar\nu$ of branch points of $\Gamma$ on the $\lambda$-plane equals the number of zeroes of the function $\partial_k R(\lambda,k)$ aside from the singular points $\lambda=0$ or $\infty$. The function $\partial_k R(\lambda,k)$ is meromorphic on $\Gamma$, therefore the number of its zeroes equals the number of its poles. 
One can see that for any $n=2q+1$ the function $\partial_k R(\lambda,k)$ has poles of total order $5n$ at $\lambda=0$, and it has zeroes of total order $2n$ at $\lambda=\infty$. Indeed, substitute the local series for $k$ in $\lambda$ from the table into the expression for $\partial_k R(\lambda,k)$. (E.g., at $O_1$ one has $k={\mathcal O}(1)$. The leading terms of $\partial_k R(\lambda,k)$ for the pole at $\lambda=0$ are $4k^3, -3k^2G_0\lambda^{-q}, 2k J_0\lambda^{-n}, -I_0 \lambda^{-n}$. The last two terms, being of order $\lambda^{-n}$, dominate and give the pole of order $n=2q+1$.) For $n$ odd and not divisible by 3, the corresponding orders of the poles and zeroes of $\partial_k R(\lambda,k)$ on the curve $\Gamma$ are summarized as follows: $$\begin{array}{|c|c||c|c|} \hline \text{ pole } & \text{ order } & \text{ zero } & \text{ order } \\ \hline O_1 & n & W_1 & 0 \\ \hline O_2 & n & W_{2} & 2n\\ \hline O_{3} & 3n & & \\ \hline \end{array}$$ Therefore, for such $n$, the total order of poles is $n+n+3n=5n$, while the total order of zeroes is $0+2n=2n$. Consequently, the number of zeroes of $\partial_k R(\lambda,k)$ at nonsingular points $\lambda\not=\{0,\infty\}$ is $\bar\nu=5n-2n=3n$, and so is the total number of branch points of $\Gamma$ in the finite part of the $(\lambda,k)$ plane (generically, all of them have order $1$). For $n$ odd and not divisible by 3 there is an additional branch point at $\lambda=0$ of order $1$ and another branch point at $\lambda=\infty$ of order $2$ (see the table for $T_1$). Hence the ramification index is $\nu=\bar\nu+3=3n+3=6q+6$. The identity $2-2g=8-\nu$ implies that $g=3q$. For $n$ odd and divisible by 3, $n=6l+3$, one has the same orders of poles $O_j$, $W_1$ is of order zero, while each of the three zeros $W_{2,3,4}$ is of order $4l+2$. Then the total order of zeroes is still $3(4l+2)=12l+6=2n$, and again $\bar\nu=5n-2n=3n$.
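The Riemann-Hurwitz bookkeeping in this proof (the case of $n$ divisible by 3 is completed in the next paragraph) amounts to a few lines of integer arithmetic, which can be checked directly; the sketch below takes the branch orders as quoted above and solves $2-2g=8-\nu$ for $g$.

```python
def genus_T1(n):
    """Riemann-Hurwitz count for the dented map T_1 in 3D: chi = 8 - nu = 2 - 2g."""
    assert n % 2 == 1                  # n is assumed odd throughout
    bar_nu = 5 * n - 2 * n             # branch points away from lambda = 0, infinity
    if n % 3 != 0:
        nu = bar_nu + 1 + 2            # extra branching over lambda = 0 and infinity
    else:
        nu = bar_nu + 1                # extra branching over lambda = 0 only
    assert (nu - 6) % 2 == 0
    return (nu - 6) // 2               # solve 2 - 2g = 8 - nu

for n in (5, 7, 11, 13):               # odd, not divisible by 3: g = 3q
    assert genus_T1(n) == 3 * (n // 2)
for n in (9, 15, 21):                  # odd, divisible by 3: g = 3q - 1
    assert genus_T1(n) == 3 * (n // 2) - 1
```

This reproduces exactly the two genera claimed in Proposition \[prop:Jacobians\].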
However, there is no branch point at $\lambda=\infty$ and hence the ramification index is $\nu=\bar\nu+1=3n+1=6q+4$. Thus for such $n$ we obtain from the identity $2-2g=8-\nu$ that $g=3q-1$. Finally note that $T_1$ and $T^{-1}_2$ are conjugated to each other by means of the involution $\alpha_{\mathbf 1}$, and hence $T_1$ and $T_2$ have the same dimensions of invariant tori. Their spectral curves are related by a change of coordinates furnished by this involution and have the same genus. [$\Box$]{} Dual dented maps {#sect:dual} ================ Properties of dual dented pentagram maps ---------------------------------------- It turns out that the pentagram dynamics of $\hat{T}_m$ has the following simple description. (We consider the geometric picture over ${{\mathbb R}}$.) \[prop:two\_subs\] The dual pentagram map $\hat{T}_m$ in ${{\mathbb {RP}}}^d$ sends the vertex $v_k$ into the intersection of the subspaces of dimensions $m$ and $d-m$ spanned by the vertices $$\hat{T}_m v_k =(v_{k+d-m-1}, ...,v_{k+d-1})\cap (v_{k+d}, ...,v_{k+2d-m})\,.$$ As we discussed above, the point $\hat T_mv_k$ is defined by taking the intersection of all but one consecutive hyperplanes: $$\hat T_mv_k=P_k\cap P_{k+1}\cap ...\cap P_{k+d-m-1}\cap P_{k+d-m+1}\cap ...\cap P_{k+d}.$$ Note that this point $\hat T_mv_k$ can be described as the intersection of the subspace $$L_1^m =P_k\cap P_{k+1}\cap ...\cap P_{k+d-m-1}$$ of dimension $m$ and the subspace $$L_2^{d-m} =P_{k+d-m+1}\cap ...\cap P_{k+d}$$ of dimension $d-m$ in ${{\mathbb {RP}}}^d$. (Here the upper index stands for the dimension.) Since each of the subspaces $L_1$ and $L_2$ is the intersection of several consecutive hyperplanes $P_j$, and each hyperplane $P_j$ is spanned by consecutive vertices, we see that $L_1^m=(v_{k+d-m-1}, ...,v_{k+d-1})$ and $L_2^{d-m}=(v_{k+d}, ...,v_{k+2d-m})$, as required. 
[$\Box$]{} Consider the shift $Sh$ of vertex indices by $d-(m+1)$ to obtain the map $$\widehat{T}_m v_k := (\hat{T}_m \circ Sh)\, v_k=(v_{k}, ...,v_{k+m})\cap (v_{k+m+1}, ...,v_{k+d+1})\,,$$ which we will study from now on. [For $d=3$ and $m=2$ we have the dual pentagram map $\widehat{T}_2$ in ${{\mathbb {RP}}}^3$ defined via intersection of the 2-dimensional plane $L_1=(v_{k}, v_{k+1}, v_{k+2})$ and the line $L_2=(v_{k+3}, v_{k+4})$: $$\widehat{T}_2 v_k =(v_{k}, v_{k+1}, v_{k+2})\cap (v_{k+3}, v_{k+4})\,,$$ see Figure \[fig:dual-map\]. This map is dual to the dented pentagram map $T_m$ for $I=(1,2)$ and $J=(1,1)$. ]{} ![The dual $\widehat T_2$ to the dented pentagram map $T_m$ for $m=2$ in ${{\mathbb {RP}}}^3$.[]{data-label="fig:dual-map"}](figure4.pdf){width="3.5in"} Let $V_k$ be the lifts of the vertices $v_k$ of a twisted $n$-gon from ${{\mathbb {RP}}}^d$ to ${{\mathbb R}}^{d+1}$. We assume that $n$ and $d+1$ are mutually prime and impose the conditions $\det(V_k,..., V_{k+d})=1$ and $ V_{k+n}=MV_k$ for all $k\in {{\mathbb Z}}$ to ensure the uniqueness of the lift. \[Tm-formula\] Given a twisted polygon $(v_k)$ in ${{\mathbb {RP}}}^d$ with coordinates $a_{k,j}$, the image $\widehat{T}_m V_k$ in ${{\mathbb R}}^{d+1}$ under the dual pentagram map is proportional to the vector $$R_k=a_{k,m} V_{k+m} + a_{k,m-1} V_{k+m-1} +...+ a_{k,1} V_{k+1} +(-1)^{d} V_k$$ for all $k\in {{\mathbb Z}}$. Since $$\widehat{T}_m V_k \in (V_{k}, ...,V_{k+m})\cap (V_{k+m+1}, ...,V_{k+d+1})\,,$$ the vector $W_k:=\widehat{T}_m V_k$ can be represented as a linear combination of vectors from either of the groups: $$W_k=\mu_k V_{k}+ ...+\mu_{k+m}V_{k+m}=\nu_{k+m+1} V_{k+m+1}+ ...+\nu_{k+d+1}V_{k+d+1}.$$ Normalize this vector by setting $\nu_{k+d+1}=1$. Now recall that $$V_{k+d+1} = a_{k,d} V_{k+d} + a_{k,d-1} V_{k+d-1} +...+ a_{k,1} V_{k+1} +(-1)^{d} V_k,$$ for $k\in {{\mathbb Z}}$. Replacing $V_{k+d+1}$ by its expression via $V_k,...,V_{k+d}$ we obtain that $\mu_k =(-1)^{d}$, $\mu_{k+1}=a_{k,1}$, ..., $\mu_{k+m}=a_{k,m}$. Thus the vector $$R_k=a_{k,m} V_{k+m} + a_{k,m-1} V_{k+m-1} +...+ a_{k,1} V_{k+1} +(-1)^{d} V_k$$ belongs to both subspaces, and hence spans their intersection. [$\Box$]{} Note that the image $\widehat{T}_m V_k$ under the dual map is $W_k:=\widehat{T}_m V_k=\lambda_k R_k$, where the coefficients $\lambda_k$ are determined by the condition that $\det(W_k,..., W_{k+d})=1$ for all $k\in {{\mathbb Z}}$.

Proof of the scale invariance {#sect:scale_proof}
-----------------------------

In this section we prove scaling invariance in any dimension $d$ for any map $\widehat T_m,\; 1 \le m \le d-1$ dual to the dented pentagram map $T_m$ on twisted $n$-gons in ${{\mathbb {CP}}}^d$, whose hyperplanes $P_k$ are defined by taking consecutive vertices, but skipping the $m$[th]{} vertex. [**(=\[thm:scaling\]$\widehat~$)**]{}\[thm:scaling2\] The dual dented pentagram map $\widehat T_m$ on twisted $n$-gons in ${{\mathbb {CP}}}^d$ is invariant with respect to the following scaling transformations: $$a_{k,1} \to s^{-1}a_{k,1} ,\; a_{k,2} \to s^{-2}a_{k,2} ,\;...\:, \; a_{k,m} \to s^{-m} a_{k,m},$$ $$a_{k,m+1} \to s^{d-m}a_{k,m+1}, \;...\:, \; a_{k,d} \to s a_{k,d}$$ for all $s\in {{\mathbb C}}^*$. The dual dented pentagram map is defined by $W_k:=\widehat{T}_m V_k=\lambda_k R_k$, where the coefficients $\lambda_k$ are determined by the normalization condition: $\det(W_k,..., W_{k+d})=1$ for all $k\in {{\mathbb Z}}$. The transformed coordinates are defined using the difference equation $$W_{k+d+1} = \hat{a}_{k,d} W_{k+d} + \hat{a}_{k,d-1} W_{k+d-1} +...+ \hat{a}_{k,1} W_{k+1}+(-1)^d W_k.$$ The corresponding coefficients $\hat{a}_{k,j}$ can be readily found using Cramer’s rule: $$\label{cram-rule} \hat{a}_{k,j} =\dfrac{\lambda_{k+d+1}}{\lambda_{k+j}} \dfrac{\det(R_k, R_{k+1}, ... ,R_{k+j-1},R_{k+d+1} , R_{k+j+1}, ... ,R_{k+d})}{\det(R_k , R_{k+1}, ... , R_{k+d})}$$ The normalization condition reads as $\lambda_k \lambda_{k+1}...\lambda_{k+d}\det(R_k , R_{k+1}, ... , R_{k+d})=1$ for all $k\in {{\mathbb Z}}$. To prove the theorem, it is sufficient to prove that the determinants in (\[cram-rule\]) are homogeneous in $s$ and to find their degrees of homogeneity. The determinant $\det(R_k , R_{k+1}, ... , R_{k+d})$ has zero degree of homogeneity in $s$. The determinant in the numerator of formula (\[cram-rule\]) has the same degree of homogeneity in $s$ as ${a}_{k,j}$. The theorem immediately follows from this lemma: even if $\lambda_k$ has some nonzero degree of homogeneity in $s$, that degree does not depend on $k$ by the definition of the scaling transformation, and hence it cancels out in the ratio. Therefore the whole expression (\[cram-rule\]) for $\hat{a}_{k,j}$ transforms just like ${a}_{k,j}$, i.e., the dented pentagram map is invariant with respect to the scaling. Proposition \[Tm-formula\] implies that the vector $R_k:=(V_k,V_{k+1},...,V_{k+d}){{\mathbf{r}}}_k$ has an expansion $${{\mathbf{r}}}_k=((-1)^d,a_{k,1},...,a_{k,m},0,...,0)^t$$ in the basis $(V_k,V_{k+1},...,V_{k+d})$, where $t$ stands for the transposition. Note that the vector $R_{k+1}$ has a similar expression ${{\mathbf{r}}}_{k+1}=((-1)^d,a_{k+1,1},...,a_{k+1,m},0,...,0)^t$ in the shifted basis $(V_{k+1},V_{k+2},...,V_{k+d+1})$, but in the initial basis $(V_k,V_{k+1},...,V_{k+d})$ its expansion has the form $N_k{{\mathbf{r}}}_{k+1}$ for the transformation matrix $N_k$ (see its definition in the proof of Theorem \[thm:lax\_anyD\]), since the relation (\[eq:difference\_anyD\]) implies $$(V_{k+1},V_{k+2},...,V_{k+d+1})=(V_k,V_{k+1},...,V_{k+d})N_k\,.$$ Note that formula (\[cram-rule\]) is independent of the choice of basis, and we expand the vectors $R_k$ in the basis $(V_{k+m+1},V_{k+m+2},...,V_{k+m+1+d})$.
It turns out that the corresponding expansions ${{\mathbf{r}}}_k,...,{{\mathbf{r}}}_{k+d+1}$ have a particularly simple form in this basis, which is crucial for the proof. We use hats, $\hat {{\mathbf{r}}}_k,...,\hat {{\mathbf{r}}}_{k+d+1}$, when the vectors $R_k, ..., R_{k+d+1}$ are written in this new basis. Explicitly we obtain $$\begin{aligned} \hat {{\mathbf{r}}}_k &= (N_k N_{k+1} ... N_{k+m})^{-1} {{\mathbf{r}}}_k=(-a_{k,m+1},-a_{k,m+2},...,-a_{k,d},1,0,...,0)^t, \\ \hat {{\mathbf{r}}}_{k+1} &= (N_{k+1} N_{k+2} ... N_{k+m})^{-1} {{\mathbf{r}}}_{k+1}=(0,-a_{k+1,m+1},-a_{k+1,m+2},...,-a_{k+1,d},1,0,...,0)^t,\\ &\ldots\\ \hat {{\mathbf{r}}}_{k+m} &= N_{k+m}^{-1} {{\mathbf{r}}}_{k+m}=(0,...,0,-a_{k+m,m+1},-a_{k+m,m+2},...,-a_{k+m,d},1)^t,\\ \hat {{\mathbf{r}}}_{k+m+1} &= {{\mathbf{r}}}_{k+m+1}=((-1)^d,a_{k+m+1,1},...,a_{k+m+1,m},0,...,0)^t,\\ \hat {{\mathbf{r}}}_{k+m+2} &= N_{k+m+1} {{\mathbf{r}}}_{k+m+2}=(0,(-1)^d,a_{k+m+2,1},...,a_{k+m+2,m},0,...,0)^t,\\ \hat {{\mathbf{r}}}_{k+m+3} &= N_{k+m+1} N_{k+m+2} {{\mathbf{r}}}_{k+m+3}=(0,0,(-1)^d,a_{k+m+3,1},...,a_{k+m+3,m},0,...,0)^t,\\ &\ldots\\ \hat {{\mathbf{r}}}_{k+d+1} &= N_{k+m+1} N_{k+m+2}...N_{k+d} {{\mathbf{r}}}_{k+d+1}=(0,...,0,(-1)^d,a_{k+d+1,1},...,a_{k+d+1,m})^t.\end{aligned}$$ Consider the matrix $\mathbf{M}=(\hat {{\mathbf{r}}}_k,\hat {{\mathbf{r}}}_{k+1},...,\hat {{\mathbf{r}}}_{k+d+1})$ of size $(d+1) \times (d+2)$, which is essentially the matrix of the system of linear equations determining $\hat{a}_{k,j}$. All its entries are homogeneous in $s$. Also note that the determinant $\det(R_k , R_{k+1}, ... , R_{k+d})=\det(\hat {{\mathbf{r}}}_k,\hat {{\mathbf{r}}}_{k+1},...,\hat {{\mathbf{r}}}_{k+d})$ is the minor formed by the first $d+1$ columns, while the determinant in the numerator of formula (\[cram-rule\]) is, up to a sign, the minor formed by crossing out the $(j+1)$th column in $\mathbf{M}$.
For instance, for $d=6$ and $m=2$ this matrix has the form: $$\mathbf{M}= \begin{pmatrix} -a_{k,3} & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ -a_{k,4} & -a_{k+1,3} & 0 & a_{k+3,1} & 1 & 0 & 0 & 0\\ -a_{k,5} & -a_{k+1,4} & -a_{k+2,3} & a_{k+3,2} & a_{k+4,1} & 1 & 0 & 0\\ -a_{k,6} & -a_{k+1,5} & -a_{k+2,4} & 0 & a_{k+4,2} & a_{k+5,1} & 1 & 0\\ 1 & -a_{k+1,6} & -a_{k+2,5} & 0 & 0 & a_{k+5,2} & a_{k+6,1} & 1\\ 0 & 1 & -a_{k+2,6} & 0 & 0 & 0 & a_{k+6,2} & a_{k+7,1}\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & a_{k+7,2} \end{pmatrix}.$$ Let us form the corresponding matrix $\mathbf{D}$ of the same size representing the homogeneity degrees of the entries of $\mathbf{M}$ given by the scaling transformations. One can assign an arbitrary degree to a zero entry, and we do this in such a way that within each column the degrees change uniformly. Note that the degrees change uniformly along the rows as well, except for one simultaneous jump after the $(m+1)$th column. In the above example one has $$\mathbf{D}= \begin{pmatrix} 4 & 5 & 6 & 0 & 1 & 2 & 3 & 4\\ 3 & 4 & 5 & -1 & 0 & 1 & 2 & 3\\ 2 & 3 & 4 & -2 & -1 & 0 & 1 & 2\\ 1 & 2 & 3 & -3 & -2 & -1 & 0 & 1\\ 0 & 1 & 2 & -4 & -3 & -2 & -1 & 0\\ -1 & 0 & 1 & -5 & -4 & -3 & -2 & -1\\ -2 & -1 & 0 & -6 & -5 & -4 & -3 & -2 \end{pmatrix}.$$ Then the determinants of the minors obtained by crossing out any of the columns are homogeneous. Indeed, elementary transformations of the matrix rows, adding to one row another one multiplied by a function homogeneous in $s$, preserve this table of homogeneity degrees. On the other hand, producing the upper-triangular form by such transformations one can easily compute the homogeneity degree of the corresponding minor. For instance, the minor $\det(\hat {{\mathbf{r}}}_k,\hat {{\mathbf{r}}}_{k+1},...,\hat {{\mathbf{r}}}_{k+d})$ formed by the first $d+1$ columns has zero degree. Indeed, we need to find the trace of the corresponding $(d+1)\times (d+1)$ degree matrix.
It contains $m+1$ columns with the diagonal entries of degree $d-m$, as well as $d-m$ columns with the diagonal entries of degree $-m-1$, i.e., the total degree is $(d-m)\cdot (m+1)+ (-m-1)\cdot(d-m)=0$. (In the example above it is $4\cdot 3+(-3)\cdot 4=0$ on the diagonal for the first 7 columns.) Similarly one finds the degree of any $j$th minor of the matrix $\mathbf{M}$ for arbitrary $d$ and $m$ by calculating the difference of the degrees for the diagonal $(j,j)$-entry and the $(j,d+2)$-entry in the matrix $\mathbf{D}$. [$\Box$]{} [The idea of using Cramer’s rule and a simple form of the vectors $R_k$ was suggested in [@Beffa_scale] to prove the scale invariance of the short-diagonal pentagram maps $T_{2,1}$. For the maps $\widehat{T}_m$ we employ this approach along with passing to the dual maps and the above “retroactive” basis change. This choice of basis in the proof of Theorem \[thm:scaling2\] also allows one to obtain explicit formulas for the pentagram map via the matrix $\mathbf{M}$. ]{}

Continuous limit of general pentagram maps {#sect:cont_lim}
==========================================

Consider the continuous limit of the dented pentagram maps on $n$-gons as $n\to\infty$. In the limit a generic twisted $n$-gon becomes a smooth non-degenerate quasi-periodic curve $\gamma(x)$ in ${{\mathbb {RP}}}^d$. Its lift $G(x)$ to ${{\mathbb R}}^{d+1}$ is defined by the conditions that the components of the vector function $G(x)=(G_1,...,G_{d+1})(x)$ provide the homogeneous coordinates for $\gamma(x)=(G_1:...:G_{d+1})(x)$ in ${{\mathbb {RP}}}^d$ and $\det(G(x),G'(x),...,G^{(d)}(x))=1$ for all $x\in {{\mathbb R}}$. Furthermore, $G(x+2\pi)=MG(x)$ for a given $M\in SL_{d+1}({{\mathbb R}})$. Then $G(x)$ satisfies the linear differential equation of order $d+1$: $$G^{(d+1)}+u_{d-1}(x)G^{(d-1)}+...+u_1(x)G'+u_0(x)G=0$$ with periodic coefficients $u_i(x)$, which is a continuous limit of the difference equation (\[eq:difference\_anyD\]). Here $'$ stands for $d/dx$.
Fix a small $\epsilon>0$ and let $I$ be any $(d-1)$-tuple $I=(i_1,...,i_{d-1})$ of positive integers. For the $I$-diagonal hyperplane $$P_k:=(v_k, v_{k+i_1}, v_{k+i_1+i_2},..., v_{k+i_1+...+i_{d-1}})$$ its continuous analog is the hyperplane $P_\epsilon(x)$ passing through $d$ points $\gamma(x),\gamma(x+i_1\epsilon),...,\gamma(x+(i_1+...+i_{d-1})\epsilon)$ of the curve $\gamma$. In what follows we are going to make a parameter shift in $x$ (equivalent to a shift of indices) and define $P_\epsilon(x):=(\gamma(x+k_0\epsilon),\gamma(x+k_1\epsilon),...,\gamma(x+k_{d-1}\epsilon))$ for any real $k_0<k_1<...<k_{d-1}$ such that $\sum_l k_l=0$. Let $\ell_\epsilon (x)$ be the envelope curve for the family of hyperplanes $P_\epsilon(x)$ for a fixed $\epsilon$. The envelope condition means that $P_\epsilon(x)$ are the osculating hyperplanes of the curve $\ell_\epsilon (x)$, that is, the point $\ell_\epsilon (x)$ belongs to the hyperplane $P_\epsilon(x)$, while the vector-derivatives $\ell'_\epsilon (x),...,\ell^{(d-1)}_\epsilon (x)$ span this hyperplane for each $x$. This means that the lift $L_\epsilon (x)$ of $\ell_\epsilon (x)$ to ${{\mathbb R}}^{d+1}$ satisfies the system of $d$ equations: $$\det ( G(x+k_0\epsilon), ..., G(x+k_{d-1}\epsilon), L^{(j)}_\epsilon(x) )=0,\quad j=0,...,d-1.$$ A continuous limit of the pentagram map is defined as the evolution of the curve $\gamma$ in the direction of the envelope $\ell_\epsilon$, as $\epsilon$ changes. Namely, one can show that the expansion of $L_\epsilon(x)$ has the form $$L_\epsilon(x)=G(x)+\epsilon^2 B(x)+{\mathcal O} (\epsilon^3)\,,$$ where there is no term linear in $\epsilon$ due to the condition $\sum_l k_l=0$.
It satisfies the family of differential equations: $$L_\epsilon^{(d+1)}+u_{d-1,{\epsilon}}(x)L_\epsilon^{(d-1)}+...+u_{1,{\epsilon}}(x)L_\epsilon'+u_{0,{\epsilon}}(x)L_\epsilon=0, \text{ where } u_{j,0}(x)=u_{j}(x).$$ Then the corresponding expansion of the coefficients $u_{j,{\epsilon}}(x)$ as $u_{j,{\epsilon}}(x)=u_{j}(x)+{\epsilon}^2w_j(x)+{\mathcal O}({\epsilon}^3)$, defines the continuous limit of the pentagram map as the system of evolution differential equations $du_j(x)/dt\, =w_j(x)$ for $j=0,...,d-1$. (This definition of limit assumes that we have the standard tuple $J={\mathbf 1}:=(1,...,1)$.) [**(Continuous limit)**]{}\[thm:cont\] The continuous limit of any generalized pentagram map $T_{I,J}$ for any $I=(i_1,...,i_{d-1})$ and $J={\mathbf 1}$ (and in particular, of any dented pentagram map $T_m$) in dimension $d$ defined by the system $du_j(x)/dt\, =w_j(x), \, j=0,...,d-1$ for $x\in S^1$ is the $(2, d+1)$-KdV flow of the Adler-Gelfand-Dickey hierarchy on the circle. Recall that the $(n, d+1)$-KdV flow is defined on linear differential operators $L= \partial^{d+1} + u_{d-1}(x) \partial^{d-1} + u_{d-2}(x) \partial^{d-2} + ...+ u_1(x) \partial + u_0(x)$ of order $d+1$ with periodic coefficients $u_j(x)$, where $\partial^{k}$ stands for $d^k/dx^k$. One can define the fractional power $L^{n/{d+1}}$ as a pseudo-differential operator for any positive integer $n$ and take its pure differential part $Q_n :=(L^{n/{d+1}})_+$. In particular, for $n=2$ one has $Q_2= \partial^2 + \dfrac{2}{d+1}u_{d-1}(x) $. Then the $(n, d+1)$-KdV equation is the evolution equation on (the coefficients of) $L$ given by $dL/dt= [Q_n,L] $, see [@Adler]. For $d=2$ the (2,3)-KdV equation is the classical Boussinesq equation on the circle: $u_{tt}+2(u^2)_{xx}+u_{xxxx}=0$, which appears as the continuous limit of the 2D pentagram map [@OST99]. 
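The statement that $dL/dt=[Q_n,L]$ defines an evolution of the coefficients can be made concrete for $d=2$, where $L=\partial^3+u_1\partial+u_0$ and $Q_2=\partial^2+\frac{2}{3}u_1$. The sympy sketch below (with $u:=u_1$, $v:=u_0$) checks that the commutator $[Q_2,L]$, a priori of order 4, collapses to a differential operator of order at most one, whose two coefficients then give $u_t$ and $v_t$; the component form written in the code is our own hand computation, verified by the assertion, and eliminating $v$ from it recovers a Boussinesq-type equation up to rescaling.

```python
import sympy as sp

x = sp.symbols('x')
u, v, f = sp.Function('u'), sp.Function('v'), sp.Function('f')
F = f(x)

def L(e):
    # L = d^3 + u d + v, i.e., the operator of order d + 1 = 3
    return sp.diff(e, x, 3) + u(x) * sp.diff(e, x) + v(x) * e

def Q2(e):
    # Q2 = (L^(2/3))_+ = d^2 + (2/3) u
    return sp.diff(e, x, 2) + sp.Rational(2, 3) * u(x) * e

# [Q2, L] applied to a test function f
comm = sp.expand(Q2(L(F)) - L(Q2(F)))

# hand-computed first-order form of [Q2, L]: u_t * d + v_t
u_t = 2 * sp.diff(v(x), x) - sp.diff(u(x), x, 2)
v_t = (sp.diff(v(x), x, 2) - sp.Rational(2, 3) * sp.diff(u(x), x, 3)
       - sp.Rational(2, 3) * u(x) * sp.diff(u(x), x))

# all terms with f'', f''', ... cancel, so [Q2, L] has order <= 1
assert sp.expand(comm - u_t * sp.diff(F, x) - v_t * F) == 0
```

The cancellation of the higher-order terms is exactly what makes $dL/dt=[Q_2,L]$ a closed system of evolution equations on $(u_1,u_0)$.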
By expanding in the parameter $\epsilon$ one can show that $L_\epsilon(x)$ has the form $L_\epsilon(x)=G(x)+\epsilon^2 C_{d,I}\left(\partial^2+ \dfrac{2}{d+1}u_{d-1}(x) \right)G(x)+{\mathcal O} (\epsilon^3)$ as $ {\epsilon}\to 0$, for a certain non-zero constant $C_{d,I}$, cf. Theorem 4.3 in [@KS]. We obtain the following evolution of the curve $G(x)$ given by the $\epsilon^2 $-term of this expansion: ${dG}/{dt} = \left(\partial^2+ \dfrac{2}{d+1}u_{d-1}\right)G$, or, what is the same, ${d}G/{dt} =Q_2G$. We would like to find the evolution of the operator $L$ that corresponds to it. For any $t$, the curve $G$ and the operator $L$ are related by the differential equation $LG=0$ of order $d+1$. Consequently, $d(LG)/dt=({d}L/{dt}) G + L ({d}G/{dt})=0.$ Now note that if the operator $L$ satisfies the $(2,d+1)$-KdV equation ${d}L/{dt}=[Q_2, L]:=Q_2L-LQ_2,$ and $G$ satisfies ${d}G/{dt} =Q_2G$, we have the identity: $$\frac{dL}{dt} G + L \frac{dG}{dt}=(Q_2L-LQ_2) G + L Q_2G= Q_2LG=0\,.$$ By virtue of the uniqueness of the linear differential operator $L$ of order $d+1$ with a given fundamental set of solutions $G$, we obtain that the evolution of $L$ is indeed described by the $(2,d+1)$-KdV equation. [$\Box$]{}

Corrugated polygons and dented diagonals {#S:corrug}
========================================

Pentagram maps for corrugated polygons
--------------------------------------

In [@GSTV] pentagram maps were defined on spaces of corrugated polygons in ${{\mathbb {RP}}}^d$. These maps turned out to be integrable, and the corresponding Poisson structures are related to many interesting structures on such polygons. Below we describe how one can view integrability in the corrugated case as a particular case of the dented maps. Let $(v_k)$ be a generic twisted $n$-gon in ${{\mathbb {RP}}}^d$ (here “generic” means that every $d+1$ consecutive vertices do not lie in a proper projective subspace).
The space of equivalence classes of generic twisted $n$-gons in ${{\mathbb {RP}}}^d$ has dimension $nd$ and is denoted by ${\mathcal P}_n$. \[def:2-corrugated\] A twisted polygon $(v_k)$ in ${{\mathbb {RP}}}^d$ is corrugated if for every $k\in {{\mathbb Z}}$ the vertices $v_k, v_{k+1}, v_{k+d},$ and $v_{k+d+1}$ span a projective two-dimensional plane. The projective group preserves the space of corrugated polygons. Denote by ${\mathcal P}_n^{cor}\subset {\mathcal P}_n$ the space of projective equivalence classes of generic corrugated $n$-gons. One can show that such polygons form a submanifold of dimension $2n$ in the $nd$-dimensional space ${\mathcal P}_n$. The consecutive $d$-diagonals (the diagonal lines connecting $v_k$ and $v_{k+d}$) of a corrugated polygon intersect pairwise, and the intersection points form the vertices of a new corrugated polygon: $T_{cor} v_k:=(v_k,v_{k+d})\cap(v_{k+1},v_{k+d+1})$. This gives the definition of the pentagram map on (classes of projectively equivalent) corrugated polygons $T_{cor}: {\mathcal P}_n^{cor}\to{\mathcal P}_n^{cor}$, see [@GSTV]. In 2D one has ${\mathcal P}_n^{cor}= {\mathcal P}_n$ and this gives the definition of the classical pentagram map on ${\mathcal P}_n$. \[prop:corr\_well\_def\][[@GSTV]]{} The pentagram map $T_{cor}$ is well defined on ${\mathcal P}_n^{cor}$, i.e., it sends a corrugated polygon to a corrugated one. The image of the pentagram map $T_{cor}$ is defined as the intersection of the diagonals in the quadrilateral $(v_{k-1}, v_k, v_{k+d-1}, v_{k+d})$. Consider the diagonal $(v_k, v_{k+d})$. It contains both vertices $T_{cor}v_{k-1}$ and $T_{cor}v_{k}$, as they are intersections of this diagonal with the diagonals $(v_{k-1}, v_{k+d-1})$ and $(v_{k+1}, v_{k+d+1})$ respectively. Similarly, both vertices $T_{cor}v_{k-d-1}$ and $T_{cor}v_{k-d}$ belong to the diagonal $(v_{k-d}, v_{k})$. 
Hence we obtain two pairs of new vertices $T_{cor}v_{k-d-1}, T_{cor}v_{k-d} $ and $T_{cor}v_{k-1}, T_{cor}v_{k}$ for each $k\in{{\mathbb Z}}$ lying in one and the same 2D plane passing through the old vertices $(v_{k-d}, v_{k}, v_{k+d})$. Note also that the indices of these new pairs differ by $d$. Thus they satisfy the corrugated condition. [$\Box$]{} \[thm:restr\_to\_corr\] The pentagram map $T_{cor}: {\mathcal P}_n^{cor}\to {\mathcal P}_n^{cor}$ is a restriction of the dented pentagram map $T_m: {\mathcal P}_n\to {\mathcal P}_n$ for any $m=1,..., d-1$ from generic $n$-gons ${\mathcal P}_n$ in ${{\mathbb {RP}}}^d$ to corrugated ones ${\mathcal P}_n^{cor}$ (or differs from it by a shift in vertex indices). In order to prove this theorem we first show that the definition of a corrugated polygon in ${{\mathbb {RP}}}^d$ is equivalent to the following: \[prop:equiv\_2-corrug\] Fix any $\ell=2, 3, ..., d-1$. A generic twisted polygon $(v_k)$ is corrugated if and only if the $2\ell$ vertices $v_{k-(\ell -1)}, ... , v_{k}$ and $v_{k+d-(\ell-1)}, ..., v_{k+d}$ span a projective $\ell$-space for every $k\in {{\mathbb Z}}$. The case $\ell=2$ is the definition of a corrugated polygon. Denote the above projective $\ell$-dimensional space by $Q^\ell_k=(v_{k-(\ell -1)}, ... , v_{k},v_{k+d-(\ell-1)}, ..., v_{k+d})$. Then for any $\ell>2$ the intersection of the $\ell$-spaces $Q^\ell_k$ and $Q^\ell_{k+1}$ is spanned by the vertices $(v_{k-(\ell -2)}, ... , v_{k},v_{k+d-(\ell-2)}, ..., v_{k+d})$ and has dimension $\ell-1$, i.e., it is the space $Q^{\ell-1}_k=Q^\ell_k\cap Q^\ell_{k+1}$. This allows one to derive the condition on $(\ell-1)$-dimensional spaces from the condition on $\ell$-dimensional spaces, and hence reduce everything to the case $\ell=2$. Conversely, start with the $(\ell-1)$-dimensional space $Q^{\ell-1}_k$ and consider the space $Q^{\ell}_k$ containing $Q^{\ell-1}_k$, as well as the vertices $v_{k-(\ell -1)}$ and $v_{k+d-(\ell-1)}$.
We claim that after the addition of two extra vertices the new space has dimension $\ell$, rather than $\ell+1$. Indeed, the 4 vertices $v_{k-(\ell -1)}, v_{k-(\ell -2)}, v_{k+d-(\ell-1)}, v_{k+d-(\ell-2)}$ lie in one and the same two-dimensional plane according to the above reduction. Thus adding two vertices $v_{k-(\ell -1)}$ and $v_{k+d-(\ell-1)}$ to the space $Q^{\ell-1}_k$, which already contains $v_{k-(\ell -2)}$ and $v_{k+d-(\ell-2)}$, boils down to adding one more projective direction, because of the corrugated condition, and thus $Q^{\ell}_k$ has dimension $\ell$ for all $k\in {{\mathbb Z}}$. [$\Box$]{} Now we take a generic twisted $n$-gon in ${{\mathbb {RP}}}^d$ and consider the dented $(d-1)$-dimensional diagonal $P_k$ corresponding to $m=1$ and $I=(2,1,...,1)$, i.e., the hyperplane passing through the following $d$ vertices: $P_k=(v_k, v_{k+2}, v_{k+3},..., v_{k+d})$. For a corrugated $n$-gon in ${{\mathbb {RP}}}^d$, according to the proposition above, such a diagonal hyperplane will also pass through the vertices $v_{k-(\ell -1)}, ... , v_{k-1}$, i.e., it coincides with the space $Q^\ell_k$ for $\ell=d-1$: $$P_k=Q^{d-1}_k=(v_{k-(d-2)}, ... , v_{k},v_{k+2}, ..., v_{k+d})\,,$$ see Figure \[fig:corrugated\]. ![The diagonal hyperplane $P_k$ coincides with the hyperplane $Q^{3}_k$ in ${{\mathbb {RP}}}^4$. Definitions of the corrugated pentagram map and its dual.[]{data-label="fig:corrugated"}](figure3.pdf){width="6in"} Now the intersection of $d$ consecutive hyperplanes $P_k\cap P_{k+1}\cap...\cap P_{k+d-1}$, by the repeated use of the relation $Q^{\ell-1}_k=Q^\ell_k\cap Q^\ell_{k+1}$ for $\ell=d-1, d-2, ..., 3$, reduces to the intersection of $Q^2_k\cap Q^2_{k+1}\cap Q^2_{k+2}$. The latter is the intersection of the diagonals in $Q^2_{k+1}$, i.e., $(v_{k+1},v_{k+d+1})\cap(v_{k+2},v_{k+d+2})=:T_{cor}v_{k+1}$. 
Thus the definition of the dented pentagram map $T_m$ for $m=1$ upon restriction to corrugated polygons reduces to the definition of the pentagram map $T_{cor}$ on the latter (modulo shifts). For any $m=1,...,d-1$ we consider the dented diagonal hyperplane $$P_{k-m+1}=(v_{k-m+1}, ..., v_k, v_{k+2}, v_{k+3},..., v_{k+d-m+1})\,.$$ For corrugated $n$-gons in ${{\mathbb {RP}}}^d$ this diagonal hyperplane coincides with the space $Q^{\ell}_{k}$ for $\ell=d-1$ since it passes through all vertices from $v_{k-(d-2)}$ to $v_{k+d}$ with the exception of $v_{k+1}$: $$P_{k-m+1}=Q^{d-1}_k=(v_{k-(d-2)}, ... , v_{k},v_{k+2}, ..., v_{k+d})\,.$$ Thus the corresponding intersection of $d$ consecutive dented diagonal hyperplanes starting with $P_{k-m+1}$ will differ only by a shift of indices from the one for $m=1$. [$\Box$]{} \[cor:corr\_coincide\] For dented pentagram maps $T_m$ with different values of $m$, their restrictions from generic to corrugated twisted polygons in ${{\mathbb {RP}}}^d$ coincide modulo an index shift. Note that the inverse dented pentagram map $\widehat T_m$ upon restriction to corrugated polygons also coincides with the inverse corrugated pentagram map $\widehat T_{cor}$. The latter is defined as follows: for a corrugated polygon $(v_k)$ in ${{\mathbb {RP}}}^d$ and every $k\in {{\mathbb Z}}$ consider the two-dimensional plane spanned by the vertices $v_{k-1}, v_{k}, v_{k+d-1},$ and $v_{k+d}$. In this plane take the intersection of (the continuations of) the sides of the polygon, i.e., the lines $(v_{k-1}, v_{k})$ and $(v_{k+d-1}, v_{k+d})$, and set $$\widehat T_{cor}v_k:=(v_{k-1}, v_{k})\cap(v_{k+d-1}, v_{k+d})\,.$$ The continuous limit of the pentagram map $T_{cor}$ for corrugated polygons in ${{\mathbb {RP}}}^d$ is a restriction of the $(2, d+1)$-KdV equation. The continuous limit for dented maps is found by means of the general procedure described in Section \[sect:cont\_lim\].
The restriction of the universal $(2, d+1)$-KdV system from generic to corrugated curves might lead to other interesting equations on the submanifold. (This phenomenon could be similar to the KP hierarchy on generic pseudo-differential operators $\partial+\sum_{j\ge 1} u_j(x)\partial^{-j}$, which when restricted to operators of the form $\partial+\psi(x)\partial^{-1}\psi^*(x)$ gives the NLS equation, see [@Kr].) \[rem:map\_corr\] One application of corrugated polygons is related to the fact that there is a natural map from generic polygons in 2D to corrugated polygons in any dimension (see [@GSTV] and Remark \[corrug-coord\] below), which generically is a local diffeomorphism. Furthermore, this map commutes with the pentagram map, i.e., it takes deeper diagonals, which join vertices $v_i$ and $v_{i+p}$, in 2D polygons to the intersecting diagonals of corrugated polygons in ${{\mathbb {RP}}}^p$. This way one obtains a representation of the deeper diagonal pentagram map $T_{p,1}$ in ${{\mathbb {RP}}}^2$ via the corrugated pentagram map in higher dimensions, see Figure \[fig:T31-2D\]. ![Deeper pentagram map $T_{3,1}$ in 2D.[]{data-label="fig:T31-2D"}](t31_2d.pdf){width="3.1in"} As a corollary one obtains that the deeper diagonal pentagram map $T_{p,1}$ in ${{\mathbb {RP}}}^2$ is also an integrable system [@GSTV]. Indeed, integrability of corrugated pentagram maps implies integrability of the pentagram map for deeper diagonals in 2D, since first integrals and other structures for the corrugated pentagram map in higher dimensions descend to those for the pentagram map on generic polygons in 2D thanks to the equivariant local diffeomorphism between them. Explicit formulas for the invariants seem to be complicated because of a non-trivial relation between coordinates for polygons in ${{\mathbb {RP}}}^2$ and in ${{\mathbb {RP}}}^p$.
Integrability for corrugated polygons {#corr3D} ------------------------------------- Generally speaking, the algebraic-geometric integrability of the pentagram map on the space ${\mathcal P}_n$ (see Theorem \[thm:comparison\] for the 3D case) would not necessarily imply the algebraic-geometric integrability of a subsystem, the pentagram map on the subspace ${\mathcal P}_n^{cor}$ of corrugated polygons. However, a Lax representation with a spectral parameter for corrugated polygons naturally follows from that for generic ones. In this section, we perform its analysis in the 3D case (similarly to what has been done in Theorem \[thm:comparison\]), which implies the algebraic-geometric integrability for corrugated polygons in the 3D case. It exhibits some interesting features: the dynamics on the Jacobian depends on whether $n$ is a multiple of $3$, and if it is, it resembles a “staircase”, but with shifts in $3$ different directions. We also establish the equivalence of our Lax representation with that found in [@GSTV]. For simplicity, we assume that $gcd(n,d+1)=1$ (see Remark \[diff-eq\]). In the 3D case this just means that $n$ has to be odd. Note that this condition is technical, as one can dispense with it by using the coordinates introduced in Section \[nonprimes\].[^3] \[corrug-coord\] The coordinates on the space ${\mathcal P}_n^{cor}$ may be introduced using the same difference equation (\[eq:difference\_anyD\]) for $gcd(n,d+1)=1$.
Since the corrugated condition means that the vectors $V_j, V_{j+1}, V_{j+d}$ and $ V_{j+d+1}$ are linearly dependent for all $j \in {{\mathbb Z}}$, the subset ${\mathcal P}_n^{cor}$ of corrugated polygons is singled out in the space of generic twisted polygons ${\mathcal P}_n$ by the relations $a_{j,l}=0, \; 2 \le l \le d-1$ in equation (\[eq:difference\_anyD\]), i.e., they are defined by the equations $$\label{eq:2D_corr} V_{j+d+1} = a_{j,d} V_{j+d} + a_{j,1} V_{j+1} +(-1)^{d} V_j,\quad j \in {{\mathbb Z}}\,.$$ Furthermore, note that this relation also allows one to define a map $\psi$ from generic twisted $n$-gons in ${{\mathbb {RP}}}^2$ to corrugated ones in ${{\mathbb {RP}}}^d$ for any dimension $d$, see [@GSTV]. Indeed, consider a lift of vertices $v_j \in {{\mathbb {RP}}}^2$ to vectors $V_j\in {{\mathbb R}}^3$ so that they satisfy the relations for all $ j \in {{\mathbb Z}}$. Note that for $d\ge 3$ this is a nonstandard normalization of the lifts $V_j\in {{\mathbb R}}^3$, different from the one given in equation (\[eq:difference\_anyD\]) for $d=2$, since the vectors in the right-hand side are not consecutive. Now by considering solutions $V_j\in {{\mathbb R}}^{d+1}$ of these linear relations modulo the natural action of $SL_{d+1}({{\mathbb R}})$ we obtain a polygon in the projective space ${{\mathbb {RP}}}^d$ satisfying the corrugated condition. The constructed map $\psi$ commutes with the pentagram maps (since all operations are projectively invariant) and is a local diffeomorphism. Observe that the subset ${\mathcal P}_n^{cor} \subset {\mathcal P}_n$ has dimension $2n$. We now return to working over ${{\mathbb C}}$. The above restriction $gcd(n,d+1)=1$ allows one to define a Lax function in a straightforward way. Here is an analogue of Theorem \[thm:comparison\]: In dimension $3$ the subspace ${\mathcal P}_n^{cor} \subset {\mathcal P}_n$ is generically fibered into (Zariski open subsets of) tori of dimension $g=n-3$ if $n=3l$, and $g=n-1$ otherwise.
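As a sanity check (ours, not the paper's), one can iterate the difference equation (\[eq:2D\_corr\]) numerically from a generic initial basis and verify the corrugated condition: every quadruple of lifts $V_j, V_{j+1}, V_{j+d}, V_{j+d+1}$ spans only a $3$-dimensional linear subspace, i.e., a $2$-plane in ${{\mathbb {RP}}}^d$.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrugated_lift(d, a1, ad):
    """Iterate the corrugated difference equation
        V_{j+d+1} = a_{j,d} V_{j+d} + a_{j,1} V_{j+1} + (-1)^d V_j
    starting from the standard basis of R^{d+1}.  Each new lift is
    normalized, which only rescales the projective vertex."""
    V = list(np.eye(d + 1))
    for j in range(len(a1)):
        W = ad[j] * V[j + d] + a1[j] * V[j + 1] + (-1) ** d * V[j]
        V.append(W / np.linalg.norm(W))
    return V

d, n_steps = 4, 20
a1 = rng.normal(size=n_steps)
ad = rng.normal(size=n_steps)
V = corrugated_lift(d, a1, ad)

# Corrugated condition: v_j, v_{j+1}, v_{j+d}, v_{j+d+1} lie in a 2-plane,
# i.e. their four lifts span a 3-dimensional linear subspace of R^{d+1}.
ranks = [np.linalg.matrix_rank(
             np.stack([V[j], V[j + 1], V[j + d], V[j + d + 1]]), tol=1e-8)
         for j in range(n_steps)]
```

With generic (random) coefficients the rank comes out as $3$ for every $j$, never the generic value $4$.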
The Lax function for the map $T_2$ restricted to the space ${\mathcal P}_n^{cor}$ is: $$L_{j,t}^{-1}(\lambda) = \begin{pmatrix} 0 & 0 & 0 & -1\\ 1 & 0 & 0 & a_{j,1}\\ 0 & 1 & 0 & 0\\ 0 & 0 & \lambda & a_{j,3} \end{pmatrix}.$$ Now the spectral function has the form $$R(k,\lambda) = k^4 - \dfrac{k^3}{\lambda^{\lfloor n/3 \rfloor}} \left( \sum_{j=0}^{\lfloor n/3 \rfloor} G_j \lambda^j \right) + \dfrac{k^2}{\lambda^{\lfloor 2n/3 \rfloor}} \left( \sum_{j=0}^{N_0} J_j \lambda^j \right) - \dfrac{k}{\lambda^n} \left( \sum_{j=0}^{\lfloor n/3 \rfloor} I_j \lambda^j \right) +\dfrac{1}{\lambda^n}$$ where $N_0=\lfloor n/3 \rfloor-\lfloor gcd(n-1,3)/3 \rfloor$. One can show that $G_{\lfloor n/3 \rfloor}=\prod_{j=0}^{n-1} a_{j,1}$ and $I_0=\prod_{j=0}^{n-1} a_{j,3}$. Below we summarize the relevant computations for the spectral functions, Casimirs, and the Floquet-Bloch solutions, cf. Section \[sect:ag-in\]. For $n$ not divisible by $3$, the expansions of the Floquet-Bloch solutions at $\lambda=0$ and $\lambda=\infty$ (with $l=\lfloor n/3 \rfloor$) are $$O_1: k_1 = 1/I_0 + {\mathcal O}(\lambda)\,, \qquad W_1: k_1=G_l(1+{\mathcal O}(\lambda^{-1}))\,,$$ $$O_2: k_{2,3,4}= I_0^{1/3} \lambda^{-n/3} (1+{\mathcal O}(\lambda^{1/3}))\,, \qquad W_2: k_{2,3,4}=G_l^{-1/3}\lambda^{-n/3}(1+ {\mathcal O}(\lambda^{-1/3}))\,.$$ For $n=3l$ the expansions are $$O_1: k_1 = 1/I_0 + {\mathcal O}(\lambda)\,, \qquad W_1: k_1=G_l(1+{\mathcal O}(\lambda^{-1}))\,,$$ $$O_{2,3,4}: k_{2,3,4}= c_1 \lambda^{-l} (1+{\mathcal O}(\lambda))\,, \qquad W_{2,3,4}: k_{2,3,4}=c_2 \lambda^{-l}(1+ {\mathcal O}(\lambda^{-1}))\,,$$ where $c_1^3-c_1^2 G_0+c_1 J_0-I_0=0$ and $c_2^3 G_l-c_2^2 J_l+ c_2 I_l-1=0$. The genus of the spectral curves found above exhibits the dichotomy $g=n-3$ or $g=n-1$, according to the divisibility of $n$ by 3.
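A quick symbolic check (a sketch of ours, not from the paper): the determinant of the matrix $L_{j,t}^{-1}(\lambda)$ above equals $\lambda$, independently of $a_{j,1}$ and $a_{j,3}$. This is consistent with the constant term $1/\lambda^n$ of $R(k,\lambda)$, since the monodromy is an ordered product of $n$ factors $L_{j,t}$, each of determinant $1/\lambda$.

```python
import sympy as sp

lam, a1, a3 = sp.symbols("lambda a_j1 a_j3")

# Inverse Lax matrix of T_2 on corrugated polygons in 3D, as in the text.
L_inv = sp.Matrix([
    [0, 0, 0, -1],
    [1, 0, 0, a1],
    [0, 1, 0, 0],
    [0, 0, lam, a3],
])

det_L_inv = sp.factor(L_inv.det())  # the coefficients a_{j,1}, a_{j,3} drop out
```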
[$\Box$]{} It is worth noting that dimensions $g=n-3$ or $g=n-1$ of the Jacobians, and hence of the invariant tori, are consistent in the following sense: - the sum of the genus of the spectral curve (which equals the dimension of its Jacobian) and the number of the first integrals equals $2n$, i.e., the dimension of the system; - the number of the first integrals minus the number of the Casimirs equals the genus of the curve. The latter also suggests that Krichever-Phong’s universal formula provides a symplectic form for this system. Also note that Lax functions corresponding to the maps $T_1$ and $T_2$ restricted to the subspace ${\mathcal P}_n^{cor}$ lead to the same spectral curve in 3D, as one can check directly. This, in turn, is consistent with Corollary \[cor:corr\_coincide\]. In any dimension the Lax function for the corrugated pentagram map $T_1$ in ${{\mathbb {CP}}}^d$ for $gcd(n,d+1)=1$ is $$L_{j,t}(\lambda) = \left( \begin{array}{cccc|c} 0 & 0 & \cdots & 0 &(-1)^d\\ \cline{1-5} \multicolumn{4}{c|}{\multirow{5}*{$D(\lambda)$}} & a_{j,1}\\ &&&& 0\\ &&&& \cdots\\ &&&& 0\\ &&&& a_{j,d}\\ \end{array} \right)^{-1},$$ with the diagonal $(d \times d)$-matrix $D(\lambda)={\rm diag}(1,\lambda, 1,...1)$. It is equivalent to the one found in [@GSTV]. The above Lax form follows from Remark \[corrug-coord\] and Theorem \[thm:lax\_anyD\]. To show the equivalence we define the gauge matrix as follows: $$g_j = \left( \begin{array}{cccc|c} 0 & 0 & \cdots & 0 &(-1)^d\\ \cline{1-5} \multicolumn{4}{c|}{\multirow{4}*{$C_j$}} & 0\\ &&&& \cdots\\ &&&& 0\\ &&&& a_{j,d}\\ \end{array} \right),$$ where $C_j$ is the $(d \times d)$ diagonal matrix, and its diagonal entries are equal to $(C_j)_{ll}=\prod_{k=0}^{d-l} a_{j-k,d},\; $ $1 \le l \le d$. 
One can check that $$\tilde{L}_{j,t}(\lambda) = \dfrac{g_j^{-1} L_{j,t}^{-1} g_{j+1}}{a_{j+1,d}} = \left( \begin{array}{cccccc} 0 & 0 & 0 & \cdots & x_j & x_j+y_j\\ \lambda & 0 & 0 & \cdots & 0 & 0\\ 0 & 1 & 0 & \cdots & 0 & 0\\ 0 & 0 & 1 & \cdots & 0 & 0\\ \multicolumn{6}{c}{\cdots}\\ 0 & 0 & 0 & \cdots & 1 & 1\\ \end{array} \right),$$ $$\text{ where } x_j = \dfrac{a_{j,1}}{\prod_{l=0}^{d-1} a_{j-l,d}}, \quad y_j = \dfrac{1}{ \prod_{l=-1}^{d-1} a_{j-l,d} },$$ which agrees with formula (10) in [@GSTV]. [$\Box$]{} Note that the corresponding corrugated pentagram map has a cluster interpretation [@GSTV] (see also [@Glick] for the 2D case). On the other hand, it is a restriction of the dented pentagram map, which brings one to the following question: Is it possible to realize the dented pentagram map $T_m$ on [generic twisted polygons]{} in ${{\mathbb {P}}}^d$ as a sequence of cluster transformations? We address this problem in a future publication. Applications: integrability of pentagram maps for deeper dented diagonals {#sect:appl} ========================================================================= In this section we consider in detail more general dented pentagram maps. Fix an integer parameter $p\ge 2$ in addition to an integer parameter $m\in \{1,...,d-1\}$ and define the $(d-1)$-tuple $I=I_{m}^{p}:=(1,...,1,p,1,...,1)$, where the value $p$ is situated at the $m$th place: $i_m=p$ and $i_\ell=1$ for $\ell\not=m$. This choice of the tuple $I$ corresponds to the diagonal plane $P_k$ which passes through $m$ consecutive vertices $v_k, v_{k+1},...,v_{k+m-1}$, then skips $p-1$ vertices $v_{k+m}, ..., v_{k+m+p-2}$ (i.e., “jumps to the next $p$th vertex”) and continues passing through the next $d-m$ consecutive vertices $v_{k+m+p-1},...,v_{k+d+p-2}$: $$P_k:=(v_k, v_{k+1},...,v_{k+m-1},v_{k+m+p-1},v_{k+m+p},...,v_{k+d+p-2})\,.$$ We call such a plane $P_k$ a [*deep-dented diagonal (DDD) plane*]{}, as the “dent” now is of depth $p$, see Figure \[fig:DDD-plane\].
![The diagonal hyperplane for $I=(1,1,3,1)$ in ${{\mathbb {RP}}}^5$.[]{data-label="fig:DDD-plane"}](t1131_p5.pdf){width="2.9in"} Now we intersect $d$ consecutive planes $P_k$, to define the [*deep-dented pentagram map*]{} by $$T_m^p v_k:=P_{k}\cap P_{k+1}\cap ...\cap P_{k+d-1}\,.$$ In other words, we keep the same definition of the $(d-1)$-tuple $J={\mathbf 1}:=(1,1,...,1)$ as before: $T_m^p:=T_{I_{m}^{p},{\mathbf 1}}$. \[thm:ddd\] The deep-dented pentagram map for both twisted and closed polygons in any dimension is a restriction of an integrable system to an invariant submanifold. Moreover, it admits a Lax representation with a spectral parameter. To prove this theorem we introduce spaces of partially corrugated polygons, occupying intermediate positions between corrugated and generic ones. [A twisted polygon $(v_j)$ in ${{\mathbb {RP}}}^d$ is [*partially corrugated*]{} (or $(q,r;\ell)$-[*corrugated*]{}) if the diagonal subspaces $P_j$ spanned by two clusters of $q$ and $r$ consecutive vertices $v_j$ with a gap of $(d-\ell)$ vertices between them (i.e., $P_j=(v_j, v_{j+1},...,v_{j+q-1},v_{j+q+d-\ell},v_{j+q+d-\ell+1},...,v_{j+q+d-\ell+r-1})$, see Figure \[fig:DDD-plane\]) are subspaces of a fixed dimension $\ell\le q+r -2$ for all $j\in {{\mathbb Z}}$. The inequality $\ell\le q+r -2$ shows that indeed these vertices are not in general position, while $\ell= q+r -1$ corresponds to a generic twisted polygon. We also assume that $q\ge2$, $r\ge 2$, and $\ell\ge\max\{q,r\}$, so that the corrugated restriction would not be local, i.e., coming from one cluster of consecutive vertices, but would come from the interaction of the two clusters of those. ]{} Fix $n$ and denote the space of partially corrugated twisted $n$-gons in ${{\mathbb {RP}}}^d$ (modulo projective equivalence) by ${\mathcal P}^{par}$. Note that the corrugated condition in Definition \[def:2-corrugated\] means $(2,2;2)$-corrugated in this terminology. 
\[prop:equiv\_partial-corrug\] The definition of a $(q,r;\ell)$-corrugated polygon in ${{\mathbb {RP}}}^d$ is equivalent to the definition of a $(q+1,r+1;\ell+1)$-corrugated polygon, i.e., one can add (respectively, delete) one extra vertex in each of the two clusters of vertices, as well as increase (respectively, decrease) by one the dimension of the subspace through them, as long as $q,r\ge 2$, $\ell \le q+r -2$, and $\ell\le d-2$. The proof of this fact is completely analogous to the proof of Proposition \[prop:equiv\_2-corrug\] by adding one vertex in each cluster. [$\Box$]{} Define the [*partially corrugated pentagram map*]{} $T_{par}$ on the space ${\mathcal P}^{par}$: to a partially corrugated twisted $n$-gon we associate a new one obtained by taking the intersections of $\ell+1$ consecutive diagonal subspaces $P_j$ of dimension $\ell$. \[prop:partial-pent\] i) The partially corrugated pentagram map is well defined: by intersecting $\ell+1$ consecutive diagonal subspaces one generically gets a point in the intersection. ii) This map takes a partially corrugated polygon to a partially corrugated one. Note that the gap of $(d-\ell)$ vertices between clusters narrows by one vertex each time the dimension $\ell$ increases by one. Add the maximal number of vertices, so as to obtain a hyperplane (of dimension $d-1$) passing through the clusters of $q$ and $r$ vertices with a gap of one vertex between them. This is a dented hyperplane. One can see that intersections of 2, 3, ... consecutive dented hyperplanes give exactly the planes of dimensions $d-2, d-3, ...$ obtained on the way while enlarging the clusters of vertices. Then the intersection of $d$ consecutive dented hyperplanes is equivalent to the intersection of $\ell+1$ consecutive diagonal subspaces of dimension $\ell$ for partially corrugated polygons, and generically is a point.
The fact that the image of a partially corrugated polygon is also partially corrugated can be proved similarly to the standard corrugated case, cf. Proposition \[prop:corr\_well\_def\]. We demonstrate the necessary changes in the following example. Consider a $(3,2;3)$-corrugated polygon in ${{\mathbb {RP}}}^d$ (here $q=3, r=2, \ell=3$), i.e., whose vertices $(v_j, v_{j+1},v_{j+2},v_{j+d},v_{j+d+1})$ form a 3D subspace $P_j$ in ${{\mathbb {RP}}}^d$ for all $j\in {{\mathbb Z}}$. One can see that for the image polygon: $a)$ three new vertices will lie in the 2D plane obtained as the intersection $B_{j+1}:=P_j\cap P_{j+1}=(v_{j+1},v_{j+2},v_{j+d+1})$ (since to get each of these three new vertices one needs to intersect these two planes with two more, and the corresponding intersections will always lie in this plane); $b)$ similarly, two new vertices will lie on a certain line passing through the vertex $v_{j+d+1}$ (this line is the intersection of the 2-planes: $l_{j+d}:= B_{j+d}\cap B_{j+d+1}$). Hence the five new vertices thus obtained belong to one and the same 3D plane spanned by $B_{j+1}$ and $l_{j+d}$ and hence satisfy the $(3,2;3)$-corrugated condition for all $j\in {{\mathbb Z}}$. The case of a general partially corrugated condition is proved similarly. [$\Box$]{} [**(=\[thm:ddd\]$'$)**]{}\[thm:par\] The pentagram map on partially corrugated polygons in any dimension is an integrable system: it admits a Lax representation with a spectral parameter. $\!\!\!\!$[**\[thm:ddd\] and \[thm:par\].**]{} Now suppose that we are given a generic polygon in ${{\mathbb {RP}}}^c$ and the pentagram map constructed with the help of deep-dented diagonals (of dimension $c-1$) with the $(c-1)$-tuple of jumps $I=(1,...,1,p,1,...,1)$, which includes $m$ and $c-m$ consecutive vertices before and after the gap respectively. Note that the corresponding gap between two clusters of points for such diagonals consists of $p-1$ vertices.
Associate to this polygon a partially $(q, r; \ell)$-corrugated polygon in the higher-dimensional space ${{\mathbb {RP}}}^d$ with clusters of $q=m+1$ and $ r=(c-m)+1$ vertices, the diagonal dimension is $\ell=c$, and the space dimension is $d=c+p-2$. Namely, in the partially corrugated polygon we add one extra vertex in each cluster, increase the dimension of the diagonal plane by one as well (without the corrugated condition the diagonal dimension would increase by two after the addition of two extra vertices), while the gap between two new clusters decreases by one: $(p-1)-1=p-2$. Then the dimension $d$ is chosen so that the gap between two new clusters is $p-2=d-\ell$, which implies that $d=\ell+p-2=c+p-2$. (Example: for deeper $p$-diagonals in ${{\mathbb {RP}}}^2$ one has $c=2, m=1, q=r=\ell=2$, and this way one obtains the space of corrugated polygons in ${{\mathbb {RP}}}^d$ for $d=p$.) Consider the map $\psi$ associating to a generic polygon in ${{\mathbb {RP}}}^c$ a partially corrugated twisted polygon in ${{\mathbb {RP}}}^d$, where $d=c+p-2$. (The map $\psi$ is defined similarly to the one for corrugated polygons in Remark \[corrug-coord\].) This map $\psi$ is a local diffeomorphism and commutes with the pentagram map: the deep-dented pentagram map in ${{\mathbb {RP}}}^c$ is taken to the pentagram map $T_{par}$ on partially corrugated twisted polygons in ${{\mathbb {RP}}}^d$. In turn, the map $T_{par}$ is the restriction of the integrable dented pentagram map in ${{\mathbb {RP}}}^d$. Thus the deep-dented pentagram map on polygons in ${{\mathbb {RP}}}^c$ is the restriction to an invariant submanifold of an integrable map on partially corrugated twisted polygons in ${{\mathbb {RP}}}^d$, and hence it is a subsystem of an integrable system. The Lax form of the map $T_{par}$ can be obtained by restricting the Lax form for dented maps from generic to partially corrugated polygons in ${{\mathbb {RP}}}^d$. We present this Lax form below. 
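The bookkeeping of this construction can be collected in a few lines (the helper name is ours, not the paper's):

```python
def partial_corrugation_parameters(c, m, p):
    """Cluster sizes (q, r), diagonal dimension ell, and ambient dimension d
    of the partially corrugated lift of the deep-dented map T_m^p on RP^c."""
    q = m + 1            # one extra vertex added to the first cluster
    r = (c - m) + 1      # one extra vertex added to the second cluster
    ell = c              # the diagonal dimension grows from c - 1 to c
    d = c + p - 2        # chosen so that the gap is p - 2 = d - ell vertices
    return q, r, ell, d
```

For deeper $p$-diagonals in ${{\mathbb {RP}}}^2$ ($c=2$, $m=1$) this reproduces $q=r=\ell=2$ and $d=p$, i.e., the corrugated polygons in ${{\mathbb {RP}}}^p$ mentioned in the example.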
[$\Box$]{} \[rem:coord\_ddd\] [Now we describe coordinates on the subspace of partially corrugated polygons and a Lax form of the corresponding pentagram map $T_{par}$ on them. Recall that on the space of generic twisted $n$-gons $(v_j)$ in ${{\mathbb {RP}}}^d$ for $gcd(n,d+1)=1$ there are coordinates $a_{j,k}$ for $0\le j \le n-1, 0\le k\le d-1$ defined by equation (\[eq:difference\_anyD\]): $$V_{j+d+1}=a_{j,d} V_{j+d} + ... + a_{j,1} V_{j+1} +(-1)^d V_j\,,$$ where $V_j\in {{\mathbb R}}^{d+1}$ are lifts of vertices $v_j\in {{\mathbb {RP}}}^d$. One can see that, without loss of generality, the submanifold of $(q,r;\ell)$-corrugated polygons in ${{\mathbb {RP}}}^d$ can be assumed to have the minimal number of vertices in the clusters (see Proposition \[prop:equiv\_partial-corrug\]). In other words, in this case there is a positive integer $m$ such that $q=m+1, r=(\ell-m)+1$, while the gap between clusters consists of $d-\ell$ vertices. Hence the corresponding twisted polygons are described by linear dependence of $q=m+1$ vertices $V_j, V_{j+1}, ..., V_{j+m}$ and $r$ vertices $V_{j+d+m-\ell+1},..., V_{j+d+1}$. (Example: $m=1$, $\ell=2$ implies a linear relation between $ V_j, V_{j+1}$ and $V_{j+d}, V_{j+d+1}$, which is the standard corrugated condition.) This relation can be written as $$V_{j+d+1}=a_{j,d} V_{j+d} + ... +a_{j,d+m-\ell+1} V_{j+d+m-\ell+1} + a_{j,m} V_{j+m} +... +a_{j,1} V_{j+1} +(-1)^d V_j$$ for all $j\in {{\mathbb Z}}$ by choosing an appropriate normalization of the lifts $V_j\in {{\mathbb R}}^{d+1}$. Thus the set of partially corrugated polygons is obtained by imposing the condition $a_{j,k}=0$ for $m+1\le k \le d+m-\ell$ and $0\le j \le n-1$ in the space of generic twisted polygons given by equation (\[eq:difference\_anyD\]). Note that the space of $(m+1, \ell-m+1 ;\ell)$-corrugated $n$-gons in ${{\mathbb {RP}}}^d$ has dimension $n\ell$, while the space of generic twisted $n$-gons has dimension $nd$.
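A numerical sketch analogous to the corrugated case (our code, not the paper's): iterating the relation above with random coefficients and checking that the $q+r=\ell+2$ cluster lifts span only an $(\ell+1)$-dimensional linear subspace, i.e., a projective subspace of dimension $\ell$.

```python
import numpy as np

rng = np.random.default_rng(1)

def partially_corrugated_lift(d, m, ell, n_steps):
    """Iterate the partially corrugated difference equation
        V_{j+d+1} = sum_{k=d+m-ell+1}^{d} a_{j,k} V_{j+k}
                  + sum_{k=1}^{m}        a_{j,k} V_{j+k} + (-1)^d V_j
    with random coefficients, from the standard basis of R^{d+1}."""
    ks = list(range(d + m - ell + 1, d + 1)) + list(range(1, m + 1))
    V = list(np.eye(d + 1))
    for j in range(n_steps):
        W = (-1.0) ** d * V[j]
        for k in ks:
            W = W + rng.normal() * V[j + k]
        V.append(W / np.linalg.norm(W))
    return V

d, m, ell, n_steps = 6, 2, 4, 15   # (3,3;4)-corrugated polygons in RP^6
V = partially_corrugated_lift(d, m, ell, n_steps)

# Cluster lifts V_j..V_{j+m} and V_{j+d+m-ell+1}..V_{j+d+1} span an
# (ell+1)-dimensional subspace: rank ell+1 instead of the generic ell+2.
ranks = [np.linalg.matrix_rank(np.stack(
             [V[j + k] for k in range(m + 1)]
             + [V[j + k] for k in range(d + m - ell + 1, d + 2)]), tol=1e-8)
         for j in range(n_steps)]
```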
]{} In the complex setting, the Lax representation on such partially corrugated $n$-gons in ${{\mathbb {CP}}}^d$ or on generic $n$-gons in ${{\mathbb {CP}}}^c$ with deeper dented diagonals is described as follows. The deep-dented pentagram map $T_m^p$ on generic twisted and closed polygons in ${{\mathbb {CP}}}^c$ and the pentagram map $T_{par}$ on corresponding partially corrugated polygons in ${{\mathbb {CP}}}^d$ with $d=c+p-2$ admit the following Lax representation with a spectral parameter: for $gcd(n,d+1)=1$ its Lax matrix is $$\label{eq:part_corr} L_{j,t}(\lambda) = \left( \begin{array}{cccccc|c} 0 & 0 & \cdots & \cdots & 0 & 0 &(-1)^d\\ \cline{1-7} \multicolumn{6}{c|}{\multirow{8}*{$D(\lambda)$}} & a_{j,1}\\ &&&&&& \cdots\\ &&&&&& a_{j,m}\\ &&&&&& 0\\ &&&&&& \cdots\\ &&&&&& 0\\ &&&&&& a_{j,d+m-\ell+1}\\ &&&&&& \cdots\\ &&&&&& a_{j,d}\\ \end{array} \right)^{-1},$$ with the diagonal $(d \times d)$-matrix $D(\lambda)={\rm diag}(1,...,1,\lambda, 1,...1)$, where $\lambda$ is situated at the $(m+1)$[th]{} place, and an appropriate matrix $P_{j,t}(\lambda)$. This follows from the fact that the partially corrugated pentagram map is the restriction of the dented map to the invariant subset of partially corrugated polygons, so the Lax form is obtained by the corresponding restriction as well, cf. Theorem \[thm:lax\_anyD\]. [$\Box$]{} Note that the jump tuple $I=(2,3)$ in ${{\mathbb {P}}}^3$ is the first case which is neither a deep-dented pentagram map nor a short-diagonal one, and whose integrability is unknown. It would be very interesting if the corresponding pentagram map turned out to be non-integrable. Some numerical evidence of non-integrability in that case is presented in [@KS14]. [99]{} M. Adler, [*On a trace functional for formal pseudo differential operators and the symplectic structure of the Korteweg-de Vries type equations,*]{} Invent. Math., vol. 50 (1978/79), no. 3, 219–248. M. Gekhtman, M. Shapiro, S. Tabachnikov, A.
Vainshtein, *Higher pentagram maps, weighted directed networks, and cluster dynamics,* Electron. Res. Announc. Math. Sci., vol. 19 (2012), 1–17; arXiv:1110.0472. M. Glick, *The pentagram map and Y-patterns,* Adv. Math., vol. 227 (2011), 1019–1045. B. Khesin, F. Soloviev, *Integrability of higher pentagram maps,* Math. Ann., vol. 357 (2013), no.3, 1005–1047; arXiv:1204.0756. B. Khesin, F. Soloviev, *Non-integrability vs. integrability of pentagram maps,* J. Geometry and Physics, vol. 87 (2015), 275–285; arXiv:1404.6221. I.M. Krichever, *General rational reductions of the KP hierarchy and their symmetries,* Funct. Anal. Appl., vol. 29(2) (1995), 75–80. G. Marí-Beffa, *On generalizations of the pentagram map: discretizations of AGD flows,* J. Nonlinear Sci., vol. 23 (2013), no. 2, 303–334; arXiv:1103.5047. G. Marí-Beffa, *On integrable generalizations of the pentagram map,* IMRN, (2014), 1–17; arXiv:1303.4295. V. Ovsienko, R. Schwartz, S. Tabachnikov, *The pentagram map: a discrete integrable system,* Comm. Math. Phys., vol. 299 (2010), 409–446; arXiv:0810.5605 R. Schwartz, *The pentagram map,* Experiment. Math., vol. 1 (1992), 71–81. R. Schwartz, *Discrete monodromy, pentagrams, and the method of condensation,* J. Fixed Point Theory Appl., vol. 3 (2008), no.2, 379–409. F. Soloviev, *Integrability of the pentagram map,* Duke Math. Journal, vol. 162 (2013), no.15, 2815–2853; arXiv:1106.3950. [^1]: Department of Mathematics, University of Toronto, Toronto, ON M5S 2E4, Canada; e-mails: and [^2]: Note also that over ${{\mathbb R}}$ for odd $d$ to obtain the lifts of $n$-gons from ${{\mathbb {RP}}}^d$ to ${{\mathbb R}}^{d+1}$ one might need to switch the sign of the monodromy matrix: $M \to -M \in SL_{d+1}({{\mathbb R}})$, since the field is not algebraically closed. These monodromies in $SL_{d+1}({{\mathbb R}})$ correspond to the same projective monodromy in $PSL_{d+1}({{\mathbb R}})$. 
[^3]: Another way to introduce the coordinates is by means of the difference equation $V_{j+d+1} = V_{j+d}+b_{j,d-1} V_{j+d-1}+...+b_{j,1} V_{j+1}+b_{j,0} V_j$, used in [@GSTV].
--- author: - | [^1]\ Author affiliation\ E-mail: title: Contribution title --- ... === Acknowledgements {#acknowledgements .unnumbered} ================ This conference has been organized with the support of the Department of Physics and Astronomy “Galileo Galilei”, the University of Padova, the National Institute of Astrophysics INAF, the Padova Planetarium, and the RadioNet consortium. RadioNet has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 730562. [99]{} .... [^1]: A footnote may follow.
--- abstract: 'We study the symmetries of the three heavy-quark system under exchange of the quark fields within the effective field theory framework of potential non-relativistic QCD. The symmetries constrain the form of the matching coefficients in the effective theory. We then focus on the color-singlet sector and determine the so far unknown leading ultrasoft contribution to the static potential, which is of order $\alpha_{\rm s}^4\ln\mu$, and consequently to the static energy, which is of order $\alpha_{\rm s}^4\ln\alpha_{\rm s}$. Finally, in the case of an equilateral geometry, we solve the renormalization group equations and resum the leading ultrasoft logarithms for the static potential of three quarks in a color singlet, octet and decuplet representation.' author: - Nora Brambilla - Felix Karbstein - Antonio Vairo title: 'Symmetries of the Three Heavy-Quark System and the Color-Singlet Static Energy at NNLL' --- Introduction {#intro} ============ Bound states of a heavy quark $Q$ and an antiquark $\bar Q$ have been the subject of extensive theoretical studies since the early days of quantum chromodynamics (QCD). Relatively less attention has been paid to bound states of three heavy quarks ($QQQ$), also referred to as triple heavy baryons, owing to the lack of experimental evidence for them so far. Nevertheless, there is ongoing theoretical activity devoted to their study, driven mostly by lattice computations [@Sommer:1985da; @Takahashi:2000te; @Takahashi:2002bw; @Suganuma:2000bi; @Alexandrou:2002sn; @Takahashi:2002it; @Takahashi:2003ty; @Bornyakov:2004uv; @Bornyakov:2004yg; @Takahashi:2004rw; @Hubner:2007qh; @Iritani:2010mu; @Meinel:2012qz], but also by phenomenological analyses (for a review see [@Klempt:2009pi]) and more recently by effective field theory methods [@Brambilla:2005yk; @Brambilla:2009cd; @Vairo:2010su].
The theoretical interest is mainly triggered by the geometry of these systems, which allows one to address questions that are inaccessible with two-body systems. Examples are the minimal energy configuration of three quarks in the presence of a confining potential or the origin of a three-body interaction. In this paper we will further explore the geometrical properties of the three heavy-quark system. Systems of heavy quarks are conveniently studied within an effective field theory (EFT) framework, a treatment motivated by the observation that these systems are non-relativistic and, therefore, characterized by, at least, three separated and hierarchically ordered energy scales: a hard scale of the order of the heavy-quark mass, $m$, a soft scale of the order of the typical relative momenta of the heavy quarks, which are much smaller than $m$, and an ultrasoft (US) scale of the order of the typical binding energy, which is much smaller than the relative momenta.[^1] We further assume that these scales are much larger than the typical hadronic scale $\Lambda_{\rm QCD}$, in this way justifying a perturbative treatment for all of them. By integrating out modes associated with the different energy scales one goes through a sequence of EFTs [@Brambilla:2004jw]: non-relativistic QCD (NRQCD), obtained from integrating out hard modes [@Caswell:1985ui; @Bodwin:1994jh], and potential non-relativistic QCD (pNRQCD), derived from integrating out gluons with soft momenta from NRQCD [@Pineda:1997bj; @Brambilla:1999xf]. Potential NRQCD provides a formulation of the non-relativistic system in terms of potentials and US interactions; for this reason it has proven a convenient framework for calculating US corrections. Although originally designed for the study of $Q\bar Q$ bound states, i.e. quarkonia, pNRQCD has subsequently been applied also to baryons with two and three heavy quarks [@Brambilla:2005yk; @Brambilla:2009cd].
In this paper we study the symmetry properties of three heavy-quark systems under exchange of the heavy-quark fields and their implications for the form of the pNRQCD Lagrangian. We also calculate the US corrections of order $\alpha_{\rm s}^4\ln\alpha_{\rm s}$ to the singlet static energy and of order $\alpha_{\rm s}^4\ln\mu$ to the singlet static potential of a triple heavy baryon. Whereas this was achieved for $Q\bar Q$ systems more than ten years ago [@Brambilla:1999qa], the result for $QQQ$ systems is new. The paper is organized as follows. Section \[sec:construction\] is devoted to setting up pNRQCD for systems made of three static quarks. The explicit construction and color structure of the heavy-quark composite fields, in terms of which pNRQCD is conventionally formulated, are outlined in detail. In Sec. \[sec:sym\], we discuss the symmetry under exchange of the heavy-quark fields and analyze its implications for the various matching coefficients, i.e. the potentials, of pNRQCD. In Sec. \[sec:singlet\], we determine the correction of order $\alpha_{\rm s}^4\ln\alpha_{\rm s}$ to the singlet static energy. Restricting ourselves to an equilateral configuration of the heavy quarks, we finally solve in Sec. \[sec:towardsmuinV\] the renormalization group equations for the singlet, octet and decuplet static potentials at leading logarithmic accuracy. We conclude in Sec. \[sec:conclusions\]. $\text{p}$NRQCD for $QQQ$ {#sec:construction} ========================= In this section, we shortly review the basic steps that lead to pNRQCD for systems made of three static quarks. Finite mass corrections may be systematically added to the static Lagrangian in the form of irrelevant operators, some of which have been considered in [@Brambilla:2005yk]. The non-relativistic nature of the system ensures that, apart from the kinetic energy, which is of the same order as the static potential, $1/m$ corrections are small.
NRQCD {#subseq:NRQCD} ----- Our starting point is NRQCD in the static limit. In the quark sector the Lagrangian is identical to the heavy-quark effective theory Lagrangian [@Eichten:1989zv] and reads $${\cal L}_{\rm NRQCD}=Q^{\dag}iD^0Q + \sum_{l}\bar{q}^{\,l}i\slashed{D}q^l - \frac{1}{4}F^a_{\mu\nu}F^{a\mu\nu} \,. \label{eq:NRQCD}$$ The heavy-quark fields $Q$ ($Q^{\dag}$), which annihilate (create) a heavy quark, are described by Pauli spinors, whereas $q^l$ are the Dirac spinors that describe light (massless) quarks of flavor $l$. The quantity $iD^0=i\partial^0-gA^0$ denotes the time component of the covariant derivative, where $g$ is the strong gauge coupling, $\alpha_{\rm s} \equiv g^2/(4\pi)$, and $A^0$ is the time component of the gauge field. The Lagrangian is insensitive to the flavor assignment of the heavy-quark fields, a property known as heavy-quark symmetry. We have omitted the heavy-antiquark sector, as it is irrelevant to our scope. pNRQCD ------ For the purpose of studying heavy-quark bound states, it is convenient to employ an EFT where the heavy-quark potentials are explicit rather than encoded in dynamical gluons, as is the case in NRQCD. Such an EFT is pNRQCD, which is obtained from NRQCD by integrating out gluons whose momenta are soft. The degrees of freedom of pNRQCD are heavy-quark fields, light quarks and US gluons. As it is unnecessary to resolve the individual heavy quarks, pNRQCD is often formulated in terms of heavy-quark composite fields. The matching coefficients of pNRQCD multiplying operators bilinear in the composite fields may then be interpreted as the heavy-quark potentials in the corresponding color configurations. The derivation of pNRQCD involves identifying the heavy-quark composite fields in NRQCD, matching them to pNRQCD, and explicitly ensuring that the resulting pNRQCD field content is ultrasoft. We start with the construction of the heavy-quark composite fields.
This is the point where the specific heavy-quark state that the EFT is meant to describe has to be specified. In our case, this is a $QQQ$ state. Geometry of a three-quark state {#subseq:geomQQQ} ------------------------------- To characterize the geometry of a $QQQ$ state, we call ${\bf x}_1$, ${\bf x}_2$ and ${\bf x}_3$ the positions of the quarks and define the vectors ${\bf r}_i$ ($i=1,2,3$) as follows (cf. Fig. \[triangle\]), $${\bf r}_1={\bf x}_1-{\bf x}_2\,, \qquad {\bf r}_2={\bf x}_1-{\bf x}_3\,, \qquad {\bf r}_3={\bf x}_2-{\bf x}_3\,. \label{rx123}$$ Note that the three vectors are not independent, for ${\bf r}_1+{\bf r}_3={\bf r}_2$. Moreover, for three static quarks, or more generally three quarks of equal mass, it is useful to define the vectors $$\pmb{\rho}={\bf r}_1\,,\qquad \pmb{\lambda}=\frac{{\bf r}_2+{\bf r}_3}{2}\,. \label{rholambda}$$ ![Triangle formed by three heavy quarks located at the positions ${\bf x}_1$, ${\bf x}_2$ and ${\bf x}_3$. The vector $\pmb{\lambda}$ points from the heavy quark at ${\bf x}_3$ to the center of mass of the two heavy quarks at ${\bf x}_1$ and ${\bf x}_2$.[]{data-label="triangle"}](triangle){width="4cm"} Heavy-quark composite fields {#subseq:comp_fields} ---------------------------- Quarks transform under the fundamental representation, $3$, of the (color) gauge group SU(3)$_c$. Hence, a generic three (heavy) quark field made of fields located at the same point, $Q_iQ_jQ_k$ ($i,j,k=1,2,3$ denote color indices), transforms as a representation of $3\otimes3\otimes3$. The direct product can be decomposed into a sum of irreducible representations of SU(3)$_c$, namely $$3\otimes3\otimes3=1\oplus8\oplus8\oplus10\,. \label{prodinirreps}$$ In general, however, the three quarks are located at different spatial positions ${\bf x}_1$, ${\bf x}_2$ and ${\bf x}_3$.
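The geometric definitions (\[rx123\]) and (\[rholambda\]) above, and the statement in the caption of Fig. \[triangle\] that $\pmb{\lambda}$ runs from ${\bf x}_3$ to the midpoint of ${\bf x}_1$ and ${\bf x}_2$, can be confirmed with a few lines of code. The following sketch uses NumPy and random positions; the variable names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2, x3 = rng.normal(size=(3, 3))        # three arbitrary quark positions

r1, r2, r3 = x1 - x2, x1 - x3, x2 - x3      # Eq. (rx123)
rho, lam = r1, (r2 + r3) / 2                # Eq. (rholambda)

assert np.allclose(r1 + r3, r2)             # only two of the three vectors are independent
assert np.allclose(lam, (x1 + x2) / 2 - x3) # lambda: from x3 to the midpoint of x1 and x2
```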
Under an SU(3)$_c$ gauge transformation, each heavy-quark field $ Q_i({\bf x},t)$ transforms as $\displaystyle Q_i({\bf x},t) \to U_{ii'}({\bf x},t)Q_{i'}({\bf x},t)$, where $U({\bf x},t)=\exp\left[i\theta^a({\bf x},t)T^a\right]$, and $T^a={\lambda^a}/{2}$ ($a=1, \ldots, 8$) denote the eight generators of SU(3)$_c$ in the fundamental representation; $\lambda^a$ are the Gell-Mann matrices. The decomposition (\[prodinirreps\]) requires the fields to be linked to a common point ${\bf R}$. For a multi-quark system a natural choice is the system’s center of mass. A way to link the quark fields to another point is through an equal-time straight Wilson string, $$\begin{aligned} \phi({\bf y},{\bf x},t)= {\cal P}\exp\left\{ig\int_0^1{\rm d}s\ ({\bf y}-{\bf x})\cdot{\bf A}({\bf x}+({\bf y}-{\bf x})s,t)\right\},\end{aligned}$$ where ${\bf A}={\bf A}^aT^a$ is the color gauge field, and ${\cal P}$ denotes path ordering of the color matrices. Due to its transformation property under SU(3)$_c$ gauge transformations, $\phi({\bf y},{\bf x},t)$ $\to U({\bf y},t)\phi({\bf y},{\bf x},t)U^{\dag}({\bf x},t)$, the Wilson string acts as a gauge transporter, and $\phi({\bf R},{\bf x},t)Q({\bf x},t)$ $\to U({\bf R},t)\phi({\bf R},{\bf x},t)Q({\bf x},t)$ indeed transforms like a quark field located at ${\bf R}$. Hence, the following three-quark field, $$\begin{aligned} \hspace*{-2.3mm}{\cal M}_{ijk}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)= \phi_{ii'}({\bf R},{\bf x}_1,t)Q_{i'}({\bf x}_1,t)\phi_{jj'}({\bf R},{\bf x}_2,t)Q_{j'}({\bf x}_2,t) \phi_{kk'}({\bf R},{\bf x}_3,t)Q_{k'}({\bf x}_3,t), \label{QQQbzglR}\end{aligned}$$ transforms as a $3\otimes3\otimes3$ representation of the SU(3)$_c$ gauge group, and, following Eq. (\[prodinirreps\]), can be decomposed into a singlet, two octets and a decuplet field with respect to gauge transformations in ${\bf R}$. Since the quark fields do not commute, the order of the quark fields in Eq. (\[QQQbzglR\]) matters. 
This observation will play a crucial role in Sec. \[sec:sym\]. For simplicity, we have omitted an explicit reference to ${\bf R}$ in the argument of ${\cal M}$, which includes the time coordinate $t$ and the list of position coordinates (${\bf x}_1,{\bf x}_2,{\bf x}_3$) of the heavy-quark fields in the order (from left to right) of their appearance on the right-hand side of Eq. (\[QQQbzglR\]). The same convention is used for the color indices ($i,j,k$). The composite field ${\cal M}_{ijk}$ may be decomposed into a singlet, $S$, two octets, $O^{A}$ and $O^{S}$, and a decuplet, $\Delta$, according to $$\begin{gathered} {\cal M}_{ijk}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)=S({\bf x}_1,{\bf x}_2,{\bf x}_3,t){\underline {\bf S}}_{ijk} +\sum_{a=1}^8O^{Aa}({\bf x}_1,{\bf x}_2,{\bf x}_3,t){\underline {\bf O}}^{Aa}_{ijk} \\ +\sum_{a=1}^8O^{Sa}({\bf x}_1,{\bf x}_2,{\bf x}_3,t){\underline {\bf O}}^{Sa}_{ijk} +\sum_{\delta=1}^{10}\Delta^{\delta}({\bf x}_1,{\bf x}_2,{\bf x}_3,t){\underline {\bf \Delta}}^{\delta}_{ijk}, \label{eq:dec}\end{gathered}$$ where ${\underline {\bf S}}_{ijk}$, ${\underline {\bf O}}^{Aa}_{ijk}$, ${\underline {\bf O}}^{Sa}_{ijk}$ and ${\underline {\bf \Delta}}^{\delta}_{ijk}$ are orthogonal and normalized color tensors that satisfy the relations $$\begin{aligned} &{\underline {\bf S}}_{ijk}{\underline {\bf S}}_{ijk}=1\,, \quad {\underline {\bf O}}^{Aa*}_{ijk}{\underline {\bf O}}^{Ab}_{ijk} =\delta^{ab}\,, \quad {\underline {\bf O}}^{Sa*}_{ijk}{\underline {\bf O}}^{Sb}_{ijk} =\delta^{ab}\,, \quad {\underline {\bf \Delta}}^{\delta}_{ijk}{\underline {\bf \Delta}}^{\delta'}_{ijk}=\delta^{\delta\delta'}\,, \nonumber\\ &{\underline {\bf S}}_{ijk}{\underline {\bf O}}^{Aa}_{ijk}={\underline {\bf S}}_{ijk}{\underline {\bf O}}^{Sa}_{ijk} ={\underline {\bf S}}_{ijk}{\underline {\bf \Delta}}^{\delta}_{ijk} ={\underline {\bf O}}^{Aa*}_{ijk}{\underline {\bf O}}^{Sb}_{ijk} ={\underline {\bf O}}^{Aa*}_{ijk}{\underline {\bf \Delta}}^{\delta}_{ijk} ={\underline {\bf 
O}}^{Sa*}_{ijk}{\underline {\bf \Delta}}^{\delta}_{ijk}=0\,, \label{OrthoN}\end{aligned}$$ with $a,b\in \{1, \ldots, 8\}$, and $\delta,\delta'\in \{1, \ldots, 10\}$ [@Brambilla:2005yk]. If the octet tensors ${\underline {\bf O}}^{Aa}_{ijk}$ and ${\underline {\bf O}}^{Sa}_{ijk}$ have the above properties, then so do the following linear combinations, $$\begin{aligned} {\underline {\bf O}}'^{Aa}_{ijk}&={\rm e}^{{i}\varphi_A}\bigl({\underline {\bf O}}^{Aa}_{ijk}\cos\omega-{\underline {\bf O}}^{Sa}_{ijk}\sin\omega\bigr)\,, \\ {\underline {\bf O}}'^{Sa}_{ijk}&={\rm e}^{{i}\varphi_S}\bigl({\underline {\bf O}}^{Aa}_{ijk}\sin\omega+{\underline {\bf O}}^{Sa}_{ijk}\cos\omega\bigr)\,,\end{aligned}$$ where $\omega$ is an arbitrary angle and $\varphi_A$, $\varphi_S$ denote generic phases. The octet tensors ${\underline {\bf O}}'^{Aa}_{ijk}$ and ${\underline {\bf O}}'^{Sa}_{ijk}$ hence form an alternative basis for the $8\oplus8$ sector. Requiring $$\begin{aligned} O^{Aa}{\underline {\bf O}}^{Aa}_{ijk} + O^{Sa}{\underline {\bf O}}^{Sa}_{ijk} = O'^{Aa} {\underline {\bf O}}'^{Aa}_{ijk} + O'^{Sa} {\underline {\bf O}}'^{Sa}_{ijk} \,,\end{aligned}$$ the associated octet fields are related to the original ones through the dual relations $$\begin{aligned} O'^{Aa}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)& = {\rm e}^{-{i}\varphi_A}\bigl[O^{Aa}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)\cos\omega-O^{Sa}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)\sin\omega\bigr]\,, \label{O_1} \\ O'^{Sa}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)& = {\rm e}^{-{i}\varphi_S}\bigl[O^{Aa}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)\sin\omega+O^{Sa}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)\cos\omega\bigr]\,. \label{O_2}\end{aligned}$$ To work out the pNRQCD Lagrangian explicitly, we choose a specific (matrix) representation of the color tensors, namely that given in [@Brambilla:2005yk], appendix B2. In order to keep this paper self-contained, we reproduce it here.
Sticking to this particular choice, the color-octet tensors are given by $$\begin{aligned} {\underline {\bf O}}^{Aa}_{ijk}=\frac{1}{2}\epsilon_{ijl}\lambda^a_{kl}\,, \label{O1}\end{aligned}$$ and $$\begin{aligned} {\underline {\bf O}}^{Sa}_{ijk}=\frac{1}{2\sqrt{3}}\left(\epsilon_{jkl}\lambda^a_{il}+\epsilon_{ikl}\lambda^a_{jl}\right)\,. \label{O2}\end{aligned}$$ The choice in Eqs. (\[O1\]) and (\[O2\]) is such that ${\underline {\bf O}}^{Aa}_{ijk}$ and ${\underline {\bf O}}^{Sa}_{ijk}$ are antisymmetric and symmetric in the first two color indices, respectively. Consequently, ${O}^{A}$ and ${O}^{S}$ will be referred to as the antisymmetric and symmetric octets. Moreover, the color-singlet tensor ${\underline {\bf S}}_{ijk}$ is chosen to be totally antisymmetric, $$\begin{aligned} {\underline {\bf S}}_{ijk}=\frac{1}{\sqrt{6}}\epsilon_{ijk}\,, \label{S}\end{aligned}$$ whereas the color-decuplet tensor ${\underline {\bf \Delta}}^{\delta}_{ijk}$ is totally symmetric (an alternative decuplet is in [@Brambilla:2009cd]), $$\begin{aligned} {\underline {\bf \Delta}}^{1}_{111}&={\underline {\bf \Delta}}^{4}_{222}={\underline {\bf \Delta}}^{10}_{333}=1\,, \quad\quad {\underline {\bf \Delta}}^{6}_{\{123\}}=\frac{1}{\sqrt{6}}\,, \nonumber\\ {\underline {\bf \Delta}}^{2}_{\{112\}}&={\underline {\bf \Delta}}^{3}_{\{122\}}={\underline {\bf \Delta}}^{5}_{\{113\}} ={\underline {\bf \Delta}}^{7}_{\{223\}}={\underline {\bf \Delta}}^{8}_{\{133\}}={\underline {\bf \Delta}}^{9}_{\{233\}} =\frac{1}{\sqrt{3}}\,. \label{Delta}\end{aligned}$$ The symbol $\{ijk\}$ denotes all permutations of the indices $ijk$; all components not listed explicitly in Eq. (\[Delta\]) are zero. Note that ${\underline {\bf S}}_{ijk}$ and ${\underline {\bf \Delta}}^{\delta}_{ijk}$ are real-valued quantities. From Eq. 
(\[QQQbzglR\]), it follows that the three-quark field $$\begin{aligned} \Phi_{ijk}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)\equiv Q_{i}({\bf x}_1,t)Q_{j}({\bf x}_2,t)Q_{k}({\bf x}_3,t)\,, \label{QQQselbst}\end{aligned}$$ can be written as $$\begin{aligned} \Phi_{ijk}({\bf x}_1,{\bf x}_2,{\bf x}_3,t) =\phi_{ii'}({\bf x}_1,{\bf R},t)\phi_{jj'}({\bf x}_2,{\bf R},t)\phi_{kk'}({\bf x}_3,{\bf R},t) {\cal M}_{i'j'k'}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)\,, \label{Phiijk}\end{aligned}$$ where we have used that $\displaystyle \phi^{-1}({\bf y},{\bf x},t)=\phi^{\dag}({\bf y},{\bf x},t)=\phi({\bf x},{\bf y},t)$. Finally, plugging Eq. (\[eq:dec\]) into Eq. (\[Phiijk\]) we may express the three-quark field $\Phi_{ijk}$ in terms of the composite singlet, octet and decuplet fields. The next step consists in matching these composite fields to the corresponding ones in pNRQCD. Matching and multipole expansion {#subseq:match} -------------------------------- We denote with $|\Omega\rangle$ a generic Fock state containing no heavy quarks, but an arbitrary number of US gluons and light quarks: $Q_i({\bf x},t)|\Omega\rangle=0$. With it we define the three heavy-quark Fock state $$|QQQ\rangle=\frac{1}{\cal N}\int{\rm d}^3x_1\int{\rm d}^3x_2\int{\rm d}^3x_3\; \Phi_{ijk}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)Q_k^{\dag}({\bf x}_3,t)Q_j^{\dag}({\bf x}_2,t)Q_i^{\dag}({\bf x}_1,t) |\Omega\rangle\,, \label{proj}$$ where ${\cal N}$ is a normalization factor and the composite field is now interpreted as independent of the heavy-quark fields. One can match NRQCD to pNRQCD by equating the expectation value of the NRQCD Hamiltonian in the state $|QQQ\rangle$ with the pNRQCD Hamiltonian (see [@Pineda:1997bj; @Brambilla:2004jw] for the matching in the $Q\bar{Q}$ case). Thus, the heavy-quark fields in pNRQCD are cast into singlet, $S$, octet, $O^{Aa}$ and $O^{Sa}$, and decuplet, $\Delta^{\delta}$, fields.
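Before moving on, the orthonormality relations (\[OrthoN\]) for the explicit tensors (\[S\]), (\[O1\]), (\[O2\]) and (\[Delta\]) of the previous subsection can be verified numerically. The following sketch (NumPy is our choice; the Gell-Mann matrices are hard-coded, and all variable names are ours) also confirms completeness, i.e. that the $1+8+8+10=27$ tensors form an orthonormal basis of the full three-quark color space.

```python
import numpy as np
from itertools import permutations

# Gell-Mann matrices lambda^a, a = 1..8
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)

eps = np.zeros((3, 3, 3))                       # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

S  = eps / np.sqrt(6)                                             # Eq. (S)
OA = 0.5 * np.einsum('ijl,akl->aijk', eps, lam)                   # Eq. (O1)
OS = (np.einsum('jkl,ail->aijk', eps, lam)
      + np.einsum('ikl,ajl->aijk', eps, lam)) / (2 * np.sqrt(3))  # Eq. (O2)

D = np.zeros((10, 3, 3, 3))                                       # Eq. (Delta)
D[0][0, 0, 0] = D[3][1, 1, 1] = D[9][2, 2, 2] = 1
for d, ijk, v in [(5, (0, 1, 2), 1/np.sqrt(6)), (1, (0, 0, 1), 1/np.sqrt(3)),
                  (2, (0, 1, 1), 1/np.sqrt(3)), (4, (0, 0, 2), 1/np.sqrt(3)),
                  (6, (1, 1, 2), 1/np.sqrt(3)), (7, (0, 2, 2), 1/np.sqrt(3)),
                  (8, (1, 2, 2), 1/np.sqrt(3))]:
    for p in set(permutations(ijk)):
        D[d][p] = v

basis = np.concatenate([S[None], OA, OS, D.astype(complex)])      # 27 color tensors
G = np.einsum('mijk,nijk->mn', basis.conj(), basis)               # Gram matrix
assert np.allclose(G, np.eye(27))                                 # Eq. (OrthoN)

# completeness: the 27 tensors span the whole 3x3x3 color space
P = np.einsum('mijk,mrst->ijkrst', basis, basis.conj())
I = np.einsum('ir,js,kt->ijkrst', np.eye(3), np.eye(3), np.eye(3))
assert np.allclose(P, I)
```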
The gluons in pNRQCD are explicitly rendered US by multipole expanding the gluon fields in the relative coordinates ${\bf r}_i$ ($i=1,2,3$) with respect to the center of mass coordinate ${\bf R}$. The reason is that the center of mass coordinate (the “location” of the three heavy-quark system) scales like the inverse of the recoiling total momentum of the three quarks, which is of the order of the US energy scale, while the relative coordinates of the three quarks (describing the “extension” of the triple heavy baryon) scale like the inverse of the typical relative momenta of the heavy quarks, which are of the order of the soft scale. As a result, ultrasoft gluons in pNRQCD are invariant under US gauge transformations, i.e. gauge transformations localized in ${\bf R}$. A Legendre transform of the pNRQCD Hamiltonian finally provides us with the pNRQCD Lagrangian. In the same way as NRQCD can be understood as an expansion of QCD in terms of the inverse of the heavy-quark masses, pNRQCD can be understood as an expansion of the gluon fields of NRQCD, projected onto the specific (two or three) heavy-quark Fock space, in powers of the relative coordinates of the heavy quarks. Quantum corrections of the order of the soft scale are encoded in the matching coefficients of pNRQCD in the same way as quantum corrections of the order of the heavy-quark masses are encoded in the matching coefficients of NRQCD. The matching coefficients of pNRQCD are typically non-analytic functions of the relative coordinates. The pNRQCD Lagrangian --------------------- The pNRQCD Lagrangian is organized as an expansion in $1/m$ and in the relative coordinates ${\bf r}_i$. Up to zeroth order in the $1/m$ expansion (static limit) and first order in the multipole expansion, the pNRQCD Lagrangian for $QQQ$ systems reads $$\begin{aligned} {\cal L}_{\rm pNRQCD}={\cal L}_{\rm pNRQCD}^{(0,0)}+{\cal L}_{\rm pNRQCD}^{(0,1)}\,. 
\label{LpNRQCD}\end{aligned}$$ An explicit derivation of this Lagrangian can be found in [@Brambilla:2005yk]; here we recall its expression. The term ${\cal L}_{\rm pNRQCD}^{(0,0)}$ describes at zeroth order in the multipole expansion the propagation of light quarks and US gluons as well as the temporal evolution of the static quarks, which are cast into singlet, $S\equiv S({\bf x}_1,{\bf x}_2,{\bf x}_3,t)$, octet, $O^A\equiv O^A({\bf x}_1,{\bf x}_2,{\bf x}_3,t)$ and $O^S\equiv O^S({\bf x}_1,{\bf x}_2,{\bf x}_3,t)$, and decuplet, $\Delta\equiv \Delta({\bf x}_1,{\bf x}_2,{\bf x}_3,t)$, fields (cf. Sec. \[subseq:comp\_fields\]), $$\begin{aligned} {\cal L}_{\rm pNRQCD}^{(0,0)}&=&\int{\rm d}^3\!\rho\,{\rm d}^3\!\lambda\;\Bigl\{ S^{\dag}\left[i\partial_0-V^s\right]S+\Delta^{\dag}\left[iD_0-V^{d}\right]\Delta+O^{A\dag}\left[iD_0-V^o_A\right]O^A \nonumber\\ &&\hspace*{1.7cm}+O^{S\dag}\left[iD_0-V^o_S\right]O^S +O^{A\dag}\left[-V_{AS}^o\right]O^S+O^{S\dag}\left[-V_{AS}^o\right]O^A\Bigr\} \nonumber\\ &&+\sum_{l}\bar{q}^{\,l}i\slashed{D}q^l-\frac{1}{4}F^a_{\mu\nu}F^{a\mu\nu}\,. \label{LpNRQCD1}\end{aligned}$$ The matching coefficients $V^s$, $V^o_A$, $V^o_S$ and $V^d$ correspond to singlet, (antisymmetric and symmetric) octet and decuplet potentials. The coefficient $V^o_{AS}$ is an octet mixing potential. 
The term ${\cal L}_{\rm pNRQCD}^{(0,1)}$ accounts for the interactions between static quarks and US gluons at first order in the multipole expansion, $$\begin{aligned} {\cal L}_{\rm pNRQCD}^{(0,1)}&=&\int{\rm d}^3\!\rho\,{\rm d}^3\!\lambda\;\Bigl\{ V^{(0,1)}_{S\pmb{\rho}\cdot{\bf E}O^S} \sum_{a=1}^8\tfrac{1}{2\sqrt{2}}\left[S^{\dag}\pmb{\rho}\cdot g{\bf E}^aO^{Sa}+O^{Sa\dag}\pmb{\rho}\cdot g{\bf E}^aS\right] \nonumber\\ &&\hspace*{1.75cm}-V^{(0,1)}_{S\pmb{\lambda}\cdot{\bf E}O^A} \sum_{a=1}^8\tfrac{1}{\sqrt{6}}\left[S^{\dag}\pmb{\lambda}\cdot g{\bf E}^aO^{Aa}+O^{Aa\dag}\pmb{\lambda}\cdot g{\bf E}^aS\right] \nonumber\\ &&\hspace*{1.75cm}-V^{(0,1)}_{O^A\pmb{\lambda}\cdot{\bf E}O^A} \sum_{a,b,c=1}^8\left(i\tfrac{f^{abc}}{6}+\tfrac{d^{\,abc}}{2}\right)O^{Aa\dag}\pmb{\lambda}\cdot g{\bf E}^bO^{Ac} \nonumber\\ &&\hspace*{1.75cm}+V^{(0,1)}_{O^S\pmb{\lambda}\cdot{\bf E}O^S} \sum_{a,b,c=1}^8\left(i\tfrac{f^{abc}}{6}+\tfrac{d^{\,abc}}{2}\right) O^{Sa\dag}\pmb{\lambda}\cdot g{\bf E}^bO^{Sc} \nonumber\\ &&\hspace*{1.75cm}-V^{(0,1)}_{O^A\pmb{\rho}\cdot{\bf E}O^S} \sum_{a,b,c=1}^8\left(\tfrac{if^{abc}+3d^{\,abc}}{4\sqrt{3}}\right) \left[O^{Aa\dag}\pmb{\rho}\cdot g{\bf E}^bO^{Sc}+O^{Sa\dag}\pmb{\rho}\cdot g{\bf E}^bO^{Ac}\right] \nonumber\\ &&\hspace*{1.75cm}+V^{(0,1)}_{O^A\pmb{\rho}\cdot{\bf E}\Delta} \sum_{a,b=1}^8\sum_{\delta=1}^{10}\left[\left( \epsilon_{ijk}T^a_{ii'}T^b_{jj'} \underline{\pmb{\Delta}}^{\delta}_{i'j'k}\right)O^{Aa\dag}\pmb{\rho}\cdot g{\bf E}^b\Delta^{\delta}\right. \nonumber\\ &&\hspace*{4.8cm}\left.-\left( \underline{\pmb{\Delta}}^{\delta}_{ijk}T^b_{ii'}T^a_{jj'}\epsilon_{i'j'k}\right) \Delta^{\delta\dag}\pmb{\rho}\cdot g{\bf E}^bO^{Aa}\right] \nonumber\\ &&\hspace*{1.75cm}+V^{(0,1)}_{O^S\pmb{\lambda}\cdot{\bf E}\Delta}\sum_{a,b=1}^8\sum_{\delta=1}^{10} \tfrac{2}{\sqrt{3}}\left[\left( \epsilon_{ijk}T^a_{ii'}T^b_{jj'} \underline{\pmb{\Delta}}^{\delta}_{i'j'k}\right)O^{Sa\dag}\pmb{\lambda}\cdot g{\bf E}^b\Delta^{\delta}\right. 
\nonumber\\ &&\hspace*{5.35cm}\left.-\left( \underline{\pmb{\Delta}}^{\delta}_{ijk}T^b_{ii'}T^a_{jj'}\epsilon_{i'j'k}\right)\Delta^{\delta\dag}\pmb{\lambda}\cdot g{\bf E}^bO^{Sa}\right]\Bigr\}, \label{LpNRQCDusw}\end{aligned}$$ where ${\bf E}={\bf E}^aT^a$ denotes the chromoelectric field evaluated at [**R**]{} and the coefficients $V^{(0,1)}_{...}$ are matching coefficients associated with chromoelectric dipole interactions between $QQQ$ fields in different color representations. The covariant derivatives, whose time components act on the octet and decuplet fields in Eq. (\[LpNRQCD1\]), are understood to be in the octet and decuplet representations, respectively. They are given explicitly in appendix \[app1\]. A mixing term, $-V_{AS}^o(O^{A\dag}O^S+O^{S\dag}O^A)$, has been included in ${\cal L}_{\rm pNRQCD}^{(0,0)}$. Such a term was not considered in [@Brambilla:2005yk], but was first recognized in [@Brambilla:2009cd]. The mixing potential will play a crucial role in the study of the symmetry of pNRQCD under exchange of the heavy-quark fields (see Sec. \[sec:sym\]) and in the calculation of the US corrections to the singlet static energy (see Sec. \[sec:singlet\]). For completeness, we list here the leading-order (LO) expressions for the various matching coefficients appearing in Eqs. (\[LpNRQCD1\]) and (\[LpNRQCDusw\]). At order $\alpha_{\rm s}$ the potentials in Eq. (\[LpNRQCD1\]) are given by (cf.
[@Brambilla:2005yk], [@Brambilla:2009cd]) $$\begin{aligned} V^s({\bf r}_1,{\bf r}_2,{\bf r}_3)&=& -\frac{2}{3}\alpha_{\rm s}\left(\frac{1}{|{\bf r}_1|}+\frac{1}{|{\bf r}_2|}+\frac{1}{|{\bf r}_3|}\right), \label{Vs}\\ V^d({\bf r}_1,{\bf r}_2,{\bf r}_3)&=& \frac{1}{3}\alpha_{\rm s}\left(\frac{1}{|{\bf r}_1|}+\frac{1}{|{\bf r}_2|}+\frac{1}{|{\bf r}_3|}\right), \\ V^o_A({\bf r}_1,{\bf r}_2,{\bf r}_3)&=& \alpha_{\rm s}\left(-\frac{2}{3}\frac{1}{|{\bf r}_1|}+\frac{1}{12}\frac{1}{|{\bf r}_2|}+\frac{1}{12}\frac{1}{|{\bf r}_3|}\right), \label{VOA}\\ V^o_S({\bf r}_1,{\bf r}_2,{\bf r}_3)&=& \alpha_{\rm s}\left(\frac{1}{3}\frac{1}{|{\bf r}_1|}-\frac{5}{12}\frac{1}{|{\bf r}_2|}-\frac{5}{12}\frac{1}{|{\bf r}_3|}\right), \label{VOS}\\ V^o_{AS}({\bf r}_1,{\bf r}_2,{\bf r}_3)&=& -\frac{\sqrt{3}}{4}\alpha_{\rm s}\left(\frac{1}{|{\bf r}_2|}-\frac{1}{|{\bf r}_3|}\right), \label{VAS}\end{aligned}$$ whereas all matching coefficients in Eq. (\[LpNRQCDusw\]) are equal to one at LO. The expressions for $V^s$ up to next-to-next-to-leading order (NNLO), and for $V^d$, $V^o_A$, $V^o_S$ and $V^o_{AS}$ up to next-to-leading order (NLO) can be found in [@Brambilla:2009cd] (the expression for $V^s$ up to NNLO is also in appendix \[app2\]). Symmetry under exchange of the heavy-quark fields {#sec:sym} ================================================= As outlined in detail in Sec. \[subseq:comp\_fields\], the heavy-quark fields in the pNRQCD Lagrangian are written in terms of composite fields, which are proportional to $Q_{i}({\bf x}_1,t)Q_{j}({\bf x}_2,t)Q_{k}({\bf x}_3,t)$. However, as there is no preferred ordering, and the heavy-quark fields anticommute, different orderings of the heavy quarks lead to different composite fields. The orderings are however arbitrary and the pNRQCD Lagrangian should be invariant under different orderings of the heavy-quark fields. We call this invariance symmetry under exchange of the heavy-quark fields or, in short, exchange symmetry. 
A special case of exchange symmetry is the symmetry under permutation of the heavy-quark fields. A different ordering of the heavy-quark fields can be realized either [*(a)*]{} by relabeling the heavy-quark coordinates in the pNRQCD Lagrangian or [*(b)*]{} by anticommuting the heavy-quark fields in the composite fields. Since the two procedures lead to the same Lagrangian, this constrains the form of the heavy-quark potentials. In fact, the invariance of the Lagrangian under $(a)$ is trivially realized due to the additional integrations over the quark locations ${\bf x}_1$, ${\bf x}_2$ and ${\bf x}_3$, and only $(b)$ results in nontrivial transformations. [*(a)*]{} We may relabel the coordinates ${\bf x}_i$ and the relative vectors ${\bf r}_i$ in the pNRQCD Lagrangian according to one of the following two possibilities (other relabelings follow from these) $$\begin{aligned} {\bf x}_1\leftrightarrow{\bf x}_2\,,{\bf x}_3:\quad \begin{cases} {\bf r}_1\to-{\bf r}_1 \\ {\bf r}_2\to {\bf r}_3 \\ {\bf r}_3\to {\bf r}_2 \end{cases} \label{rwird1} ,\\ {\bf x}_1\leftrightarrow{\bf x}_3\,,{\bf x}_2:\quad \begin{cases} {\bf r}_1\to-{\bf r}_3 \\ {\bf r}_2\to-{\bf r}_2 \\ {\bf r}_3\to-{\bf r}_1 \end{cases} \label{rwird2} .\end{aligned}$$ The relabelings affect the pNRQCD potentials and the ordering of the quark fields in the composite fields of pNRQCD. [*(b)*]{} Because the heavy-quark fields $Q_{i}({\bf x})$ of NRQCD satisfy equal-time anticommutation relations, $\{Q_{i}({\bf x},t),$ $Q_{j}({\bf y},t)\}=0$, from Eq. (\[QQQselbst\]) it follows that $$\begin{aligned} && \Phi_{ijk}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)=-\Phi_{jik}({\bf x}_2,{\bf x}_1,{\bf x}_3,t)\,, \label{soises0}\\ && \Phi_{ijk}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)=-\Phi_{kji}({\bf x}_3,{\bf x}_2,{\bf x}_1,t)\,. \label{soises1}\end{aligned}$$ These identities hold also for ${\cal M}_{ijk}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)$, which is related to $\Phi_{ijk}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)$ through Eq.
(\[Phiijk\]): $$\begin{aligned} && {\cal M}_{ijk}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)=-{\cal M}_{jik}({\bf x}_2,{\bf x}_1,{\bf x}_3,t)\,, \label{Msoises0}\\ && {\cal M}_{ijk}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)=-{\cal M}_{kji}({\bf x}_3,{\bf x}_2,{\bf x}_1,t)\,. \label{Msoises1}\end{aligned}$$ In turn, the identities for ${\cal M}_{ijk}({\bf x}_1,{\bf x}_2,{\bf x}_3,t)$ enable us to derive corresponding identities for the singlet, octet and decuplet fields just by multiplying Eqs. (\[Msoises0\]) and (\[Msoises1\]) with ${\underline {\bf S}}_{ijk}$, ${\underline {\bf \Delta}}^{\delta}_{ijk}$, ${\underline {\bf O}}^{Aa*}_{ijk}$, or ${\underline {\bf O}}^{Sa*}_{ijk}$, respectively, and summing over $i,j,k$: $$\begin{aligned} \begin{cases} S({\bf x}_1,{\bf x}_2,{\bf x}_3,t) \hspace{-3mm} &= S({\bf x}_2,{\bf x}_1,{\bf x}_3,t) \\ \Delta^{\delta}({\bf x}_1,{\bf x}_2,{\bf x}_3,t) \hspace{-3mm} &= -\Delta^{\delta}({\bf x}_2,{\bf x}_1,{\bf x}_3,t) \\ O^{Aa}({\bf x}_1,{\bf x}_2,{\bf x}_3,t) \hspace{-3mm} &= O^{Aa}({\bf x}_2,{\bf x}_1,{\bf x}_3,t) \\ O^{Sa}({\bf x}_1,{\bf x}_2,{\bf x}_3,t) \hspace{-3mm} &= -O^{Sa}({\bf x}_2,{\bf x}_1,{\bf x}_3,t) \label{block1} \end{cases} ,\end{aligned}$$ and $$\begin{aligned} \begin{cases} S({\bf x}_1,{\bf x}_2,{\bf x}_3,t) \hspace{-3mm} &= S({\bf x}_3,{\bf x}_2,{\bf x}_1,t)\\ \Delta^{\delta}({\bf x}_1,{\bf x}_2,{\bf x}_3,t) \hspace{-3mm} &= -\Delta^{\delta}({\bf x}_3,{\bf x}_2,{\bf x}_1,t)\\ O^{Aa}({\bf x}_1,{\bf x}_2,{\bf x}_3,t) \hspace{-3mm} &= - \tfrac{1}{2} O^{Aa}({\bf x}_3,{\bf x}_2,{\bf x}_1,t) + \tfrac{\sqrt{3}}{2} O^{Sa}({\bf x}_3,{\bf x}_2,{\bf x}_1,t)\\ O^{Sa}({\bf x}_1,{\bf x}_2,{\bf x}_3,t) \hspace{-3mm} &= \tfrac{\sqrt{3}}{2} O^{Aa}({\bf x}_3,{\bf x}_2,{\bf x}_1,t) + \tfrac{1}{2} O^{Sa}({\bf x}_3,{\bf x}_2,{\bf x}_1,t) \label{block2} \end{cases} . \end{aligned}$$ At variance with the relabeling [*(a)*]{}, anticommuting the heavy-quarks in the composite fields only indirectly affects the pNRQCD potentials. 
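The tensor identities underlying the field transformations (\[block1\]) and (\[block2\]) can be checked numerically on the explicit representation (\[S\])-(\[O2\]): swapping color indices in the tensors, together with the overall minus sign from the anticommutation relations (\[Msoises0\]) and (\[Msoises1\]), reproduces the signs and mixing coefficients quoted above (the totally antisymmetric singlet and totally symmetric decuplet behave trivially). A sketch in NumPy (our choice of tool; variable names are ours):

```python
import numpy as np

# Gell-Mann matrices lambda^a, a = 1..8
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

S  = eps / np.sqrt(6)                                             # Eq. (S)
OA = 0.5 * np.einsum('ijl,akl->aijk', eps, lam)                   # Eq. (O1)
OS = (np.einsum('jkl,ail->aijk', eps, lam)
      + np.einsum('ikl,ajl->aijk', eps, lam)) / (2 * np.sqrt(3))  # Eq. (O2)

# swap of the first two color indices (i <-> j): with the extra minus sign
# from Eq. (Msoises0), these give the signs in Eq. (block1)
assert np.allclose(S.transpose(1, 0, 2), -S)
assert np.allclose(OA.transpose(0, 2, 1, 3), -OA)
assert np.allclose(OS.transpose(0, 2, 1, 3), OS)

# swap of the outer color indices (i <-> k): with the minus sign from
# Eq. (Msoises1), these give the octet mixing in Eq. (block2)
assert np.allclose(S.transpose(2, 1, 0), -S)
assert np.allclose(OA.transpose(0, 3, 2, 1), OA/2 - np.sqrt(3)/2 * OS)
assert np.allclose(OS.transpose(0, 3, 2, 1), -np.sqrt(3)/2 * OA - OS/2)
```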
Note that the octet transformations in (\[block2\]) may be interpreted as a special case of the transformations (\[O\_1\]) and (\[O\_2\]) for $\varphi_S=0$, $\varphi_A=\pi$ and $\omega=\pi/3$. By relabeling [*(a)*]{} or by anticommuting the heavy-quark fields [*(b)*]{} we get two versions of the pNRQCD Lagrangian that must be the same. This requires the pNRQCD potentials to change in a well-defined manner under the transformations (\[block1\]) and (\[block2\]). In particular, if we restrict ourselves to the potentials in Eq. (\[LpNRQCD1\]), the singlet and decuplet potentials remain invariant, whereas the octet potentials transform as $$\begin{aligned} \begin{cases} V^o_A(-{\bf r}_1,{\bf r}_3,{\bf r}_2) \hspace{-3mm} &= V^o_A({\bf r}_1,{\bf r}_2,{\bf r}_3) \\ V^o_S(-{\bf r}_1,{\bf r}_3,{\bf r}_2) \hspace{-3mm} &= V^o_S({\bf r}_1,{\bf r}_2,{\bf r}_3) \\ V_{AS}^o(-{\bf r}_1,{\bf r}_3,{\bf r}_2) \hspace{-3mm} &= - V_{AS}^o({\bf r}_1,{\bf r}_2,{\bf r}_3) \label{Vrel1} \end{cases} , \end{aligned}$$ and $$\begin{aligned} \begin{cases} V^o_A(-{\bf r}_3,-{\bf r}_2,-{\bf r}_1) \hspace{-3mm} &= \tfrac{1}{4} V^o_A({\bf r}_1,{\bf r}_2,{\bf r}_3) + \tfrac{3}{4} V^o_S({\bf r}_1,{\bf r}_2,{\bf r}_3) - \tfrac{\sqrt{3}}{2} V_{AS}^o({\bf r}_1,{\bf r}_2,{\bf r}_3) \\ V^o_S(-{\bf r}_3,-{\bf r}_2,-{\bf r}_1) \hspace{-3mm} &= \tfrac{3}{4} V^o_A({\bf r}_1,{\bf r}_2,{\bf r}_3) + \tfrac{1}{4} V^o_S({\bf r}_1,{\bf r}_2,{\bf r}_3) + \tfrac{\sqrt{3}}{2} V_{AS}^o({\bf r}_1,{\bf r}_2,{\bf r}_3) \\ V_{AS}^o(-{\bf r}_3,-{\bf r}_2,-{\bf r}_1) \hspace{-3mm} &= \tfrac{\sqrt{3}}{4} \bigl[V^o_S({\bf r}_1,{\bf r}_2,{\bf r}_3) -V^o_A({\bf r}_1,{\bf r}_2,{\bf r}_3)\bigr] +\tfrac{1}{2} V_{AS}^o({\bf r}_1,{\bf r}_2,{\bf r}_3) \label{Vrel2} \end{cases} , \end{aligned}$$ for transformations of type (\[block1\]) and (\[block2\]), respectively. We emphasize that the above transformations are general and do not rely on any specific geometry of the three quarks. They also do not rely on any perturbative expansion.
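The transformation laws (\[Vrel1\]) and (\[Vrel2\]) can be verified directly on the LO expressions (\[VOA\])-(\[VAS\]). The sketch below (NumPy; the coupling value and all variable names are arbitrary choices of ours) checks both sets of relations for a random geometry, and in addition the equilateral case, where the mixing potential vanishes and the two octet potentials degenerate.

```python
import numpy as np

rng = np.random.default_rng(1)
x1, x2, x3 = rng.normal(size=(3, 3))      # generic quark positions
r1, r2, r3 = x1 - x2, x1 - x3, x2 - x3

alpha = 0.3                               # arbitrary sample value of alpha_s

def n(v):
    return np.linalg.norm(v)

def VA(r1, r2, r3):                       # Eq. (VOA)
    return alpha * (-2/3/n(r1) + 1/12/n(r2) + 1/12/n(r3))

def VS(r1, r2, r3):                       # Eq. (VOS)
    return alpha * (1/3/n(r1) - 5/12/n(r2) - 5/12/n(r3))

def VAS(r1, r2, r3):                      # Eq. (VAS)
    return -np.sqrt(3)/4 * alpha * (1/n(r2) - 1/n(r3))

# Eq. (Vrel1)
assert np.isclose(VA(-r1, r3, r2), VA(r1, r2, r3))
assert np.isclose(VS(-r1, r3, r2), VS(r1, r2, r3))
assert np.isclose(VAS(-r1, r3, r2), -VAS(r1, r2, r3))

# Eq. (Vrel2)
v_a, v_s, v_as = VA(r1, r2, r3), VS(r1, r2, r3), VAS(r1, r2, r3)
assert np.isclose(VA(-r3, -r2, -r1), v_a/4 + 3*v_s/4 - np.sqrt(3)/2*v_as)
assert np.isclose(VS(-r3, -r2, -r1), 3*v_a/4 + v_s/4 + np.sqrt(3)/2*v_as)
assert np.isclose(VAS(-r3, -r2, -r1), np.sqrt(3)/4*(v_s - v_a) + v_as/2)

# equilateral geometry: the mixing vanishes and the octet potentials degenerate
y1, y2, y3 = np.array([0., 0.]), np.array([1., 0.]), np.array([0.5, np.sqrt(3)/2])
s1, s2, s3 = y1 - y2, y1 - y3, y2 - y3
assert np.isclose(VAS(s1, s2, s3), 0.0)
assert np.isclose(VA(s1, s2, s3), VS(s1, s2, s3))   # = -alpha/(2r) with r = 1
```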
Furthermore, they are also valid beyond the static limit, at any order in $1/m$.[^2] As a simple application of the above formulas, let us consider for instance the LO expression of $V^o_{AS}({\bf r}_1,{\bf r}_2,{\bf r}_3)$ given in Eq. (\[VAS\]). Under (\[Vrel2\]) it transforms into $$\begin{aligned} V^o_{AS}(-{\bf r}_3,-{\bf r}_2,-{\bf r}_1) = -\frac{\sqrt{3}}{4}\alpha_{\rm s}\left(\frac{1}{|{\bf r}_2|}-\frac{1}{|{\bf r}_1|}\right)\,, \label{eq:sojaso}\end{aligned}$$ which is the result expected from relabeling the coordinates according to Eq. (\[rwird2\]). Let us emphasize again that the inclusion of the octet mixing potential $V^o_{AS}$ in Eq. (\[LpNRQCD1\]) is essential for reproducing the correct transformation properties of the octet potentials. Finally, it is interesting to apply relations (\[Vrel1\]) and (\[Vrel2\]) to the simplest case of an equilateral geometry. In such a geometry we have a single length scale $r=|{\bf r}_1|=|{\bf r}_2|=|{\bf r}_3|$ and a single angle $\hat{\bf r}_1\cdot \hat{\bf r}_2 =-\hat{\bf r}_1\cdot\hat{\bf r}_3 =\hat{\bf r}_2\cdot\hat{\bf r}_3=\cos({\pi}/{3})$. Whenever the potentials are invariant under the transformations (\[rwird1\]) and (\[rwird2\]), which is certainly the case for two-body interactions but may not hold at higher orders, from Eq. (\[Vrel1\]) it follows that $V_{AS}^o = 0$ and from Eq. (\[Vrel2\]) that $$\begin{aligned} V^o_A (r)= V^o_S(r) \equiv V^o(r) \,. \label{eq:Osgleich}\end{aligned}$$ The $QQQ$ singlet static energy at ${\cal O}(\alpha_{\rm s}^4\ln\alpha_{\rm s})$ {#sec:singlet} ================================================================================ The potentials of pNRQCD depend in general on a factorization scale $\mu$ separating soft from US contributions,[^3] whereas the singlet static energy $E^s$ is an observable and hence independent of $\mu$. As in the $Q\bar Q$ case [@Brambilla:1999qa], the $QQQ$ singlet static potential $V^s$ is expected to become $\mu$ dependent at next-to-next-to-next-to-leading order (NNNLO), i.e.
at order $\alpha_{\rm s}^4$ [@Brambilla:2005yk]. The difference between the singlet static energy and the singlet static potential is encoded in an ultrasoft contribution denoted $\delta^s_{\rm US}$, which starts contributing at order $\alpha_{\rm s}^4$. It depends on $\mu$ in such a way that $E^s$, given by $$E^s({\bf r}_1,{\bf r}_2,{\bf r}_3)=V^s({\bf r}_1,{\bf r}_2,{\bf r}_3;\mu)+\delta^s_{\rm US}({\bf r}_1,{\bf r}_2,{\bf r}_3;\mu), \label{E0}$$ is $\mu$ independent. The cancellation of the $\mu$ dependence of $V^s$ against $\delta_{\rm US}^s$ at NNNLO leaves in $E^s$ a remnant, which is a contribution of order $\alpha_{\rm s}^4\ln\alpha_{\rm s}$. This is the leading perturbative contribution to $E^s$ that is non-analytic in $\alpha_{\rm s}$. The most convenient way to calculate the $\alpha_{\rm s}^4\ln\mu$ term in $V^s$, and the $\alpha_{\rm s}^4\ln\alpha_{\rm s}$ term in $E^s$, is by looking at the leading divergence of $\delta_{\rm US}^s$. This requires the one-loop calculation of the color-singlet self-energy, as opposed to the three-loop calculation necessary to extract the term $\alpha_{\rm s}^4\ln\mu$ directly from $V^s$. We will perform this calculation in the following subsection. Determination of $\delta_{\rm US}^s$ {#detUS} ------------------------------------ We aim at calculating $\delta_{\rm US}^s$ up to order $\alpha_{\rm s}^4$. For this purpose we need the singlet and octet propagators, and the octet mixing potential at leading order \[cf. Eq. (\[LpNRQCD1\])\], ![image](props_bare_v3){width="11.5cm"} $$\begin{aligned} \label{theprops} \end{aligned}$$ \ as well as the singlet-to-octet interaction vertices at order ${\bf r}_i$ in the multipole expansion \[cf. Eq. (\[LpNRQCDusw\]), note that the singlet couples differently to the symmetric and antisymmetric octets\], ![image](vertices_so_v3){width="5.2cm"} $$\begin{aligned} \label{thevertices} \end{aligned}$$ \ The parameter $T$ in Eq. (\[theprops\]) is the propagation time. The wavy lines in Eq.
(\[thevertices\]) represent ultrasoft gluons; note that we have written the vertices with US gluons treating the gluons as external fields. The most noteworthy difference with respect to the calculation of $\delta_{\rm US}^s$ in the $Q\bar{Q}$ case is that here the singlet couples to two distinct octet fields and that the octet fields mix. For this reason the calculation in the baryonic case exhibits some novel features with respect to the analogous mesonic case. Since the mixing of the octet fields is an effect of the same order as the energies of the octets, it must be accounted for to all orders when computing the physical octet-to-octet propagators. The resummation of the octet mixing potential gives rise to three different types of resummed octet propagators: - a resummed octet propagator, $G^o_{S}$, that describes the propagation from a symmetric initial state to a symmetric final state: ![image](propOdressed_SS_v2){width="12cm"} - a resummed octet propagator, $G^o_{A}$, that describes the propagation from an antisymmetric initial state to an antisymmetric final state: ![image](propOdressed_AA_v2){width="11.1cm"} - a resummed octet propagator, $G^o_{AS}$, that describes the propagation from a symmetric initial state to an antisymmetric final state or vice versa: ![image](propOdressed_SA){width="5.3cm"} The explicit expressions for the resummed octet propagators are most conveniently computed in momentum space and read $$\begin{aligned} -i\left[G^o_{S}(E)\right]_{ab}&=\frac{i\delta_{ab}(E-V_A^o)}{(E-V_S^o+i\epsilon)(E-V_A^o+i\epsilon)-(V^{o}_{AS})^2}\,, \label{pe}\\ -i\left[G^o_{A}(E)\right]_{ab}&=\frac{i\delta_{ab}(E-V_S^o)}{(E-V_S^o+i\epsilon)(E-V_A^o+i\epsilon)-(V^{o}_{AS})^2}\,, \\ -i\left[G^o_{AS}(E)\right]_{ab}&=\frac{i\delta_{ab}V^{o}_{AS}}{(E-V_S^o+i\epsilon)(E-V_A^o+i\epsilon)-(V^{o}_{AS})^2}\,, \label{pa}\end{aligned}$$ with $\epsilon\to0^+$. 
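The momentum-space expressions (\[pe\])-(\[pa\]) are nothing but the entries of the inverse of $E-M$, where $M$ is the $2\times2$ mixing matrix built from $V^o_S$, $V^o_A$ and $V^o_{AS}$; the poles of the resummed propagators therefore sit at the eigenvalues of $M$, i.e. at the energies $E_{1,2}$ given below in Eq. (\[E12\]). A quick numerical sketch (NumPy; the sample values are arbitrary, not taken from the paper):

```python
import numpy as np

VSo, VAo, VASo = 1.3, 0.7, 0.4   # arbitrary sample values of the octet potentials
E = 2.1                          # arbitrary (real) energy away from the poles

# mixing matrix in the (O^S, O^A) basis, read off from Eq. (LpNRQCD1)
M = np.array([[VSo, VASo],
              [VASo, VAo]])

G = np.linalg.inv(E * np.eye(2) - M)
det = (E - VSo) * (E - VAo) - VASo**2

assert np.isclose(G[0, 0], (E - VAo) / det)   # Eq. (pe):  G^o_S
assert np.isclose(G[1, 1], (E - VSo) / det)   # G^o_A
assert np.isclose(G[0, 1], VASo / det)        # Eq. (pa):  G^o_AS

# the poles coincide with the eigenvalues of M, i.e. with E_{1,2}
# of Eq. (E12) (for epsilon -> 0)
E12 = (VAo + VSo)/2 + np.array([1, -1]) * np.sqrt(((VAo - VSo)/2)**2 + VASo**2)
assert np.allclose(np.sort(np.linalg.eigvalsh(M)), np.sort(E12))
```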
After performing a Fourier transform from energy $E$ to time $T$, we obtain ![image](props_v3){width="12cm"} $$\begin{aligned} \label{dressedprops} \end{aligned}$$ where $$E_{1,2}=\frac{V_A^o+V_S^o}{2}\pm\sqrt{\left(\frac{V_A^o-V_S^o}{2}\right)^2+(V^{o}_{AS})^2}-i\epsilon\,. \label{E12}$$ ![Leading-order contributions to $\delta_{\rm US}^s$. As there is no direct coupling between decuplet and singlet fields at first order in the multipole expansion, we do not have contributions involving decuplet degrees of freedom.[]{data-label="match"}](S_matching_v2){width="10cm"} The US contribution $\delta_{\rm US}^s$ is given at LO by the color-singlet self-energy diagrams shown in Fig. \[match\]. Because the singlet couples to two distinct octet fields and they mix, we have four such diagrams \[cf. Eq. (\[thevertices\])\]. They give $$\begin{aligned} \delta_{\rm US}^s &=& - i g^2\left(\frac{1}{2\sqrt{2}}\right)^2\int_{0}^{\infty}{\rm d}t\, \frac{1}{E_1-E_2} \left[(E_1-V_A^o){\rm e}^{-it(E_1-V^s)}\right. \nonumber \\ &&\hspace*{5.3cm}\left.-(E_2-V_A^o){\rm e}^{-it(E_2-V^s)}\right]\langle\pmb{\rho}\cdot{\bf E}^a(t)\pmb{\rho}\cdot{\bf E}^a(0)\rangle \nonumber \\ && -i g^2\left(\frac{1}{\sqrt{6}}\right)^2\int_{0}^{\infty}{\rm d}t\, \frac{1}{E_1-E_2}\left[(E_1-V_S^o){\rm e}^{-it(E_1-V^s)}\right. \nonumber \\ &&\hspace*{5.1cm}\left.-(E_2-V_S^o){\rm e}^{-it(E_2-V^s)}\right]\langle\pmb{\lambda}\cdot{\bf E}^a(t)\pmb{\lambda}\cdot{\bf E}^a(0)\rangle \nonumber \\ && +2ig^2\frac{1}{2\sqrt{2}}\frac{1}{\sqrt{6}} \int_{0}^{\infty}{\rm d}t\,\frac{V^o_{AS}}{E_1-E_2}\left[{\rm e}^{-it(E_1-V^s)}\right. \left.-{\rm e}^{-it(E_2-V^s)}\right] \langle\pmb{\rho}\cdot{\bf E}^a(t)\pmb{\lambda}\cdot{\bf E}^a(0)\rangle , \label{pNRQCDdiags}\end{aligned}$$ where $\langle \cdots \rangle$ stands for a vacuum expectation value. In writing the various contributions in Eq. (\[pNRQCDdiags\]), we have kept the same order as in Fig. \[match\]: the first two terms correspond to the two diagrams shown in the first line of Fig.
\[match\], and the last contribution is the sum of the two diagrams in the second line of Fig. \[match\], which are equal. The vacuum expectation value of two chromoelectric fields reads in dimensional regularization ($d = 4-2\varepsilon$ is the number of dimensions) $$\langle{\bf a}\cdot{\bf E}^a(t){\bf b}\cdot{\bf E}^a(0)\rangle= {\bf a}\cdot{\bf b}\,\frac{4(d-2)}{(d-1)}\mu^{4-d}\int\frac{{\rm d}^{d-1}q}{(2\pi)^{d-1}}|{\bf q}|{\rm e}^{-i|{\bf q}|t}+{\cal O}(\alpha_s)\,,$$ where ${\bf a}$ and ${\bf b}$ are two generic vectors and $t>0$. Performing the integrals in (\[pNRQCDdiags\]) we obtain $$\begin{aligned} \delta_{\rm US}^s = \frac{4}{3}\frac{\alpha_{\rm s}}{\pi}\frac{1}{E_1-E_2}\hspace*{-3mm} && \left[\left(\frac{|\pmb{\rho}|^2}{4}(E_1-V_A^o)+\frac{|\pmb{\lambda}|^2}{3}(E_1-V_S^o) -\frac{\pmb{\rho}\cdot\pmb{\lambda}}{\sqrt{3}}V^{o}_{AS}\right)(E_1-V^s)^3\right. \nonumber\\ &&\hspace*{1cm} \times\left(\frac{1}{\varepsilon}-\gamma_E-\ln\frac{(E_1-V^s)^2}{\pi\mu^2}+\frac{5}{3}\right) \nonumber\\ && -\left(\frac{|\pmb{\rho}|^2}{4}(E_2-V_A^o)+\frac{|\pmb{\lambda}|^2}{3}(E_2-V_S^o) -\frac{\pmb{\rho}\cdot\pmb{\lambda}}{\sqrt{3}}V^{o}_{AS}\right)(E_2-V^s)^3 \nonumber\\ && \hspace*{1cm}\left. \times\left(\frac{1}{\varepsilon}-\gamma_E -\ln\frac{(E_2-V^s)^2}{\pi\mu^2}+\frac{5}{3}\right)\right]\,, \label{US1}\end{aligned}$$ where $\gamma_E$ is the Euler–Mascheroni constant. Equation  comprises the entire US contribution up to order $\alpha_s^4$. The explicit expressions may be obtained by replacing $E_1$ and $E_2$ with the right-hand side of Eq. (\[E12\]), and $V^s$, $V_A^o$, $V_S^o$ and $V_{AS}^o$ by the LO expressions given in Eqs. (\[Vs\]), (\[VOA\]), (\[VOS\]) and (\[VAS\]), respectively. Equation (\[US1\]) corrects the expression derived in [@Brambilla:2005yk], where the mixing of the octet fields was not taken into account. Hence, the result of [@Brambilla:2005yk] is recovered from Eq.  by setting $V_{AS}^o=0$.
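The $\varepsilon$-expansion leading from (\[pNRQCDdiags\]) to (\[US1\]) can be checked symbolically. After the $t$-integration, each structure reduces to $-g^2\,\mu^{2\varepsilon}\,\tfrac{4(d-2)}{d-1}\int\tfrac{{\rm d}^{d-1}q}{(2\pi)^{d-1}}\,\tfrac{|{\bf q}|}{|{\bf q}|+\Delta}$ with $\Delta=E_{1,2}-V^s$, and the radial integral equals $\Delta^{d-1}\,\pi/\sin(\pi d)$ in dimensional regularization. A sketch of the check, under the standard assumed conventions ($g^2=4\pi\alpha_{\rm s}\mu^{2\varepsilon}$); multiplied by a color factor such as $(1/(2\sqrt{2}))^2=1/8$ and by $|\pmb{\rho}|^2$, the result reproduces the $\tfrac{4}{3}\tfrac{\alpha_{\rm s}}{\pi}\tfrac{|\pmb{\rho}|^2}{4}$ normalization of Eq. (\[US1\]):

```python
import sympy as sp

eps, alpha, Delta, mu = sp.symbols('varepsilon alpha Delta mu', positive=True)
d = 4 - 2*eps

# Angular measure of the (d-1)-dimensional q-integral
omega = 2*sp.pi**((d - 1)/2)/sp.gamma((d - 1)/2)

# -g^2 * mu^(2 eps) * 4(d-2)/(d-1) * Int d^{d-1}q/(2 pi)^{d-1} q/(q + Delta),
# with g^2 = 4 pi alpha and radial integral Delta^{d-1} * pi/sin(pi d)
expr = (-4*sp.pi*alpha*mu**(2*eps)*4*(d - 2)/(d - 1)
        *omega/(2*sp.pi)**(d - 1)*Delta**(d - 1)*sp.pi/sp.sin(sp.pi*d))

ser = sp.expand(sp.series(expr, eps, 0, 1).removeO())

# Structure appearing in Eq. (US1), per unit color/vector factor
target = sp.expand(sp.Rational(8, 3)*alpha/sp.pi*Delta**3
                   *(1/eps - sp.EulerGamma
                     - sp.log(Delta**2/(sp.pi*mu**2)) + sp.Rational(5, 3)))

# Compare the 1/eps pole and the finite part at a sample numerical point
vals = {alpha: sp.Rational(1, 10), Delta: 2, mu: 3}
diffs = [abs(float((ser.coeff(eps, n) - target.coeff(eps, n)).evalf(subs=vals)))
         for n in (-1, 0)]
assert all(df < 1e-8 for df in diffs)
```

In particular, the $+5/3$ constant and the $-\gamma_E-\ln[\Delta^2/(\pi\mu^2)]$ combination emerge from the expansion of the $\Gamma$-functions and of $\Delta^{-2\varepsilon}\mu^{2\varepsilon}$.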
Invariance of $\delta_{\rm US}^s$ under exchange symmetry {#sec:invdeltaUS} --------------------------------------------------------- The US correction, $\delta_{\rm US}^s$, calculated in the previous section is expected to be invariant under the exchange symmetry discussed in Sec. \[sec:sym\]. To verify this we observe that according to Eqs. (\[Vrel1\]) and (\[Vrel2\]) the combinations $(V_A^o+V_S^o)$ and $\left[\left(V_A^o-V_S^o\right)^2/4+(V^{o}_{AS})^2\right]$ are each invariant. This implies that both $E_1$ and $E_2$ are invariant according to the definition (\[E12\]). Also the singlet static potential, $V^s$, is invariant at LO \[see Eq. (\[Vs\])\]. If we rewrite explicitly the expression ${|\pmb{\rho}|^2}/{4}+{|\pmb{\lambda}|^2}/{3}$ in terms of the positions of the heavy quarks with the help of Eqs. (\[rx123\]) and (\[rholambda\]), $$\frac{|\pmb{\rho}|^2}{4}+\frac{|\pmb{\lambda}|^2}{3}= \frac{1}{3}\left({\bf x}_1^2+{\bf x}_2^2+{\bf x}_3^2-{\bf x}_1\cdot{\bf x}_2-{\bf x}_1\cdot{\bf x}_3-{\bf x}_2\cdot{\bf x}_3\right), \label{A}$$ it is evident that this expression is invariant under the transformations and . Finally, we have to show that the expression $$V^o_A \frac{|\pmb{\rho}|^2}{4} + V^o_S\frac{|\pmb{\lambda}|^2}{3} + V^o_{AS}\frac{\pmb{\rho}\cdot\pmb{\lambda}}{\sqrt{3}}, \label{B}$$ is also invariant. This is a straightforward, although not manifest, consequence of the transformations (\[rwird1\]), (\[rwird2\]), (\[Vrel1\]) and (\[Vrel2\]), which completes the proof that $\delta_{\rm US}^s$ is invariant under the exchange symmetry. The invariance of $\delta_{\rm US}^s$ is directly inherited by the contribution to $V^s$ at order $\alpha_{\rm s}^4\ln \mu$ and the singlet static energy $E^s$ at order $\alpha_{\rm s}^4\ln \alpha_{\rm s}$. The $QQQ$ Singlet Static Potential and Energy --------------------------------------------- According to Eq. 
(\[E0\]), the divergence and the $\alpha_{\rm s}^4\ln\mu$ term in $\delta_{\rm US}^s$ must cancel against a divergence and a term $\alpha_{\rm s}^4\ln\mu$ in the singlet static potential $V^s$. Therefore the $\alpha_{\rm s}^4\ln\mu$ part of the potential may be read off from Eq. (\[US1\]). In a minimal subtraction scheme, the singlet static potential up to order $\alpha_{\rm s}^4\ln\mu$ is then given by $$\begin{aligned} V^s({\bf r}_1,{\bf r}_2,{\bf r}_3;\mu) &=& V^s_{\rm NNLO}({\bf r}_1,{\bf r}_2,{\bf r}_3) \nonumber\\ - \frac{\alpha_{\rm s}^4}{3\pi}\ln\mu && \hspace{-6mm} \left[ \left({\bf r}_1^2+\frac{({\bf r}_2+{\bf r}_3)^2}{3}\right) \left(\frac{1}{|{\bf r}_1|^2} + \frac{1}{|{\bf r}_2|^2} + \frac{1}{|{\bf r}_3|^2} -\frac{1}{4}\frac{|{\bf r}_1| + |{\bf r}_2| + |{\bf r}_3|}{|{\bf r}_1||{\bf r}_2||{\bf r}_3|}\right) \right. \nonumber\\ && \hspace{3.4cm} \times \left(\frac{1}{|{\bf r}_1|} + \frac{1}{|{\bf r}_2|} + \frac{1}{|{\bf r}_3|} \right) \nonumber\\ && \hspace{-6mm} + \left({\bf r}_1^2-\frac{({\bf r}_2+{\bf r}_3)^2}{3}\right) \left(\frac{1}{|{\bf r}_1|^2} + \frac{1}{|{\bf r}_2|^2} + \frac{1}{|{\bf r}_3|^2} +\frac{5}{4}\frac{|{\bf r}_1| + |{\bf r}_2| + |{\bf r}_3|}{|{\bf r}_1||{\bf r}_2||{\bf r}_3|}\right) \nonumber\\ && \hspace{3.4cm} \times \left(\frac{1}{|{\bf r}_1|} - \frac{1}{2|{\bf r}_2|} - \frac{1}{2|{\bf r}_3|} \right) \nonumber\\ && \hspace{-6mm} + {\bf r}_1\cdot({\bf r}_2+{\bf r}_3) \left(\frac{1}{|{\bf r}_1|^2} + \frac{1}{|{\bf r}_2|^2} + \frac{1}{|{\bf r}_3|^2} +\frac{5}{4}\frac{|{\bf r}_1| + |{\bf r}_2| + |{\bf r}_3|}{|{\bf r}_1||{\bf r}_2||{\bf r}_3|}\right) \nonumber\\ && \hspace{3.4cm} \left. \times \left(\frac{1}{|{\bf r}_2|} - \frac{1}{|{\bf r}_3|} \right) \right] \,. \label{Vs3loop}\end{aligned}$$ The singlet static potential up to order $\alpha_{\rm s}^3$, which we have denoted by $V^s_{\rm NNLO}$, has been calculated in Ref. [@Brambilla:2009cd] and is reproduced in appendix \[app2\]. 
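In the equilateral limit ($|{\bf r}_1|=|{\bf r}_2|=|{\bf r}_3|=r$, with ${\bf r}_1={\bf r}_2-{\bf r}_3$ our reading of the geometric constraint used later), the bracket multiplying $-\alpha_{\rm s}^4\ln\mu/(3\pi)$ in Eq. (\[Vs3loop\]) collapses to a single term: the second and third contributions vanish because $|{\bf r}_2|=|{\bf r}_3|$ and ${\bf r}_1\cdot({\bf r}_2+{\bf r}_3)=({\bf r}_2-{\bf r}_3)\cdot({\bf r}_2+{\bf r}_3)=0$, and the remainder equals $27/(2r)$, so that the logarithmic term is $-9\,\alpha_{\rm s}^4\ln\mu/(2\pi r)$. A sketch of the symbolic check:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

# Equilateral configuration of side r, with r1 = r2 - r3
r2v = sp.Matrix([r, 0, 0])
r3v = sp.Matrix([r/2, r*sp.sqrt(3)/2, 0])
r1v = r2v - r3v

def norm(v):
    return sp.sqrt(v.dot(v))

assert sp.simplify(norm(r1v) - r) == 0

n1, n2, n3 = norm(r1v), norm(r2v), norm(r3v)
s = (n1 + n2 + n3)/(n1*n2*n3)
inv2 = 1/n1**2 + 1/n2**2 + 1/n3**2

# The full bracket of Eq. (Vs3loop)
bracket = (
    (r1v.dot(r1v) + (r2v + r3v).dot(r2v + r3v)/3)
    *(inv2 - s/4)*(1/n1 + 1/n2 + 1/n3)
    + (r1v.dot(r1v) - (r2v + r3v).dot(r2v + r3v)/3)
    *(inv2 + 5*s/4)*(1/n1 - 1/(2*n2) - 1/(2*n3))
    + r1v.dot(r2v + r3v)*(inv2 + 5*s/4)*(1/n2 - 1/n3)
)

# Equilateral value: 27/(2r)
assert sp.simplify(bracket - sp.Rational(27, 2)/r) == 0
```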
At order $\alpha_{\rm s}^3$, $V^s_{\rm NNLO}$ contains the leading three-body potential; also the new term proportional to $\alpha_{\rm s}^4\ln\mu$ that we have added here is a genuine three-body potential. Summing up the singlet static potential (\[Vs3loop\]) with the US contribution (\[US1\]) we obtain the singlet static energy up to order $\alpha_{\rm s}^4\ln\alpha_{\rm s}$, which reads $$\begin{aligned} E^s({\bf r}_1,{\bf r}_2,{\bf r}_3) &=& V^s_{\rm NNLO}({\bf r}_1,{\bf r}_2,{\bf r}_3) \nonumber\\ - \frac{\alpha_{\rm s}^4}{3\pi}\ln\alpha_{\rm s} && \hspace{-6mm} \left[ \left({\bf r}_1^2+\frac{({\bf r}_2+{\bf r}_3)^2}{3}\right) \left(\frac{1}{|{\bf r}_1|^2} + \frac{1}{|{\bf r}_2|^2} + \frac{1}{|{\bf r}_3|^2} -\frac{1}{4}\frac{|{\bf r}_1| + |{\bf r}_2| + |{\bf r}_3|}{|{\bf r}_1||{\bf r}_2||{\bf r}_3|}\right) \right. \nonumber\\ && \hspace{3.4cm} \times \left(\frac{1}{|{\bf r}_1|} + \frac{1}{|{\bf r}_2|} + \frac{1}{|{\bf r}_3|} \right) \nonumber\\ && \hspace{-6mm} + \left({\bf r}_1^2-\frac{({\bf r}_2+{\bf r}_3)^2}{3}\right) \left(\frac{1}{|{\bf r}_1|^2} + \frac{1}{|{\bf r}_2|^2} + \frac{1}{|{\bf r}_3|^2} +\frac{5}{4}\frac{|{\bf r}_1| + |{\bf r}_2| + |{\bf r}_3|}{|{\bf r}_1||{\bf r}_2||{\bf r}_3|}\right) \nonumber\\ && \hspace{3.4cm} \times \left(\frac{1}{|{\bf r}_1|} - \frac{1}{2|{\bf r}_2|} - \frac{1}{2|{\bf r}_3|} \right) \nonumber\\ && \hspace{-6mm} + {\bf r}_1\cdot({\bf r}_2+{\bf r}_3) \left(\frac{1}{|{\bf r}_1|^2} + \frac{1}{|{\bf r}_2|^2} + \frac{1}{|{\bf r}_3|^2} +\frac{5}{4}\frac{|{\bf r}_1| + |{\bf r}_2| + |{\bf r}_3|}{|{\bf r}_1||{\bf r}_2||{\bf r}_3|}\right) \nonumber\\ && \hspace{3.4cm} \left. \times \left(\frac{1}{|{\bf r}_2|} - \frac{1}{|{\bf r}_3|} \right) \right] \,. \label{eq:E0full}\end{aligned}$$ The logarithm of $\alpha_{\rm s}$ signals that an ultraviolet divergence from the US scale has canceled against an infrared divergence from the soft scale. Finally, it may be useful to express Eqs.  
and in a way that makes manifest the invariance under exchange symmetry proven in Sec. \[sec:invdeltaUS\]. First, we recall that ${\bf r}_1$, ${\bf r}_2$ and ${\bf r}_3$ are not independent (cf. Sec. \[subseq:geomQQQ\]) and write $$E^s({\bf r}_1,{\bf r}_2,{\bf r}_3)=E^s({\bf r}_2-{\bf r}_3,{\bf r}_2,{\bf r}_3)\equiv E^s({\bf r}_2,{\bf r}_3),$$ then we observe that $$E^s({\bf r}_2,{\bf r}_3)=E^s({\bf r}_3,{\bf r}_2).$$ Hence an expression of the singlet static energy, which is manifestly invariant under exchange symmetry, is $$E^s({\bf r}_1,{\bf r}_2,{\bf r}_3)=\frac{E^s({\bf r}_2,{\bf r}_3)+E^s({\bf r}_1,-{\bf r}_3)+E^s(-{\bf r}_2,-{\bf r}_1)}{3}.$$ Similarly one can obtain a manifestly invariant expression of the singlet static potential. Renormalization group improvement of the singlet static potential in an equilateral geometry {#sec:towardsmuinV} ============================================================================================ The US logarithms that start appearing in the static potential at NNNLO may be resummed to all orders by solving the corresponding renormalization group equations. These are a set of equations that describe the scale dependence of the static potentials in the different color representations. They follow from requiring that the static energies of the $QQQ$ system and its gluonic excitations are independent of the renormalization scheme. The potentials in the different color representations mix under renormalization. This may be easily understood by looking at the renormalization group equation for the singlet potential that can be derived from $\mu\,{\rm d}V^s/{\rm d}\mu = - \mu\,{\rm d}\delta_{\rm US}^s/{\rm d}\mu$ and Eq. , $$\begin{aligned} \mu\frac{\rm d}{{\rm d}\mu}V^s&=&-\frac{8}{3}\frac{\alpha_{\rm s}}{\pi} \left\{\left[\frac{V^o_S-V^o_A}{2}\left(\frac{|\pmb{\rho}|^2}{4}-\frac{|\pmb{\lambda}|^2}{3}\right) -V^o_{AS}\,\frac{\pmb{\rho}\cdot\pmb{\lambda}}{\sqrt{3}}\right]\right. 
\nonumber\\ && \hspace{3cm} \times\left[3\,\left(\frac{V^o_S+V^o_A}{2}-V^s\right)^2+\frac{(V^o_S-V^o_A)^2}{4} + (V^o_{AS})^2\right] \nonumber\\ && \hspace*{1.15cm}+\left(\frac{V^o_S+V^o_A}{2}-V^s\right)\left(\frac{|\pmb{\rho}|^2}{4} +\frac{|\pmb{\lambda}|^2}{3}\right) \nonumber\\ && \hspace{3cm} \times\left.\left[\left(\frac{V^o_S+V^o_A}{2}-V^s\right)^2+3\frac{(V^o_S-V^o_A)^2}{4} + 3(V^o_{AS})^2\right]\right\}\,. \label{eq:RgVsgen}\end{aligned}$$ It shows the explicit dependence of the running of $V^s$ on the octet potentials and on the octet mixing potential. In the $Q\bar{Q}$ case the renormalization group equations have been solved for the singlet static potential at next-to-next-to-leading logarithmic (NNLL) accuracy in [@Pineda:2000gza] and at next-to-next-to-next-to-leading logarithmic (NNNLL) accuracy in [@Brambilla:2009bi].[^4] In the $QQQ$ case similar results can be obtained by solving Eq.  together with the corresponding renormalization group equations for the octet and decuplet potentials. There is, however, a difference between the $Q\bar{Q}$ and the $QQQ$ case that is worth highlighting. While in a $Q\bar{Q}$ system there is just one length, the distance between the heavy quark and antiquark, the generic three-body system is characterized by more than one length. For a general three-body geometry, therefore, logarithmic corrections in the US scale could be numerically as important as finite logarithms involving ratios among the different lengths of the system. The calculation of these finite logarithms requires the calculation of the $QQQ$ static Wilson loop. However, these logarithms are unimportant if the distances between the heavy quarks are similar. In the following, we will therefore restrict ourselves to the simplest case of three static quarks located at the corners of an equilateral triangle.
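Before specializing further, three properties can be checked symbolically: (i) the Jacobi conventions $\pmb{\rho}={\bf x}_1-{\bf x}_2$, $\pmb{\lambda}=({\bf x}_1+{\bf x}_2-2{\bf x}_3)/2$ — our assumption for the choice fixed by Eqs. (\[rx123\]) and (\[rholambda\]), being the normalization that reproduces Eq. (\[A\]) — give $|\pmb{\rho}|^2/4+|\pmb{\lambda}|^2/3=r^2/2$ and $\pmb{\rho}\cdot\pmb{\lambda}=0$ on an equilateral triangle of side $r$; (ii) with $V_S^o=V_A^o=V^o$ and $V_{AS}^o=0$, Eq. (\[eq:RgVsgen\]) then collapses to $\mu\,{\rm d}V^s/{\rm d}\mu=-\tfrac{4}{3\pi}\alpha_{\rm s}r^2(V^o-V^s)^3$, the first equation of the equilateral system (\[RG\]) below; (iii) the NNLL solutions (\[vsr\])–(\[vdr\]) quoted below satisfy the decoupled systems at leading order, taking $V^s=-2\alpha_{\rm s}/r$ together with the assumed LO values $V^o=-\alpha_{\rm s}/(2r)$ and $V^d=\alpha_{\rm s}/r$ (our reading of the LO results of [@Brambilla:2009cd], consistent with Eq. (\[rel\])). A sketch:

```python
import sympy as sp

# --- (i) Assumed Jacobi conventions reproduce Eq. (A) ---
x1 = sp.Matrix(sp.symbols('x11 x12 x13'))
x2 = sp.Matrix(sp.symbols('x21 x22 x23'))
x3 = sp.Matrix(sp.symbols('x31 x32 x33'))
rho, lam = x1 - x2, (x1 + x2 - 2*x3)/2
lhs = rho.dot(rho)/4 + lam.dot(lam)/3
rhs = (x1.dot(x1) + x2.dot(x2) + x3.dot(x3)
       - x1.dot(x2) - x1.dot(x3) - x2.dot(x3))/3
assert sp.simplify(lhs - rhs) == 0

# Equilateral triangle of side r: |rho|^2/4 + |lam|^2/3 = r^2/2, rho.lam = 0
r = sp.symbols('r', positive=True)
p1, p2, p3 = (sp.Matrix([0, 0, 0]), sp.Matrix([r, 0, 0]),
              sp.Matrix([r/2, r*sp.sqrt(3)/2, 0]))
rho_e, lam_e = p1 - p2, (p1 + p2 - 2*p3)/2
assert sp.simplify(rho_e.dot(rho_e)/4 + lam_e.dot(lam_e)/3 - r**2/2) == 0
assert sp.simplify(rho_e.dot(lam_e)) == 0

# --- (ii) Equilateral limit of the general RG equation (eq:RgVsgen) ---
a, Vsng, Vo, VS, VA, VAS = sp.symbols('alpha Vsinglet Voctet V_S V_A V_AS')
A_, B_, C_ = rho_e.dot(rho_e)/4, lam_e.dot(lam_e)/3, rho_e.dot(lam_e)/sp.sqrt(3)
gen = -sp.Rational(8, 3)*a/sp.pi*(
    ((VS - VA)/2*(A_ - B_) - VAS*C_)
    *(3*((VS + VA)/2 - Vsng)**2 + (VS - VA)**2/4 + VAS**2)
    + ((VS + VA)/2 - Vsng)*(A_ + B_)
    *(((VS + VA)/2 - Vsng)**2 + 3*(VS - VA)**2/4 + 3*VAS**2))
limit = gen.subs({VS: Vo, VA: Vo, VAS: 0})
assert sp.simplify(limit + sp.Rational(4, 3)*a/sp.pi*r**2*(Vo - Vsng)**3) == 0

# --- (iii) NNLL solutions solve the decoupled systems at leading order ---
b0 = sp.symbols('beta_0', positive=True)
dlog = a*b0/(2*sp.pi)        # mu d/dmu ln(alpha(1/r)/alpha(mu)) at LO
dVs = -9*a**3/(b0*r)*dlog
dVo = -sp.Rational(9, 4)*a**3/(b0*r)*dlog
dVd = sp.Rational(9, 2)*a**3/(b0*r)*dlog
Vs0, Vo0, Vd0 = -2*a/r, -a/(2*r), a/r   # assumed LO potentials (see lead-in)
assert sp.simplify((Vo0 - Vs0) + (Vo0 - Vd0)) == 0   # Eq. (rel) at LO
assert sp.simplify(dVs + sp.Rational(4, 3)*a*r**2/sp.pi*(Vo0 - Vs0)**3) == 0
assert sp.simplify(dVo + sp.Rational(1, 3)*a*r**2/sp.pi*(Vo0 - Vs0)**3) == 0
assert sp.simplify(dVd + sp.Rational(2, 3)*a*r**2/sp.pi*(Vo0 - Vd0)**3) == 0
```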
In this situation, the three-body system is characterized, like the two-body one, by just one fundamental length, which can be identified with the length of each side of the triangle: $|{\bf r}_1|=|{\bf r}_2|=|{\bf r}_3|=r$. ![ Leading-order ultrasoft contributions to the singlet, $\delta_{\rm US}^s$, octet, $\delta_{\rm US}^o$, and decuplet, $\delta_{\rm US}^d$, energies in an equilateral geometry. The triple lines represent the decuplet propagator, $\theta(T)e^{-iV^dT}\delta_{\delta\delta'}$; the decuplet can couple to a symmetric octet, with vertex $ig\tfrac{2}{\sqrt{3}}\left(\epsilon_{ijk}T^a_{ii'}T^b_{jj'}\underline{\pmb{\Delta}}^{\delta}_{i'j'k}\right)\pmb{\lambda}\cdot {\bf E}^b$, or to an antisymmetric octet, with vertex $ig\left(\epsilon_{ijk}T^a_{ii'}T^b_{jj'}\underline{\pmb{\Delta}}^{\delta}_{i'j'k}\right)\pmb{\rho}\cdot{\bf E}^b$; the other propagators and vertices have been introduced in Eqs. (\[theprops\]) and (\[thevertices\]).[]{data-label="equimatch"}](equilatOS_matching_v2){width="15cm"} In the equilateral limit, at least up to NLO, the different octet fields do not mix; moreover, as shown in Eq. (\[eq:Osgleich\]), the two octet potentials $V_S^o$ and $V_A^o$ are equal. The US contribution for the singlet static energy follows by specializing the general formula to the equilateral limit. The US contributions for the octet and decuplet static energies can be derived along the same lines (cf. also the calculation of the US corrections for the $Q\bar Q$ octet potential in Ref. [@Brambilla:1999xf]). In particular, in the equilateral limit one has to consider only the diagrams shown in Fig. \[equimatch\], since octet-to-octet diagrams with an intermediate octet propagator in the loop are scaleless for $V_S^o=V_A^o=V^o$, and thus vanish in dimensional regularization. Moreover, the US leading-order contribution for the symmetric octet is equal to the one for the antisymmetric octet; we call it $\delta_{\rm US}^o$.
The divergent parts of the diagrams shown in Fig. \[equimatch\] give rise to the following renormalization group equations valid for the singlet, octet and decuplet static potentials of three quarks located at the corners of an equilateral triangle of side length $r$: $$\left\{ \begin{array}{l} \displaystyle \mu\frac{\rm d}{{\rm d}\mu} V^s =-\frac{4}{3\pi}\alpha_{\rm s}r^2(V^o-V^s)^3+{\cal O}(\alpha_{\rm s}^5) \\ \displaystyle \mu\frac{\rm d}{{\rm d}\mu} V^o = \frac{1}{12\pi}\alpha_{\rm s}r^2\left[(V^o-V^s)^3+5(V^o-V^d)^3\right]+{\cal O}(\alpha_{\rm s}^5) \\ \displaystyle \mu\frac{\rm d}{{\rm d}\mu} V^d =-\frac{2}{3\pi}\alpha_{\rm s}r^2(V^o-V^d)^3+{\cal O}(\alpha_{\rm s}^5) \\ \displaystyle \mu\frac{\rm d}{{\rm d}\mu} \alpha_{\rm s} = \alpha_{\rm s}\beta(\alpha_{\rm s}) \end{array} \right. \,. \label{RG}$$ The first equation is just the equilateral limit of Eq. . The last equation describes the running of the strong coupling constant, where $\beta(\alpha_{\rm s}) = - \alpha_{\rm s}\beta_0/(2\pi) + {\cal O}(\alpha_{\rm s}^2)$ is the beta function; the first coefficient of the beta function is $\beta_0 = 11 -2/3n_l$ with $n_l$ the number of light-quark flavors. By observing that $$V^o-V^s=-(V^o-V^d)+{\cal O}(\alpha_{\rm s}^3)\,, \label{rel}$$ as follows straightforwardly from the results of [@Brambilla:2009cd], the system of equations (\[RG\]) can be split into two sets of decoupled equations: $$\left\{ \begin{array}{l} \displaystyle \mu\frac{\rm d}{{\rm d}\mu} V^s =-\frac{4}{3\pi}\alpha_{\rm s}r^2(V^o-V^s)^3+{\cal O}(\alpha_{\rm s}^5) \\ \displaystyle \mu\frac{\rm d}{{\rm d}\mu} V^o = -\frac{1}{3\pi}\alpha_{\rm s}r^2(V^o-V^s)^3+{\cal O}(\alpha_{\rm s}^5) \\ \displaystyle \mu\frac{\rm d}{{\rm d}\mu} \alpha_{\rm s} = \alpha_{\rm s}\beta(\alpha_{\rm s}) \end{array} \right. 
\,, \label{RG2}$$ and $$\left\{ \begin{array}{l} \displaystyle \mu\frac{\rm d}{{\rm d}\mu} V^d =-\frac{2}{3\pi}\alpha_{\rm s}r^2(V^o-V^d)^3+{\cal O}(\alpha_{\rm s}^5) \\ \displaystyle \mu\frac{\rm d}{{\rm d}\mu} V^o = \frac{1}{3\pi}\alpha_{\rm s}r^2(V^o-V^d)^3+{\cal O}(\alpha_{\rm s}^5) \\ \displaystyle \mu\frac{\rm d}{{\rm d}\mu} \alpha_{\rm s} = \alpha_{\rm s}\beta(\alpha_{\rm s}) \end{array} \right. \,. \label{RG3}$$ The two sets of equations can be solved as in [@Pineda:2000gza], leading to[^5] $$\begin{aligned} V^s(r;\mu) &=& V^s_{\rm NNLO}(r)-9\frac{\alpha_{\rm s}^3(1/r)}{\beta_0r} \ln\frac{\alpha_{\rm s}(1/r)}{\alpha_{\rm s}(\mu)}\,, \label{vsr}\\ V^o(r;\mu)&=&V^o_{\rm NNLO}(r)-\frac{9}{4}\frac{\alpha_{\rm s}^3(1/r)}{\beta_0r}\ln\frac{\alpha_{\rm s}(1/r)}{\alpha_{\rm s}(\mu)}\,, \label{vor}\\ V^d(r;\mu) &=& V^d_{\rm NNLO}(r)+\frac{9}{2}\frac{\alpha_{\rm s}^3(1/r)}{\beta_0r}\ln\frac{\alpha_{\rm s}(1/r)}{\alpha_{\rm s}(\mu)}\,. \label{vdr}\end{aligned}$$ The singlet static potential is known at NNLO; hence, Eq.  provides the complete expression of the singlet static potential at NNLL accuracy in an equilateral geometry. This is the most accurate perturbative determination of this quantity. By contrast, neither the octet nor the decuplet potential is known beyond NLO (see [@Brambilla:2009cd]). Conclusions {#sec:conclusions} =========== In this paper, we have reconsidered the construction of pNRQCD for systems made of three heavy quarks with equal masses. We have, in particular, rederived the pNRQCD Lagrangian in the static limit and paid special attention to the symmetry under exchange of the heavy-quark fields. Although the symmetry is an obvious property of these systems, its consequences for the pNRQCD Lagrangian, and in particular for its octet sector, have been explored here for the first time. Three static quarks may be combined into a color-singlet, one of two distinct color-octet, or a color-decuplet configuration.
Whereas the color singlet is completely antisymmetric and the color decuplet is completely symmetric in the color indices, the color-octet transformations depend on the color indices that are exchanged. That the color-octet fields are especially sensitive to the ordering of the quarks is reflected in the fact that they mix, in general, both under exchange of the heavy-quark fields and dynamically through one-gluon exchange. As a consequence, the octet potentials and the mixing potential also transform nontrivially under exchange symmetry; we have listed their transformation properties in Eqs.  and . Thereafter, we have computed the leading ultrasoft contribution to the $QQQ$ singlet static energy, $\delta_{\rm US}^s$. Its expression can be found in . Because of the two different octet fields and their mixing, the calculation of $\delta_{\rm US}^s$ requires the evaluation of four diagrams and the resummation of the octet mixing potential for all of them. The calculation is therefore more involved than the analogous one of the US contribution in the $Q\bar{Q}$ case. The expression for $\delta_{\rm US}^s$ in the $QQQ$ case also offers a non-trivial test of the invariance under exchange symmetry; this test has been performed in Sec. \[sec:invdeltaUS\]. A consequence of the calculation of $\delta_{\rm US}^s$ at leading order is that we can determine the singlet static potential at order $\alpha_{\rm s}^4\ln\mu$, see Eq. , and the singlet static energy at order $\alpha_{\rm s}^4\ln\alpha_{\rm s}$, see Eq. . These results represent the new computational outcome of this work and are, so far, the most accurate determinations of the $QQQ$ singlet static potential and energy in perturbative QCD. The new contribution computed for the potential is valid for any spatial configuration of the three quarks, and it is a three-body interaction.
Together with the three-body interaction at two-loop order computed in [@Brambilla:2009cd], it may provide new insight into the emergence of a long-range three-body interaction governed by just one fundamental length that is observed in lattice studies (see e.g. [@Takahashi:2000te; @Takahashi:2002bw; @Takahashi:2004rw]). In the last part of the paper, we have focused on the special situation where the three quarks are located at the corners of an equilateral triangle of side length $r$. In this limit, where the two octet potentials become degenerate, we have solved the renormalization group equations for the color singlet, octet and decuplet potentials at NNLL accuracy. The corresponding expressions can be found in Eqs. -. Hence, for an equilateral geometry, the $QQQ$ singlet static potential is now known up to order $\alpha_{\rm s}^{3}(\alpha_{\rm s}\ln\mu r)^n$ for all $n \in \mathbb{N}_0$. Work supported in part by DFG and NSFC (CRC 110), and by the DFG cluster of excellence “Origin and structure of the universe” (www.universe-cluster.de). F.K. gratefully acknowledges financial support from the FAZIT foundation and inspiring discussions with E. Thoma. Covariant derivative operators {#app1} ============================== In this appendix, we list the explicit matrix representations for the covariant derivative operators in the octet and decuplet representations of SU(3)$_c$ that appear in Eq. (\[LpNRQCD1\]). The SU(3)$_c$ covariant derivative is of the general form $$\begin{aligned} D_{\mu}=\partial_{\mu}+igA^a_{\mu}T_r^a\,,\end{aligned}$$ where $a=1, \ldots, 8$ and $T_r^a$ refers to the SU(3)$_c$ generators in the representation $r$.
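The adjoint (octet) matrices listed next, $(T_8^a)_{bc}=-if^{abc}$, can be verified numerically against the defining algebra $[T^a,T^b]=if^{abc}T^c$. A small sketch, building the structure constants from the standard Gell-Mann matrices (the decuplet generators are omitted here, since the explicit tensor $\underline{\Delta}^{\delta}_{ijk}$ of Eq. (\[Delta\]) is not reproduced in this appendix):

```python
import numpy as np

# Gell-Mann matrices lambda^1..lambda^8
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0] = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
lam[1] = [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]
lam[2] = [[1, 0, 0], [0, -1, 0], [0, 0, 0]]
lam[3] = [[0, 0, 1], [0, 0, 0], [1, 0, 0]]
lam[4] = [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]]
lam[5] = [[0, 0, 0], [0, 0, 1], [0, 1, 0]]
lam[6] = [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]
lam[7] = np.diag([1, 1, -2]).astype(complex)/np.sqrt(3)

# Structure constants from [lam^a, lam^b] = 2i f^{abc} lam^c, Tr(lam^a lam^b) = 2 delta^{ab}
f = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        comm = lam[a] @ lam[b] - lam[b] @ lam[a]
        for c in range(8):
            f[a, b, c] = (np.trace(comm @ lam[c])/4j).real

# Adjoint (octet) generators (T_8^a)_{bc} = -i f^{abc}
T8 = -1j*f

# Defining commutation relations [T^a, T^b] = i f^{abc} T^c (Jacobi identity)
for a in range(8):
    for b in range(8):
        comm = T8[a] @ T8[b] - T8[b] @ T8[a]
        assert np.allclose(comm, 1j*np.einsum('c,cjk->jk', f[a, b], T8))
```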
The generators in the octet ($r=8$) and in the decuplet ($r=10$) representation are [@Brambilla:2005yk] $$\begin{aligned} (T_8^a)_{bc}&= - if^{abc}\,,\hspace*{4cm}b,c=1, \ldots, 8, \nonumber\\ (T_{10}^a)_{\delta\delta'}&= \frac{3}{2}\,\underline{\Delta}_{ijk}^{\delta}\lambda^a_{ii'}\underline{\Delta}_{i'jk}^{\delta'}\,, \hspace*{2.6cm}\delta,\delta'=1, \ldots, 10, \end{aligned}$$ where $f^{abc}$ are the structure constants of SU(3)$_c$. An explicit representation of the decuplet tensor $\underline{\Delta}^{\delta}_{ijk}$ is in (\[Delta\]). The singlet static potential up to order $\alpha_{\rm s}^3$ {#app2} =========================================================== We reproduce here for completeness the expression of the singlet static potential up to order $\alpha_{\rm s}^3$ computed in [@Brambilla:2009cd]: $$\begin{aligned} V^s_{\rm NNLO}({\bf r}_1,{\bf r}_2,{\bf r}_3) &=& -\frac{2}{3}\sum_{i=1}^3\frac{\alpha_{\rm s}(1/|{\bf r}_i|)}{|{\bf r}_i|} \left[1+\tilde{a}_1\frac{\alpha_{\rm s}(1/|{\bf r}_i|)}{4\pi}\right] \label{Vs2loop}\\ && \hspace{-22mm} -\alpha_{\rm s}\left(\frac{\alpha_{\rm s}}{4\pi}\right)^2 \left[\frac{2}{3}\,\tilde{a}_{2,s}\left(\frac{1}{|{\bf r}_1|}+\frac{1}{|{\bf r}_2|} +\frac{1}{|{\bf r}_3|}\right)+v_{\cal H}({\bf r}_2,{\bf r}_3)+v_{\cal H}({\bf r}_1,-{\bf r}_3)+v_{\cal H}(-{\bf r}_2,-{\bf r}_1)\right]\!. 
\nonumber \end{aligned}$$ The one-loop and two-loop coefficients $\tilde{a}_1$ and $\tilde{a}_{2,s}$ depend on the number of light (massless) quark flavors, $n_l$, and are given by $$\begin{aligned} \tilde{a}_1&=&\frac{31}{3}+22\gamma_E-\left(\frac{10}{3}+4\gamma_E\right)\frac{n_l}{3}\,, \\ \tilde{a}_{2,s}&=&\frac{4343}{18}+\frac{3\pi^4}{4}+\frac{121\pi^2}{3}+66\zeta(3)-484\gamma_E^2+204\gamma_E \nonumber\\ && -\left(\frac{1229}{9}+\frac{44\pi^2}{3}+52\zeta(3) -176\gamma_E^2+76\gamma_E\right)\frac{n_l}{3}+\left(\frac{100}{9}+\frac{4\pi^2}{3}-16\gamma_E^2\right)\left(\frac{n_l}{3}\right)^2 \nonumber\\ && +4\gamma_E\left(11-2\frac{n_l}{3}\right)\tilde{a}_1\,.\end{aligned}$$ At two loop, a genuine three-body potential shows up. It is encoded in the function $v_{\cal H}$ defined as $$\begin{aligned} v_{\cal H}({\bf r}_2,{\bf r}_3)&=& 16\pi\int_0^1{\rm d}x \int_0^1{\rm d}y\, \left\{\frac{\hat{\bf r}_2\cdot\hat{\bf r}_3}{|{\bf R}|} \left[\left(1-\frac{M^2}{|{\bf R}|^2}\right) \arctan\frac{|{\bf R}|}{M}+\frac{M}{|{\bf R}|}\right]\right. \\ && \hspace*{3.6cm} +\left.\frac{(\hat{\bf r}_2\cdot\hat{\bf R})(\hat{\bf r}_3\cdot\hat{\bf R})}{|{\bf R}|} \left[\left(1+3\frac{M^2}{|{\bf R}|^2}\right)\arctan\frac{|{\bf R}|}{M}-3\frac{M}{|{\bf R}|}\right]\right\}, \nonumber\end{aligned}$$ with ${\bf R}({\bf r}_2,{\bf r}_3)\equiv x{\bf r}_2-y{\bf r}_3$ and $M({\bf r}_2,{\bf r}_3)\equiv |{\bf r}_2|\sqrt{x(1-x)}+|{\bf r}_3|\sqrt{y(1-y)}$. Note that the three-body potential in (\[Vs2loop\]) is manifestly invariant under the transformations and . [10]{} R. Sommer and J. Wosiek, Nucl. Phys. B [**267**]{}, 531 (1986). T. T. Takahashi, H. Matsufuru, Y. Nemoto and H. Suganuma, Phys. Rev. Lett.  [**86**]{}, 18 (2001) \[hep-lat/0006005\]. T. T. Takahashi, H. Suganuma, Y. Nemoto and H. Matsufuru, Phys. Rev. D [**65**]{}, 114509 (2002) \[hep-lat/0204011\]. H. Suganuma, H. Matsufuru, Y. Nemoto and T. T. Takahashi, Nucl. Phys. A [**680**]{}, 159 (2001) \[hep-lat/0205029\]. C. Alexandrou, P. 
de Forcrand and O. Jahn, Nucl. Phys. Proc. Suppl.  [**119**]{}, 667 (2003) \[hep-lat/0209062\]. T. T. Takahashi and H. Suganuma, Phys. Rev. Lett.  [**90**]{}, 182001 (2003) \[hep-lat/0210024\]. T. T. Takahashi, H. Matsufuru, Y. Nemoto and H. Suganuma, hep-lat/0304009. V. G. Bornyakov [*et al.*]{} \[DIK Collaboration\], Phys. Rev. D [**70**]{}, 054506 (2004) \[hep-lat/0401026\]. V. G. Bornyakov, M. N. Chernodub, H. Ichie, Y. Koma, Y. Mori, M. I. Polikarpov, G. Schierholz and H. Stuben [*et al.*]{}, Prog. Theor. Phys.  [**112**]{}, 307 (2004) \[hep-lat/0401027\]. T. T. Takahashi and H. Suganuma, Phys. Rev. D [**70**]{}, 074506 (2004) \[hep-lat/0409105\]. K. Hübner, F. Karsch, O. Kaczmarek and O. Vogt, Phys. Rev. D [**77**]{}, 074504 (2008) \[arXiv:0710.5147 \[hep-lat\]\]. T. Iritani and H. Suganuma, Phys. Rev. D [**83**]{}, 054502 (2011) \[arXiv:1011.4767 \[hep-lat\], arXiv:1102.0920 \[hep-lat\]\]. S. Meinel, Phys. Rev. D [**85**]{}, 114510 (2012) \[arXiv:1202.1312 \[hep-lat\]\]. E. Klempt and J. -M. Richard, Rev. Mod. Phys.  [**82**]{}, 1095 (2010) \[arXiv:0901.2055 \[hep-ph\]\]. N. Brambilla, A. Vairo and T. Rösch, Phys. Rev.  D [**72**]{}, 034021 (2005) \[hep-ph/0506065\]. N. Brambilla, J. Ghiglieri and A. Vairo, Phys. Rev. D [**81**]{}, 054031 (2010) \[arXiv:0911.3541 \[hep-ph\]\]. A. Vairo, Few Body Syst.  [**49**]{}, 263 (2011) \[arXiv:1008.4473 \[nucl-th\]\]. N. Brambilla, A. Pineda, J. Soto and A. Vairo, Rev. Mod. Phys.  [**77**]{}, 1423 (2005) \[hep-ph/0410047\]. W. E. Caswell and G. P. Lepage, Phys. Lett.  B [**167**]{}, 437 (1986). G. T. Bodwin, E. Braaten and G. P. Lepage, Phys. Rev.  D [**51**]{}, 1125 (1995) \[Erratum-ibid.  D [**55**]{}, 5853 (1997)\] \[hep-ph/9407339\]. A. Pineda and J. Soto, Nucl. Phys. Proc. Suppl.  [**64**]{}, 428 (1998) \[hep-ph/9707481\]. N. Brambilla, A. Pineda, J. Soto and A. Vairo, Nucl. Phys.  B [**566**]{}, 275 (2000) \[hep-ph/9907240\]. N. Brambilla, A. Pineda, J. Soto and A. Vairo, Phys. Rev. 
D [**60**]{}, 091502 (1999) \[hep-ph/9903355\]. E. Eichten and B. R. Hill, Phys. Lett. B [**234**]{}, 511 (1990). A. Pineda and J. Soto, Phys. Lett.  B [**495**]{}, 323 (2000) \[hep-ph/0007197\]. N. Brambilla, A. Vairo, X. Garcia i Tormo and J. Soto, Phys. Rev. D [**80**]{}, 034016 (2009) \[arXiv:0906.1390 \[hep-ph\]\]. [^1]: In a three-body system, we may in general expect to have more than one typical relative momentum and more than one US energy scale. To keep our discussion simple, we assume all relative momenta to be of the same order, and likewise all US energy scales. In the dynamical case, this is realized when the masses of the heavy quarks are of the same order. In the static limit, which will be our main concern in the following, this condition is realized by locating the three quarks at distances of the same order. We emphasize that this condition may be (also largely) violated in different geometrical configurations. [^2]: Note however that a generalization to finite heavy-quark masses, $m_1$, $m_2$ and $m_3$, would also require some adjustment in Eqs.  and , as – besides the heavy-quark locations – also the masses would have to be exchanged, e.g. in Eq. , $m_1 \leftrightarrow m_2$, etc. [^3]: This dependence, which will be displayed explicitly in the following, has been dropped in Eqs.  and . [^4]: An NNLL accuracy amounts to resumming $\alpha_{\rm s}^3 (\alpha_{\rm s} \ln \mu)^n$ terms, and an NNNLL accuracy amounts to resumming $\alpha_{\rm s}^4 (\alpha_{\rm s} \ln \mu)^n$ terms, with $n\in\mathbb{N}_0$. [^5]: All coupling constants in $V^s_{\rm NNLO}(r)$, $V^o_{\rm NNLO}(r)$ and $V^d_{\rm NNLO}(r)$ are evaluated at the scale $1/r$.
--- abstract: 'We study non-uniform states and possible glassiness triggered by a competition between distinct local orders in disorder free systems. Both in Ginzburg-Landau theories and in simple field theories, such inhomogeneous states arise from negative gradient terms between the competing order parameters. We discuss applications of these ideas to a variety of strongly correlated systems.' author: - 'Z. Nussinov' - 'I. Vekhter' - 'A. V. Balatsky' title: 'Non-uniform glassy electronic phases from competing local orders' --- Introduction. ============= Accumulated experimental evidence strongly suggests that in many correlated electronic systems, different types of ordering phenomena compete and coexist over a wide range of tunable parameters. The most ubiquitous such cohabitation is between magnetic and superconducting orders. Itinerant antiferromagnetism (AFM) coexists with superconductivity in the 115 heavy fermion series (CeMIn$_5$, where $M=$Co,Ir, or In)[@Zapf]. In $UPt_{3}$, superconductivity emerges at $T_{c} \approx 0.5 K$ from a strongly correlated heavy electron state with small moment AFM below 6K [@upt3]. In some of the high-T$_c$ cuprates charge density order coexists with spin density order (“stripes” [@jan; @steve; @tranquada]) and may be relevant to the onset of the superconductivity and quantum critical behavior [@review; @dirk]. Recent measurements indicate that in URu$_2$Si$_2$ there is a proliferation of competing phases under an applied magnetic field [@KHKim:2003]. Experiments also suggest multiple phases in the skutterudite superconductor PrOs$_4$Sb$_{12}$ [@Yuji:PrOsSb], manganites [@salamon], and a number of other materials. Two trends are common to these experimental findings. First, the coexistence of different orders is often inhomogeneous. Second, this coexistence is frequently most pronounced near a Quantum Critical Point (QCP), where the transition temperature for one of the order parameters vanishes [@review; @subir]. 
Additionally, the dynamics of compounds with inhomogeneous coexistence of distinct orders is often glassy [@BSimovic:2003; @BSimovic:2004; @TPark:2005; @CPanagopoulos:2005]. In some systems, such as manganites [@dagotto05], the glassy behavior is, most likely, due to disorder upon doping the system. In others, including cuprates, the glassiness may be self-generated (not simply due to doping disorder [@VMitrovic:2008]), and arise out of competing interactions at different length scales [@jorg]. The question remains, however, whether inhomogeneous and/or glassy behavior can arise out of a theory with local interactions and no disorder. In this article we address this question for a class of Ginzburg-Landau theories with competing order parameters. A comprehensive survey of classical systems with frustration and no disorder that display glassy behavior and a proliferation of inhomogeneous ground states can be found in Refs. . We study a minimal Ginzburg-Landau (GL) [@tol] theory which includes amplitude-gradient coupling between two distinct local orders, and find the conditions for resultant inhomogeneous phases. A related interesting work examining gradient coupling in GL theories [@Mi] appeared slightly after the initial dissemination of our results [@oldus]. Extensions of the GL gradient couplings considered here are found in some studies of the supersolid transition [@arun]. We show that, for a range of parameters described below, our theory maps onto an effective model that is likely to exhibit glassiness. Whether a particular system does or does not show glassy behavior upon cooling depends on the rate of temperature change and other dynamical variables that are not part of our equilibrium analysis. However, our approach allows us to conclude whether a glassy phase is possible and likely to occur. In this we follow the established approaches in the field [@jorg]. 
The mapping that strongly suggests glassiness in our approach is to a Brazovskii-like model for one of the order parameters. The Brazovskii model [@Brazovskii] for a single-component order parameter is defined by a GL functional of the form $$\begin{aligned} {\cal{F}} = \frac{V}{(2 \pi)^{d}} \int d^{d} k [\frac{r_{0}}{2} + D(|\vec{k}|- q)^{2}] |\Phi_{k}|^{2} + ..., \label{br}\end{aligned}$$ in momentum ($k$) space, with $V$ the volume of the system. In Eq. (\[br\]), the ellipses denote cubic, quartic, and higher order terms in the order parameter field $\Phi$. As the mass term $r_0$ changes sign, the transition to a broken-symmetry state $\Phi\neq 0$ involves the appearance of structures characterized by a finite wavenumber on a shell of radius $q>0$. Structures that satisfy definite commensurability relations amongst the wavenumbers are most preferred. In Ref.  Brazovskii found that the large phase space available for fluctuations around the minimizing shell alters the character of the transition to the ordered state once the fluctuations are accounted for, and suggested that it becomes first order. Thermal fluctuations renormalize the cubic terms of the GL theory. More recent replica calculations [@jorg; @glass; @loh; @DMFT] showed that the model has extensive configurational entropy, indicating a proliferation of modulated low-energy states, and strongly suggesting slow dynamics and glassiness under generic experimental conditions. Once again, these replica calculations only establish that glassiness is a plausible and likely alternative to the first-order transition into a uniformly modulated phase. Whether a finite temperature Brazovskii transition does or does not transpire before the system undergoes a dynamical arrest (the glass transition outlined below) depends on microscopic details of the model. The known theoretical techniques (SCSA, DMFT, and others) do not enable a proof of a glassy phase. 
These methods only enable us to determine whether a glassy phase is possible [@glass; @jorg; @DMFT]. Below we find the mapping of systems with competing orders to Brazovskii type models. This mapping allows us to (i) find the resultant inhomogeneous phases in the GL analysis; (ii) include fluctuations via a self-consistent field theory to establish that one of two scenarios is realized: (a) the critical temperature for the onset of non-uniform states is suppressed to zero, suggesting that these states are more likely to be observed near a QCP; or, alternatively, (b) fluctuations lead to a low temperature Brazovskii transition; (iii) appeal to existing replica calculation results to confirm the extensive configurational entropy associated with these incommensurate structures in disorder free systems with competing local orders, which strongly suggests slow dynamics and glassiness. Finally, we comment on possible realizations of our model and the applicability of the results to itinerant electronic systems. Ginzburg-Landau theory: instability of uniform coexistence. =========================================================== To account empirically for competing orders, we analyze the Ginzburg-Landau (GL) functional with two order parameters, $\Phi_1$ and $\Phi_2$, which we choose to be real and scalar without loss of generality. We remark that our very general GL approach applies to various types of order parameters; of course, the symmetry, the number of components of the order parameters, etc., change, but the conclusions remain largely the same. The uniform part of the free energy is ${\cal F}_0=\int d{\bf x} F_0$ where $$\begin{aligned} F_0=\frac{r_1}{2} |\Phi_1|^2 + \frac{r_2}{2} |\Phi_2|^2 + \frac{t}{2} |\Phi_1|^2 |\Phi_2|^2 +\frac{1}{4}|\Phi_1|^4 + \frac{u}{4}|\Phi_2|^4. \label{f0}\end{aligned}$$ In the spirit of GL theory, $r_{1,2}= a_{1,2} (T-T_{1,2})$, with $T_{i}$ the mean field transition temperatures. 
All other coefficients are taken to be temperature-independent. The biquadratic coupling of the order parameters is allowed for all symmetries. We consider competing orders, $t>0$, so that the uniform coexistence region ($\Phi_1\neq 0, \Phi_2\neq 0$) occurs only below the lower of the transition temperatures, and for $u>t^2$. In that case, the values of the fields minimizing the free energy are $\widetilde\Phi_1^2=(r_2t-r_1 u)/(u-t^2)$, and $\widetilde\Phi_2^2=(r_1t-r_2)/(u-t^2)$. In disorder free systems, the only alternative to the uniform coexistence is phase separation unless non-trivial gradient terms are present [@Pryadko]. Therefore we include the inhomogeneous contribution to the free energy, ${\cal F}_q=\int d{\bf x} F_q({\bf x})$, where $$\begin{aligned} F_q = \sum_{i} |\nabla\Phi_i|^2 - \sum_{i,j} g_{ij}|\Phi_i|^2 |\nabla \Phi_j|^2 + \sum_{i} p_i |\nabla^2 \Phi_i|^2 . \label{fq}\end{aligned}$$ Here, we included the general symmetry allowed low order gradient terms. To flesh out the quintessential physics in what follows, we set $g_{11}=g_{22}=g_{21}=0$, $g_{12}>0$, and $p_1=0$, writing $p\equiv p_2$ below. This is the essential aspect of the model that allows us to investigate the appearance of the inhomogeneous states. The coupling of the form $-g_{12}|\Phi_1|^2 |\nabla \Phi_2|^2$ implies that in the effective theory for the order parameter $\Phi_2$ the coefficient of the gradient term, $1-g_{12}|\Phi_1|^2$, may become negative, making the transition of the Brazovskii type. We now investigate when this is possible. With $F({\bf x}) = F_{0} + F_{q}$, the order parameter profiles satisfy the Euler-Lagrange equations, $[\nabla \cdot (\partial F/\partial (\nabla \Phi_{i}))] = (\partial F/\partial \Phi_{i})$. By constructing inhomogeneous variational states whose free energy is lower than the minimum amongst all possible uniform configurations, we prove that the uniform solution is unstable towards the appearance of inhomogeneities. 
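The quoted coexistence minimizer can be checked directly. The following minimal numerical sketch (all parameter values are illustrative assumptions, chosen so that $u>t^2$ and both $r_i<0$) verifies that $\widetilde\Phi_1^2=(r_2t-r_1u)/(u-t^2)$ and $\widetilde\Phi_2^2=(r_1t-r_2)/(u-t^2)$ make the derivatives of $F_0$ with respect to both squared amplitudes vanish:

```python
# Check of the uniform coexistence minimizer of Eq. (f0).
# Parameter values are illustrative (assumed): u > t^2 and both r_i < 0.
r1, r2, t, u = -1.0, -0.5, 0.3, 1.0

P1 = (r2*t - r1*u)/(u - t**2)   # quoted Phi_1^2 at the coexistence minimum
P2 = (r1*t - r2)/(u - t**2)     # quoted Phi_2^2

# Stationarity of F0 in the squared amplitudes P_i = Phi_i^2:
#   dF0/dP1 = r1/2 + t*P2/2 + P1/2
#   dF0/dP2 = r2/2 + t*P1/2 + u*P2/2
assert abs(r1/2 + t*P2/2 + P1/2) < 1e-12
assert abs(r2/2 + t*P1/2 + u*P2/2) < 1e-12

# Both squared amplitudes are positive: the two orders genuinely coexist
assert P1 > 0 and P2 > 0
```

The stationarity conditions are linear in the squared amplitudes, which is why the minimizer has the simple closed form quoted above.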
We study the phase diagram of the model assuming that the mean field transition temperatures $T_i$ can be tuned by an external parameter, $x$ (pressure, doping, magnetic field, etc.), as shown in Fig.\[Fig:GLPlot\], with $T_{1}(x)$ monotonically decreasing and $T_{2}(x)$ monotonically increasing. That is, $$\begin{aligned} T_{1} = T_{1}^{(0)} - a_{1} x, \nonumber \\ T_{2} = T_{2}^{(0)} + a_{2} x,\end{aligned}$$ with $T_{1,2}^{(0)}$ and $a_{1,2}$ positive constants. We first concentrate on the region $T_1>T_2$. Upon lowering the temperature, the first transition is into the uniform state with $\Phi_2=0$ and $\Phi_{1}^{2}({\bf x}) =-r_1$. Consequently, below $T_{q} = T_{1} - 1/(g_{12}a_{1})$ the coefficient of the $|\nabla \Phi_2|^2$ term becomes negative, indicating a tendency towards the development of an inhomogeneous $\Phi_{2}$ phase. The structure of this modulation depends on the difference $T_q-T_2$. If this difference is sufficiently large, it is disadvantageous to create a non-vanishing bulk average of $\Phi_2$. Local “bubbles” of the order may appear upon lowering $T$, but their study is not our focus in the present work. In order to make the connection with the slow dynamics and Brazovskii transition, we study the onset of the periodically modulated phase of the form $\Phi_2({\bf x})=\Theta_2\cos({\bm q}_i\cdot {\bf x})$. Of the numerous contending low (free) energy configurations, we focus on analytically tractable modulated structures; we do so in order to obtain stringent variational bounds that we are able to extremize, and because the original analysis showed that single modulation structures are most advantageous [@Brazovskii]. 
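Both steps above, the sign change of the gradient coefficient at $T_q$ and the subsequent selection of a finite modulation wavevector, can be illustrated numerically. The sketch below uses assumed illustrative parameter values (not taken from any specific material) together with the spatial averages of the single-cosine ansatz; minimizing the resulting variational free energy on a grid recovers $q^{*2}=-c/(2p)$, with $c=1-g_{12}\Phi_1^2$ the gradient coefficient:

```python
import numpy as np

# Illustrative parameters (assumed): a1 = 2, g12 = 0.5, T1 = 10, so Tq = 9.
a1, g12, T1 = 2.0, 0.5, 10.0
Tq = T1 - 1/(g12*a1)

def c(T):
    # Effective |grad Phi_2|^2 coefficient: Phi_1^2 = a1*(T1 - T) below T1
    return 1 - g12*a1*(T1 - T)

assert c(Tq) == 0 and c(Tq + 0.5) > 0 and c(Tq - 0.5) < 0

# Below Tq, minimise the spatially averaged variational free energy of the
# ansatz Phi_2 = theta*cos(q x):  <Phi_2^2> = theta^2/2, <Phi_2^4> = 3 theta^4/8,
# <|grad Phi_2|^2> = q^2 theta^2/2, <|lap Phi_2|^2> = q^4 theta^2/2.
p, r2bar, u, T = 0.5, -0.2, 1.0, Tq - 1.0   # c(T) = -1 at this temperature

def F(theta, q):
    return (r2bar/2 + c(T)*q**2 + p*q**4)*theta**2/2 + (3*u/32)*theta**4

qs, thetas = np.linspace(0, 3, 601), np.linspace(0, 3, 601)
Q, Th = np.meshgrid(qs, thetas)
i, j = np.unravel_index(np.argmin(F(Th, Q)), Q.shape)

# The minimum sits at finite q* with q*^2 = -c/(2p), here q* = 1
assert abs(Q[i, j] - np.sqrt(-c(T)/(2*p))) < 0.01
```

Above $T_q$ (where $c>0$) the same grid search places the minimum at $q=0$; the finite-$q$ minimum only appears once $c(T)<0$, in line with the discussion of $T_{c2}$.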
In the regime $T_q\geq T_2$, minimization of the GL functional with respect to both $q$ and $\Theta_2$ gives the transition temperature $$\begin{aligned} T_{c2}&=&T_q-(g_{12} a_1)^{-1} \biggl[\sqrt{z^2+\frac{2tp}{g_{12}}+2pa_2(T_q-T_2)}-z\biggr] , \nonumber \\ z &\equiv& \frac{pa_2-tpa_1}{g_{12}a_1}\end{aligned}$$ to the phase $\Theta_2\neq 0$ with modulations at a [*finite*]{} wave vector, $$\begin{aligned} q = \sqrt{ \frac{g_{12} a_{1} (T_{1} - T_{c2})}{2p_{2}}}.\end{aligned}$$ ![The phase diagram obtained from the Ginzburg-Landau expansion. The lines $T_{1,2}$ denote the bare mean field transition temperatures as a function of a tuning parameter $x$. $T_q$ is defined in text. An inhomogeneous phase appears below $T_{c2}$. Double line denotes the first order transition.[]{data-label="Fig:GLPlot"}](GLPhase.eps){width="7cm"} In the regime $T_q < T_{1,2}$ the first transition, at a temperature $T > T_{q}$, occurs into a spatially homogeneous phase. We next investigate the phase diagram for the more general variational ansatz $\Phi_2^{var}({\bf x})=\overline{\Phi}_2+\Theta_2\cos({\bm q}_i\cdot {\bf x})$. Introduction of spatial modulations reduces the condensation energy and is therefore unfavorable unless compensated by a significant gain due to the negative gradient term. As a result we find a (generically first order, but dependent on the magnitude of the coefficients in the GL expansion) transition at low $T$ from the homogeneous phase to a modulated phase with a finite $q$. In Fig.(\[Fig:GLPlot\]), we show the phase diagram of Eqs.(\[f0\])-(\[fq\]) for $t=0$. Of course, since we allowed only for restricted variational states in the above analysis, our bounds are more potent for the global free energy minima: $\Phi_{2}$ is strictly inhomogeneous for all $T<T_{c2}(x)$; unrestricted inhomogeneous states (not bound to the form of $\Phi_2^{var}$) may extend to temperatures somewhat higher than $T_{c2}(x)$. Self-consistent field theory for competing order parameters. 
============================================================ To improve on the GL analysis and incorporate the effect of fluctuations self-consistently, we generalize our model to $n$-component vector fields and utilize a large $n$ expansion. As is well known, the $n=\infty$ limit is equivalent to the spherical model describing single component (scalar) particles [@spherical]. The physical engine for the inhomogeneities is, as in the preceding section, the amplitude gradient coupling which drives non-uniformities in $\Phi_{2}$ once $\Phi_{1}$ is finite. For a finite $\Phi_{1}({\bf{x}})= \Phi_{1}$, the effective free energy for $\Phi_{2}$ is $$\begin{aligned} {\cal F}_{eff;2} &=& \int \frac{d^{d}k}{(2 \pi)^{d}} \Big[ (\frac{r_{2}}{2} + \frac{t}{2} \Phi_{1}^{2}) + (1- g_{12} \Phi_{1}^{2}) k^{2} + p k^{4} \Big] \nonumber \\ &\times& \Phi_{2}({\bf k}) \Phi_{2}(-{\bf k}) + \frac{u}{4} \int \frac{d^{d}k_{1}}{(2 \pi)^{d}} \frac{d^{d}k_{2}}{(2 \pi)^{d}} \frac{d^{d}k_{3}}{(2 \pi)^{d}} \nonumber \\ &\times& \Phi_{2}({\bf k}_{1}) \Phi_{2}({\bf k}_{2}) \Phi_{2}({\bf k}_{3}) \Phi_{2}(- {\bf k}_{1} - {\bf k}_{2} - {\bf k}_{3})\,, \label{feff}\end{aligned}$$ where $d$ is the dimensionality of the system. The bare inverse Green’s functions are given by $G_{0}^{-1} = [r_{2}/2 + t \Phi_{1}^{2}/2 + (1-g_{12} \Phi_{1}^{2}) k^{2} + pk^{4}]$. Incorporating fluctuations self-consistently, we have $G^{-1} = [{\overline{r}}_{2}/2 + (1-g_{12} \Phi_{1}^{2}) k^{2} + pk^{4}]$ where, by the Dyson equation, ${\overline{r}}_{2}/2 = r_{2}/2 + t \Phi_{1}^{2}/2 + \Sigma$. To lowest order in $1/n$, the self-energy is given by $\Sigma^{0} = \int \frac{d^{d}k}{(2 \pi)^{d}} G({\bf k})$, see Ref. . This leads to a self-consistency equation for ${\overline{r}}_{2}$. Similar self-consistency equations appear for $\Phi_{1}$; before the transition to an ordered $\Phi_{2}$ state, $\Phi_{1}^{2} = - r_{1}$. A phase transition to an ordered state $\Phi_{2} \neq 0$ occurs when the Green’s function acquires a pole on the real $k$ axis. 
If the pole is at $k_{\min} =0$, the transition is to a uniform phase of $\Phi_{2}$; if the pole first appears for $k_{\min} \neq 0$, the transition is into a modulated phase. When $[1- g_{12} \Phi_{1}^{2} ]>0$ the minimum of $G^{-1}$ is always at $k=0$, and both $\Phi_{1}$ and $\Phi_2$ may exhibit uniform orders. On the other hand, if $[1- g_{12} \Phi_{1}^{2} ]<0$, the minimum of the $\Phi_{2}$ inverse Green’s function $G^{-1}({\bf k})$ occurs at $k_{min}^{2} = - [1- g_{12} \Phi_{1}^{2} ]/(2p)$, leading to a real axis pole when ${\overline{r}}_{2}= {\overline{r}}_{2~\min} = [1- g_{12} \Phi_{1}^{2} ]^{2}/(2p)$. The quartic $G^{-1}$ has two pairs of complex conjugate poles in the $k$ plane which lie on a circle of radius $\rho = ({\overline{r}}_{2}/ (2 p))^{1/4}$. The finite real component of the poles means that the correlation function $\langle \Phi_{2}({\bf x}) \Phi_{2}({\bf y}) \rangle$ exhibits sinusoidal modulations in addition to exponential decay. The modulation and correlation lengths are given, respectively, by $$\begin{aligned} l_{2} = 4 \pi [\sqrt{{\overline{r}}_{2}/2} + (1- g_{12} \Phi_{1}^{2}) /2]^{-1/2}, \nonumber \\ \xi_{2} = 2[\sqrt{ {\overline{r}}_{2}/2} -(1- g_{12} \Phi_{1}^{2})/2]^{-1/2}, \label{G2}\end{aligned}$$ with $\Phi_{1}$ the uniform competing order field. Irrespective of the spatial dimensionality, whenever $[1- g_{12} \Phi_{1}^{2} ]<0$, as ${\overline{r}}_{2} \to {\overline{r}}_{2~ \min}$ the self-energy diverges as $\Sigma \sim ({\overline{r}}_{2} - {\overline{r}}_{2~\min})^{-1/2}$. The phase transition which would occur (at the mean field level) when ${\overline{r}}_{2} ={\overline{r}}_{2~\min}$ is thwarted by the divergence of the self energy due to fluctuations. This implies that $T_{c}=0$, similar to systems with competing long range interactions [@us]. However, finite $n$ corrections (especially for $n=1$) may make the transition temperature finite. 
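The pole structure described here can be checked directly for the quartic inverse propagator $G^{-1}(k)=\overline{r}_2/2+ck^2+pk^4$, writing $c\equiv 1-g_{12}\Phi_1^2<0$. The sketch below uses assumed illustrative values $c=-1$, $p=1/2$ and verifies $k_{\min}^2=-c/(2p)$, the pole threshold $\overline{r}_{2\,\min}=c^2/(2p)$, and the circle of radius $(\overline{r}_2/(2p))^{1/4}$ on which the complex poles lie:

```python
import numpy as np

# Quartic inverse propagator G^{-1}(k) = rbar/2 + c k^2 + p k^4, with
# c = 1 - g12*Phi1^2 < 0. Illustrative values (assumed): c = -1, p = 0.5.
c, p = -1.0, 0.5
k = np.linspace(0.0, 3.0, 300001)

rbar_min = c**2/(2*p)            # threshold at which a real-k pole appears
Ginv = rbar_min/2 + c*k**2 + p*k**4

kmin = k[np.argmin(Ginv)]
assert abs(kmin - np.sqrt(-c/(2*p))) < 1e-4   # k_min^2 = -c/(2p)
assert abs(Ginv.min()) < 1e-8                 # G^{-1}(k_min) = 0: the pole

# For rbar > rbar_min the four complex poles of G^{-1} in the k plane lie
# on a circle of radius (rbar/(2p))**(1/4)
rbar = 2.0
poles = np.roots([p, 0.0, c, 0.0, rbar/2])
assert np.allclose(np.abs(poles), (rbar/(2*p))**0.25)
```

The last check uses the fact that the product of the two $k^2$ roots of the quadratic $p\,k^4+c\,k^2+\overline{r}_2/2$ equals $\overline{r}_2/(2p)$, so all four poles share the modulus $(\overline{r}_2/(2p))^{1/4}$.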
In this case, the low temperature transition for $\Phi_{2}$ is of Brazovskii type, with a shell of minimizing modes. A similar analysis holds for competing local orders in large $n$ quantum systems by extending the analysis of [@us; @glass]. For bosonic fields, after a summation over Matsubara frequencies, the $\Phi_{2}$ correlator is $$\begin{aligned} G_{2}({\bf k}) = \frac{\frac{1}{2} + n_{B}\left(\sqrt{[\frac{{\overline{r}}_{2}} {2} + (1-g_{12} \Phi_{1}^{2}) k^{2} + pk^{4}]/ (k_{B}T)}\right)} {\sqrt{\frac{{\overline{r}}_{2}}{2} + (1-g_{12} \Phi_{1}^{2}) k^{2} + pk^{4}}}, \nonumber\end{aligned}$$ with $n_{B}(x) = [\exp(x)-1]^{-1}$. Order, at large $n$, is still inhibited in the quantum rendition of our system, although the divergence of the self-energy in this case is less severe than for its classical counterpart. In the bosonic system, due to integration over imaginary time, the $\Phi_{2}$ self energy diverges as $- \ln|{\overline{r}}_{2}- {\overline{r}}_{2~\min}|$ when ${\overline{r}}_{2} \to {\overline{r}}_{2~\min}$, whereas in the classical system it diverges as $[{\overline{r}}_{2}- {\overline{r}}_{2~\min}]^{-1/2}$. Thermal fluctuations in classical $n=2$ systems (e.g. complex scalar fields) also lead to a divergence of the $- \ln|{\overline{r}}_{2}- {\overline{r}}_{2~\min}|$ type [@glass; @us; @Schmalian]. The order probed by $\Phi_{2}$ is stabilized if the degeneracy of the minimizing wave-numbers is lifted by augmenting the rotationally symmetric Hamiltonian by additional lattice point group symmetry terms. In such instances, the critical temperature of the large $n$ system often remains anomalously low, and attains its minimal value exactly at the onset of incommensurate order (e.g., when the minimizing modes are of vanishing norm, $q \to 0^{+}$) [@us]. Degeneracy can also be lifted by an external field which lowers the full rotational symmetry of $H$ to a lower rotational symmetry in a plane orthogonal to the field direction. 
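The classical inverse-square-root divergence invoked here is easy to verify numerically. With $\delta\equiv\overline{r}_2-\overline{r}_{2\,\min}$, the classical inverse propagator can be rewritten as $G^{-1}=\delta/2+p(k^2-k_0^2)^2$, and in $d=3$ the self-energy integral $\Sigma^0\propto\int dk\,k^2\,G(k)$ should grow roughly tenfold when $\delta$ shrinks by a factor of one hundred. A sketch with assumed illustrative values ($c=-1$, $p=1/2$; scipy used for the quadrature):

```python
import numpy as np
from scipy.integrate import quad

# Classical self-energy near the minimizing shell: G^{-1} is rewritten as
# delta/2 + p*(k^2 - k0^2)^2 with delta = rbar_2 - rbar_2min.
# Values of c and p are illustrative (assumed).
c, p = -1.0, 0.5
k0 = np.sqrt(-c/(2*p))          # minimising shell, k0 = 1 here

def sigma(delta):
    integrand = lambda k: k**2/(delta/2 + p*(k**2 - k0**2)**2)
    return quad(integrand, 0.0, 10.0, points=[k0], limit=200)[0]

# Shrinking delta by a factor 100 grows Sigma ~10x: Sigma ~ delta**-0.5
ratio = sigma(1e-6)/sigma(1e-4)
assert 9.0 < ratio < 11.0
```

The same quadrature with the bosonic occupation factor inserted would instead show the weaker logarithmic growth quoted above for the quantum case.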
For small $n$, a “quantum” finite temperature Brazovskii transition may occur for $\Phi_{2}$. Slow dynamics and glassiness. ============================= The key conclusion of the previous section was that our model of competing orders, Eqs.(\[f0\])-(\[fq\]), maps onto a Brazovskii-like model for the subdominant order parameter. The transition temperature for the onset of this order is then suppressed relative to the mean field conclusions due to the large phase space available to low energy thermal and quantum fluctuations. There are two possibilities for the transition itself. It may take place as a first order fluctuation-induced transition, as originally envisioned by Brazovskii. Recent work on a model equivalent to our Eq. (\[feff\]) suggests that a glass transition may be realized as an alternative [@jorg; @loh]. When applied to Eq.(\[feff\]), the self-consistent screening approximation shows [@jorg; @loh] that the configurational entropy, $S_c=k_B\log N_m$, with $N_m$ the number of metastable states, is extensive (proportional to the volume) over a finite temperature range ($T_{A} > T >T_{K}$), which depends on the coefficients of our GL expansion. This entropy is due exclusively to the inhomogeneous field $\Phi_{2}({\bf x})$ triggered by the competing uniform order $\Phi_{1}({\bf x}) = \Phi_{1}$. At the onset ($T=T_{A}$) $$\begin{aligned} S_c(T_{A}) \approx C k_{B} (g_{12} \Phi_{1}-1)^{3} V,\end{aligned}$$ where, in 3D, the numerical constant $C \simeq 1.18 \times 10^{-3}$ and $V$ is the volume [@jorg]. Our resulting effective model for the inhomogeneous order is exactly equivalent to that of Ref. , after a shift in the $k^2$ term and the recognition that the self-consistently determined effective temperature ($\overline{r}$) depends on the dominant order $\Phi_{1}$. 
The reason for the equivalence is that, in the presence of only quartic and biquadratic terms, the propagator lines for the fields $\Phi_{1}$ and $\Phi_{2}$ are continuous and only allow for self energies with the same field index as for a single field problem with shifted parameters. In Ref.  the dynamical mean field theory calculation yielded an extensive $S_{c}$ for the single component problem, and hence precisely the same conclusion is applicable to our model. The extensive value of $S_{c}$ implies that $N_m\propto e^V$, and strongly suggests glassiness for $T< T_{K}$ [@kirk]. The condition for possible glassiness formulated in Refs.  is that the ratio of the coherence length to the modulation scale exceeds a number of order two. As seen from Eq.(\[G2\]), in our model at low temperatures $\xi_{2}/l_{2} \ge 2$, satisfying this condition. Once again, the realization of the glassy phase depends on the details of dynamics in a particular measurement, but the extensive entropy makes such an outcome likely. The high degeneracy at low temperatures can be made rigorous. In all large $n$ (and several Ising) systems [@glass], the extensive configurational entropy found at higher temperatures by replica calculations is supplanted by a ground state degeneracy scaling as the surface area of the system ($S_{ground} \propto q^{d-1} V^{(d-1)/d}$) [@glass; @long] in $d$ spatial dimensions. By explicit construction, these systems can be shown to possess a multitude of zero energy domain walls [@glass]. These low temperature excitations go hand in hand with a multitude of metastable low energy states. Numerical simulations of single component systems in similar classical models of liquids also report exceptionally sluggish dynamics [@Grousson; @GR] with strong indications of glassiness [@Grousson]. Thus, the non-uniform structures arising in our model of competing order parameters naturally exhibit slow dynamics and are likely to become glassy. 
Summarizing, the field theoretical analysis accounting for fluctuations around the inhomogeneous minimizing structure extends the GL picture and strongly suggests the phase diagram shown in Fig.(\[final1\]). For the low $n$ systems of relevance, the low temperature first order Brazovskii transition can be pre-empted by a transition into a glass. ![Schematic phase diagram beyond the GL theory. Here we highlight the possibility of a glassy phase triggered by the competition of two local orders. Alternatives include the first order Brazovskii transition into a modulated state, or a transition with a severely suppressed $T_c$. []{data-label="final1"}](GlassPhase.eps){width="7cm"} Relevance to electronic systems. ================================ We showed that, when there is a competition between two order parameters of different origin, and when a general symmetry allowed gradient-amplitude coupling in a [*local*]{} theory is negative ([*even if of moderate magnitude*]{}), the coexistence of the two orders is inhomogeneous and, generally, either the dynamics of the system is slow or a first order Brazovskii transition occurs. Crucially, even though we start with a local theory, the inhomogeneous coexistence leads to a low-energy theory of the same class as considered in models of self-generated glassiness due to competing length scales of interaction [@jorg; @loh], although the origin of the phenomenon is very different. Moreover, the transition temperatures for [*both*]{} order parameters are suppressed compared to the mean field value. We emphasize that the gradient-amplitude coupling is required to stabilize an inhomogeneous state: in its absence only uniform or phase-separated configurations are thermodynamically stable, as has been shown for stripe orders [@Pryadko]. 
Therefore, in regimes of competing coexisting phases, $T_c$ is lower than in other parts of the phase diagram, structure factor measurements will indicate non-uniform order, and dynamical measurements will likely display slow dynamics. The natural question to ask is what systems offer the best chance for a realization of the model considered above. Cuprates provide one obvious example of such competing orders, where static low temperature spin and charge density waves (stripes) are inhibited in the presence of superconducting order. In these materials, STM measurements [@lang] indicate incommensurate coexistence of superconductivity and a pseudo-gap state at the nanoscale; however, the dynamics in this situation is strongly energy dependent, which suggests that the mapping onto a simple GL theory with temperature-independent coefficients is insufficient. At least in one example the scaling form of the dielectric function in the glassy state goes smoothly to quantum critical scaling as the glass transition temperature tends to zero [@TPark:2005]. Heavy fermion systems provide perhaps the best chance for observing the phenomena described here. In materials of the 115 family, proximity or coexistence of antiferromagnetic and superconducting phases is now well established [@TPark:2006; @TPark:PNAS], and experiments indicate an inhomogeneous coexistence of the two orders in a magnetic field [@MKenzel; @Curro3]. Moreover, there is strong evidence that Cd and Hg dopants [@LPham; @EBauer] create antiferromagnetic regions in their vicinity [@Yoshi; @Curro1; @Curro2], suggesting that the system is on the border of inhomogeneous coexistence of two orders. The Néel temperature drops precipitously if the superconducting transition occurs first [@TPark:2006]. 
No dynamical measurements have yet been carried out in the relevant regime of the phase diagram, but it would be interesting to see if, for example, in CeRhIn$_5$ under pressure the spin dynamics as determined by NMR shows signatures of slowing down or freezing at low temperatures. In several systems inhomogeneous coexistence was proposed in the presence of coupling terms that exist only under special circumstances [@heine; @mohamed; @Littlewood]. A particularly relevant example is manganites, where the coupling due to the deviation from half-filling that promotes the inhomogeneous coexistence of the magnetic and charge orders was proposed recently based on considerations similar to ours [@Littlewood]. As mentioned above, in these materials glassiness may emerge due to bona fide disorder, and not be self-generated. Non-trivial couplings appear in some of the multiferroic materials, e.g. spiral magnets such as RMnO$_{3}$ with R = Tb, Ho, Dy [@multif]. It is important to note that, if we extend the treatment to include external parameters such as strain and field acting as massive “competing orders” (i.e. with fluctuations towards order but no symmetry breaking, since the quadratic coefficient in the GL expansion remains positive) within the GL framework, the resulting inhomogeneous state only occurs for moderately large coupling. One candidate for such a scenario is MnSi, where a low energy theory exhibiting these features has recently been put forward on the basis of Dzyaloshinskii-Moriya coupling [@Schmalian]. In conclusion, we believe that many of the observed low temperature transitions, inhomogeneities, and slow dynamics/glassiness found in strongly correlated electronic systems are a natural consequence of competing local orders. As we illustrated, competing local orders may trigger inhomogeneities with a likely first order transition or possible glassiness. 
In our calculations, the proliferation of incommensurate ground and metastable states is the common origin of the dramatic lowering of the transition temperature (or a viable first order Brazovskii transition) and of possible glassy dynamics. Acknowledgments. ================ This research was supported by the US DOE under LDRD X1WX (Z. N. and A. V. B.) and DE-FG02-08ER46492 (I. V.), and by the CMI of WU (Z. N.). [99]{} V. Zapf et al., Phys. Rev. B [**65**]{}, 14506 (2002); T. Mito et al., Phys. Rev. Lett. [**90**]{}, 077004 (2003); G.-q. Zheng et al., Phys. Rev. B [**70**]{}, 014511 (2004). B. S. Adenwalla, S. W. Lin, Q. Z. Ran, Z. Zhao, J. B. Ketterson, J. A. Sauls, L. Taillefer, D. G. Hinks, M. Levy, and Bimal K. Sarma, Phys. Rev. Lett. [**65**]{}, 2298 (1990); G. Bruls, D. Weber, B. Wolf, P. Thalmeier, B. Lüthi, A. de Visser, and A. Menovsky, Phys. Rev. Lett. [**65**]{}, 2294 (1990). J. Zaanen and O. Gunnarson, Phys. Rev. B [**40**]{}, 7391 (1989); K. Machida, Physica C [**158**]{}, 192 (1989); H. J. Schulz, Phys. Rev. Lett. [**64**]{}, 1445 (1990). V. J. Emery and S. A. Kivelson, Physica C [**26**]{}, 44 (1996); U. Low, V. J. Emery, K. Fabricius, and S. A. Kivelson, Phys. Rev. Lett. [**72**]{}, 1918 (1994). J. M. Tranquada, B. J. Sternlieb, J. D. Axe, Y. Nakamura, and S. Uchida, Nature [**375**]{}, 561 (1995). C. M. Varma, Z. Nussinov, and W. van Saarloos, Physics Reports 361(5-6) (May 2002), cond-mat/0103393. D. van der Marel, H. J. A. Molegraaf, J. Zaanen, Z. Nussinov, F. Carbone, A. Damascelli, H. Eisaki, M. Greven, P. H. Kes, and M. Li, Nature [**425**]{}, 271 (2003). K. H. Kim, N. Harrison, M. Jaime, G. S. Boebinger, and J. A. Mydosh, Phys. Rev. Lett. [**91**]{}, 256401 (2003). K. Izawa, Y. Nakajima, J. Goryo, Y. Matsuda, S. Osaki, H. Sugawara, H. Sato, P. Thalmeier, and K. Maki, Phys. Rev. Lett. [**90**]{}, 117001 (2003). M. B. Salamon and M. Jaime, Rev. Mod. Phys. [**76**]{}, 583 (2001). S. 
Sachdev, “Quantum Phase Transitions”, Cambridge University Press, 1999, 2004. B. Simovi[c]{}, P. C. Hammel, M. H[ü]{}cker, B. B[ü]{}chner, and A. Revcolevschi, Phys. Rev. B [**68**]{}, 012415 (2003). B. Simovi[c]{}, M. Nicklas, P. C. Hammel, M. H[ü]{}cker, B. B[ü]{}chner, and J. D. Thompson, Europhys. Lett. [**66**]{}, 722 (2004). T. Park, Z. Nussinov, K. R. Hazzard, V. A. Sidorov, A. V. Balatsky, J. L. Sarrao, S.-W. Cheong, M. F. Hundley, Jang-Sik Lee, Q. X. Jia, and J. D. Thompson, Phys. Rev. Lett. [**94**]{}, 017002 (2005). C. Panagopoulos and V. Dobrosavljevi[ć]{}, Phys. Rev. B [**72**]{}, 014536 (2005). E. Dagotto, Science [**309**]{}, 257 (2005). V. F. Mitrovi[ć]{}, M.-H. Julien, C. de Vaulx, M. Horvati[ć]{}, C. Berthier, T. Suzuki, and K. Yamada, Phys. Rev. B [**78**]{}, 014504 (2008). J. Schmalian and P. G. Wolynes, Phys. Rev. Lett. [**85**]{}, 836 (2000); H. Westfahl, Jr., J. Schmalian, and P. G. Wolynes, Phys. Rev. B [**64**]{}, 174203 (2001). L. F. Cugliandolo, “Dynamics of glassy systems”, Lecture notes, Les Houches, Session LXXVII, July 2002, Slow Relaxations and Nonequilibrium Dynamics in Condensed Matter, available as cond-mat/0210312. J. P. Bouchaud and M. Mézard, J. Phys. I (France) [**4**]{}, 1109 (1994); E. Marinari, G. Parisi, and F. Ritort, J. Phys. A [**27**]{}, 7615 (1994); J. Phys. A [**27**]{}, 7647 (1994); L. F. Cugliandolo, J. Kurchan, G. Parisi, and F. Ritort, Phys. Rev. Lett. [**74**]{}, 1012 (1995). P. Chandra, L. B. Ioffe, and D. Sherrington, Phys. Rev. Lett. [**75**]{}, 713 (1996). P. Chandra, M. V. Feigelman, L. B. Ioffe, and D. M. Kagan, Phys. Rev. B [**56**]{}, 11553 (1997). G. Franzese and A. Coniglio, Phys. Rev. E [**58**]{}, 2753 (1998); Phys. Rev. E [**59**]{}, 6409 (1999); Phil. Mag. B [**79**]{}, 1807 (1999). A. Fierro, G. Franzese, A. de Candia, and A. Coniglio, Phys. Rev. E [**59**]{}, 60 (1999). For an excellent and very comprehensive classical review of Landau-Ginzburg theories see, e.g., J.-C. Toledano and P. 
Toledano, “The Landau Theory of Phase Transitions” (Singapore: World Scientific, 1987) and references therein. Z. Nussinov, I. Vekhter, and A. V. Balatsky, arXiv:cond-mat/0409474. M. Mihailescu, arXiv:0710.5076. E. Zhao and A. Paramekanti, Phys. Rev. Lett. [**96**]{}, 105303 (2006). S. A. Brazovskii, Sov. Phys. JETP [**41**]{}, 85 (1975). Z. Nussinov, Phys. Rev. B [**69**]{}, 014208 (2004); Z. Nussinov, cond-mat/0105253; K. K. Loh, K. Kawasaki, A. R. Bishop, T. Lookman, A. Saxena, Z. Nussinov, and J. Schmalian, Phys. Rev. E [**69**]{}, 010501 (2004). S. Wu, J. Schmalian, G. Kotliar, and P. G. Wolynes, Phys. Rev. B [**70**]{}, 024207 (2004). L. P. Pryadko, S. A. Kivelson, V. J. Emery, Y. B. Bazaliy, and E. A. Demler, Phys. Rev. B [**60**]{}, 7541 (1999). This was proved under a global constraint of the conservation of the total number of electrons. In general, [**q**]{} is incommensurate. In electronic systems the Umklapp terms may lead to terms in ${\cal F}$ of the form $\rho^{n} \cos n \phi$ (where $\Phi_i=\rho e^{i\phi}$), favoring commensurability. T. H. Berlin and M. Kac, Phys. Rev. [**86**]{}, 821 (1952); H. E. Stanley, Phys. Rev. [**176**]{}, no. 2, 718 (1968); S. K. Ma, Phys. Rev. A [**7**]{}, 2172 (1973). L. Chayes, V. J. Emery, S. A. Kivelson, Z. Nussinov, and G. Tarjus, Physica A [**225**]{}, 129 (1996); Z. Nussinov, J. Rudnick, S. A. Kivelson, and L. N. Chayes, Phys. Rev. Lett. [**83**]{}, 472 (1999). T. R. Kirkpatrick and P. G. Wolynes, Phys. Rev. B [**35**]{}, 3072 (1987); [**36**]{}, 8552 (1987); Phys. Rev. A [**40**]{}, 1045 (1989); T. R. Kirkpatrick and D. Thirumalai, Phys. Rev. Lett. [**58**]{}, 2091 (1987). Z. Nussinov, I. Vekhter, and A. V. Balatsky, in preparation. M. Grousson, G. Tarjus, and P. Viot, Phys. Rev. E [**65**]{}, 065103 (2002). P. L. Geissler and D. R. Reichman, Phys. Rev. E [**69**]{}, 021501 (2004). K. M. Lang, V. Madhavan, J. E. Hoffman, E. W. Hudson, H. Eisaki, S. Uchida, and J. C. Davis, Nature [**415**]{}, 412 (2002). L. D. Pham, Tuson Park, S. 
Maquilon, J. D. Thompson, and Z. Fisk, Phys. Rev. Lett. [**97**]{}, 056404 (2006). E. D. Bauer, F. Ronning, S. Maquilon, L. D. Pham, J. D. Thompson, and Z. Fisk, Physica B [**403**]{}, 1135 (2008). Tuson Park, F. Ronning, H. Q. Yuan, M. B. Salamon, R. Movshovich, J. L. Sarrao, and J. D. Thompson, Nature [**440**]{}, 65 (2006). M. Nicklas, O. Stockert, Tuson Park, K. Habicht, K. Kiefer, L. D. Pham, J. D. Thompson, Z. Fisk, and F. Steglich, Phys. Rev. B [**76**]{}, 052401 (2007). T. Park, M. J. Graf, L. Boulaevskii, J. L. Sarrao, and J. D. Thompson, Proc. Nat. Acad. Sci. [**105**]{}, 6825 (2008). M. Kenzelmann, Th. Strässle, C. Niedermayer, M. Sigrist, B. Padmanabhan, M. Zolliker, A. D. Bianchi, R. Movshovich, E. D. Bauer, J. L. Sarrao, and J. D. Thompson, Science [**321**]{}, 1652 (2008). Y. Tokiwa, R. Movshovich, F. Ronning, E. D. Bauer, P. Papin, A. D. Bianchi, J. F. Rauscher, S. M. Kauzlarich, and Z. Fisk, Phys. Rev. Lett. [**101**]{}, 037001 (2008). Ján Rusz, Peter M. Oppeneer, Nicholas J. Curro, Ricardo R. Urbano, Ben-Li Young, S. Lebègue, Pascoal G. Pagliuso, Long D. Pham, Eric D. Bauer, John L. Sarrao, and Zachary Fisk, Phys. Rev. B [**77**]{}, 245124 (2008). R. R. Urbano, B.-L. Young, N. J. Curro, J. D. Thompson, L. D. Pham, and Z. Fisk, Phys. Rev. Lett. [**99**]{}, 146402 (2007). B.-L. Young, R. R. Urbano, N. J. Curro, J. D. Thompson, J. L. Sarrao, A. B. Vorontsov, and M. J. Graf, Phys. Rev. Lett. [**98**]{}, 036402 (2007). G. C. Milward, M. J. Calderon, and P. B. Littlewood, Nature (London) [**433**]{}, 607 (2005). V. Heine and J. McConnell, J. Phys. C: Solid State Phys. [**17**]{}, 1199 (1984). M. Laradji, H. Guo, M. Grant, and M. J. Zuckermann, J. Phys.: Condens. Matter [**4**]{}, 6715 (1992). M. Mostovoy, Phys. Rev. Lett. [**96**]{}, 067601 (2006); J. Betouras, G. Giovannetti, and J. v. d. Brink, Phys. Rev. Lett. [**98**]{}, 257602 (2007); A. B. Harris, A. Aharony, and Ora Entin-Wohlman, Phys. Rev. Lett. [**100**]{}, 217202 (2008); J. Hu, Phys. Rev. Lett. [**100**]{}, 077202 (2008). J. 
Schmalian and M. Turlakov, Phys. Rev. Lett. [**93**]{}, 036405 (2004).
--- abstract: 'Building systems that possess the sensitivity and intelligence to identify and describe high-level attributes in music audio signals continues to be an elusive goal, but one that surely has broad and deep implications for a wide variety of applications. Hundreds of papers have so far been published toward this goal, and great progress appears to have been made. Some systems produce remarkable accuracies at recognising high-level semantic concepts, such as music style, genre and mood. However, it might be that these numbers do not mean what they seem. In this paper, we take a state-of-the-art music content analysis system and investigate what causes it to achieve exceptionally high performance in a benchmark music audio dataset. We dissect the system to understand its operation, determine its sensitivities and limitations, and predict the kinds of knowledge it could and could not possess about music. We perform a series of experiments to illuminate what the system has actually learned to do, and to what extent it is performing the intended music listening task. Our results demonstrate how the initial manifestation of music intelligence in this state-of-the-art can be deceptive. Our work provides constructive directions toward developing music content analysis systems that can address the music information and creation needs of real-world users.' author: - 'Bob L. Sturm' bibliography: - '../../bibliographies/genre.bib' - '../../bibliographies/emotion.bib' - '../../bibliographies/tagging.bib' - '../../bibliographies/BibAnnon.bib' title: 'The “Horse” Inside: Seeking Causes Behind the Behaviours of Music Content Analysis Systems' --- Author’s address: School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road London E1 4NS, UK. 
Introduction ============ A significant amount of research in the disciplines of music content analysis and content-based music information retrieval (MIR) is plagued by an inability to distinguish between solutions and “horses” [@Gouyon2013; @Urbano2013; @Sturm2013g; @Sturm2013h]. In its most basic form, a “horse” is a system that [*appears*]{} as if it is solving a particular problem when it actually is not [@Sturm2013g]. This was exactly the case with Clever Hans [@Pfungst1911], a [*real*]{} horse that was claimed to be capable of doing arithmetic and other feats of abstract thought. Clever Hans appeared to answer complex questions posed to him, but he had actually learned to respond to involuntary cues from his many inquisitors, cues that happened to coincide with the tapping of his hoof the correct number of times. The “trick” evaded discovery for a few reasons: 1) the cues were nearly undetectable; and 2) in light of undetected cues, the demonstration was thought by many to constitute valid evidence for the claim that the horse possessed such abilities. It was not until controlled experiments were designed and implemented that his true abilities were discovered [@Pfungst1911]. If the aim of a music content analysis system is to enhance the connection between users (e.g., private listener, professional musician, scholar, journalist, family, organisation and business) and music (e.g., recordings in the format of a score and audio recording) and information about music (e.g., artist, tempi, instrumentation and title) [@Casey2008a] – and to do so at a far lower cost than that required of human labor – then the system must operate with characteristics and criteria [*relevant*]{} to the information needs of users. For instance, a relevant characteristic for generating tempo information is periodic onsets; an irrelevant characteristic is instrumentation. 
If the aim of a music content analysis system is to facilitate creative pursuits, such as composing or performing music in particular styles [@Dubnov2003a; @Dubnov2014a], then it must operate with characteristics and criteria [*relevant*]{} to the creative needs of users. For instance, a relevant criterion for a Picardy third is the suggestion of a minor resolution; an irrelevant criterion is avoidance of parallel fifths. The importance of “relevant criteria” in music content analysis is evinced by frustration surrounding what has been termed the “semantic gap”: a chasm of disconnection between accessible but low-level features and high-level abstract ideas [@Aucouturier2009; @Wiggins2009; @Turnbull2008]. A music content analysis system’s reproduction of dataset ground truth is, by and large, considered valid evidence that the system is using relevant characteristics and criteria, or possesses “musical knowledge,” or has learned to listen to music in a way that is meaningful with respect to some music listening task. In one of the most cited papers in MIR, Tzanetakis and Cook train and test several systems with what would become the most-used public benchmark dataset in music genre recognition [@Sturm2013h]. Since these systems reproduced an amount of ground truth inconsistent with that expected when choosing classes randomly, Tzanetakis and Cook concluded that the features “provide some information about musical genre and therefore musical content in general,” and even that the systems’ performances are comparable to that of humans [-@Tzanetakis2002]. conclude from such evidence that the features they propose “have enough information for genre classification because \[classification accuracy\] is significantly above the baselines of random classification.” Measuring the reproduction of the ground truth of a dataset is typical when developing content analysis systems. 
For instance, and perform a large number of computational experiments to find the “most relevant” features, “optimal” parameters, “best” classifiers, and combinations thereof, all defined with respect to the reproduction of the ground truth. The measurement of reproduced ground truth has been thought to be objective. avoid the “pitfall” of subjective evaluation of rhythm descriptors by “measuring their rate of success in genre classification experiments." argue that such “directly measured” numbers “\[facilitate\] (1) the comparison of feature sets and (2) the assessment of the suitability of particular classifiers for specific feature sets.” This is also echoed by .[^1] During the 10-year life-span of MIREX – an established annual event that facilitates the exchange and scientific evaluation of new techniques for a wide variety of music content analysis tasks [@Downie2004b; @Downie2008; @Downie2010; @Cunningham2012] – thousands of systems have been ranked according to the amount of ground truth they reproduce. Several literature reviews, e.g., [@Scaringella2006; @Fu2011; @Humphrey2013], tabulate results of many published experiments, and make conclusions about which features and classifiers are “useful” for listening tasks such as music genre and mood classification. remark on the progress up to that time, “Given the steady and significant improvement in classification \[accuracy\], we wonder if automatic methods are not already more efficient at learning genres than some people.” Seven years later, surmise from the plateauing of such numbers that progress in MIR has stalled. However, could it be that progress was never made at all? Might it be that the precise measurement of reproduced ground truth is not a reliable reflection of the “intelligence” so hoped for? ![Figure of merit (FoM, $\times 100$) of the music content analysis system DeSPerF-BALLROOM, the cause of which we seek in this article. Column is ground truth label, and row is class selected by system. 
Off diagonals are confusions. Precision is the right-most column, F-score is the bottom row, recall is the diagonal, and normalised accuracy (mean recall) is at bottom-right corner.[]{data-label="fig:DeSPerF_expt00"}](FoM_DeSPerF_BALLROOM_random.eps){width="2.8in"} Consider the systems reproducing the most ground truth in the 2013 MIREX edition of the “Audio Latin Music Genre classification task” (ALGC).[^2] The aim of ALGC is to compare music content analysis systems built from algorithms submitted by participants in the task of classifying the music genres of recordings in the benchmark Latin Music Dataset ([*LMD*]{}) [@Silla2008b]. In ALGC, participants submit their feature extraction and machine learning algorithms, a MIREX organiser then uses these to build music content analysis systems, applies them to subsets of [*LMD*]{}, and computes a variety of figures of merit (FoM) based on the reproduction of ground truth. In ALGC of 2013, the most ground truth (accuracy of $0.776$) was reproduced by systems built using deep learning [@Pikrakis2013]. Figure \[fig:DeSPerF\_expt00\] shows the FoM of the system resulting from using the same winning algorithms, but training and testing it with the public benchmark [*BALLROOM*]{} dataset [@Dixon2004]. ([*LMD*]{} is not public.) Of little doubt is that the classification accuracy of this system – DeSPerF-BALLROOM – greatly exceeds that expected when selecting labels of [*BALLROOM*]{} randomly. The system has clearly learned something. Now, [*what is that something*]{}? What musical characteristics and criteria – “musical knowledge” – is this system using? How do the internal models of the system reflect the music “styles” in [*BALLROOM*]{}? Are the labels of [*BALLROOM*]{} even related to “style”? What has this system [*actually*]{} learned to do with [*BALLROOM*]{}? The success of DeSPerF-BALLROOM for the analytic or creative objectives of music content analysis turns on the [*cause*]{} of Fig. \[fig:DeSPerF\_expt00\]. 
How is the cause relevant to a user’s music information or creation needs? Is the system [*actually*]{} fit to enhance the connections between users, music, and information about music? Or is it as Clever Hans, only appearing to be intelligent? In this article, it is the cause of Fig. \[fig:DeSPerF\_expt00\] with which we are principally concerned. We seek to answer what this system has learned about the music it is classifying, its [*musical intelligence*]{}, i.e., its decision machinery involving high-level acoustic and musical characteristics of the “styles” from which the recordings in [*BALLROOM*]{} appear to be sampled. Broader still, we seek to encourage completely new methods for evaluating any music content analysis system with respect to its objective. It would have been a simple matter if Hans could have been asked how he was accomplishing his feat; but the nature of his “condition” allowed only certain questions to be asked. In the end, it was not about finding the definitive set of questions that accurately measured his mental brawn, but of thinking skeptically and implementing appropriately controlled experiments designed to test hypotheses like, “Clever Hans can solve problems of arithmetic.” One faces the same problem in evaluating music content analysis systems: the kinds of questions that can be asked are limited. For DeSPerF-BALLROOM in Fig. \[fig:DeSPerF\_expt00\], a “question” must come in the form of a 220,500-dimensional vector (10 second monophonic acoustic music signal uniformly sampled at 22050 Hz). Having the system try to classify the Waltz recordings that are thought to be the hardest in some way will not illuminate much about the criteria it is using, the sanity of its internal models, or the causes of the FoM in Fig. \[fig:DeSPerF\_expt00\]. Ways forward are given by adopting and adapting Pfungst’s approach to testing Clever Hans [@Pfungst1911], and above all not dispensing with skepticism. 
Teaching a machine to listen to music, to automatically recognise music style or genre, are achievements so great that they require extraordinary and valid evidence. That these tasks defy the explicit definition necessary to the formal nature of algorithms produces great pause in accepting Fig. \[fig:DeSPerF\_expt00\] as evidence that DeSPerF-BALLROOM is, unlike Clever Hans, not a “horse.” In Section \[sec:problemofMCA\], we provide a brief but explicit definition of the problem of music content analysis, and what a music content analysis system is. In Section \[sec:DeSPerF\], we dissect DeSPerF-based systems, describing in detail their construction and operation. In Section \[sec:BALLROOM\], we analyse the methods of teaching and testing encompassed by the [*BALLROOM*]{} dataset. We are then prepared for the series of experiments in Section \[sec:experiments\] that seek to explain Fig. \[fig:DeSPerF\_expt00\]. We discuss our results more broadly in Section \[sec:discussion\]. We make available a reproducible research package with which one may generate all figures and tables in this article: <http://manentail.com>. (Made anonymous for the time being.) The problem of music content analysis {#sec:problemofMCA} ===================================== Since this article is concerned with algorithms defined in no uncertain terms by a formal language and posed to solve some problem of music content analysis, we must define what all these things are. Denote the [*music universe*]{} ${{\Omega}}$, the [*music recording universe*]{} ${{\mathcal{R}_{{\Omega}}}}$ (notated or performed), [*vocabularies*]{} ${{\mathbb F }}$ (features) and ${{\mathcal V}}$ (tokens), and define the [*Boolean semantic rules*]{} $A': f \to \{T,F\}$ and $A: s \to \{T,F\}$, where $f$ is a sequence of features from ${{\mathbb F }}$ and $s$ is a sequence of tokens from ${{\mathcal V}}$. 
Define the [*semantic universe*]{} built from ${{\mathcal V}}$ and $A$: $${{\mathcal U}_{{{\mathcal V}},A}}:= \{s \in \mathcal{V}^n | n \in \mathbb{N} \land A(s) = T\}.$$ The [*semantic feature universe*]{} ${{\mathcal{U}_{\mathbb{F},A'}}}$ is similarly built using ${{\mathbb F }}$ and $A'$. Define a [*use case*]{} as the specification of ${{\Omega}}$, ${{\mathcal{R}_{{\Omega}}}}$, ${{\mathcal U}_{{{\mathcal V}},A}}$, and a set of success criteria. A music universe ${{\Omega}}$ is the set of intangible music – whatever that is [@Goehr1994] – from which the tangible recording music universe ${{\mathcal{R}_{{\Omega}}}}$ is produced. This distinction is important because the real world contains only tangible records of music. One can point to a score of Beethoven’s 5th, but not to Beethoven’s 5th. Perhaps one wishes to say something about Beethoven’s 5th, or about a recording of Beethoven’s 5th. These are categorically different. The definition of ${{\Omega}}$ specifies the music in a use case, e.g., “music people call ‘disco’.” The definition of ${{\mathcal{R}_{{\Omega}}}}$ includes the specification of the dimensions of the tangible material, “30 second audio recording uniformly sampled at 44.1 kHz of an element of ${{\Omega}}$.” The definition of ${{\mathcal U}_{{{\mathcal V}},A}}$ provides the semantic space in which elements of ${{\mathcal{R}_{{\Omega}}}}$ are described. Finally, the success criteria of a use case specify requirements for music content analysis systems to be deemed successful. A [*music content analysis system*]{} ${{\mathscr{S}}}$ is a map from ${{\mathcal{R}_{{\Omega}}}}$ to ${{\mathcal U}_{{{\mathcal V}},A}}$: $${{\mathscr{S}}}: {{\mathcal{R}_{{\Omega}}}}\rightarrow {{\mathcal U}_{{{\mathcal V}},A}}$$ which itself is a composition of two maps, $\mathscr{E}: {{\mathcal{R}_{{\Omega}}}}\rightarrow {{\mathcal{U}_{\mathbb{F},A'}}}$ and $\mathscr{C}: {{\mathcal{U}_{\mathbb{F},A'}}}\rightarrow {{\mathcal U}_{{{\mathcal V}},A}}$. 
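In code, the composition $\mathscr{S} = \mathscr{C} \circ \mathscr{E}$ is plain function composition. The sketch below uses toy stand-ins of our own invention for $\mathscr{E}$ and $\mathscr{C}$ (a mean-amplitude "feature" and a threshold "classifier"), purely to make the structure concrete; it is not any system discussed in this article.

```python
# A minimal sketch of a music content analysis system as the
# composition S = C o E. The toy extractor and classifier below
# are hypothetical, only the structure mirrors the definitions.
def make_system(extract, classify):
    """Compose E: R_Omega -> U_{F,A'} with C: U_{F,A'} -> U_{V,A}
    into a system S: R_Omega -> U_{V,A}."""
    return lambda recording: classify(extract(recording))

# toy E: a one-element feature sequence (mean absolute amplitude)
extract = lambda r: [sum(abs(v) for v in r) / len(r)]
# toy C: a threshold over a two-token semantic universe
classify = lambda f: "loud" if f[0] > 0.5 else "quiet"

system = make_system(extract, classify)
```

The point of the sketch is only that a "system" in this formalisation is nothing more than the composed map; all of the interesting questions concern what $\mathscr{E}$ discards and what $\mathscr{C}$ attends to.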
The map $\mathscr{E}$ is commonly known as a “feature extractor,” taking ${{\mathcal{R}_{{\Omega}}}}$ to ${{\mathcal{U}_{\mathbb{F},A'}}}$; the map $\mathscr{C}$ is commonly known as a “classifier” or “regression function,” mapping ${{\mathcal{U}_{\mathbb{F},A'}}}$ to ${{\mathcal U}_{{{\mathcal V}},A}}$. The [*problem of music content analysis*]{} is to build a system ${{\mathscr{S}}}$ that meets the success criteria of a use case. A typical procedure for building an $\mathscr{S}$ is to seek a way to reproduce all the ground truth of a [*recorded music dataset*]{}, defined as an indexed sequence of tuples sampled in some way from the population ${{\mathcal{R}_{{\Omega}}}}\times{{\mathcal U}_{{{\mathcal V}},A}}$, i.e., $$\mathcal{D} := \left( (r_i,s_i) : i \in \mathcal{I} \right ) \subset R_\Omega\times {{\mathcal U}_{{{\mathcal V}},A}}$$ where $\mathcal{I}$ indexes the dataset. We call $(s_i)_{i \in \mathcal{I}}$ the [*ground truth*]{} of $\mathcal{D}$. As a concrete example, take the [*Shazam*]{} music content analysis system [@Wang2003].[^3] One can define its use case as follows. ${{\mathcal{R}_{{\Omega}}}}$ and ${{\Omega}}$ are defined entirely from the digitised music recordings in the Shazam database. ${{\Omega}}$ is defined as the set of music [*exactly*]{} as it appears in specific recordings. ${{\mathcal{R}_{{\Omega}}}}$ is defined by all 10-second audio recordings of elements of ${{\Omega}}$. ${{\mathcal U}_{{{\mathcal V}},A}}$ is defined as a set of single tokens, each token consisting of an artist name, song title, album title, and other metadata. The [*Shazam*]{} music content analysis system maps a 10 second audio recording of ${{\mathcal{R}_{{\Omega}}}}$ to an element of ${{\mathcal{U}_{\mathbb{F},A'}}}$ consisting of many tuples of time-frequency anchors. 
The classifier then finds matching time-frequency anchors in a database of all time-frequency anchors from ${{\mathcal{R}_{{\Omega}}}}$, and finally picks an element of ${{\mathcal U}_{{{\mathcal V}},A}}$. The success criteria might include making correct mappings (retrieving the correct song and artist name of the specific music heard) in adverse recording conditions, or increased revenue from music sales. DeSPerF-based Music Content Analysis Systems {#sec:DeSPerF} ============================================ In the following subsections, we dissect DeSPerF-BALLROOM, first analysing its feature extraction, and then its classifier. This helps determine its sensitivities and limitations. The feature extraction of DeSPerF-based systems maps ${{\mathcal{R}_{{\Omega}}}}$ to ${{\mathcal{U}_{\mathbb{F},A'}}}$, using [*spectral periodicity features*]{} (SPerF), first proposed by . Its classifier maps ${{\mathcal{U}_{\mathbb{F},A'}}}$ to ${{\mathcal U}_{{{\mathcal V}},A}}$ using deep neural networks (DNN). In the case of DeSPerF-BALLROOM in Fig. \[fig:DeSPerF\_expt00\], ${{\mathcal U}_{{{\mathcal V}},A}}:= \{$“Cha cha”, “Jive”, “Quickstep”, “Rumba”, “Samba”, “Tango”, “Waltz”$\}$. Feature extraction {#sec:features} ------------------ SPerF describe temporal periodicities of modulation sonograms. The hope is that SPerF reflect, or are correlated with, high-level musical characteristics such as tempo, meter and rhythm [@Pikrakis2013]. The feature extraction is defined by six parameters: $\{{{T_\textrm{seg}}}, {{T_\textrm{seghop}}}, {{T_\textrm{fr}}}, {{T_\textrm{frhop}}}, {{N_\textrm{MFCCs}}}, {{N_\textrm{fr}}}\}$. It takes an element of ${{\mathcal{R}_{{\Omega}}}}$ and partitions it into multiple [*signal segments*]{} of duration ${{T_\textrm{seg}}}$ seconds (s) which [*hop*]{} by ${{T_\textrm{seghop}}}$ s. Each signal segment is divided into [*frames*]{} of duration ${{T_\textrm{fr}}}$ s with a hop of ${{T_\textrm{frhop}}}$ s. 
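The two-level slicing (segments of a recording, then frames of a segment) can be sketched with one helper applied twice; the function and its defaults are ours, with the arguments standing in for ${{T_\textrm{seg}}}$, ${{T_\textrm{seghop}}}$, ${{T_\textrm{fr}}}$ and ${{T_\textrm{frhop}}}$.

```python
def slice_windows(x, fs, t_len, t_hop):
    """Cut a sampled signal x (fs Hz) into windows of t_len seconds
    that hop by t_hop seconds; a trailing partial window is dropped."""
    n, h = int(round(t_len * fs)), int(round(t_hop * fs))
    return [x[i:i + n] for i in range(0, len(x) - n + 1, h)]
```

Applying `slice_windows(recording, fs, T_seg, T_seghop)` and then `slice_windows(segment, fs, T_fr, T_frhop)` on each result reproduces the segment/frame hierarchy described above.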
From the ordered frames of a segment, a sequence of the first ${{N_\textrm{MFCCs}}}$ Mel-frequency cepstral coefficients (MFCCs) are computed, which we call the [*segment modulation sonogram*]{} $$\mathcal{M} = \left (\vm_i \in \mathbb{R}^{{{N_\textrm{MFCCs}}}} : i \in [0, \ldots, {\max}_i] \right ) \label{eq:segmodsonogram}$$ where $\vm_i $ is a vector of MFCCs extracted from the frame spanning time $[i{{T_\textrm{frhop}}},i{{T_\textrm{frhop}}}+{{T_\textrm{fr}}}]$, and ${\max}_i := \lfloor ({{T_\textrm{seg}}}-{{T_\textrm{fr}}})/{{T_\textrm{frhop}}}\rfloor +1$ is the index of the last vector. The MFCCs of a frame are computed by a modification of the approach of . The magnitude discrete Fourier transform (DFT) of a Hamming-windowed frame is weighted by a “filterbank” of 64 triangular filters, the centre frequencies of which are spaced by one semitone. Figure \[fig:SPerFMFCC\] shows these filters. Each filter is weighted inversely proportional to its bandwidth. The lowest centre frequency is 110 Hz, and the highest is 4.43 kHz. Irregularities in filter shape at low frequencies arise from the uniform resolution of the DFT and the frame duration ${{T_\textrm{fr}}}$ s. Finally, the discrete cosine transform (DCT) of the $\log_{10}$ rectified filterbank output is taken, and the first ${{N_\textrm{MFCCs}}}$ MFCCs are selected to form $\vm_i$. The period corresponding to the $k$th MFCC is $128/k$ semitones, $k \in \{1, \ldots, {{N_\textrm{MFCCs}}}-1\}$, and $0$ for $k=0$. The first MFCC ($k=0$) is related to the mean energy over all 64 semitones. The third MFCC is related to the amount of energy of a component with a period of the entire filterbank. And the eleventh MFCC is related to the amount of energy of a component with a period of an octave. 
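The quoted band edges are consistent with centre frequencies spaced by exact semitone ratios from 110 Hz; the indexing convention of the original implementation is an assumption on our part, but 64 semitone steps above 110 Hz do land at roughly 4.43 kHz.

```python
import numpy as np

# Candidate centre frequencies of a semitone-spaced filterbank
# starting at 110 Hz (index convention is our assumption).
centres = 110.0 * 2.0 ** (np.arange(65) / 12.0)
# centres[0] = 110 Hz; centres[64] is approximately 4.43 kHz,
# i.e. 64 semitone steps span the band quoted in the text.
```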
![MFCC filterbank used by the feature extraction of DeSPerF-based systems.[]{data-label="fig:SPerFMFCC"}](SPerF_MFCC.eps){width="4in"} For each [*lag*]{} $l \in \{1, \ldots, {{N_\textrm{fr}}}\}$, define the two [*lagged*]{} modulation sonograms $$\begin{aligned} \mathcal{M}_{:l} & = (\vm_i \in \mathcal{M} : i \in [0, {\max}_i-l]) \\ \mathcal{M}_{l:} & = (\vm_i \in \mathcal{M} : i \in [l, {\max}_i]).\end{aligned}$$ $\mathcal{M}_{:l}$ starts from the beginning of the segment, and $\mathcal{M}_{l:}$ ends at the segment’s conclusion. A lag $l$ corresponds to a time-shift of $l{{T_\textrm{frhop}}}$ s between the sonograms. Now, define the [*mean distance*]{} between these modulation sonograms at lag $l$ $$d[l] = \frac{\|\textrm{vec}(\mathcal{M}_{:l}) - \textrm{vec}(\mathcal{M}_{l:})\|_2}{|\mathcal{M}_{:l}|} \label{eq:meandistancelaggedmodsonograms}$$ where $\|\cdot \|_2$ is the Euclidean norm, and $\textrm{vec}(\mathcal{M})$ stacks the ordered elements of sequence $\mathcal{M}$ into a column vector. The sequence $d[l]$ is then filtered, $y[l] = \left ( (d * h) * h \right )[l]$, where $$h[n] = \begin{cases} \frac{1}{n}, & -{{T_\textrm{fr}}}/{{T_\textrm{frhop}}}\le n \in \mathbb{Z}\backslash 0 \le {{T_\textrm{fr}}}/{{T_\textrm{frhop}}}\\ 0, & \textrm{otherwise} \end{cases}$$ and adapting $h[n]$ around the end points of $d[l]$ (shortening its support to a minimum of two). This sequence $y[l]$ approximates the second derivative of $d[l]$. Finally, a SPerF of an audio segment is created by the sigmoid normalisation of $y[l]$: $$x[l] = [1 + \exp\left (- (y[l] - \hat \mu_y)/ \hat \sigma_y \right )]^{-1}, 1 \le l \le {{N_\textrm{fr}}}\label{eq:SPerF}$$ where $\hat \mu_y$ is the mean of $(y[l] : 1 \le l \le {{N_\textrm{fr}}})$ and $\hat \sigma_y$ is its standard deviation. The output of the feature extraction is a sequence $f$ of SPerFs (\[eq:SPerF\]), each element of which is computed from one segment of the recording $r$. 
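The lag-distance and normalisation steps can be sketched as follows. The endpoint adaptation of $h[n]$ is replaced here by a plain "same"-mode convolution, so this is a simplification of the published procedure rather than a faithful reimplementation; `fr_ratio` stands for ${{T_\textrm{fr}}}/{{T_\textrm{frhop}}}$.

```python
import numpy as np

def sperf(M, n_fr, fr_ratio):
    """Sketch of a SPerF from a segment modulation sonogram M
    (rows = frames, columns = MFCCs)."""
    n = len(M)
    # mean distance between the two lagged sonograms at each lag
    d = np.array([np.linalg.norm(M[:n - l] - M[l:]) / (n - l)
                  for l in range(1, n_fr + 1)])
    # kernel h[n] = 1/n on [-fr_ratio, fr_ratio] \ {0}; plain
    # 'same' convolution instead of the paper's endpoint adaptation
    ns = np.arange(-fr_ratio, fr_ratio + 1)
    h = np.where(ns == 0, 0.0, 1.0 / np.where(ns == 0, 1, ns))
    y = np.convolve(np.convolve(d, h, 'same'), h, 'same')
    # sigmoid normalisation to (0, 1)
    return 1.0 / (1.0 + np.exp(-(y - y.mean()) / y.std()))
```

For an exactly periodic sonogram, `d` vanishes at multiples of the period, and the normalised output peaks near those lags, matching the behaviour described in the text.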
In this case, the feature vocabulary is defined $\mathbb{F} := (0,1)^{{{N_\textrm{fr}}}}$. The semantic rule is defined $A'(f) := (|f| \le (|r|_s-{{T_\textrm{seg}}})/{{T_\textrm{seghop}}}+1)$, where $|r|_s$ is the duration of the recording from ${{\mathcal{R}_{{\Omega}}}}$ in seconds. Together, these define ${{\mathcal{U}_{\mathbb{F},A'}}}$. Table \[tab:featureExtraction\] summarises the six parameters of the feature extraction and their interpretation, as well as the values used in the system of Fig. \[fig:DeSPerF\_expt00\]. We can relate characteristics of a SPerF to low-level characteristics of a segment of a recording. For instance, if $\mathcal{M}$ is periodic with period $T$, then for $l \approx k T/{{T_\textrm{frhop}}}, k \in \{1, 2, \ldots\}$ the mean distance sequence $d[l]$ should be small, $y[l]$ should be large positive ($d[l]$ is convex around these lags), and $x[l]$ should be close to one. If $\mathcal{M}$ is such that $d[l]$ is approximately constant over its support, then $y[l]$ will be approximately zero, and $x[l] \approx 0.5$. This is the case if a recording is silent or is not periodic within the maximum lag ${{N_\textrm{fr}}}{{T_\textrm{frhop}}}$ s. If $x[l]$ is approximately zero at a lag $l$, then $y[l]$ is very negative, and there is a large distance $d[l]$ between lagged modulation spectrograms around that lag. Moving to higher-level characteristics, we can see that if the recording has a repeating timbral structure within the segment duration ${{T_\textrm{seg}}}$ s, and if these repetitions occur within ${{N_\textrm{fr}}}{{T_\textrm{frhop}}}$ s, then $x[l]$ should have peaks around those lags corresponding to the periodicity of those repetitions. The mean difference between lags of successive peaks might then be related to the mean tempo of music in the segment, or at least the periodic repetition of some timbral structure. 
If periodicities at longer time-scales exist in $x[l]$, then these might be relatable to the meter of the music in the segment, or at least a longer time-scale repetition of some timbral structure. ![Examples of SPerF (\[eq:SPerF\]) extracted from [*BALLROOM*]{} Waltz recording [*Albums-Chrisanne1-08*]{}.[]{data-label="fig:SPerFexample"}](SPerF_BALLROOM.eps){width="4.2in"} Figure \[fig:SPerFexample\] shows several SPerF extracted from recording of [*BALLROOM*]{}. The SPerF shows a short-term periodicity of about $0.33$ s, and about $1$ s between each of the first three highest peaks. The tempo of the music in this recording is about 180 beats per minute (BPM), and it has a triple meter. The few SPerF that do not follow the main trend are from the introduction of the recording, during which there are not many strong and regular onsets. Classification -------------- From a recording $r$, $\mathscr{E}$ extracts a sequence of $N(r)$ SPerF, $f = (\vx_1, \vx_2, \ldots, \vx_{N(r)} )$, where $\vx_j \in \mathbb{F}$ is a vectorised SPerF (\[eq:SPerF\]). The classifier $\mathscr{C}$ maps $f$ to ${{\mathcal U}_{{{\mathcal V}},A}}$ by a cascade of $K$ steps. At step $1 \le k < K$, the $j$th SPerF $\vx_j^{(0)} \leftarrow \vx_j$ has been transformed iteratively by $$\vx_j^{(k)} \leftarrow \sigma \left(\MW_k\vx_j^{(k-1)} + \vb_k\right ) \label{eq:neuronoutput}$$ where $\MW_k$ is a real matrix, $\vb_k$ is a real vector, and $$\sigma(\vy) := \frac{1}{1+\exp(-\vy)}. \label{eq:signmoid}$$ Step $K$ produces a vector of posterior probabilities over ${{\mathcal U}_{{{\mathcal V}},A}}$ by a softmax output $$\vx_j^{(K)} \leftarrow \frac{\exp \left [ \MW_{K}\vx_j^{(K-1)} + \vb_K \right ]}{\oneb^T\exp \left [ \MW_{K}\vx_j^{(K-1)} + \vb_K \right ]} \label{eq:DNNoutput}$$ where $\oneb$ is an appropriately sized vector of all ones. 
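The cascade of sigmoid hidden layers with a softmax output can be sketched directly; the weights and layer sizes here are random stand-ins, not the trained DeSPerF parameters.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def dnn_posterior(x, Ws, bs):
    """Sketch of the DNN cascade: sigmoid hidden layers, then a
    softmax over the semantic vocabulary. Ws/bs are stand-ins."""
    for W, b in zip(Ws[:-1], bs[:-1]):
        x = sigmoid(W @ x + b)          # hidden-layer step
    z = np.exp(Ws[-1] @ x + bs[-1])     # softmax output step
    return z / z.sum()
```

The returned vector sums to one and plays the role of the posterior probabilities over ${{\mathcal U}_{{{\mathcal V}},A}}$ for a single SPerF.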
The cascade from $\vx_j^{(0)}$ to $\vx_j^{(K)}$ is also known as a [*deep neural network*]{} (DNN), with (\[eq:DNNoutput\]) being interpreted as posterior probabilities over the sample space defined by ${{\mathcal U}_{{{\mathcal V}},A}}$. If all elements of $\vx_j^{(K)}$ are the same, then the DNN has no “confidence” in any particular element of ${{\mathcal U}_{{{\mathcal V}},A}}$ given the observation $\vx_j$. If all but one element of $\vx_j^{(K)}$ are zero, then the DNN has the most confidence that $\vx_j$ points only to a specific element of ${{\mathcal U}_{{{\mathcal V}},A}}$. Finally, $\mathscr{C}$ maps the sequence of posterior probabilities $(\vx_1^{(K)}, \ldots, \vx_{N(r)}^{(K)})$ to ${{\mathcal U}_{{{\mathcal V}},A}}$ by [*majority vote*]{}, i.e., $$\hat s (f) := \arg \max_{s \in {{\mathcal U}_{{{\mathcal V}},A}}} \sum_{j=1}^{N(r)} I_{{{\mathcal U}_{{{\mathcal V}},A}}}\left(\vx_j^{(K)}, s \right)$$ where $I_{{{\mathcal U}_{{{\mathcal V}},A}}}(\vx, s) = 1$ if $s$ is the element of ${{\mathcal U}_{{{\mathcal V}},A}}$ associated with the largest value in $\vx$, and zero otherwise. The classifier of the system of Fig. \[fig:DeSPerF\_expt00\] has $K=6$ layers, with the matrices and biases being: $\MW_1 \in \mathbb{R}^{500\times800}$; $\MW_2,\MW_3, \MW_4 \in \mathbb{R}^{500\times500}$; $\MW_5 \in \mathbb{R}^{2000\times500}$; $\MW_6 \in \mathbb{R}^{7\times 2000}$; $\vb_1, \vb_2, \vb_3, \vb_4 \in \mathbb{R}^{500}$; $\vb_5 \in \mathbb{R}^{2000}$; and finally $\vb_6 \in \mathbb{R}^{7}$. The set of parameters $\{\{\MW,\vb\}_k : 1\le k\le 6\}$ are found by using a training dataset and [*deep learning*]{} [@Deng2014].[^4] Interpreting these parameters is not straightforward, save those at the input to the first hidden layer, i.e., $\MW_1$ and $\vb_1$. The weights $\MW_1$ describe what information of a SPerF $\vx_j$ is passed to the hidden layers of the DNN. The $m$th element of the vector $\MW_1\vx_j$ is the input to the $m$th neuron in the first hidden layer. 
Hence, this neuron receives the product of the $m$th row of $\MW_1$ with $\vx_j$. When those vectors point in nearly the same direction, this value will be positive; when they point in nearly opposite directions, this product will be negative; and when they are nearly orthogonal, the product will be close to zero. We might then interpret each row of $\MW_1$ as being exemplary of some structures in SPerF that the DNN has determined to be important for ${{\mathcal U}_{{{\mathcal V}},A}}$. Figure \[fig:DNNweights\](a) shows for DeSPerF-BALLROOM the ten rows of $\MW_1$ with the largest Euclidean norm. Many of them bear resemblance to the kinds of structures seen in the SPerF in Fig. \[fig:SPerFexample\]. We can determine the bandwidth of the input to the first hidden layer by looking at the Hann-windowed rows of $\MW_1$ in the frequency domain. Figure \[fig:DNNweights\](b) shows the sum of the magnitude responses of each row of $\MW_1$ for the system of Fig. \[fig:DeSPerF\_expt00\]. We see that the majority of energy of a SPerF transmitted into the hidden layers of its DNN is concentrated at frequencies below 10 Hz. The magnitude of the product of the $m$th row of $\MW_1$ and $\vx_j$ is proportional to the product of their Euclidean norms; and the bias of the $m$th neuron – the $m$th element of $\vb_1$ – pushes its output (\[eq:neuronoutput\]) to saturation. A large positive bias pushes the output toward $1$ and a large negative bias pushes it to $0$. Figure \[fig:DNNparameters\](a) shows the Euclidean norms of all rows of $\MW_1$ for the classifier of the system of Fig. \[fig:DeSPerF\_expt00\], sorted by descending norm. Figure \[fig:DNNparameters\](b) shows the bias of these neurons in the same order. We immediately see from this that the inputs to almost half of the neurons in the first hidden layer will have energies that are more than 20 dB below the neurons receiving the most energy, and that they also display very small biases. 
This suggests that about half of the neurons in the first hidden layer might be inconsequential to the system’s behaviour. In fact, when we neutralise the 250 neurons in the first hidden layer of DeSPerF-BALLROOM having the smallest norm weights (by setting to zero the corresponding columns in $\MW_2$), its FoM is identical to Fig. \[fig:DeSPerF\_expt00\]. A possible explanation for this is that the DNN has more parameters than are necessary to map its input to its target. Sensitivities and limitations {#sec:sensitivities} ----------------------------- From our analyses of the components of DeSPerF-based systems, we can infer the sensitivities and limitations of its feature extraction with respect to mapping ${{\mathcal{R}_{{\Omega}}}}$ to ${{\mathcal{U}_{\mathbb{F},A'}}}$, and its classification mapping ${{\mathcal{U}_{\mathbb{F},A'}}}$ to ${{\mathcal U}_{{{\mathcal V}},A}}$. All of these limitations naturally restrict the ${{\Omega}}$, ${{\mathcal{R}_{{\Omega}}}}$ and success criteria of a use case to which the system can be applied. First, the MFCC filterbank (Fig. \[fig:SPerFMFCC\]) means this mapping is independent of any information outside of the frequency band $[0.110, 4.43]$ kHz. This could exclude most of the energy of bass drum kicks, cymbal hits and crashes, shakers, and so on. Figure \[fig:samplespectra\] shows for segments of four recordings from [*BALLROOM*]{} that a large amount of energy can exist outside this band. If the information relevant for solving a problem of music content analysis is outside this band, then a DeSPerF-based system may not be successful. Second, since the segment modulation sonograms (\[eq:segmodsonogram\]) consist of only the first 13 MFCCs, their bandwidth is restricted to $[0, 0.093)$ cycles per semitone, with a cepstral analysis resolution of not less than $10.7$ semitones.[^5] Spectral structures smaller than about $11$ semitones will thus not be present in a segment modulation sonogram. 
If the information relevant for solving a problem of music content analysis is contained only in spectral structures smaller than about 11 semitones (e.g., harmonic relationships of partials), then a DeSPerF-based system may not be successful. Third, the computation of the mean distance between lagged modulation sonograms (\[eq:meandistancelaggedmodsonograms\]) destroys the quefrency information in the modulation sonograms. In other words, there exist numerous modulation sonograms (\[eq:segmodsonogram\]) that will produce the same mean distance sequence (\[eq:meandistancelaggedmodsonograms\]). This implies that SPerF (\[eq:SPerF\]) are to a large extent invariant to timbre and pitch, and thus DeSPerF-based systems should not be sensitive to timbre and pitch, as long as the “important information” remains in the frequency band $[0.110, 4.43]$ kHz mentioned above. This again restricts the kinds of problems of music content analysis for which a DeSPerF-based system could be successful. Fourth, since the mean distance between lagged modulation sonograms (\[eq:meandistancelaggedmodsonograms\]) is uniformly sampled at a rate of $1/{{T_\textrm{frhop}}}= 200$ Hz, the frequency of repetition that can be represented in a SPerF (\[eq:SPerF\]) is limited to the bandwidth $[0,100]$ Hz. Furthermore, all repetitions at higher frequencies will be aliased to that band. From our analysis of the front end of the DNN, we see from Fig. \[fig:DNNweights\](b) that DeSPerF-BALLROOM is most sensitive to modulations in SPerF below 10 Hz. In fact, the FoM of DeSPerF-BALLROOM in Fig. \[fig:DeSPerF\_expt00\] does not change when we filter all input SPerF with a zero-phase lowpass filter having a -3dB frequency of 10.3 Hz. This implies that DeSPerF-BALLROOM is not sensitive to SPerF modulations above 10 Hz, which entails periods in SPerF of 100 ms or more. Hence, DeSPerF-BALLROOM may have little sensitivity to periodicities in SPerF that are shorter than 100 ms. 
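The invariance check mentioned above (lowpass-filtering every input SPerF at about 10 Hz before classification) can be mimicked with an idealised zero-phase brick-wall filter; the actual filter design used in the experiment is not specified here, so this is only a stand-in, with the 200 Hz rate taken from $1/{{T_\textrm{frhop}}}$.

```python
import numpy as np

def lowpass_sperf(x, fs=200.0, fc=10.3):
    """Idealised zero-phase lowpass via FFT bin truncation, a
    brick-wall stand-in for the filter used in the experiment."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[freqs > fc] = 0.0                  # discard content above fc
    return np.fft.irfft(X, n=len(x))
```

Zeroing FFT bins applies a purely real gain in the frequency domain, so no phase distortion is introduced, matching the "zero-phase" property the experiment relies on.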
Finally, since each segment of a recording $r$ is of duration ${{T_\textrm{seg}}}= 10$ s for the system of Fig. \[fig:DeSPerF\_expt00\], then a SPerF can only contain events repeating within that duration. Since the largest lag considered is ${{N_\textrm{fr}}}{{T_\textrm{frhop}}}= 4$ s, this limits the duration of the periodic structures a SPerF can capture. For instance, if a periodic pattern of interest is of duration of one bar of music, then a SPerF may only describe it if it repeats at least twice within 4 s. For two consecutive repetitions, this implies that the tempo must be greater than 120 BPM for a 4/4 time signature, 90 BPM for 3/4, and 180 BPM for 6/8. If a repeated rhythm occurs over two bars, then a SPerF may only contain it if at least four bars occur within 4 s, or as long as the tempo is greater than 240 BPM for a 4/4 time signature, 180 BPM for 3/4, and 360 BPM for 6/8. Conclusion ---------- We have now dissected the system in Fig. \[fig:DeSPerF\_expt00\]. We know that the DeSPerF-based systems are sensitive to temporal events that repeat within a specific frequency band and particular time window. This limits what DeSPerF-BALLROOM can be using to produce the FoM in Fig. \[fig:DeSPerF\_expt00\]. For instance, because of its lack of spectral resolution, it cannot be using melodies or harmonies to recognise elements of ${{\mathcal U}_{{{\mathcal V}},A}}$. Because it marginalises the quefrency information it cannot be discriminating based on instrumentation. It seems like the only knowledge a DeSPerF-based system can be using must be temporal in nature within a 10-second window. Before we can go further, we must develop an understanding of how DeSPerF-BALLROOM was trained and tested, and thus what Fig. \[fig:DeSPerF\_expt00\] might mean. In the next section, we analyse the teaching and testing materials used to produce DeSPerF-BALLROOM, and its FoM in Fig. \[fig:DeSPerF\_expt00\]. 
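Several of the quantitative limits in this subsection can be checked with short arithmetic. The sketch below is our own reconstruction, not code from the system: the cepstral-resolution part assumes MFCC $k$ over a band of $W$ semitones corresponds to a quefrency of $k/(2W)$ cycles per semitone, and `min_bpm` is a hypothetical helper we introduce for illustration.

```python
import math

# Width of the analysed band [0.110, 4.43] kHz, measured in semitones
W = 12 * math.log2(4430 / 110)     # about 64 semitones

# Highest quefrency represented by MFCCs 0..12 (assumed model: k / (2W)),
# and the smallest resolvable spectral structure (one cycle of it)
max_cps = 12 / (2 * W)             # about 0.094 cycles per semitone
resolution = 1 / max_cps           # about 10.7 semitones

def min_bpm(beats_per_bar, repeats, window_s=4.0):
    """Minimum tempo (in beats per minute, counting the given beat unit)
    for `repeats` bars to fit inside a `window_s`-second lag window."""
    max_bar_s = window_s / repeats           # each bar must fit in this time
    return 60.0 * beats_per_bar / max_bar_s

# One-bar pattern heard twice in 4 s: 120 BPM (4/4), 90 (3/4), 180 (6/8)
print(min_bpm(4, 2), min_bpm(3, 2), min_bpm(6, 2))
# Two-bar pattern, so four bars in 4 s: 240 BPM (4/4), 180 (3/4), 360 (6/8)
print(min_bpm(4, 4), min_bpm(3, 4), min_bpm(6, 4))
```

Both computations agree with the figures quoted above to within rounding.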
The Materials of Teaching and Testing {#sec:BALLROOM} ===================================== What is in the benchmark dataset [*BALLROOM*]{}? What problem does it pose? What task must be performed to reproduce its ground truth? What is the goal or hope of training a music content analysis system with it? We now analyse the [*BALLROOM*]{} dataset used to train and test DeSPerF-BALLROOM, and how it has been used to teach and test other music content analysis systems. The contents and use of the [*BALLROOM*]{} dataset -------------------------------------------------- The dataset [*BALLROOM*]{}[^6] consists of 698 half-minute music audio recordings downloaded in Real Audio format around 2004 from an on-line resource about Standard and Latin ballroom dancing [@Dixon2004]. Each excerpt comes from the “beginning” of a music track, presumably ripped from a CD by an expert involved with the website. The labels of the dataset are referred to both as “style” and “genre,” and each excerpt is said to be “reliably labelled” in one of eight ways. Table \[tab:BALLROOMdataset\] shows the distribution of the number of excerpts over the labels of [*BALLROOM*]{} (we combine excerpts labeled “Viennese Waltz” and “Waltz” into one), as well as the 70/30 distribution of recordings we used for training and testing DeSPerF-BALLROOM in Fig. \[fig:DeSPerF\_expt00\]. \[tab:BALLROOMdataset\] \[tab:BALLROOMresults\] Thus far, [*BALLROOM*]{} has appeared in the evaluations of at least 24 conference papers, journal articles, and PhD dissertations [@Dixon2004; @Flexer2006; @Gouyon2004; @Gouyon2004b; @Gouyon2005; @Holzapfel2008b; @Holzapfel2009; @Lidy2005; @Lidy2006; @Lidy2007; @Lidy2008; @Lidy2010b; @Mayer2010c; @Peeters2005; @Peeters2011; @Pikrakis2013; @Pohle2009; @Schindler2012b; @Schluter2011; @Schnitzer2011; @Schnitzer2012; @Seyerlehner2010; @Seyerlehner2010b; @Seyerlehner2012; @Tsunoo2011].
Twenty of these works use it in the experimental design [*Classify*]{} [@Sturm2014d], which is the comparison of ground truth to the output of a music content analysis system. Table \[tab:BALLROOMresults\] shows the highest accuracies reported in the publications using [*BALLROOM*]{} this way. Four others [@Schluter2011; @Schnitzer2011; @Schnitzer2012; @Seyerlehner2012] use [*BALLROOM*]{} in the experimental design [*Retrieve*]{} [@Sturm2014d], which is the task of retrieving music signals from the training set given a query. The dataset was also used for the Rhythm Classification Train-test Task of ISMIR2004,[^7] and so sometimes appears as [*ISMIRrhythm*]{}. Some tasks posed by the [*BALLROOM*]{} dataset {#sec:BALLROOMproblems} ---------------------------------------------- One task posed of [*BALLROOM*]{} is to extract and learn “repetitive rhythmic patterns” from recorded music audio indicating the correct label. Motivating the work and the creation of the dataset is the hypothesis: “rhythmic patterns are not randomly distributed amongst musical genres, but rather they are indicative of a genre.” While “rhythm” is an extraordinarily difficult thing to define [@Gouyon2005], examples illuminate what is intended. For instance, one “rhythmic pattern” given as typical of Cha cha and Rumba is one bar of three crotchets followed by two quavers. Auditioning the Cha cha recordings reveals that this pattern does appear but that it can be quite difficult to hear through the instrumentation. In fact, this pattern is also apparent in many of the Tango recordings (notated in Fig. \[fig:patterns\_BALLROOM\_tango\]). We find that major differences between recordings of the two labels are instrumentation, the use of accents, and syncopated accompaniment.
It should be noted that much of the “rhythmic information” in excerpts of several labels of [*BALLROOM*]{} is contributed by instruments other than percussion, such as the piano and guitar in Cha cha, Rumba, Jive, Quickstep, and Tango; brass sections, woodwinds and electric guitar in Jive and Quickstep; and vocals and orchestra in Waltz. Figure \[fig:patterns\_BALLROOM\] shows examples of the rhythmic patterns appearing in [*BALLROOM*]{}. By “rhythmic pattern” we mean a combination of metrical structure, and relative timing and accents in a combination of voices. Many Cha cha recordings feature a two bar pattern with a strong cowbell on every beat, a guiro on one and three, and syncopated piano and/or brass with notes held over the bars (notated in Fig. \[fig:patterns\_BALLROOM\_chacha\]). On the other hand, Rumba recordings sound much slower and sparser than those of Cha cha, often featuring only guitar, clave, conga, shakers, and the occasional chime glissando (Fig. \[fig:patterns\_BALLROOM\_rumba\]). Rhythmic patterns heard in Jive and Quickstep recordings involve swung notes, notated squarely in Fig. \[fig:patterns\_BALLROOM\_jive\] and Fig. \[fig:patterns\_BALLROOM\_quickstep\]. We find no Waltz recordings to have duple or quadruple meter. Even though this dataset was explicitly created for the task of learning “repetitive rhythmic patterns,” it actually poses other tasks. In fact, a music content analysis system need not know one thing about rhythm to reproduce the ground truth in [*BALLROOM*]{}. One such task is the identification of instruments. For instance, bandoneon only appears in Tango recordings. Jive and Quickstep recordings often feature toms and brass, but the latter also has woodwinds. Rumba and Waltz recordings feature string orchestra, but the former also has chimes and conga. Cha cha recordings often have piano, along with guiro and cowbell. 
Finally, Samba recordings feature instruments that do not occur in any other recordings, such as pandeiro, repinique, whistles, and cuica. Hence, a system completely naive to rhythm could reproduce the ground truth of [*BALLROOM*]{} just by recognising instruments. This clearly solves a completely different problem from the one posed above. It is aligned more with another posed task: “to extract suitable features from a benchmark music collection and to classify the pieces of music into a given list of genres.” ![Distribution of the tempi of recordings (dots) in BALLROOM, assembled from published onset data. For each label: red solid line is median tempo; red dotted lines are half and double median tempo; upper and lower blue lines are official tempos for acceptable dance competition music (see Table \[tab:BALLROOMregulations\]); black dots are recordings in training dataset used to build DeSPerF-BALLROOM, and grey dots are recordings in the test dataset used to compute its FoM in Fig. \[fig:DeSPerF\_expt00\]. []{data-label="fig:BALLROOM_tempo"}](tempo_BALLROOM.eps){width="4in"} There exists yet another way to reproduce the ground truth of [*BALLROOM*]{}. Figure \[fig:BALLROOM\_tempo\] shows the distribution of tempi. We immediately see a strong correlation between tempo and label. This has been noted before. To illustrate the strength of this relationship, we construct a music content analysis system using simple nearest neighbour classification [@Hastie2009] with tempo alone. Figure \[fig:BALLROOM\_tempo\_NN\](a) shows the FoM of this system using the same training and testing partition of [*BALLROOM*]{} as in Fig. \[fig:DeSPerF\_expt00\]. Clearly, this system produces a significant amount of ground truth, but suffers from a confusion predictable from Fig. \[fig:BALLROOM\_tempo\] – which curiously does not appear in Fig. \[fig:DeSPerF\_expt00\].
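A tempo-only single nearest neighbour classifier of the kind just described takes only a few lines. In the sketch below the training tempi are illustrative stand-ins of our own, not the dataset’s annotated values:

```python
# Illustrative (tempo in BPM, label) pairs -- not BALLROOM's true tempi.
train = [(124, "Cha cha"), (176, "Jive"), (208, "Quickstep"),
         (100, "Rumba"), (104, "Samba"), (128, "Tango"),
         (87, "Waltz"), (178, "Viennese Waltz")]

def classify_by_tempo(tempo):
    """Single nearest neighbour on tempo alone."""
    return min(train, key=lambda pair: abs(pair[0] - tempo))[1]

print(classify_by_tempo(130))   # nearest illustrative tempo is Tango's 128
```

The real system is the same mechanism applied to the annotated tempi of the full training partition.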
If we modify annotated tempi by the following factors: Cha cha $\times 2$; Jive $\times 0.5$; Quickstep $\times 0.5$; Rumba $\times 2$; Samba $\times 0.5$; Tango $\times 1$; and Waltz $\times 2$ (keeping Viennese Waltz the same), then the new system produces the FoM in Figure \[fig:BALLROOM\_tempo\_NN\](b). Hence, “teaching” the system to “tap its foot” half as fast for some labels, and twice as fast for others, ends up reproducing a similar amount of ground truth to DeSPerF-BALLROOM in Fig. \[fig:DeSPerF\_expt00\]. While such a foot-tapping system can reproduce the labels of [*BALLROOM*]{}, the particular problem it is actually solving is not aligned with that of detecting “repetitive rhythmic patterns” [@Dixon2004; @Gouyon2004]. The system of Fig. \[fig:BALLROOM\_tempo\_NN\] is also not solving the genre classification problem, as long as “genre” is not so strongly characterised by tempo. Of course, there are official tempos set for music to be acceptable for dance competitions (see Fig. \[fig:BALLROOM\_tempo\] and Table \[tab:BALLROOMregulations\]), but arguably these rules are created to balance skill and competition difficulty, and are not derived from surveys of musical practice, and certainly are not prescriptions for the composition and performance of music in these styles. In fact, Fig. \[fig:BALLROOM\_tempo\] shows several [*BALLROOM*]{} recordings do not satisfy these criteria. \[tab:BALLROOMregulations\] Reproducing the ground truth of [*BALLROOM*]{} by performing any of the tasks above – discrimination by “rhythmic patterns,” instrumentation, and/or tempo – clearly involves using high level acoustic and musical characteristics. However, there are yet other tasks that a system might be performing to reproduce the ground truth of [*BALLROOM*]{}, and ones with no clear relationship to music listening.
For instance, if we use single nearest neighbour classification with features composed of only the variance and mean of a SPerF, and the number of times it passes through $0.5$, then with majority voting this system obtains a classification accuracy of over $0.70$ – far above that expected by random classification. It is not clear what task this system is performing, and how it relates to high-level acoustic and musical characteristics. Hence, this fourth approach to reproducing the ground truth of [*BALLROOM*]{} solves an entirely different problem from the previous three: “to classify the music documents into a predetermined list of classes” [@Lidy2005], i.e., [*by any means possible.*]{} Conclusion ---------- Though the explicit and intended task of [*BALLROOM*]{} is to recognise and discriminate between rhythmic patterns, we see that there actually exist many other tasks a system could be performing in reproducing the ground truth. The common experimental approach in music content analysis research, i.e., that used to produce the FoM in Fig. \[fig:DeSPerF\_expt00\], has no capacity to distinguish between any of them. Just as in the case for the demonstrations of Clever Hans, were a music content analysis system actually recognising characteristic rhythms of some of the labels of [*BALLROOM*]{}, its FoM might pale in comparison to that of a system with no idea at all about rhythm (Fig. \[fig:BALLROOM\_tempo\_NN\]). Figure \[fig:DeSPerF\_expt00\] gives no evidence at all for claims that DeSPerF-BALLROOM is identifying waltz by recognising its characteristic rhythmic patterns, tempo, instrumentation, and/or any other factor. From our analysis of DeSPerF-based systems, however, we can rule out instrument recognition since such knowledge is outside its purview. Nonetheless, what exact task DeSPerF-BALLROOM is performing, the [*cause*]{} of Fig. \[fig:DeSPerF\_expt00\], remains to be seen. The experiments in the next section shed light on this.
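The three summary statistics mentioned above are trivial to compute. A sketch, assuming a SPerF arrives as a plain sequence of floats:

```python
def sperf_features(x):
    """Mean, variance, and number of times the sequence crosses 0.5."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    crossings = sum(1 for a, b in zip(x, x[1:])
                    if (a - 0.5) * (b - 0.5) < 0)
    return mean, var, crossings

# Four crossings of 0.5: 0.2->0.7, 0.7->0.4, 0.4->0.6, and 0.6->0.3
m, v, c = sperf_features([0.2, 0.7, 0.4, 0.6, 0.6, 0.3])
```

How three such coarse statistics suffice for over 70% accuracy is exactly what makes this fourth route to the ground truth so opaque.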
Seeking the “Horse” Inside the Music Content Analysis System {#sec:experiments} ============================================================ It is obvious that DeSPerF-BALLROOM knows [*something*]{} about the recordings in [*BALLROOM*]{}; otherwise its FoM in Fig. \[fig:DeSPerF\_expt00\] would not be so significantly different from chance. As discussed in the previous section, this might be due to the system performing any of a number of tasks, whether by identifying rhythms, detecting tempo, or using the distributions of statistics with completely obscured relationships to music content. In this section, we describe several experiments designed to explain Fig. \[fig:DeSPerF\_expt00\]. Experiment 1: The nature of the cues ------------------------------------ We first seek the nature of the cues used by DeSPerF-BALLROOM to reproduce the ground truth. We watch how its behaviour changes when we modify the input along two orthogonal dimensions: frequency and time. We transform recordings of the test dataset by pitch-preserving time stretching, and time-preserving pitch shifting.[^8] We seek the minimum scalings to make the system obtain a perfect classification accuracy, or one consistent with random classification (14.3%). To “inflate” the FoM, we take each test recording for which DeSPerF-BALLROOM is incorrect and transform it using a scale that increments by $0.01$ until the system is no longer incorrect. To “deflate” the FoM, we take each test recording for which DeSPerF-BALLROOM is correct and transform it using a scale that increments by $0.01$ until it is no longer correct. A pitch-preserving time stretching of scale $1.05$ increases the recording duration by $5\%$, or decreases the tempo of the music in the recording (if it has a tempo) by $5\%$. A time-preserving pitch shifting of scale $1.05$ increases all pitches in a recording by $5\%$. Figure \[fig:BALLROOM\_expt01\] shows the results. 
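The inflate/deflate search just described can be expressed as a simple incremental loop. In the sketch below the classifier and the transform are stand-ins of ours (the real experiment uses DeSPerF-BALLROOM and pitch-preserving time stretching of the audio):

```python
def minimal_flipping_scale(recording, truth, classify, transform,
                           want_correct, step=0.01):
    """Grow the transform scale in 0.01 increments (in both directions)
    until the classification becomes correct (inflation, want_correct=True)
    or incorrect (deflation, want_correct=False)."""
    for direction in (+1, -1):            # try stretching, then compressing
        s = 1.0
        while 0.5 < s < 1.5:
            s += direction * step
            correct = classify(transform(recording, s)) == truth
            if correct == want_correct:
                return s
    return None                           # no scale within range flips it

# Stand-ins: a "recording" is just its tempo in BPM; stretching by a scale
# divides the tempo by it, and the toy classifier says "Tango" only for
# tempi between 125 and 135 BPM.
toy_classify = lambda bpm: "Tango" if 125 <= bpm <= 135 else "Other"
toy_stretch = lambda bpm, s: bpm / s
scale = minimal_flipping_scale(140, "Tango", toy_classify, toy_stretch,
                               want_correct=True)
```

With these stand-ins the search returns the first scale (about $1.04$) that slows the 140 BPM “recording” into the toy classifier’s Tango range.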
As expected from our analysis in Section \[sec:sensitivities\], time-preserving pitch shifting of the test recordings has little effect on the FoM, even up to changes of $\pm 16\%$. In stark contrast is the effect of pitch-preserving time stretching, where the F-score of DeSPerF-BALLROOM in each label quickly decays for scale changes of no more than $\pm 5\%$. That scale is equivalent to lengthening or shortening a 30 s recording by only 1.5 s. Figure \[fig:BALLROOM\_expt01\_tempo\] shows the new tempi of the test recordings after these procedures, i.e., when the normalised classification accuracy is either perfect or no better than random. We see in most cases that the tempo changes are very small. The tempi of the 16 test recordings initially classified incorrectly move toward the median tempo of each class. Figure \[fig:BALLROOM\_expt01\_tempo\](b) shows that the opposite occurs in deflation for the 190 test recordings initially classified correctly. The effects of these transformations clearly show that the nature of the cues DeSPerF-BALLROOM uses to reproduce ground truth is temporal, and that its performance is completely disrupted by minor changes in music tempo. The mean tempo change of the 12 [*BALLROOM*]{} Cha cha excerpts in Fig. \[fig:BALLROOM\_expt01\_tempo\](b) is an increase of $3.7$ BPM, which situates all of them on the cusp of the Cha cha cha competition dance tempo regulation (Table \[tab:BALLROOMregulations\]). Most of these transformed recordings are then classified by the system as Tango. In light of this, it is problematic to claim, e.g., that DeSPerF-BALLROOM has such a high precision in identifying Cha cha (Fig. \[fig:DeSPerF\_expt00\]) because its internal model of Cha cha embodies “typical rhythmic patterns” of cha cha. Something else is at play.
Experiment 2: System dependence on the rate of onsets ----------------------------------------------------- The results of the previous experiment suggest that if the internal models of DeSPerF-BALLROOM have anything to do with rhythmic patterns, they are such that minor changes to tempo produce major confusion. We cannot say that the specific temporal cue used by DeSPerF-BALLROOM is tempo – however that is defined – alone or in combination with other characteristics, such as accent and meter. Indeed, comparing Fig. \[fig:DeSPerF\_expt00\] with Fig. \[fig:BALLROOM\_tempo\_NN\] motivates the hypothesis that DeSPerF-BALLROOM is using tempo, but reduces confusions by halving or doubling tempo based on something else. In this experiment, we investigate the inclinations of DeSPerF-BALLROOM to classify synthetic recordings exhibiting unambiguous onset rates. We synthesise each recording in the following manner. We generate one realisation of a white noise burst with duration 68 ms, windowed by half of a Hann window (attack and smooth decay). The burst has a bandwidth covering the bandwidth of the filterbank in DeSPerF-BALLROOM (Section \[sec:features\]). We synthesise a recording by repeating the same burst (no change in its amplitude) at a regular periodic interval (reciprocal of onset rate), and finally add white Gaussian noise with a power of 60 dB SNR (to avoid producing features that are not numbers). We create 200 recordings in total, with onset rates logarithmically spaced from 50 to 260 onsets per minute. Finally, we record the output of the system for each recording, as well as the mean DNN output posterior (\[eq:DNNoutput\]) over all segments. Figure \[fig:DeSPerF\_BALLROOM\_expt02\] shows the results of this experiment. Each black circle in Fig. 
\[fig:DeSPerF\_BALLROOM\_expt02\](a) represents a recording with some onset rate (y-axis), classified by the system in some way (grouped in classes and ordered by increasing onset rate) with a mean posterior $p$ (size of circle). Figure \[fig:DeSPerF\_BALLROOM\_expt02\](b) shows estimates of the conditional distributions of onset rate given the classification, obtained by using Parzen windowing with the posteriors as weights. We also show the estimate of the conditional distribution of tempo given the [*BALLROOM*]{} label from the training data, and include a halving and doubling of tempo (gray). We can clearly see ranges of onset rates to which the system responds confidently in its mapping. Comparing the two conditional distributions, we see some that align very well. All octaves of the tempo of Jive, Quickstep and Tango overlap the ranges of onsets that are confidently so classified by DeSPerF-BALLROOM. For Samba, however, only the distribution of half the tempo overlaps the Samba-classified synthetic recordings at low onset rates; for Cha cha and Rumba, it is the distributions of double the tempo that overlap the Cha cha- or Rumba-classified synthetic recordings at high onset rates. These are some of the tempo multiples used to produce the FoM in Fig. \[fig:BALLROOM\_tempo\_NN\](b) by single nearest neighbour classification. These results point to the hypothesis that DeSPerF-BALLROOM is using a cue to “hear” an input recording at a “tempo” that best separates it from the other labels. Of interest is whether that cue has to do with meter and/or rhythm, and how the system’s internal models reflect high level attributes of the styles in [*BALLROOM*]{}. We explore these in the next three experiments.
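The click-train synthesis of Experiment 2 can be sketched with numpy. The burst duration, noise level, and repetition scheme follow the text; the sample rate and seeding are assumptions of ours:

```python
import numpy as np

def click_train(onset_rate_per_min, dur_s=10.0, fs=22050, burst_ms=68, seed=0):
    """One white-noise burst, shaped by the decaying half of a Hann window
    (sharp attack, smooth decay), repeated at a fixed onset rate, plus
    white Gaussian noise 60 dB below the signal."""
    rng = np.random.default_rng(seed)
    n_burst = int(fs * burst_ms / 1000)
    burst = rng.standard_normal(n_burst) * np.hanning(2 * n_burst)[n_burst:]
    x = np.zeros(int(fs * dur_s))
    period = int(fs * 60.0 / onset_rate_per_min)   # samples between onsets
    for start in range(0, len(x) - n_burst, period):
        x[start:start + n_burst] += burst          # identical burst each time
    x += 10 ** (-60 / 20) * x.std() * rng.standard_normal(len(x))
    return x

x = click_train(120)    # 120 onsets per minute: one onset every half second
```

Sweeping `onset_rate_per_min` logarithmically from 50 to 260 reproduces the stimulus set of the experiment.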
Experiment 3: System output dependence on the rate of onsets and periodic stresses ---------------------------------------------------------------------------------- In this experiment, we watch how the system’s behaviour changes when the input exhibits repeating structures that have a period encompassing several onsets. We perform this experiment in the same manner as the previous one. We synthesise each recording in the same way, but stress every second, third or fourth repetition of the white noise burst. We create a stress in two different ways. In the first, each stressed onset has an amplitude four times that of an unstressed onset. In the second, all unstressed onsets are produced by a highpass filtering of the white noise burst (passband frequency 1 kHz). We create 200 recordings in total for each of the stress periods, and each kind of stress, with onset rates logarithmically spaced from 50 to 260 onsets per minute. Finally, we record the output of the system for each recording, as well as the mean DNN output posterior (\[eq:DNNoutput\]) for all segments. Figure \[fig:DeSPerF\_BALLROOM\_expt03\] shows results quite similar to the previous experiment. The results of both stress kinds are nearly the same, so we show only one of them. The dashed horizontal lines in Fig. \[fig:DeSPerF\_BALLROOM\_expt03\](a) show some classifications of recordings with the same onset rate are different across the stress periods we test. Figure \[fig:DeSPerF\_BALLROOM\_expt03\](b) shows the appearance of density in the conditional probability distribution of the onset rate in Waltz around the tempo distribution observed in the training dataset of label Waltz (80-90 BPM), which is not apparent in Fig. \[fig:DeSPerF\_BALLROOM\_expt02\](b). Could these changes be due to the system preferring Waltz for recordings exhibiting a stress period of 3? Figure \[fig:DeSPerF\_BALLROOM\_expt03\_dep\] shows this to not be the case.
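The amplitude-stress variant simply scales every second, third, or fourth onset; expressed as a per-onset gain sequence (a sketch of ours, with the four-times amplitude applied multiplicatively):

```python
import numpy as np

def stress_gains(n_onsets, stress_period, stressed_gain=4.0):
    """Per-onset amplitude when every `stress_period`-th onset is
    stressed at four times the unstressed amplitude."""
    return np.where(np.arange(n_onsets) % stress_period == 0,
                    stressed_gain, 1.0)

print(stress_gains(8, 3))    # stresses fall on onsets 0, 3, and 6
```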
We see no clear indication that DeSPerF-BALLROOM favours particular classes for each stress period independent of the onset rate for the different kinds of stresses. For instance, we see no strong inclination of DeSPerF-BALLROOM to classify recordings with a stress period of 3 as Waltz. Most classifications are the same across the stress periods. Experiment 4: Manipulation of the tempo --------------------------------------- The previous experiments clearly show the inclination of DeSPerF-BALLROOM to classify in confident ways recordings exhibiting specific onset rates independent of repeated structures of longer periods. This leads to the prediction that any input recording can be time-stretched to elicit any desired response from the system, e.g., we can make the system choose “Tango” by time stretching any input recording to have a tempo of 130 BPM. To test this prediction, we first observe how the system output changes when we apply frequency-preserving time stretching to the entire [*BALLROOM*]{} test dataset with scales from $0.5$ to $1.5$, incrementing by steps of size $0.1$. For a recording with a tempo of 120 BPM, a scaling of $1\pm0.1$ amounts to a change of roughly $\pm 12$ BPM. We then search for tempi where DeSPerF-BALLROOM classifies all test recordings the same way. ![The percentage of the [*BALLROOM*]{} test dataset classified by the system in a number of different ways (numbered) as a function of the maximum scale of frequency-preserving time stretching. For example, with scalings in $1\pm0.08$, half of all test recordings are classified 3 different ways. []{data-label="fig:DeSPerF_BALLROOM_expt07_Nways"}](Nways_DeSPerF_BALLROOM_classified.eps){height="1.9in"} Figure \[fig:DeSPerF\_BALLROOM\_expt07\_Nways\] shows the percentage of the test dataset classified in a number of different ways as a function of the amount of frequency-preserving time stretching.
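The relation between a time-stretch scale and the resulting tempo is simple division; the quick check below is our own arithmetic on figures quoted in the text:

```python
def stretched_tempo(bpm, scale):
    # Stretching the duration by `scale` divides the tempo by that factor.
    return bpm / scale

# 120 BPM at scales of 1 +/- 0.1: roughly a +/- 12 BPM change
print(round(stretched_tempo(120, 1.1), 1), round(stretched_tempo(120, 0.9), 1))
# A median tempo of 100 BPM at scales of 1 +/- 0.15: about 87 to 118 BPM
print(round(stretched_tempo(100, 1.15)), round(stretched_tempo(100, 0.85)))
```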
With scalings between $1\pm0.1$, DeSPerF-BALLROOM classifies about 80% of the test dataset with 3-6 different classes. With scalings between $1\pm0.15$, it classifies 90% of the test recordings into 3-7 different classes. Figure \[fig:DeSPerF\_expt07\] shows the confusion table of DeSPerF-BALLROOM tested with all $206\times 32$ time-stretched test recordings. We see most Waltz recordings (66%) are classified as Waltz; however, the majority of recordings of all other labels are classified other ways. In the case of the Rumba recordings, DeSPerF-BALLROOM classifies over 20% of them as Waltz when time stretched by scales within $1\pm0.15$. This entails reducing their median tempo from 100 BPM (Fig. \[fig:BALLROOM\_tempo\]) to 87, and increasing it up to 117 BPM. ![As in Fig. \[fig:DeSPerF\_expt00\], but for all $206$ test recordings time-stretched with 32 scales in $[0.85, 1.15]$. For instance, about 47% of all Cha cha recordings time stretched by 32 scales in $[0.85, 1.15]$ are classified as Cha cha, but about 6.5% of them are classified as Waltz.[]{data-label="fig:DeSPerF_expt07"}](exp7conf_DeSPerF_BALLROOM_015.eps){width="2.7in"} We do not find tempi at which the system outputs the same specific class for [*all*]{} test recordings. However, we do see the following outcomes, in order of increasing tempo: 1. DeSPerF-BALLROOM chooses Rumba for all Cha cha, Rumba, and Tango recordings time stretched to have a tempo in the range $[95, 96.5]$ BPM; 2. DeSPerF-BALLROOM chooses Tango for all Cha cha, Jive and Tango recordings time stretched to have a tempo in the range $[129, 130.5]$ BPM; 3. DeSPerF-BALLROOM chooses Waltz for all Cha cha and Rumba recordings time stretched to have a tempo in the range $[139.7, 143.7]$ BPM; 4. DeSPerF-BALLROOM chooses Samba for all Cha cha and Jive recordings time stretched to have a tempo in the range $[144.5,147.5]$ BPM; 5.
DeSPerF-BALLROOM chooses Waltz for all Cha cha and Tango recordings time stretched to have a tempo in the range $[155.75, 157]$ BPM; 6. DeSPerF-BALLROOM chooses Cha cha for all Jive and Quickstep recordings time stretched to have a tempo in the range $[229, 232]$ BPM. Clear from this is that all Cha cha test recordings can be classified by DeSPerF-BALLROOM as Rumba, Samba, Tango or Waltz simply by changing their tempo to be in specific ranges. This is strong evidence against the claim that the very high precision of DeSPerF-BALLROOM in Cha cha (Fig. \[fig:DeSPerF\_expt00\]) is caused by its ability to recognise rhythmic patterns characteristic of Cha cha. Experiment 5: Hiring the system to compose ------------------------------------------ The previous experiments have shown the strong reliance of DeSPerF-BALLROOM upon cues of a temporal nature, its inclinations toward choosing particular classes for recordings exhibiting different onset rates (one basic form of tempo), the seeming class-irrelevance of larger scale stress periods (one basic form of meter), and how it can be made to choose four other classes for any Cha cha test recording simply by changing only its tempo. It is becoming more apparent that, though its FoM in Fig. \[fig:DeSPerF\_expt00\] is excellent, we do not expect DeSPerF-BALLROOM to be of any use for identifying whether the music in any recording has a particular rhythmic pattern that exists in [*BALLROOM*]{} – unless one defines “rhythmic pattern” in a very limited way, or claims the labels of [*BALLROOM*]{} are not what they seem, e.g., “Samba” actually means “any music having a tempo of 100-104 BPM.” We now consider whether DeSPerF-BALLROOM is able to help compose rhythmic patterns characteristic of the labels in [*BALLROOM*]{}. We address this in the following way. We randomly produce a large number of rhythmic patterns, and synthesise recordings from them using real audio samples of instruments typical to recordings in [*BALLROOM*]{}.
More specifically, for each of four voices, we generate a low-level beat structure by sampling a Bernoulli random variable four times for each beat in each measure (semiquaver resolution). The parameter of the Bernoulli random variable for an onset is $p = P[1]=0.25$, where a $1$ is an onset. Each onset is either stressed or unstressed with equal probability. We select a tempo sampled from a uniform distribution over a specific range, then synthesise repetitions of the two measures in each voice to make a recording of 15 s. Finally, we select as most class-representative those recordings for which the classification of DeSPerF-BALLROOM is the most confident (\[eq:DNNoutput\]), and inspect how the results exemplify rhythms in [*BALLROOM*]{}. This is of course a brute force approach. We could use more sophisticated approaches to generate compositions, such as Markov chains, e.g., [@Pachet2003; @Thomas2013a]; but the aim of this experiment is not to produce interesting music, but to see whether the models of DeSPerF-BALLROOM can confidently detect rhythmic patterns characteristic to [*BALLROOM*]{}. To evaluate the internal model of the system for Jive, we perform the above with audio samples of instruments typical to Jive: kick, snare, tom, and hat. Furthermore, we restrict the meter to be quadruple, make sure a stressed kick occurs on the first beat of each measure, and set the tempo range to $[168,176]$ BPM. These are conditions most advantageous to the system, considering what it has learned about Jive in [*BALLROOM*]{}. Of 6020 synthetic recordings produced this way, DeSPerF-BALLROOM classifies 447 with maximum confidence. Of these, 128 are classified as Jive, 122 are classified as Waltz, 79 as Tango, and the remainder in the four other classes. Figure \[fig:patterns\_synthetic\] shows four of them selected at random. 
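The random pattern generation can be sketched as follows (voice count, semiquaver resolution, onset probability, and equiprobable stress are as described; rendering the patterns to audio with sampled instruments is omitted):

```python
import random

def random_pattern(n_voices=4, beats_per_bar=4, bars=2, p_onset=0.25, seed=1):
    """For each voice, one Bernoulli draw per semiquaver (4 per beat):
    0 = rest, 1 = unstressed onset, 2 = stressed onset (equiprobable)."""
    rng = random.Random(seed)
    steps = 4 * beats_per_bar * bars
    return [[(rng.random() < p_onset) * rng.choice([1, 2])
             for _ in range(steps)]
            for _ in range(n_voices)]

pattern = random_pattern()    # 4 voices x 32 semiquaver steps (two bars)
```

Each pattern is then repeated at a tempo drawn uniformly from the chosen range to fill a 15 s recording, and the most confidently classified recordings are kept.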
Even with these favourable settings, it is difficult to hear in any of the recordings similarity to the rhythmic patterns of which they are supposedly representative. We find similar outcomes for the other labels of [*BALLROOM*]{}. In general, we find it incredibly difficult to coax anything from DeSPerF-BALLROOM that resembles the rhythmic patterns in [*BALLROOM*]{}. Discussion {#sec:discussion} ========== To explain Fig. \[fig:DeSPerF\_expt00\], to seek the cause of the behaviour of DeSPerF-BALLROOM, we have dissected the system, analysed its training and testing dataset, and conducted several experiments. We see from the first experiment that the performance of DeSPerF-BALLROOM relies critically on cues of a temporal nature. The results of the second experiment reveal the inclinations of the system to confidently label in particular ways recordings that all exhibit, arguably, the same and most simple rhythmic pattern but with different onset rates. It also suggests that DeSPerF-BALLROOM is somehow adjusting its perception of tempo, or of something highly correlated with tempo, for recordings of some labels in [*BALLROOM*]{}. The results of the third experiment show how little the system’s behaviour changes when we introduce longer-period repetitions in the recordings – a basic form of meter. The independent variable of onset rate appears to trump the influence of the stress pattern. The fourth experiment shows how the system selects many classes for music exhibiting the same repetitive rhythmic patterns, just with different tempi. We also find some narrow tempo ranges in which the system classifies in the same way all test recordings of one label. Finally, the last experiment shows that the rhythmic patterns the system classifies most confidently do not clearly reflect those heard in [*BALLROOM*]{}. All of this points to the conclusion that Fig. \[fig:DeSPerF\_expt00\] is not caused by, and does not reflect, an intelligence about rhythmic patterns.
The task DeSPerF-BALLROOM is performing is not the identification of rhythmic patterns heard in music recordings. Instead, Fig. \[fig:DeSPerF\_expt00\] appears to be caused by the exploitation of some cue highly related to the confounding of tempo with label in [*BALLROOM*]{}, which the system has through no fault of its own learned from its teaching materials. In summary, DeSPerF-BALLROOM is identifying rhythmic patterns as well as Clever Hans was solving arithmetic. One can of course say Table \[tab:BALLROOMregulations\] is proof that tempo is extremely relevant for ballroom dance music classification. Supported by such formal rules, as well as the increased reproduction of ground truth observed in [*BALLROOM*]{} when tempo is used as a feature, it has been written that “tempo is one of the most important features in determining dance genre” [@Dixon2003; @Gouyon2004]. Hence, one is tempted to claim that though the system uses some cue highly related to tempo, it makes little difference. There are four problems with this claim. First, one can argue that tempo and rhythm are intimately connected, but in practice they seem to be treated separately. For instance, some proposed rhythmic pattern features are deliberately tempo invariant. Work on measuring rhythmic similarity uses dynamic time warping to compare rhythms independent of tempo (further refined in [@Holzapfel2009]). Second, Table \[tab:BALLROOMregulations\] describes eligibility for music [*to be allowed in a competition of particular dance styles*]{}, and not for music or its rhythmic patterns to be given a stylistic label. Indeed, Fig. \[fig:BALLROOM\_tempo\] shows several recordings in [*BALLROOM*]{} break these criteria. Third, this claim moves the goal line after the fact. Section \[sec:BALLROOM\] shows that though [*BALLROOM*]{} poses many different tasks, the task originally intended is to extract and learn “repetitive rhythmic patterns” from recorded music audio, and not to classify ballroom dance music.
Finally, the claim that tempo is extremely relevant for ballroom dance music classification works against the aims of developing music content analysis systems. If the information or composition needs of a user involve rhythmic patterns characteristic of ballroom dance music styles, then DeSPerF-BALLROOM will contribute little of value despite its impressive and human-like FoM in Fig. \[fig:DeSPerF\_expt00\]. The hope is that DeSPerF-BALLROOM has learned to model rhythmic patterns. The reality is that it is not recognising rhythmic patterns. Automatically constructing a working model, or theory, that explains a collection of real-world music examples has been called “a great intellectual challenge” with major repercussions [@Dubnov2003a]. As observed by Eigenfeldt et al. [-@Eigenfeldt2013d; -@Eigenfeldt2013e; -@Eigenfeldt2013b], applying a machine learning algorithm to learn relationships among and rules of the music in a dataset (corpus) is in the most abstract sense [*automated meta-creation*]{}: a machine learns the “rules from which to generate new art” [@Eigenfeldt2014b]. This same sentiment is echoed in other domains, such as computer vision [@Dosovitskiy2014; @Nguyen2014a], written language [@Shannon1998a; @Ghedini2015a], and the recent “zero resource speech challenge,”[^9] in which a machine listening system must learn basic elements of spoken natural language, e.g., phonemes and words. In fact, the automatic modelling of music style is a pursuit far older and more successful in the symbolic domain than in the domain of audio signal processing [@Hiller1959a; @Cope1991a; @Roads1996; @Dubnov2003a; @Pachet2003; @Pachet2005; @Collins2010b; @Argamon2010a; @Dubnov2014a; @Eigenfeldt2012b; @Eigenfeldt2013b; @Eigenfeldt2013d]. One reason for the success of music style emulation in the symbolic domain is that notated music is automatically on a plane more meaningful than samples of an audio signal, or features derived from such basic representations. 
It is closer to “the musical surface” [@Dubnov2003a; @Dubnov2014a]. In his work on the algorithmic emulation of electronic dance music, highlights some severe impediments arising from working with music audio recordings: reliability, interpretability, and usability. They found that the technologies offered so far by content-based music information retrieval do not yet provide suitably rich and meaningful representations from which a machine can learn about music. thus bypasses these problems by sacrificing scalability, and approaching the automated style modelling of electronic dance music in the symbolic domain by first transcribing by hand a corpus of dance music [@Eigenfeldt2013b; @Eigenfeldt2013d]. Another reason why the pursuit of style detection, understanding, and emulation in the symbolic domain has seen substantial success whereas that in the audio domain has not is the relevance of evaluation practices in each domain. A relevant evaluation of success toward the pursuit of music understanding is how well a system can create “new art” that reflects its training [@Eigenfeldt2014b]. As with the “continuator” [@Pachet2003] – where a computer agent “listens” to the performance of a musician, and then continues where the musician leaves off – the one being emulated becomes the judge. This is also the approach used by in their music style recognition system, which sidesteps the thorny issue of having to define what is being emulated or recognised. Unfortunately, much research in developing music content analysis systems has approached the evaluation of such technologies in ways that, while convenient, widely accepted, and precise, are not relevant. In essence, the proof of the pudding is in the eating, not in the fact that its ingredients were precisely measured. 
Among the nearly 500 publications about the automatic recognition of music genre or style [@Sturm2014d], only a few works evaluate the internal models learned by a system by looking at the music it composes. construct a system that attempts to learn language models from notated music melodies in a variety of styles (Gregorian, Baroque, Ragtime). They implement these models as finite state automata, and then use them to generate exemplary melodies in each style. As in Fig. \[fig:patterns\_synthetic\], provide examples of the produced output, and reflect on the quality of the results (which they expand upon in a journal article [@Cruz2008]). In the audio domain, employs a brute force approach to exploring the sanity of the learned models of two different state-of-the-art music content analysis systems producing high FoM in a benchmark music genre dataset. He generates random recordings from sample loops, has each system classify them, and keeps only those made with high confidence. From a listening experiment, he finds that people cannot identify the genres of those representative excerpts.[^10] In a completely different domain, similar approaches have recently been used to test the sanity of the internal models of high-performing image content recognition systems [@Szegedy2014; @Dosovitskiy2014; @Nguyen2014a]. The results of our analysis and experiments with DeSPerF-BALLROOM clearly do not support rejecting the hypothesis that this system is a “horse” with respect to identifying rhythmic patterns; but what about the DeSPerF-based systems that reproduced the most ground truth in the 2013 MIREX edition of the “Audio Latin Music Genre classification task” (ALGC)? Can we now conclude that their winning performance was not caused by “musical intelligence,” but by the exploitation of some tempo-like cue? In the case of the [*LMD*]{} dataset used in ALGC, the task appears to be “musical genre classification” [@Silla2008b]. 
reference to define “genre:” “a kind of music, as it is acknowledged by a community for any reason or purpose or criteria.” In particular to [*LMD*]{}, the community acknowledging these “kinds” of music was represented by two “professional teachers with over ten years of experience in teaching ballroom and Brazilian cultural dances” [@Silla2008b]. These professionals selected commercial recordings of music “that they judged representative of a specific genre, according to how that musical recording is danced.” The appendix to [@Silla2008b] gives characteristics of the music genres in [*LMD*]{}, many of which should be entirely outside the purview of any audio-based system, e.g., aspects of culture, topic, geography, and dance moves. We cannot say what the cue in [*LMD*]{} is – and tempo currently does not appear to be a confound [@Esparza2014] – but the default position in light of the poor evidence contributed by the amount of ground truth reproduced [*must be*]{} that the system is not yet demonstrated to possess the “intelligence” relevant for a specific task. Valid experiments are needed to claim otherwise [@Urbano2013]. The task of creating Fig. \[fig:patterns\_BALLROOM\] was laborious. Identifying these rhythmic patterns relies on experience in listening to mixtures of voices and separating instruments, listening comparatively to collections of music recordings, memory, expectation, musical practice, physicality, and so on. Constructing an artificial system that can automatically do something like this for an arbitrarily large collection of music audio recordings will surely produce major advances in machine listening and creativity [@Dubnov2003a]. In proportion, evidence for such abilities must be just as outstanding – much more so than achieving 100% on the rather tepid multiple choice exam. 
It is of course the hope that DeSPerF-BALLROOM has learned from a collection of music recordings [*general*]{} models of the styles tersely represented by the labels; and indeed, “One of machine learning’s main purposes is to create the capability to sensibly generalize” [@Dubnov2003a]. The results in Fig. \[fig:DeSPerF\_expt00\] just do not provide valid evidence for such a conclusion; they do not even provide evidence that such capabilities are within reach. Similarly, we are left to question all results in Table \[tab:BALLROOMresults\]: which of these are “horses” like DeSPerF-BALLROOM, and which are solutions for identifying rhythmic patterns? What problem is each actually solving, and how is it related to music? Which can be useful for connecting users with music and information about music? Which can facilitate creative pursuits? Returning to the formalism presented in Section \[sec:problemofMCA\], which [*use cases*]{} can each system actually serve? One might say that any system using musically interpretable features is likely a solution. For instance, the features employed by are essentially built from bar-synchronised decimated amplitude envelopes, and are interpretable with respect to the rhythmic characteristics of the styles in [*BALLROOM*]{}. However, as seen at the end of Section \[sec:features\], SPerF are musically interpretable as well. One must look under the hood, and design, implement and analyse experiments that have the validity to test the objective. Ascribing too much importance to the measurement and comparison of the amounts of ground truth reproduced – a practice that appears in a vast majority of publications in music genre recognition [@Sturm2012e; @Sturm2014d] – is an impediment to progress. Consider a system trained and tested in [*BALLROOM*]{} that has actually learned to recognise rhythmic patterns characteristic of waltz, but has trouble with any rhythmic patterns not in triple meter. 
Auditioning [*BALLROOM*]{} demonstrates that all observations not labeled Waltz have a duple or quadruple meter. If such a system correctly classifies all Waltz test recordings based on rhythmic patterns, but chooses randomly for all others, we expect its normalised accuracy to be about $28.5$%. This is double that expected of a random selection, but far below the accuracy seen in Fig. \[fig:DeSPerF\_expt00\]. It is thus not difficult to believe the low-performing system would be tossed for DeSPerF-BALLROOM, or not even let pass through peer review, even though it is actually the case that the former system is addressing the task of rhythmic pattern recognition, while the latter is just a “horse.” Such a warning has been given before: “an improved general music similarity algorithm might even yield lower accuracies" [@Pohle2009]. [*No system should be left behind because of invalid experiments.*]{} Many interesting questions arise from our work. What will happen when SPerF are made tempo-invariant? What will happen if the tempo confounding in [*BALLROOM*]{} is removed? One can imagine augmenting the training dataset by performing many different pitch-preserving time stretching transformations; or by making all recordings have the same tempo. Will the resulting system then learn to identify repetitive rhythmic patterns? Or will it only appear so by use of another cue? Another question is what the DNN contributes. In particular, DNN have been claimed to be able to learn to “listen” to music in a hierarchical fashion [@Hamel2010; @Humphrey2013; @Deng2014]. If a DNN-based system is actually addressing the task of identifying rhythmic patterns, how does this hierarchical listening manifest? Is it over beats, figures, and bars? This also brings up the question, “why learn at all?” Should we expect the system to acquire what is readily available from experts? Why not use expert knowledge, or at least leverage automated learning with an expert-based system? 
Finally, the concept of meta-creation motivates new evaluation methods [@Thomas2013a], both in determining the sanity of a system’s internal models and in meaningfully comparing these models. Meta-creation essentially motivates the advice of hiring the system to do the accounting in order to reveal the “horse.” Valid evaluation approaches will undoubtedly require more effort on the part of the music content analysis system developer, but validity is simply non-negotiable. Conclusion ========== The first supplement in describes the careful and strict methods used to teach the horse Clever Hans over the course of four years to read letters and numerals, and then to solve simple problems of arithmetic. When Clever Hans had learned these basics, had time “to discover a great deal for himself,” and began to give solutions to unique problems that were not part of his training, his handler believed “he had succeeded in inculcating the inner meaning of the number concepts, and not merely an external association of memory images with certain movement responses” [@Pfungst1911]. Without knowing the story of Clever Hans, it seems quite reasonable to conclude that since it is highly unlikely for DeSPerF-BALLROOM to achieve the FoM in Fig. \[fig:DeSPerF\_expt00\] by luck alone, then it must have learned rhythmic patterns in the recorded music in [*BALLROOM*]{}. As in the case of Clever Hans’s tutor, there are four problems with such a conclusion. First, this unjustifiably anthropomorphises the system of Fig. \[fig:DeSPerF\_expt00\]. For instance, someone who does not know better might believe that a stereo system must be quite a capable musician because they hear it play music. There is no evidence that the criteria and rules used by this system – the ones completely obfuscated by the cascade of compressed affine linear transformations described in Section 3.2 – are among those that a human uses to discriminate between and identify style in music listening. 
Second, one makes the assumption that the semantics of the labels of the dataset refer to some quality called “style” or “rhythmic pattern.” This thus equates, “learning to map statistics of a sampled time series to tokens,” and “learning to discriminate between and identify styles that manifest in recorded music.” Third, underpinning this conclusion is the assumption that the tutoring was actually teaching the skills desired. In the case of the system of Fig. \[fig:DeSPerF\_expt00\], the tutoring actually proceeds by asking the DNN a question (inputting an element of ${{\mathcal{U}_{\mathbb{F},A'}}}$ with ground truth $s \in {{\mathcal U}_{{{\mathcal V}},A}}$), comparing its output $\vx^{(K)}$ to the target $\ve_s$ (the standard basis vector with a $1$ in the row associated with $s$ and zero everywhere else), then adapting all of its parameters in an optimal direction toward that target, and finally repeating. While this “pedagogy” is certainly strict and provably optimal with respect to specific objectives [@Deng2014; @Hastie2009], its relationship to “learning to discriminate between and identify styles” is not clear. Repeatedly forcing Hans to tap his hoof twice is not so clearly teaching him about the “inner meaning of the number concept” 2. Fourth, and most significantly, this conclusion implicitly and incorrectly assumes that the results of Fig. \[fig:DeSPerF\_expt00\] have only two possible explanations: luck or “musical intelligence.” The story of Clever Hans shows just how misguided such a belief can be. The usefulness of any music content analysis system depends on what task it is actually performing, what problem it is actually solving. [*BALLROOM*]{} at first appears to explicitly pose a clear problem; but we now see that there exist several ways to reproduce its ground truth – each of which involves a different task, e.g., rhythmic pattern recognition, tempo detection, instrument recognition, and/or ones that have no concrete relationship to music. 
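The “tutoring” loop just described (input a question, compare the output to the one-hot target $\ve_s$, nudge every parameter toward it, repeat) can be sketched in a few lines. This is only an illustrative stand-in: a single linear-softmax layer instead of the actual DNN, with made-up toy data, so all names here are ours.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def tutor_step(W, x, s, lr=0.5):
    """One 'tutoring' iteration: forward pass, compare the output to the
    one-hot target e_s, and move every parameter down the cross-entropy
    gradient toward that target."""
    z = [sum(Wk[j] * x[j] for j in range(len(x))) for Wk in W]
    p = softmax(z)
    for k, Wk in enumerate(W):
        err = p[k] - (1.0 if k == s else 0.0)  # p - e_s
        for j in range(len(x)):
            Wk[j] -= lr * err * x[j]
    return p

# Two toy "recordings" with labels 0 and 1; repeated tutoring drives the
# outputs toward the targets, whatever the inputs actually mean.
W = [[0.0, 0.0], [0.0, 0.0]]
data = [([1.0, 0.0], 0), ([0.0, 1.0], 1)]
for _ in range(100):
    for x, s in data:
        tutor_step(W, x, s)
```

However strict this optimisation is, nothing in the loop refers to rhythm or style; it only drives outputs toward targets, by whatever cue separates the inputs.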
We cannot tell which task DeSPerF-BALLROOM is performing just from looking at Fig. \[fig:DeSPerF\_expt00\]. While comparing the output of a music content analysis system to the ground truth of a dataset is convenient, it simply does not distinguish between “horses” and solutions [@Sturm2012e; @Sturm2013g]. It does not produce valid evidence of intelligence. That is, we cannot know whether the system is giving the right answers for the [*wrong reasons.*]{} Just as Clever Hans appeared to be solving problems of arithmetic – what can be more explicit than asking a horse to add 1 and 1? – the banal task he was actually performing, unbeknownst to many save himself, was “make the nice man feed me.” The same might be true, metaphorically speaking, for the systems in Table \[tab:BALLROOMresults\]. Thank you to Aggelos Pikrakis, Corey Kereliuk, Jan Larsen, and the anonymous reviewers. I dedicate this article to the memory of Alan Young (1919-2016), principal actor of the TV show, “Mr. Ed.” [^1]: Also see lecture by G. Tzanetakis, “UVic MIR Course”: <https://www.youtube.com/watch?v=vD5wn-ffVQY> (2014). [^2]: <http://www.music-ir.org/nema_out/mirex2013/results/act/latin_report/> [^3]: <http://www.shazam.com/> [^4]: For the system of Fig. \[fig:DeSPerF\_expt00\] this is done by adapting the code produced by Salakhutdinov and Hinton (<http://www.cs.toronto.edu/~hinton/MatlabForSciencePaper.html>), which trains a deep autoencoder for handwritten digit recognition. This code for DeSPerF is provided by A. Pikrakis. [^5]: The period of the $k$th DCT function is $128/k$ semitones. [^6]: Downloadable from <http://mtg.upf.edu/ismir2004/contest/tempoContest/node5.html> [^7]: <http://mtg.upf.edu/ismir2004/contest/rhythmContest/> [^8]: We use the rubberband library to achieve these transformations with minimal change in recording quality. We have auditioned several of the transformations to confirm. 
[^9]: <http://www.lscp.net/persons/dupoux/bootphon/zerospeech2014/website> [^10]: It is entirely likely that I have missed relevant references from the symbolic domain for genre/style recognition/emulation.
--- abstract: 'The $\lambda$-calculus is a widely accepted computational model of higher-order functional programs, yet there is no direct and universally accepted cost model for it. As a consequence, the computational difficulty of reducing $\lambda$-terms to their normal form is typically studied by reasoning on concrete implementation algorithms. In this paper, we show that when head reduction is the underlying dynamics, the unitary cost model is indeed invariant. This improves on known results, which only deal with weak (call-by-value or call-by-name) reduction. Invariance is proved by way of explicit substitutions, which allow us to decompose any head reduction step in the $\lambda$-calculus into more elementary substitution steps, thus making the combinatorics of head reduction easier to reason about. The technique is also a promising tool to attack what we see as the main open problem, namely understanding for which *normalizing* strategies derivation complexity is an invariant cost model, if any.' bibliography: - 'main.bib' --- Introduction ============ Giving an estimate of the amount of time $T$ needed to execute a program is a natural refinement of the termination problem, which only requires deciding whether $T$ is finite or infinite. The shift from termination to complexity analysis brings more informative outcomes at the price of an increased difficulty. In particular, complexity analysis depends much on the chosen computational model. Is it possible to express such estimates in a way which is independent from the specific machine the program is run on? An answer to this question can be given following computational complexity, which classifies functions based on the amount of time (or space) they consume when executed by *any* abstract device endowed with a *reasonable* cost model, as a function of the size of the input. When can a cost model be considered reasonable? 
The answer lies in the so-called invariance thesis [@vanEmdeBoas90]: any time cost model is reasonable if it is polynomially related to the (standard) one of Turing machines. If programs are expressed as rewrite systems (e.g. as first-order TRSs), an abstract but effective way to execute programs, rewriting itself, is always available. As a consequence, a natural time cost model turns out to be *derivational complexity*, namely the (maximum) number of rewrite steps which can possibly be performed from the given term. A rewriting step, however, may not be an atomic operation, so derivational complexity is not by definition invariant. For first-order TRSs, however, derivational complexity has recently been shown to be an invariant cost model, by way of term graph rewriting [@DalLagoM10; @AvanziniM10]. The case of the $\lambda$-calculus is definitely more delicate: if $\beta$-reduction is weak, i.e., if it cannot take place in the scope of $\lambda$-abstractions, one can see the $\lambda$-calculus as a TRS and get invariance by way of the already cited results [@DalLagoM09], or by other means [@SandsGM02]. But if one needs to reduce “under lambdas” because the final term needs to be in normal form (e.g., when performing type checking in dependent type theories), no invariance results are known at the time of writing. In this paper we give a partial solution to this problem, by showing that the unitary cost model is indeed invariant for the $\lambda$-calculus endowed with *head reduction*, in which reduction *can* take place in the scope of $\lambda$-abstractions, but *can only* be performed in head position. Our proof technique consists in implementing head reduction in a calculus of explicit substitutions. Explicit substitutions were introduced to close the gap between the theory of the $\l$-calculus and implementations [@ACCL91es]. Their rewriting theory has also been studied in depth, after Melliès showed the possibility of pathological behaviors [@DBLP:conf/tlca/Mellie95]. 
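To make the strategy concrete, head reduction can be sketched as a small interpreter over de Bruijn terms; the unitary cost model discussed here counts exactly the steps such a loop performs. The encoding and function names below are ours, for illustration only, not the machinery used in the paper.

```python
# Terms: ('var', i) with de Bruijn index i, ('lam', body), ('app', f, a).

def shift(t, d, c=0):
    """Add d to every index >= c (the free variables of t)."""
    if t[0] == 'var':
        return ('var', t[1] + d) if t[1] >= c else t
    if t[0] == 'lam':
        return ('lam', shift(t[1], d, c + 1))
    return ('app', shift(t[1], d, c), shift(t[2], d, c))

def subst(t, s, j=0):
    """Substitute s for index j in t, decrementing indices above j
    (the usual beta-substitution on de Bruijn terms)."""
    if t[0] == 'var':
        if t[1] == j:
            return shift(s, j)
        return ('var', t[1] - 1) if t[1] > j else t
    if t[0] == 'lam':
        return ('lam', subst(t[1], s, j + 1))
    return ('app', subst(t[1], s, j), subst(t[2], s, j))

def head_step(t):
    """Contract the head redex, if any: go under lambdas, then down the
    left spine of applications. Returns (term, reduced?)."""
    if t[0] == 'lam':
        body, did = head_step(t[1])
        return ('lam', body), did
    if t[0] == 'app':
        if t[1][0] == 'lam':
            return subst(t[1][1], t[2]), True
        f, did = head_step(t[1])
        return ('app', f, t[2]), did
    return t, False

def head_normalize(t, limit=10_000):
    """Iterate head steps, counting them: the unitary cost of t."""
    steps = 0
    while steps < limit:
        t, did = head_step(t)
        if not did:
            return t, steps
        steps += 1
    raise RuntimeError('step limit reached')
```

For instance, $(\lambda x.\,x\,x)(\lambda y.\,y)$ reaches its head normal form $\lambda y.\,y$ in two head steps.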
Starting from graphical syntaxes, a new *at a distance* approach to explicit substitutions has recently been proposed [@AccattoliK10]. The new formalisms are simpler than those of the earlier generation, and another thread of applications — to which this paper belongs — also started: new results on the $\l$-calculus have been proved by means of explicit substitutions [@AccattoliK10; @AKLPAR]. In this paper we use the *linear-substitution calculus* $\lm$, a slight variation over a calculus of explicit substitutions introduced by Robin Milner [@Milner2006]. The variation is inspired by the structural $\l$-calculus [@AccattoliK10]. We study in detail the relation between $\l$-calculus head reduction and *linear head reduction* [@DBLP:journals/tcs/MascariP94], the head reduction of $\lm$, and prove that the latter can be at most quadratically longer than the former. This is proved without any termination assumption, by a detailed rewriting analysis. To get the Invariance Theorem, however, other ingredients are required: *The Subterm Property*. Linear head reduction has a property not enjoyed by head $\beta$-reduction: linear substitutions along a reduction $\tmone\tohl^* \tmtwo$ duplicate subterms of $\tmone$ only. It easily follows that $\tohl$-steps can be simulated by Turing machines in time polynomial in the size of $\tmone$ and the length of the reduction. This is explained in Section \[s:expsubst\]. *Compact representations*. Explicit substitutions, decomposing $\beta$-reduction into more atomic steps, allow us to take advantage of sharing and thus provide compact representations of terms, avoiding the exponential blowups of term size happening in the plain $\lambda$-calculus. Is it reasonable to use these compact representations of $\l$-terms? 
We answer affirmatively, by exhibiting a dynamic programming algorithm for checking equality of terms with explicit substitutions modulo unfolding, and proving it to work in polynomial time in the size of the involved compact representations. This is the topic of Section \[s:unfolding\]. *Head simulation of Turing machines*. We also provide the simulation of Turing machines by $\lambda$-terms. We give a new encoding of Turing machines, since the known ones do not work with *head* $\beta$-reduction, and prove it induces a polynomial overhead. Some details of the encoding are given in Section \[s:tur-mach\]. We emphasize the result for head $\beta$-reduction, but our technical detour also proves invariance for linear head reduction. To our knowledge, we are the first to use the fine granularity of explicit substitutions for complexity analysis. Many calculi with bounded complexity (e.g. [@DBLP:journals/aml/Terui07]) use ${\tt let}$-constructs, an avatar of explicit substitutions, but they do not take advantage of the refined dynamics, as they always use big-step substitution rules. To conclude, we strongly believe that the main contribution of this paper lies in the technique rather than in the invariance result. Indeed, the main open problem in this area, namely the invariance of the unitary cost model for any *normalizing* strategy, remains open but, as we argue in Section \[s:relation\], seems now within reach. 
$\lambda$-Calculus and Cost Models: an Informal Account {#s:informal} ======================================================= Linear Explicit Substitutions {#s:expsubst} ============================= \[ss:pure-linear-head\] On the Relation Between $\Lambda$ and $\Lambda_{[\cdot]}$ {#s:relation} ========================================================= $\Lambda_{[\cdot]}$ as an Acceptable Encoding of $\lambda$-terms {#s:unfolding} ================================================================ Encoding Turing Machines {#s:tur-mach} ======================== Conclusions =========== The main result of this paper is the first invariance result for the $\lambda$-calculus when reduction is allowed to take place in the scope of abstractions. The key tool to achieve invariance is linear explicit substitutions, which are *compact* but *manageable* representations of $\lambda$-terms. Of course, the main open problem in the area, namely invariance of the unitary cost model for any normalizing strategy (e.g. for the strategy which always reduces the leftmost-outermost redex), remains open. Although linear explicit substitutions cannot be *directly* applied to this problem, the authors strongly believe that this is nonetheless a promising direction, on which they are actively working at the time of writing.
--- abstract: 'A drone monitoring system that integrates deep-learning-based detection and tracking modules is proposed in this work. The biggest challenge in adopting deep learning methods for drone detection is the limited amount of training drone images. To address this issue, we develop a model-based drone augmentation technique that automatically generates drone images with a bounding box label on the drone’s location. To track a small flying drone, we utilize the residual information between consecutive image frames. Finally, we present an integrated detection and tracking system that outperforms each individual module containing detection or tracking only. The experiments show that, even when trained on synthetic data, the proposed system performs well on real world drone images with complex background. The USC drone detection and tracking dataset with user labeled bounding boxes is available to the public.' author: - title: A Deep Learning Approach to Drone Monitoring --- Introduction ============ There is a growing interest in the commercial and recreational use of drones. This in turn poses a threat to public safety. The Federal Aviation Administration (FAA) and NASA have reported numerous cases of drones disturbing airline flight operations, leading to near collisions. It is therefore important to develop a robust drone monitoring system that can identify and track illegal drones. Drone monitoring is however a difficult task because of the diversified and complex backgrounds of real-world environments and the numerous drone types on the market. Generally speaking, techniques for localizing drones can be categorized into two types: acoustic and optical sensing techniques. The acoustic sensing approach achieves target localization and recognition by using a miniature acoustic array system. The optical sensing approach processes images or videos to estimate the position and identity of a target object. 
In this work, we employ the optical sensing approach by leveraging the recent breakthrough in the computer vision field. The objective of video-based object detection and tracking is to detect and track instances of a target object from image sequences. In earlier days, this task was accomplished by extracting discriminant features such as the scale-invariant feature transform (SIFT) [@sift] and the histograms of oriented gradients (HOG) [@hog]. The SIFT feature vector is attractive since it is invariant to an object’s translation, orientation and uniform scaling. Besides, it is not too sensitive to projective distortions and illumination changes since one can transform an image into a large collection of local feature vectors. The HOG feature vector is obtained by computing normalized local histograms of image gradient directions or edge orientations in a dense grid. It provides another powerful feature set for object recognition. In 2012, Krizhevsky [*et al.*]{} [@alexnet] successfully demonstrated the power of the convolutional neural network (CNN) in the ImageNet grand challenge, a large scale object classification task. This work has inspired a lot of follow-up work on the developments and applications of deep learning methods. A CNN consists of multiple convolutional and fully-connected layers, where each layer is followed by a non-linear activation function. These networks can be trained end-to-end by back-propagation. There are several variants of CNNs such as the R-CNN [@rcnn], SPPNet [@sppnet] and Faster-RCNN [@fasterrcnn]. Since these networks can generate highly discriminant features, they outperform traditional techniques in object detection by a large margin. The Faster-RCNN includes a Region Proposal Network (RPN) to find object proposals, and it can reach real-time processing speed. The contributions of our work are summarized below. 
- To the best of our knowledge, this is the first work to use deep learning technology for the challenging drone detection and tracking problem. - We propose to use a large number of synthetic drone images, which are generated by conventional image processing and 3D rendering algorithms, along with a few real 2D and 3D data to train the CNN. - We propose to use the residue information from an image sequence to train and test a CNN-based object tracker. It allows us to track a small flying object in a cluttered environment. - We propose an integrated drone monitoring system that consists of a drone detector and a generic object tracker. The integrated system outperforms the detection-only and the tracking-only sub-systems. - We have validated the proposed system on several drone datasets. The rest of this paper is organized as follows. The collected drone datasets are introduced in Sec. \[sec:dataset\]. The proposed drone detection and tracking system is described in Sec. \[sec:solution\]. Experimental results are presented in Sec. \[sec:results\]. Concluding remarks are given in Sec. \[sec:conclusion\]. Data Collection and Augmentation {#sec:dataset} ================================ [0.5]{} ![Sampled frames from two collected drone datasets.[]{data-label="fig:dataset"}](fig/onlinedrone.png "fig:"){width="70mm"} [0.5]{} ![Sampled frames from two collected drone datasets.[]{data-label="fig:dataset"}](fig/uscdrone.png "fig:"){width="70mm"} Data Collection --------------- The first step in developing the drone monitoring system is to collect drone flying images and videos for the purpose of training and testing. We collect two drone datasets as shown in Fig. \[fig:dataset\]. They are explained below. - Public-Domain drone dataset.\ It consists of 30 YouTube video sequences captured in an indoor or outdoor environment with different drone models. Some samples in this dataset are shown in Fig. \[fig:dataset2\]. 
These video clips have a frame resolution of 1280 x 720 and their duration is about one minute. Some video clips contain more than one drone. Furthermore, some shots are not continuous. - USC drone dataset.\ It contains 30 video clips shot at the USC campus. All of them were shot with a single drone model. Several examples of the same drone in different appearances are shown in Fig. \[fig:dataset1\]. To shoot these video clips, we consider a wide range of background scenes, shooting camera angles, different drone shapes and weather conditions. They are designed to capture the drone’s attributes in the real world such as fast motion, extreme illumination, occlusion, etc. The duration of each video is approximately one minute and the frame resolution is 1920 x 1080. The frame rate is 15 frames per second. We annotate each drone sequence with a tight bounding box around the drone. The ground truth can be used in CNN training. It can also be used to check the CNN performance when we apply it to the testing data. Data Augmentation {#sec:augmentation} ----------------- The preparation of a wide variety of training data is one of the main challenges in the CNN-based solution. For the drone monitoring task, the number of static drone images is very limited and the labeling of drone locations is a labor intensive job. The latter also suffers from human errors. All of these factors impose an additional barrier in developing a robust CNN-based drone monitoring system. To address this difficulty, we develop a model-based data augmentation technique that generates training images and annotates the drone location at each frame automatically. The basic idea is to cut foreground drone images and paste them on top of background images as shown in Fig. \[fig:augment\]. To accommodate the background complexity, we select related classes such as aircraft and cars in the PASCAL VOC 2012 dataset [@pascal]. 
To ensure the diversity of drone models, we collect 2D drone images and 3D drone meshes of many drone models. For the 3D drone meshes, we can render their corresponding images by changing the camera’s viewing distance, viewing angle and lighting conditions. As a result, we can generate many different drone images flexibly. Our goal is to generate a large number of augmented images to simulate the complexity of background images and foreground drone models in a real-world environment. Some examples of the augmented drone images of various appearances are shown in Fig. \[fig:augment\]. ![Illustration of the data augmentation idea, where augmented training images can be generated by merging foreground drone images and background images.[]{data-label="fig:augment"}](fig/augment.png){width="90mm"} Specific drone augmentation techniques are described below. - Geometric transformations\ We apply geometric transformations such as image translation, rotation and scaling. We randomly select the angle of rotation from the range (-30$^{\circ}$, 30$^{\circ}$). Furthermore, we conduct uniform scaling on the original foreground drone images along the horizontal and the vertical direction. Finally, we randomly select the drone location in the background image. - Illumination variation\ To simulate drones in the shadows, we generate regular shadow maps by using random lines and irregular shadow maps via Perlin noise [@perlin]. In extreme lighting environments, we observe that drones tend to appear in monochrome (i.e., gray-scale), so we change drone images to gray-level ones. - Image quality\ This augmentation technique is used to simulate blurred drones caused by camera motion and out-of-focus capture. We use blur filters (e.g., the Gaussian filter and the motion blur filter) to create blur effects on foreground drone images.
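The cut-and-paste idea behind the augmentation pipeline can be sketched in a few lines. The following is a minimal illustration (not the actual pipeline), assuming NumPy arrays for the background and foreground images; rotation within (-30°, 30°), shadow maps and blur filters would be applied to the patch in the same place:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(background, drone, scale_range=(0.5, 1.5)):
    """Paste a foreground drone patch onto a background image at a random
    scale and location, and return the image plus its bounding-box label.
    Scaling uses nearest-neighbour resampling; rotation, shadow maps and
    blur would be applied to the patch here as well."""
    h, w = drone.shape[:2]
    s = rng.uniform(*scale_range)
    nh, nw = max(1, int(h * s)), max(1, int(w * s))
    ys = np.arange(nh) * h // nh          # nearest-neighbour row indices
    xs = np.arange(nw) * w // nw          # nearest-neighbour column indices
    patch = drone[ys][:, xs]
    H, W = background.shape[:2]
    y0 = int(rng.integers(0, H - nh + 1)) # random drone location
    x0 = int(rng.integers(0, W - nw + 1))
    out = background.copy()
    out[y0:y0 + nh, x0:x0 + nw] = patch
    # the ground-truth annotation comes for free: (x, y, width, height)
    return out, (x0, y0, nw, nh)

background = np.zeros((720, 1280, 3), dtype=np.uint8)
drone = np.full((60, 90, 3), 200, dtype=np.uint8)
image, box = augment(background, drone)
```

Because the drone is placed programmatically, the bounding-box label is produced automatically, which is exactly what removes the manual annotation cost described above.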
[0.5]{} ![Illustration of (a) augmented drone models and (b) synthesized training images by incorporating various illumination conditions, image qualities, and complex backgrounds.[]{data-label="fig:augresult"}](fig/droneimage.png "fig:"){width="70mm"} [0.5]{} ![Illustration of (a) augmented drone models and (b) synthesized training images by incorporating various illumination conditions, image qualities, and complex backgrounds.[]{data-label="fig:augresult"}](fig/augdrone.png "fig:"){width="70mm"} Several exemplary synthesized drone images are shown in Fig. \[fig:augresult\], where augmented drone models are given in Fig. \[fig:augdrone\]. We use the model-based augmentation technique to acquire more training images with the ground-truth labels and show them in Fig. \[fig:augimage\]. Drone Monitoring System {#sec:solution} ======================= To achieve high performance, the system consists of two modules, namely, the drone detection module and the drone tracking module. Both of them are built on deep learning. These two modules complement each other, and they are used jointly to provide accurate drone locations for a given video input. Drone Detection {#sec:detection} --------------- The goal of drone detection is to detect and localize the drone in static images. Our approach is built on the Faster-RCNN [@fasterrcnn], which is one of the state-of-the-art object detection methods for real-time applications. The Faster-RCNN utilizes deep convolutional networks to efficiently classify object proposals. To achieve real-time detection, the Faster-RCNN replaces the usage of external object proposals with the Region Proposal Networks (RPNs) that share convolutional feature maps with the detection network. The RPN is constructed on top of the shared convolutional layers.
It consists of two convolutional layers – one that encodes the conv feature map for each proposal into a lower-dimensional vector and the other that provides the classification scores and regressed bounds. The Faster-RCNN achieves nearly cost-free region proposals and it can be trained end-to-end by back-propagation. We use the Faster-RCNN to build the drone detector by training it with synthetic drone images generated by the proposed data augmentation technique as described in Sec. \[sec:augmentation\]. Drone Tracking {#sec:tracking} -------------- The drone tracker attempts to locate the drone in the next frame based on its location at the current frame. It searches around the neighborhood of the current drone’s position. This allows us to detect a drone in a certain region instead of the entire frame. To achieve this objective, we use the state-of-the-art object tracker called the Multi-Domain Network (MDNet) [@mdnet]. The MDNet is able to separate the domain-independent information from the domain-specific information in network training. Besides, as compared with other CNN-based trackers, the MDNet has fewer layers, which lowers the complexity of the online testing procedure. To further improve the tracking performance, we propose a video pre-processing step. That is, we subtract the current frame from the previous frame and take the absolute values pixelwise to obtain the residual image of the current frame. Note that we do the same for the three R, G, B channels of a color image frame to get a color residual image. Three color image frames and their corresponding color residual images are shown in Fig. \[fig:residueimage\] for comparison. If there is a panning movement of the camera, we need to compensate for the global motion of the whole frame before the frame subtraction operation.
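The pre-processing step above amounts to a per-channel absolute frame difference. A minimal sketch, assuming uint8 NumPy frames and omitting the global motion compensation:

```python
import numpy as np

def residual_frame(prev, cur):
    """Pixelwise absolute difference of two consecutive RGB frames.
    Frames are widened to a signed type first so that subtracting
    uint8 arrays cannot wrap around."""
    return np.abs(cur.astype(np.int16) - prev.astype(np.int16)).astype(np.uint8)

prev = np.zeros((4, 4, 3), dtype=np.uint8)
cur = prev.copy()
cur[1, 1] = (200, 150, 100)   # a small "moving object"
res = residual_frame(prev, cur)
```

The cast to a signed type before subtraction is the one detail worth noting: subtracting uint8 arrays directly would wrap around modulo 256 and corrupt the residual image.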
[0.5]{} ![Comparison of three raw input images and their corresponding residual images.[]{data-label="fig:residueimage"}](fig/Original.jpg "fig:"){width="70mm"} [0.5]{} ![Comparison of three raw input images and their corresponding residual images.[]{data-label="fig:residueimage"}](fig/Residue.jpg "fig:"){width="70mm"} Since there exists a strong correlation between two consecutive frames, most of the background in the raw images cancels out and only the fast-moving object remains in the residual images. This is especially true when the drone is at a distance from the camera and its size is relatively small. The observed movement can then be well approximated by a rigid body motion. We feed the residual sequences to the MDNet for drone tracking after the above pre-processing step. It does help the MDNet to track the drone more accurately. Furthermore, if the tracker loses the drone for a short while, there is still a good probability for the tracker to pick up the drone at a faster rate. This is because the tracker does not get distracted by static objects whose shape and color may be similar to a drone’s, as those objects do not appear in residual images. Integrated Detection and Tracking System {#sec:fusion} ---------------------------------------- There are limitations in detection-only or tracking-only modules. The detection-only module does not exploit the temporal information, leading to a large amount of wasted computation. The tracking-only module does not attempt to recognize the drone object but only follows a moving target. To build a complete system, we need to integrate these two modules into one. The flow chart of the proposed drone monitoring system is shown in Fig. \[fig:overview\]. ![A flow chart of the drone monitoring system.[]{data-label="fig:overview"}](fig/overview.png){width="70mm"} Generally speaking, the drone detector has two tasks – finding the drone and initializing the tracker.
Typically, the drone tracker is used to track the detected drone after the initialization. However, the drone tracker can also play the role of a detector when an object is too far away to be robustly detected as a drone due to its small size. Then, we can use the tracker to track the object before detection based on the residual images as the input. Once the object is near, we can use the drone detector to confirm whether it is a drone or not. An illegal drone can be detected once it is within the field of view and of a reasonable size. The detector will report the drone location to the tracker as the start position. Then, the tracker starts to work. During the tracking process, the detector keeps providing the confidence score of a drone at the tracked location as a reference to the tracker. The final updated location can be acquired by fusing the confidence scores of the tracking and the detection modules as follows. For a candidate bounding box, we can compute the confidence score of this location via $$\begin{aligned} \label{eqn:confidence} S'_d&=& 1 / ({1+e^{-\beta_1(S_d-\alpha_1)}}),\\ S'_t&=& 1 / ({1+e^{-\beta_2(S_t-\alpha_2)}}),\\ S' &=& \max(S'_d, S'_t),\end{aligned}$$ where $S_d$ and $S_t$ denote the confidence scores obtained by the detector and the tracker, respectively, $S'$ is the confidence score of this candidate location and parameters $\beta_1$, $\beta_2$, $\alpha_1$, $\alpha_2$ are used to control the acceptance threshold. We compute the confidence score of a set of bounding box candidates, denoted by $BB_i$, $i \in C$, where $C$ denotes the set of candidate indices. Then, we select the one with the highest score: $$\begin{aligned} i^* & = & \underset{i \in C}{\operatorname{argmax}}~S'_i, \\ S_f & = & \underset{i \in C}{\operatorname{max}}~S'_i, \end{aligned}$$ where $BB_{i^*}$ is the finally selected bounding box and $S_f$ is its confidence score. If $S_f = 0$, the system will report a message of rejection.
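The fusion rule above can be sketched directly. The $\alpha$ and $\beta$ values below are placeholders for illustration, not the thresholds tuned for the system:

```python
import math

def fused_score(S_d, S_t, alpha1=0.5, beta1=10.0, alpha2=0.5, beta2=10.0):
    """Squash the detector and tracker confidences through logistic
    functions and keep the larger response, as in the equations above."""
    S_d_p = 1.0 / (1.0 + math.exp(-beta1 * (S_d - alpha1)))
    S_t_p = 1.0 / (1.0 + math.exp(-beta2 * (S_t - alpha2)))
    return max(S_d_p, S_t_p)

def select_box(candidates):
    """candidates: list of (bounding_box, S_d, S_t) tuples; return the box
    with the highest fused confidence together with that score S_f."""
    scores = [fused_score(sd, st) for _, sd, st in candidates]
    i_star = max(range(len(scores)), key=scores.__getitem__)
    return candidates[i_star][0], scores[i_star]

# one candidate trusted by the detector, one trusted by the tracker
boxes = [((10, 10, 40, 30), 0.9, 0.1), ((200, 80, 35, 25), 0.2, 0.95)]
best_box, S_f = select_box(boxes)
```

Taking the maximum of the two squashed scores lets either module alone keep the candidate alive, which matches the intended complementary roles of detector and tracker.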
Experimental Results {#sec:results} ==================== Drone Detection {#drone-detection} --------------- We test on both the real-world and the synthetic datasets. Each of them contains 1000 images. The images in the real-world dataset are sampled from videos in the USC Drone dataset. The images in the synthetic dataset are generated using different foreground and background images in the training dataset. The detector can take images of any size as the input. These images are then re-scaled such that their shorter side has 600 pixels [@fasterrcnn]. To evaluate the drone detector, we compute the precision-recall curve. Precision is the fraction of the total number of detections that are true positives. Recall is the fraction of the labeled positive samples that are true positives. The area under the precision-recall curve (AUC) [@auc] is also reported. The effectiveness of the proposed data augmentation technique is illustrated in Fig. \[fig:detectorR\]. In this figure, we compare the performance of the baseline method, which uses simple geometric transformations only, with that of the method that uses all of the mentioned data augmentation techniques, including geometric transformations, illumination conditions and image quality simulation. Clearly, better detection performance can be achieved with more augmented data. We see around $11\%$ and $16\%$ improvements in the AUC measure on the real-world and the synthetic datasets, respectively.
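For reference, the precision-recall evaluation just described can be computed as follows; a small self-contained sketch, with made-up scores in the usage example:

```python
def precision_recall(scores, labels, thresholds):
    """Sweep a detection-score threshold: precision is the fraction of
    fired detections that are true positives, recall is the fraction of
    labeled positives that are detected."""
    num_pos = sum(labels)
    precision, recall = [], []
    for t in thresholds:
        fired = [lab for s, lab in zip(scores, labels) if s >= t]
        tp = sum(fired)
        precision.append(tp / len(fired) if fired else 1.0)
        recall.append(tp / num_pos)
    return precision, recall

def auc(recall, precision):
    """Trapezoidal area under the precision-recall curve."""
    pts = sorted(zip(recall, precision))
    return sum((r2 - r1) * (p1 + p2) / 2.0
               for (r1, p1), (r2, p2) in zip(pts, pts[1:]))

prec, rec = precision_recall([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0], [0.0, 0.5])
```

A perfect detector traces precision 1 at every recall and yields an AUC of 1; the gap to that ceiling is what the augmentation experiments measure.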
[0.5]{} ![Comparison of the drone detection performance on (a) the synthetic and (b) the real-world datasets, where the baseline method uses only geometric transformations to generate training data, while the All method uses geometric transformations, illumination conditions and image quality simulation for data augmentation.[]{data-label="fig:detectorR"}](fig/synv2.png "fig:"){width="70mm"} [0.5]{} ![Comparison of the drone detection performance on (a) the synthetic and (b) the real-world datasets, where the baseline method uses only geometric transformations to generate training data, while the All method uses geometric transformations, illumination conditions and image quality simulation for data augmentation.[]{data-label="fig:detectorR"}](fig/realv2.png "fig:"){width="70mm"} Drone Tracking {#drone-tracking} -------------- The MDNet is adopted as the object tracker. We take 3 video sequences from the USC drone dataset as test sequences. They cover several challenges, including scale variation, out-of-view, similar objects in background, and fast motion. Each video sequence has a duration of 30 to 40 seconds with 30 frames per second. Thus, each sequence contains 900 to 1200 frames. Since all video sequences in the USC drone dataset have relatively slow camera motion, we can also evaluate the advantages of feeding residual frames (instead of raw images) to the MDNet. The performance of the tracker is measured with the area-under-the-curve (AUC) measure. We first measure the intersection over union $(IoU)$ for all frames in all video sequences as $$IoU = \frac{Area~ of~ Overlap}{Area~ of~ Union},$$ where the “Area of Overlap" is the common area covered by the predicted and the ground truth bounding boxes and the “Area of Union" is the union of the predicted and the ground truth bounding boxes. The IoU value is computed at each frame.
If it is higher than a threshold, the success rate is set to 1; otherwise, 0. Thus, the success rate value is either 1 or 0 for a given frame. Once we have the success rate values for all frames in all video sequences for a particular threshold, we can divide the total success rate by the total frame number. Then, we can obtain a success rate curve as a function of the threshold. Finally, we measure the area under the curve (AUC), which gives the desired performance measure. We compare the success rate curves of the MDNet using the original images and the residual images in Fig. \[fig:trackR\]. As compared to the raw frames, the AUC value increases by around 10% using the residual frames as the input. It corroborates the intuition that removing background from frames helps the tracker identify the drones more accurately. Although residual frames help improve the performance of the tracker under certain conditions, the tracker still fails to give good results in two scenarios: 1) movement with fast changing directions and 2) co-existence of many moving objects near the target drone. To overcome these challenges, we have the drone detector operating in parallel with the drone tracker to get more robust results. ![Comparison of the MDNet tracking performance using the raw and the residual frames as the input.[]{data-label="fig:trackR"}](fig/Residue_result_v2.png){width="90mm"} Fully Integrated System ----------------------- The fully integrated system contains both the detection and the tracking modules. We use the USC drone dataset to evaluate the performance of the fully integrated system. The performance comparison (in terms of the AUC measure) of the fully integrated system, the conventional MDNet (the tracker-only module) and the Faster-RCNN (the detector-only module) is shown in Fig. \[fig:systemR\]. The fully integrated system outperforms the other benchmarking methods by substantial margins.
This is because the fully integrated system can use detection as the means to re-initialize its tracking bounding box when it loses the object. ![Detection only (Faster RCNN) vs. tracking only (MDNet tracker) vs. our integrated system: The performance increases when we fuse the detection and tracking results.[]{data-label="fig:systemR"}](fig/Drone_result.png){width="90mm"} Conclusion {#sec:conclusion} ========== A video-based drone monitoring system was proposed in this work. The system consisted of the drone detection module and the drone tracking module. Both of them were designed based on deep learning networks. We developed a model-based data augmentation technique to enrich the training data. We also exploited residual images as the input to the drone tracking module. The fully integrated monitoring system takes advantage of both modules to achieve high-performance monitoring. Extensive experiments were conducted to demonstrate the superior performance of the proposed drone monitoring system. Acknowledgment {#acknowledgment .unnumbered} ============== This research is supported by a grant from the Pratt & Whitney Institute of Collaborative Engineering (PWICE). [10]{} A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in *Advances in neural information processing systems*, pp. 1097–1105, 2012. S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in *Advances in neural information processing systems*, pp. 91–99, 2015. R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in *Computer Vision and Pattern Recognition*, 2014. K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” in *European Conference on Computer Vision*, pp. 346–361, Springer, 2014. H. Nam and B.
Han, “Learning multi-domain convolutional neural networks for visual tracking,” in *Computer Vision and Pattern Recognition*, 2016. J. Huang and C. X. Ling, “Using AUC and accuracy in evaluating learning algorithms,” *IEEE Transactions on Knowledge and Data Engineering*, vol. 17, no. 3, pp. 299–310, 2005. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in *Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on*, vol. 1, pp. 886–893, IEEE, 2005. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” *International Journal of Computer Vision*, vol. 60, no. 2, pp. 91–110, 2004. M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The [PASCAL]{} [V]{}isual [O]{}bject [C]{}lasses [C]{}hallenge 2012 [(VOC2012)]{} [R]{}esults.” http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html. K. Perlin, “An image synthesizer,” *ACM Siggraph Computer Graphics*, vol. 19, no. 3, pp. 287–296, 1985.
--- author: - Matthias Jamin - and Ramon Miravitllas title: 'Scalar correlator, Higgs decay into quarks, and scheme variations of the QCD coupling' --- Introduction {#sect1} ============ The scalar correlation function in QCD plays an important role, as it governs the decay of the Higgs into quark-antiquark pairs, and it has been employed in determinations of quark masses from QCD sum rules as well as hadronic decays of the $\tau$ lepton. Presently, the perturbative expansion for the scalar correlator is known analytically up to order $\alpha_s^4$ in the strong coupling [@bck05; @che96; @gkls90], and estimates of the next, fifth order have been attempted in the literature. While the decay of the Higgs boson into quark-antiquark pairs is connected to the imaginary part of the scalar correlator $\Psi(s)$ [@djou08], two other physical correlators, $\Psi^{''}(s)$ and $D^L(s)$, have been utilised in QCD sum rule analyses, the former in quark mass extractions [@jop02; @jop06] and the latter in hadronic $\tau$ decays [@pp98; @pp99; @gjpps03]. In this work we shall investigate the perturbative series of all three. In order to achieve reliable error estimates of missing higher orders in QCD predictions, a better understanding of the perturbative behaviour of the scalar correlator at high orders is desirable. Work along those lines has been performed in ref. [@bkm00], where the scalar correlation function has been calculated in the large-$N_f$ approximation [@ben92; @bro92], or relatedly the large-$\beta_0$ approximation [@bb94] (for a review see [@ben98]), to all orders in the strong coupling.[^1] However, as will be discussed in more detail below, the large-$\beta_0$ approximation does not provide a satisfactory representation of the scalar correlator in full QCD. Still, as will be demonstrated, it can serve as a guideline to shed light on the general structure of the scalar correlation function. 
Furthermore, while large QCD corrections are found in the case of the correlator $D^L(s)$, the corrections are substantially smaller for ${\mbox{\rm Im}}\Psi(s)$ and $\Psi^{''}(s)$. In the large-$\beta_0$ approximation this observation can be traced back to the presence of a spurious renormalon pole in the Borel transform at $u=1$ for $D^L(s)$, whereas $\Psi^{''}(s)$ and ${\mbox{\rm Im}}\Psi(s)$ are free from this contribution. We discuss the origin of the additional renormalon pole and its implications, but at any rate conclude that, in view of this fact, the correlator $D^L(s)$ should be avoided in phenomenological analyses. Additionally, the large-$\beta_0$ approximation motivates a strategy in order to improve the perturbative expansion. The structure of the Borel transform in the large-$\beta_0$ limit suggests the introduction of a renormalisation scheme invariant QCD coupling ${\widehat}\alpha_s$, which underlines the scheme invariance of the perturbative term for the physical quantities under investigation. In fact, all contributions of infrared (IR) and ultraviolet (UV) renormalons individually are scheme independent. It is then found that higher-order corrections tend to become smaller when re-expressing the perturbative series in terms of the coupling ${\widehat}\alpha_s$. One reason for this behaviour appears to be that part of the perturbative corrections are resummed into a global prefactor ${\alpha_s}^\delta$ which is present for the scalar correlator. In full QCD, the construction of a scheme-invariant coupling does not appear to be possible, at least in a universal sense, independent of any observable. Nonetheless, we are able to provide the definition of a QCD coupling, which we also term ${\widehat}\alpha_s$, and whose running is scheme independent and described by a simple $\beta$-function, only depending on the coefficients $\beta_1$ and $\beta_2$. 
Different schemes can then be parametrised by a single parameter $C$, which corresponds to transformations of the QCD scale parameter $\Lambda$. By investigating two phenomenological applications, the correlator $\Psi^{''}(s)$ at the $\tau$ mass scale and ${\mbox{\rm Im}}\Psi(s)$ for Higgs decay to quarks, we show that employing the coupling ${\widehat}\alpha_s$ and choosing appropriate schemes by varying the parameter $C$, the behaviour of the perturbative series can be substantially improved. Our article is organised as follows: in section \[sect2\], theoretical expressions for the scalar correlation function $\Psi(s)$ and the corresponding physical correlators ${\mbox{\rm Im}}\Psi(s)$, $\Psi^{''}(s)$ and $D^L(s)$ are collected, and the present knowledge on the perturbative expansions is summarised. Furthermore, the renormalisation group invariant quark mass ${\widehat}m_q$ is introduced, and the correlators are rewritten in terms of this mass definition. In section \[sect3\], we review the results of ref. [@bkm00] on the scalar correlation function in the large-$\beta_0$ approximation and apply them to a discussion of the correlators $\Psi^{''}(s)$ and $D^L(s)$. Next, in section \[sect4\], we define the coupling ${\widehat}\alpha_s$, and compute its $\beta$-function as well as the perturbative relation to $\alpha_s$ in the ${{\overline{\rm MS}}}$ scheme. Finally, in section \[sect5\], two phenomenological applications, $\Psi^{''}(s)$ at the $\tau$ mass scale and ${\mbox{\rm Im}}\Psi(s)$ for Higgs decay, are investigated, and followed by our conclusions in section \[sect6\]. More technical material like the coefficients of the renormalisation group functions, higher-order coefficients relevant for the large-$\beta_0$ approximation, as well as a discussion of the subtraction constant $\Psi(0)$, are relegated to appendices. 
The scalar two-point correlator {#sect2} =============================== The following work shall be concerned with the scalar two-point correlation function $\Psi(p^2)$ which is defined by $$\label{Psi} \Psi(p^2) \,\equiv\, i\!\int\!{\rm d}x \,{\rm e}^{ipx} \langle\Omega| T\{j(x) j^\dagger(0)\}|\Omega\rangle \,.$$ The non-perturbative, full QCD vacuum is denoted by $|\Omega\rangle$. For our two applications, the scalar current $j(x)$ is chosen to arise either from the divergence of the normal-ordered vector current, $$\label{jtau} j(x) \,=\, \partial^\mu \!:\!\bar u(x)\gamma_\mu s(x)\!: \;=\, i\,(m_u-m_s) \!:\!\bar u(x) s(x)\!: \,,$$ or the interaction of the Higgs boson with quarks, $$j(x) \,=\, m_q \!:\!\bar q(x) q(x)\!: .$$ These choices have the advantage of an additional factor of the quark masses, which makes the currents $j(x)$ renormalisation group invariant (RGI). Furthermore, the first current is taken to be flavour non-diagonal, with a particular flavour content that plays a role in hadronic $\tau$ decays to strange final states.[^2] The purely perturbative expansion of $\Psi(p^2)$ is known up to order ${\alpha_s}^4$ [@bck05] and takes the general form $$\label{PsiPT} \Psi_{\rm PT}(s) \,=\, -\,\frac{N_c}{8\pi^2} \,m_\mu^2 \,s \sum\limits_{n=0}^\infty a_\mu^n \sum\limits_{k=0}^{n+1} d_{n,k} L^k \,,$$ where $s\equiv p^2$ and $a_\mu\equiv{\alpha_s}(\mu)/\pi$. To simplify the notation, we have introduced the generic mass factor $m_\mu$ which either stands for the combination $(m_u(\mu)-m_s(\mu))$ or $m_q(\mu)$.[^3] The running quark masses and the QCD coupling are renormalised at the scale $\mu$, which enters in $L\equiv\ln(-s/\mu^2)$. As a matter of principle, different scales could be introduced for the renormalisation of coupling and quark masses, but for simplicity, we refrain from this choice. Below, this option will, however, be discussed in relation to renormalisation schemes. 
At each perturbative order $n$, the only independent coefficients $d_{n,k}$ are the $d_{n,1}$. The coefficients $d_{n,0}$ depend on the renormalisation prescription and do not contribute in physical quantities, while all remaining coefficients $d_{n,k}$ with $k>1$ can be obtained by means of the renormalisation group equation (RGE). The normalisation in eq. [(\[PsiPT\])]{} is chosen such that $d_{0,1}=1$. Setting the number of colours $N_c=3$, and employing the ${{\overline{\rm MS}}}$-scheme [@bbdm78], after tremendous efforts the coefficients $d_{n,1}$ up to ${{\cal O}}({\alpha_s}^4)$ were found to be [@gkls90; @che96; @bck05]: $$\label{d01tod21} d_{0,1} \,=\, 1 \,, \qquad d_{1,1} \,=\, {\mbox{$\frac{17}{3}$}} \,, \qquad d_{2,1} \,=\, {\mbox{$\frac{10801}{144}$}} - {\mbox{$\frac{39}{2}$}} \zeta_3 + \Big(\!- {\mbox{$\frac{65}{24}$}} + {\mbox{$\frac{2}{3}$}} \zeta_3 \Big) N_f$$ $$\label{d31} d_{3,1} \,=\, {\mbox{$\frac{6163613}{5184}$}} - {\mbox{$\frac{109735}{216}$}} \zeta_3 + {\mbox{$\frac{815}{12}$}} \zeta_5 + \Big(\! -{\mbox{$\frac{46147}{486}$}} + {\mbox{$\frac{262}{9}$}} \zeta_3 - {\mbox{$\frac{5}{6}$}} \zeta_4 - {\mbox{$\frac{25}{9}$}} \zeta_5 \Big) N_f + \Big( {\mbox{$\frac{15511}{11664}$}} - {\mbox{$\frac{1}{3}$}} \zeta_3 \Big) N_f^2 {\nonumber}$$ $$\begin{aligned} \label{d41} d_{4,1} &\!=\!& {\mbox{$\frac{10811054729}{497664}$}} - {\mbox{$\frac{3887351}{324}$}} \zeta_3 + {\mbox{$\frac{458425}{432}$}} \zeta_3^2 + {\mbox{$\frac{265}{18}$}} \zeta_4 + {\mbox{$\frac{373975}{432}$}} \zeta_5 - {\mbox{$\frac{1375}{32}$}} \zeta_6 - {\mbox{$\frac{178045}{768}$}} \zeta_7 {\nonumber}\\ {\vbox{\vskip 6mm}}&& +\,\Big(\! 
- {\mbox{$\frac{1045811915}{373248}$}} + {\mbox{$\frac{5747185}{5184}$}} \zeta_3 - {\mbox{$\frac{955}{16}$}} \zeta_3^2 - {\mbox{$\frac{9131}{576}$}} \zeta_4 + {\mbox{$\frac{41215}{432}$}} \zeta_5 + {\mbox{$\frac{2875}{288}$}} \zeta_6 + {\mbox{$\frac{665}{72}$}} \zeta_7 \Big) N_f {\nonumber}\\ {\vbox{\vskip 6mm}}&& \hspace{-12mm} +\,\Big( {\mbox{$\frac{220313525}{2239488}$}} - {\mbox{$\frac{11875}{432}$}} \zeta_3 + {\mbox{$\frac{5}{6}$}} \zeta_3^2 + {\mbox{$\frac{25}{96}$}} \zeta_4 - {\mbox{$\frac{5015}{432}$}} \zeta_5 \Big) N_f^2 + \Big(\! -{\mbox{$\frac{520771}{559872}$}} + {\mbox{$\frac{65}{432}$}} \zeta_3 + {\mbox{$\frac{1}{144}$}} \zeta_4 + {\mbox{$\frac{5}{18}$}} \zeta_5 \Big) N_f^3 \,. {\nonumber}\end{aligned}$$ For future reference, at $N_f=3$, numerically, the respective coefficients take the values $$\label{d11tod41n} d_{1,1} \,=\, 5.6667 \,, \qquad d_{2,1} \,=\, 45.846 \,, \qquad d_{3,1} \,=\, 465.85 \,, \qquad d_{4,1} \,=\, 5588.7 \,.$$ The case $N_f=5$, relevant for Higgs boson decay, will be considered in the phenomenological applications of section \[sect5\]. As indicated above, the correlator $\Psi(s)$ itself is not related to a measurable quantity. Since it grows linearly with $s$ as $s$ tends to infinity, it satisfies a dispersion relation with two subtraction constants, $$\label{DisRelPsi} \Psi(s) \,=\, \Psi(0) + s\,\Psi^{'}(0) + s^2 \!\int\limits_0^\infty \!\frac{\rho(s')}{(s')^2 (s'-s-i0)}\,{\rm d}s' \,,$$ where $\rho(s)\equiv {\mbox{\rm Im}}\Psi(s+i0)/\pi$ is the scalar spectral function. Hence, a possibility to construct a physical quantity other than the spectral function itself, which will be discussed further down below, is to employ the second derivative of $\Psi(s)$ with respect to $s$. Since the two derivatives remove the two unphysical subtractions, $\Psi^{''}(s)$ is then only related to the spectral function. The corresponding dispersion relation reads $$\label{DisRelPsipp} \Psi^{''}(s) \,=\, 2 \!\int\limits_0^\infty \! 
\frac{\rho(s')}{(s'-s-i0)^3}\,{\rm d}s' \,,$$ and the general perturbative expansion is $$\label{PsippPT} \Psi_{\rm PT}^{''}(s) \,=\, -\,\frac{N_c}{8\pi^2}\,\frac{m_\mu^2}{s}\, \sum\limits_{n=0}^\infty a_\mu^n \sum\limits_{k=1}^{n+1} d_{n,k} \,k \,\big[ L^{k-1} + (k-1) L^{k-2} \big] \,.$$ Being a physical quantity, $\Psi^{''}(s)$ satisfies a homogeneous RGE, and therefore the logarithms can be resummed with the particular scale choice $\mu^2=-s\equiv Q^2$, leading to the compact expression $$\label{Psippres} \Psi_{\rm PT}^{''}(Q^2) \,=\, \frac{N_c}{8\pi^2}\,\frac{m_Q^2}{Q^2}\,\biggl\{ \, 1 + \sum\limits_{n=1}^\infty \,(d_{n,1} + 2 d_{n,2}) \,a_Q^n \,\biggr\} \,.$$ In this way, both the running quark mass as well as the running QCD coupling are to be evaluated at the renormalisation scale $Q$. The dependent coefficients $d_{n,2}$ can be calculated from the RGE. They are collected in appendix \[appA\], together with the coefficients of the QCD $\beta$-function and mass anomalous dimension. Numerically, at $N_f=3$, the perturbative coefficients $d_{n,1}^{\,''}\equiv d_{n,1} + 2 d_{n,2}$ of eq. [(\[Psippres\])]{} take the values $$\label{dt11todt41n} d_{1,1}^{\,''} \,=\, 3.6667 \,, \qquad d_{2,1}^{\,''} \,=\, 14.179 \,, \qquad d_{3,1}^{\,''} \,=\, 77.368 \,, \qquad d_{4,1}^{\,''} \,=\, 511.83 \,.$$ It is observed that the coefficients [(\[dt11todt41n\])]{} for the physical correlator are substantially smaller than the $d_{n,1}$ of eq. [(\[d11tod41n\])]{}. For the ensuing discussion it will be advantageous to remove the running effects of the quark mass from the remaining perturbative series. 
This can be achieved by rewriting the running quark masses $m_q(\mu)$ in terms of RGI quark masses ${\widehat}m_q$ which are defined through the relation $$\label{mhat} m_q(\mu) \,\equiv\, {\widehat}m_q \,[\alpha_s(\mu)]^{\gamma_m^{(1)}/\beta_1} \exp\Biggl\{ \int\limits_0^{a_\mu} \!{\rm d}a \biggl[ \frac{\gamma_m(a)}{\beta(a)} - \frac{\gamma_m^{(1)}}{\beta_1 a} \biggr] \Biggr\} \,.$$ Accordingly, we define a modified perturbative expansion with new coefficients $r_n$, $$\label{Psippmhat} \Psi_{\rm PT}^{''}(Q^2) \,=\, \frac{N_c}{8\pi^2}\, \frac{{\widehat}m^2}{Q^2} \,[\alpha_s(Q)]^{2\gamma_m^{(1)}/\beta_1} \biggl\{\, 1 + \sum_{n=1}^\infty \, r_n \,a_Q^n \,\biggr\} \,,$$ which now contain contributions from the exponential factor in eq. [(\[mhat\])]{}. At $N_f=3$ the coefficients $r_n$ take the numerical values $$\label{rnqcd} r_1 \,=\, 5.4568 \,, \quad r_2 \,=\, 24.287 \,, \quad r_3 \,=\, 122.10 \,, \quad r_4 \,=\, 748.09 \,.$$ The order ${\alpha_s}^4$ coefficient $r_4$ depends on quark-mass anomalous dimensions as well as $\beta$-function coefficients up to five-loops which for the convenience of the reader in our conventions have been collected in appendix \[appA\]. As a second observable, we discuss the imaginary part of the scalar correlator ${\mbox{\rm Im}}\Psi(s)$. After resumming the logarithms with the scale choice $\mu^2=s\equiv M^2$, its general perturbative expansion reads $$\begin{aligned} \label{ImPsi} {\mbox{\rm Im}}\Psi_{\rm PT}(s+i0) &=& \frac{N_c}{8\pi}\,m_M^2\, s\, \sum_{n=0}^\infty a_M^n \sum_{l=0}^{[n/2]} d_{n,2l+1}\, (i\pi)^{2l} \\ {\vbox{\vskip 6mm}}&=& \frac{N_c}{8\pi}\,m_M^2\, s\, \Big[\,1 + 5.6667\,a_M + 31.864\,a_M^2 + 89.156\,a_M^3 - 536.84\,a_M^4 + \ldots \Big] . {\nonumber}\end{aligned}$$ In the first line, $[x]$ denotes the integer value of $x$, and in the second line, the numerics has again been provided for $N_f=3$. We remark that in the ${{\overline{\rm MS}}}$ scheme the fourth order coefficient turns out to be negative. 
However, this does not necessarily imply an onset of the dominance of UV renormalons, since the $(i\pi)^{2l}$ terms give a large contribution and drive the sign change. Also for the imaginary part, we introduce a modified perturbative series which results from rewriting the mass factor in terms of the invariant quark mass. This yields $$\label{ImPsihat} {\mbox{\rm Im}}\Psi_{\rm PT}(s+i0) \,=\, \frac{N_c}{8\pi}\,{\widehat}m^2\, s\, [\alpha_s(M)]^{2\gamma_m^{(1)}/\beta_1} \biggl\{\, 1 + \sum_{n=1}^\infty \, \bar r_n \,a_M^n \,\biggr\} \,.$$ At $N_f=3$, this time the coefficients $\bar r_n$ assume the values $$\label{rnbar} \bar r_1 \,=\, 7.4568 \,, \quad \bar r_2 \,=\, 45.552 \,, \quad \bar r_3 \,=\, 172.64 \,, \quad \bar r_4 \,=\, -\,204.09 \,.$$ Besides $\Psi^{''}(s)$ and ${\mbox{\rm Im}}\Psi(s)$, below a third physical quantity shall be investigated, which is closer to the correlation functions arising in hadronic $\tau$ decays. To this end, consider the general decomposition of the vector correlation function into [*transversal*]{} ($T$) and [*longitudinal*]{} ($L$) correlators: $$\begin{aligned} \label{PiVA} \Pi_{\mu\nu}(p) &\equiv& i\!\int\!{\rm d}x \,{\rm e}^{ipx} \langle\Omega| T\{j_\mu(x) j_\nu^\dagger(0)\}|\Omega\rangle \,=\, (p_\mu p_\nu - g_{\mu\nu} p^2)\, \Pi^T(p^2) + p_\mu p_\nu \,\Pi^L(p^2) {\nonumber}\\ {\vbox{\vskip 6mm}}&=& (p_\mu p_\nu - g_{\mu\nu} p^2)\,\Pi^{T+L}(p^2) + g_{\mu\nu} p^2\, \Pi^L(p^2) \,,\end{aligned}$$ where $j_\mu(x)=\;:\!\!\bar u(x)\gamma_\mu s(x)\!\!:\,$. The correlators of the decomposition in the second line, $\Pi^{T+L}(s)$ and $\Pi^L(s)$, are free of kinematical singularities and thus should be employed in phenomenological analyses. Next, the longitudinal correlator $\Pi^L(s)$ is related to the scalar correlation function via $$\label{PiL} \Pi^L(s) \,=\, \frac{1}{s^2} \left[\, \Psi(s) - \Psi(0) \,\right] \,.$$ Eq.
[(\[PiL\])]{} suggests to define a third physical quantity $D^L(s)$ by [@pp98; @pp99; @gjpps03] $$\label{DL} D^L(s) \,\equiv\, -\,s\,\frac{{\rm d}}{{\rm d} s} \Big[ s\,\Pi^L(s) \Big] \,=\, \frac{1}{s} \left[\, \Psi(s) - \Psi(0) \,\right] - \Psi^\prime(s) \,.$$ Employing eqs. [(\[PiL\])]{} and [(\[DL\])]{}, together with the expansion [(\[PsiPT\])]{}, the general form of the perturbative expansion for $D^L(Q^2)$ reads $$\label{DLPT} D_{\rm PT}^{L}(s) \,=\, -\,\frac{N_c}{8\pi^2}\, m_\mu^2\, \sum\limits_{n=0}^\infty a_\mu^n \sum\limits_{k=1}^{n+1} k\,d_{n,k} L^{k-1}\,.$$ Comparing eq. [(\[DLPT\])]{} to the corresponding expression for the Adler function [@bj08], one observes that up to the global prefactor – which however depends on the scale dependent quark mass – they are completely equivalent. Being a physical quantity, also $D^L(s)$ satisfies a homogeneous RGE, and thus again the logarithms in eq. [(\[DLPT\])]{} can be resummed with the scale choice $\mu^2=-s=Q^2$, leading to the simple expression $$\label{DLres} D_{\rm PT}^{L}(Q^2) \,=\, -\,\frac{N_c}{8\pi^2} \,m_Q^2 \sum\limits_{n=0}^\infty d_{n,1} \,a_Q^n \,.$$ From eq. [(\[DLres\])]{} it is again apparent that the only physically relevant coefficients are the $d_{n,1}$. All the rest is encoded in running coupling and quark masses. However, as only the $d_{n,1}$ enter, the perturbative behaviour of $D^L(s)$ is substantially worse than that of the correlator $\Psi^{''}(s)$. We shall shed further light on this observation in the next section. In analogy to eqs. [(\[Psippmhat\])]{} and [(\[ImPsihat\])]{}, we can define a new expansion by rewriting the running quark mass in terms of the RGI one. The corresponding general perturbative expansion for $D^L(Q^2)$ reads $$\label{DLmhat} D_{\rm PT}^{L}(Q^2) \,=\, -\,\frac{N_c}{8\pi^2}\, {\widehat}m^2\, [\alpha_s(Q)]^{2\gamma_m^{(1)}/\beta_1} \biggl\{\, 1 + \sum_{n=1}^\infty \,\tilde r_n a_Q^n \,\biggr\} \,,$$ which defines the coefficients $\tilde r_n$. 
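The second equality in eq. [(\[DL\])]{} follows from eq. [(\[PiL\])]{} by elementary differentiation; a minimal symbolic check (using `sympy`, an assumed tool not part of the paper):

```python
import sympy as sp

s = sp.symbols('s')
Psi = sp.Function('Psi')
Psi0 = sp.symbols('Psi0')            # stands for the subtraction constant Psi(0)

PiL = (Psi(s) - Psi0) / s**2         # eq. (PiL)
DL = -s * sp.diff(s * PiL, s)        # definition of D^L(s) in eq. (DL)
rhs = (Psi(s) - Psi0) / s - sp.diff(Psi(s), s)
```

Simplifying `DL - rhs` to zero confirms the stated form of $D^L(s)$.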
Numerically, at $N_f=3$, the $\tilde r_n$ are found to be $$\label{rntqcd} \tilde r_1 \,=\, 7.4568 \,, \quad \tilde r_2 \,=\, 59.534 \,, \quad \tilde r_3 \,=\, 574.36 \,, \quad \tilde r_4 \,=\, 6645.3 \,.$$ As the next step, we review and utilise the information available on the scalar correlation function in the large-$N_f$, or relatedly, the large-$\beta_0$ approximation.

Large-$\beta_0$ approximation for the scalar correlator {#sect3}
==============================================

The large-$\beta_0$ approximation for the scalar correlation function was worked out in an impressive tour de force by Broadhurst et al. in ref. [@bkm00]. The approach is to first calculate the large-$N_f$ expansion by summing fermion-loop chains in the gluon propagator, and then to perform the naive non-abelianisation [@bb94] through the replacement $N_f \to -3\beta_1$. Taking into account that the correlator $\Pi_S(Q^2)$ of [@bkm00] is related to $\Psi(Q^2)$ by $\Pi_S(Q^2)=(4\pi)^2 \Psi(Q^2)$, in the large-$N_f$ limit the scalar correlator was found to be $$\label{PsiLargeNf} \Psi(Q^2) \,=\, \frac{N_c}{8\pi^2}\,m_\mu^2 \,Q^2 \biggl[\, L - 2 + \frac{C_F b}{2T_F N_f}\,H(L,b) + {{\cal O}}\biggl(\frac{1}{N_f^2}\biggr) + {{\cal O}}\biggl(\frac{1}{Q^2}\biggr) \,\biggr] \,.$$ The function $H(L,b)$, with $b\equiv T_F N_f\,a_\mu/3$, is at the heart of the work [@bkm00] and will be discussed in detail below.[^4] In our conventions, $T_F=1/2$. Comparing eqs. [(\[PsiPT\])]{} and [(\[PsiLargeNf\])]{}, it immediately follows that $$\sum\limits_{n=1}^\infty a_\mu^n \sum\limits_{k=0}^{n+1} d_{n,k} L^k \,=\, \frac{C_F b}{2T_F N_f}\,H(L,b) \,.$$ Next, employing the expansion $$H(L,b) \,=\, \sum_{n=1}^\infty H_{n+1}(L)\, b^{n-1},$$ along the lines of ref.
[@bkm00], one obtains $$\sum\limits_{k=0}^{n+1} d_{n,k} L^k \,=\, C_F\,\frac{N_f^{n-1}}{6^n}\, H_{n+1}(L) \,,$$ and in particular $$\label{dn1LargeNf} d_{n,1} \,=\, C_F\,\frac{N_f^{n-1}}{6^n}\, H_{n+1}^{(1)} \,,$$ for the independent coefficients $d_{n,1}$, where $H_{n+1}^{(1)}$ denotes the coefficient of the term of $H_{n+1}(L)$ linear in the logarithm. It remains to arrive at an expression for $H_{n+1}^{(1)}$. An explicit expression for the $H_{n+1}^{(1)}$ can be pieced together from several formulae presented in ref. [@bkm00], the central of which, for $n \geq 1$, reads: $$\label{Hnp1L} n(n+1) H_{n+1}(L) \,=\, (n+1)\big[ h_{n+2} + 4 (L-2) g_{n+1} \big] + 4 g_{n+2} + 9\,(-1)^n\,{{\cal D}}_{n+1}(L) \,.$$ The coefficients $h_{n+2}$ are scheme-dependent constants, which do not concern us here since they are independent of $L$, while the quantities $g_n$ are related to the expansion coefficients of the quark-mass anomalous dimension $\gamma_m(a)$ in the large-$N_f$ limit. In this limit, one finds [@gra93; @bkm00] $$\label{gammam} \gamma_m(a) \,\equiv\, -\,\frac{\mu}{m_\mu}\frac{{\rm d}m_\mu}{{\rm d}\mu}\,=\, \frac{2\,C_F b}{T_F N_f}\,g(b) + {{\cal O}}\biggl(\frac{1}{N_f^2}\biggr) \,,$$ with the function $g(b)$ being given by $$\label{gb} g(b) \,=\, \frac{(3-2b)^2}{(4-2b)}\,\frac{\Gamma(2-2b)}{[\Gamma(2-b)]^2}\, \frac{\sin(\pi b)}{\pi b} \,.$$ Then, finally, the expansion of $g(b)$, together with an efficient way to generate it, which was also presented in [@bkm00], reads: $$\label{gbn} g(b) \,=\, \sum\limits_{n=1}^\infty g_n b^{n-1} \,=\, \Biggl[ 4 - \sum\limits_{n=2}^\infty\biggl(\frac{3}{2^n}+\frac{n}{2}\biggr) b^{n-2} \Biggr] \exp\Biggl( \sum\limits_{l=3}^\infty \frac{2^l-3-(-1)^l}{l}\, \zeta_l \,b^l \Biggr) \,.$$ For the convenience of the reader, we list the first six coefficients $g_n$: $$\begin{aligned} \label{gnnum} g_1 &=& {\mbox{$\frac{9}{4}$}} \,, \qquad g_2 \,=\, -\,{\mbox{$\frac{15}{8}$}} \,, \qquad g_3 \,=\, -\,{\mbox{$\frac{35}{16}$}} \,, \qquad g_4 
\,=\, -\,{\mbox{$\frac{83}{32}$}} + {\mbox{$\frac{9}{2}$}}\,\zeta_3 \,, {\nonumber}\\ {\vbox{\vskip 6mm}}g_5 &=& -\,{\mbox{$\frac{195}{64}$}} - {\mbox{$\frac{15}{4}$}}\,\zeta_3 + {\mbox{$\frac{27}{4}$}}\,\zeta_4 \,, \qquad g_6 \,=\, -\,{\mbox{$\frac{451}{128}$}} - {\mbox{$\frac{35}{8}$}}\,\zeta_3 - {\mbox{$\frac{45}{8}$}}\,\zeta_4 + {\mbox{$\frac{27}{2}$}}\,\zeta_5 \,.\end{aligned}$$ Comparing the general expansion of $g(b)$ with the one for $\gamma_m(a)$, the relation for the individual expansion coefficients is given by $$\gamma_m^{(n)} \,=\, 4\,C_F\,\frac{N_f^{n-1}}{6^n}\, g_n \,.$$ Employing the coefficients $g_n$ of eq. [(\[gnnum\])]{}, it can easily be verified that the terms with the highest power in $N_f$ of $\gamma_m^{(n)}$ in eq. [(\[gfun\])]{} are indeed reproduced. The functions ${{\cal D}}_n(L)$ in the last summand of [(\[Hnp1L\])]{}, and the corresponding coefficients ${{\cal D}}_n^{(1)}$ linear in $L$, can be derived from the following relation:[^5] $$\label{Dn1L} \sum\limits_{n=0}^\infty \frac{{{\cal D}}_n(L)}{n!}\,u^n \,=\, \big[\, 1 + u\,G_D(u) \big] \,{\rm e}^{-(L-5/3)u} \,.$$ The term “$-5/3$” in the exponent is particular for the ${{\overline{\rm MS}}}$ scheme which is employed unless otherwise stated. Below, we shall, however, generalise our expressions to an arbitrary scheme for the coupling. 
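As a cross-check of eqs. [(\[gb\])]{} and [(\[gnnum\])]{}, the closed form of $g(b)$ can be compared numerically against the truncated expansion built from the listed $g_n$; this is an illustrative sketch (the $\zeta$ values are standard constants):

```python
from math import gamma, sin, pi

# zeta values entering the listed g_n (standard constants)
Z3, Z4, Z5 = 1.2020569031595943, 1.0823232337111382, 1.0369277551433699

def g_closed(b):
    """Closed form of g(b), eq. (gb)."""
    return ((3 - 2*b)**2 / (4 - 2*b)
            * gamma(2 - 2*b) / gamma(2 - b)**2
            * sin(pi*b) / (pi*b))

# listed expansion coefficients g_1 .. g_6 of eq. (gnnum)
G = [9/4, -15/8, -35/16,
     -83/32 + 9/2*Z3,
     -195/64 - 15/4*Z3 + 27/4*Z4,
     -451/128 - 35/8*Z3 - 45/8*Z4 + 27/2*Z5]

def g_series(b):
    """Truncated expansion g(b) = sum_n g_n b^(n-1)."""
    return sum(gn * b**n for n, gn in enumerate(G))
```

For small $b$ the two evaluations agree to the accuracy of the truncation, which confirms the listed coefficients.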
Furthermore, the function $G_D(u)$ was found to be [@bkm00] $$\begin{aligned} G_D(u) &=& \frac{2}{1-u} - \frac{1}{2-u} + \frac{2}{3} \sum\limits_{p=3}^\infty \frac{(-1)^p}{(p-u)^2} - \frac{2}{3} \sum\limits_{p=1}^\infty \frac{(-1)^p}{(p+u)^2} {\nonumber}\\ {\vbox{\vskip 8mm}}&=& \frac{2}{1-u} - \frac{1}{2-u} + \frac{1}{6} \Big[\, \zeta\big(2,2-{\mbox{$\frac{u}{2}$}}\big) - \zeta\big(2,{\mbox{$\frac{3}{2}$}}-{\mbox{$\frac{u}{2}$}}\big) - \zeta\big(2,1+{\mbox{$\frac{u}{2}$}}\big) + \zeta\big(2,{\mbox{$\frac{1}{2}$}}+{\mbox{$\frac{u}{2}$}}\big) \,\Big] {\nonumber}\\ {\vbox{\vskip 8mm}}\label{GDu} &=& \sum\limits_{k>0} \frac{k+3}{3}\,(2-2^{-k})\,u^{k-1} - \frac{8}{3} \sum\limits_{l>0} \zeta_{2l+1} l(1-4^{-l})\,u^{2l-1} \,.\end{aligned}$$ The first line of eq. [(\[GDu\])]{} explicitly displays the renormalon structure, separated in IR renormalon poles at positive integer $u$, and UV renormalon poles at negative integer $u$, while the second gives an expression in terms of the Hurwitz $\zeta$-function. Finally, the third line provides the Taylor expansion of $G_D(u)$ around $u=0$, which corresponds to the perturbative expansion. Inserting the extracted coefficients $g_n$ and ${{\cal D}}_n^{(1)}$ into $H_{n+1}^{(1)}$ derived from eq. [(\[Hnp1L\])]{}, it is a simple matter to verify that eq. [(\[dn1LargeNf\])]{} reproduces the contributions with the highest power of $N_f$ in the coefficients $d_{n,1}$ of [(\[d01tod21\])]{} for $n\geq 1$. To facilitate the comparison, the first few coefficients ${{\cal D}}_n^{(1)}$ and $H_{n+1}^{(1)}$ have been collected in appendix \[appB\]. Next, an expression for $\Psi^{''}(Q^2)$ of eq. [(\[PsippPT\])]{} in the large-$\beta_0$ limit shall be derived. The required second derivative of the function $H(L,b)$ with respect to $L$ can be extracted from expressions provided in ref. [@bkm00], along the lines of the computation above which led to the coefficients $d_{n,1}$. 
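The equality of the first two lines of eq. [(\[GDu\])]{} can also be verified numerically; the following sketch (stdlib only, all numerical choices illustrative) implements the alternating pole sums by pairwise summation and the Hurwitz $\zeta$-function by direct summation with an Euler-Maclaurin tail estimate:

```python
def alt_sum(f, start, pairs=100000):
    """sum_{p >= start} (-1)**p f(p), accumulated in adjacent pairs for stability."""
    total, sign, p = 0.0, (-1)**start, start
    for _ in range(pairs):
        total += sign * (f(p) - f(p + 1))
        p += 2
    return total

def hurwitz2(a, terms=100000):
    """zeta(2, a) = sum_{k >= 0} (k + a)**(-2) with an Euler-Maclaurin tail estimate."""
    s = sum(1.0 / (k + a)**2 for k in range(terms))
    x = terms + a
    return s + 1.0/x + 0.5/x**2 + 1.0/(6*x**3)

def GD_poles(u):
    """First line of eq. (GDu): explicit sums over renormalon poles."""
    return (2/(1 - u) - 1/(2 - u)
            + 2.0/3 * alt_sum(lambda p: 1.0/(p - u)**2, 3)
            - 2.0/3 * alt_sum(lambda p: 1.0/(p + u)**2, 1))

def GD_hurwitz(u):
    """Second line of eq. (GDu): Hurwitz zeta representation."""
    return (2/(1 - u) - 1/(2 - u)
            + (hurwitz2(2 - u/2) - hurwitz2(1.5 - u/2)
               - hurwitz2(1 + u/2) + hurwitz2(0.5 + u/2)) / 6)
```

Away from the pole positions the two representations agree to high accuracy.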
To convert the large-$N_f$ expansion into the large-$\beta_0$ (or large-$\beta_1$) limit, all occurrences of $N_f$ have to be replaced by $-3\beta_1$. Finally, rewriting sums over the ${{\cal D}}_n$ coefficients (and derivatives) in terms of the Borel transform of the coupling, those sums can be expressed in closed form containing the function $G_D(u)$. This yields $$\begin{aligned} \label{Psipplb0} \Psi_{\beta_0}^{''}(Q^2) \,=\, &\frac{N_c}{8\pi^2}\, \frac{m_\mu^2}{Q^2} \biggl\{\, 1 - \frac{2}{\beta_1} \sum_{n=1}^\infty \frac{\gamma_m^{(n+1)}}{n}\, a_\mu^n \,+ \\ {\vbox{\vskip 8mm}}&\frac{3C_F}{\beta_1}\! \int\limits_0^\infty \!{\rm d}u\, {\rm e}^{-2u/(\beta_1 a_\mu)} \Big[ (1-u)\big[ 1+u\,G_D(u) \big] {\rm e}^{-(L-5/3)u} - 1 \Big] \frac{1}{u} + \ldots \,\biggr\} , \end{aligned}$$ where the ellipses stand for terms with additional suppression in $\beta_1$ or $Q^2$. Because the integrand contains IR renormalon poles along the path of integration, a prescription has to be specified in order to define the integral. In the present study the principal-value prescription shall always be adopted. As $\Psi^{''}(Q^2)$ satisfies a homogeneous RGE, the logarithm can be resummed through the scale choice $\mu^2=Q^2$. Furthermore, the running of the quark mass is reflected in the terms containing the coefficients of the quark-mass anomalous dimension $\gamma_m^{(n)}$, except for the leading-order running $\gamma_m^{(1)}$ which is cancelled by the last term “$-1$” in the square brackets. Hence, the mass running (except for the leading order) can be resummed by expressing the quark mass in terms of the RGI quark mass ${\widehat}m$ according to eq. [(\[mhat\])]{}. 
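The principal-value prescription can be made concrete for a toy integrand with a single IR pole at $u=2$: combining the integrand symmetrically around the pole gives a regular integrand, $\mathrm{PV}\!\int_0^4 {\rm e}^{-tu}/(2-u)\,{\rm d}u = 2\,{\rm e}^{-2t}\int_0^2 \sinh(tv)/v\,{\rm d}v$. A minimal sketch, where the value $t=2/(\beta_1 a)$ with $\beta_1=9/2$ and $a=0.1$ is an illustrative assumption:

```python
from math import exp, sinh

def simpson(f, a, b, n=2000):
    """Composite Simpson rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i*h) for i in range(1, n))
    return s * h / 3

def pv_symmetric(t, n=2000):
    """PV of int_0^inf e^(-t u)/(2 - u) du for a toy single IR pole at u = 2.
    The pole region is handled by the symmetric combination, which is regular."""
    core = 2 * exp(-2*t) * simpson(lambda v: sinh(t*v)/v if v else t, 0.0, 2.0, n)
    tail = simpson(lambda u: exp(-t*u)/(2 - u), 4.0, 40.0, n)
    return core + tail

t_ill = 2 / (4.5 * 0.1)   # 2/(beta_1 a) with beta_1 = 9/2, a = 0.1 (illustrative)
```

The upper cutoff of the tail integral at $u=40$ is harmless here, since the integrand is exponentially suppressed.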
In addition, we rewrite the expression in terms of a coupling $a_Q^C\equiv \alpha_s^C(Q)/\pi$ parametrised by a constant $C$, specifying the renormalisation scheme and being defined by the relation: $$\label{ahatlb0} \frac{1}{\hat a_Q} \,\equiv\, \frac{1}{a_Q^C} + C\,\frac{\beta_1}{2} \,=\, \frac{1}{a_Q^{{{\overline{\rm MS}}}}} - \frac{5}{3}\,\frac{\beta_1}{2} \,.$$ The coupling $\hat a_Q$ for $C=0$ can be considered a scheme-independent coupling at large-$\beta_0$. This leads to our final formula for $\Psi^{''}(Q^2)$ in the large-$\beta_0$ approximation: $$\begin{aligned} \label{Psipplb0res} \Psi_{\beta_0}^{''}(Q^2) &=& \frac{N_c}{8\pi^2}\, \frac{{\widehat}m^2}{Q^2} \,[\alpha_s^{C_m}(Q)]^{2\gamma_m^{(1)}/\beta_1} \biggl\{\, 1 - 2\,\frac{\gamma_m^{(1)}}{\beta_1} \ln\Big[ 1 + C_m {\mbox{$\frac{\beta_1}{2}$}}\, a_Q^{C_m} \Big] {\nonumber}\\ {\vbox{\vskip 8mm}}&& +\,\frac{2\pi}{\beta_1}\! \int\limits_0^\infty \!{\rm d}u\, {\rm e}^{-2u/(\beta_1 a_Q^{C_a})} B[\Psi^{''}](u) + \ldots \,\biggr\} \,, \end{aligned}$$ where we have introduced two separate constants $C_m$ and $C_a$, referring to the scheme dependencies of quark mass and coupling, respectively. The Borel transform $B[\Psi^{''}](u)$ is given by $$\begin{aligned} \label{BPsipp} B[\Psi^{''}](u) &=& \frac{3C_F}{2\pi}\,{\rm e}^{-C_a u}\,\Big[\, (1-u)\,G_D(u) - 1 \,\Big] {\nonumber}\\ {\vbox{\vskip 8mm}}&=& \frac{3C_F}{2\pi}\,{\rm e}^{-C_a u}\, \biggl\{\, \frac{1}{(2-u)} - \frac{2}{3}\,\sum\limits_{p=3}^\infty\, (-1)^p \biggl[\, \frac{(p-1)}{(p-u)^2} - \frac{1}{(p-u)} \,\biggr] {\nonumber}\\ && \hspace{36.3mm} -\,\frac{2}{3}\,\sum\limits_{p=1}^\infty\, (-1)^p \biggl[\, \frac{(p+1)}{(p+u)^2} - \frac{1}{(p+u)} \,\biggr] \,\biggr\} \,.\end{aligned}$$ The second equality again provides the separation of the Borel transform $B[\Psi^{''}](u)$ in IR and UV renormalon poles. The found general structure is analogous to the one of the Adler function [@ben98]. 
Except for the linear IR pole at $u=2$, being related to the gluon condensate, we have quadratic and linear IR poles at all integer $u\geq 3$. Furthermore, quadratic and linear UV renormalon poles are found for all integer $u\leq -1$. Hence, like for the Adler function, at large orders the perturbative coefficients will be dominated by the quadratic UV renormalon pole at $u=-1$ which lies closest to $u=0$. As is also observed from eq. [(\[Psipplb0res\])]{}, the perturbative series contains a term without renormalon singularities which is related to the scheme dependence of the global prefactor $\alpha_s^C(Q)$. This “no-pole” contribution is absent in the scheme with $C=0$, in which the prefactor is expressed in terms of the invariant coupling $\hat\alpha_s(Q)$. Let us proceed to an investigation of the perturbative expansion for three different choices of the renormalisation scheme. We begin with the ${{\overline{\rm MS}}}$ scheme for both mass and coupling, in which $C_m=C_a=-5/3$, and the coefficients $r_n^{\beta_0}$, introduced in eq. [(\[Psippmhat\])]{}, are found to be $$\begin{aligned} \label{rnlb0MSb} r_1^{\beta_0}({{\overline{\rm MS}}},{{\overline{\rm MS}}}) &=& \frac{16}{3} \,=\, 5.3333 \,, \quad r_2^{\beta_0}({{\overline{\rm MS}}},{{\overline{\rm MS}}}) \,=\, \Big(\,\frac{143}{36} - 2\zeta_3\Big) \beta_1 \,=\, 7.0565 \,, {\nonumber}\\ {\vbox{\vskip 8mm}}r_3^{\beta_0}({{\overline{\rm MS}}},{{\overline{\rm MS}}}) &=& \Big(\,\frac{1465}{324} - \frac{4}{3}\zeta_3 \Big) \beta_1^2 \,=\, 59.107 \,, \\ {\vbox{\vskip 8mm}}r_4^{\beta_0}({{\overline{\rm MS}}},{{\overline{\rm MS}}}) \!&=&\! \Big(\,\frac{17597}{2592} + \frac{5}{6}\zeta_3 - \frac{15}{2}\zeta_5 \Big) \beta_1^3 \,=\, 1.2504 \,. {\nonumber}\end{aligned}$$ The first entry in the argument of $r_n^{\beta_0}$ refers to the scheme for the mass and the second for the coupling. The numerical values have been given for $N_f=3$. Comparing to eq. 
[(\[rnqcd\])]{}, except for the first coefficient $r_1$, the higher-order coefficients are not at all well represented by the large-$\beta_0$ approximation, with a complete failure observed at the fourth order. To obtain a better understanding of this behaviour, the contribution of the lowest-lying renormalon poles to the perturbative large-$\beta_0$ coefficients shall be investigated.

| | $r_1^{\beta_0}$ | $r_2^{\beta_0}$ | $r_3^{\beta_0}$ | $r_4^{\beta_0}$ | $r_5^{\beta_0}$ | $r_6^{\beta_0}$ |
|:---|---:|---:|---:|---:|---:|---:|
| ${\rm UV}_{-1}$ | 25.0 | -56.7 | 31.7 | -15025.0 | 65.2 | 133.9 |
| ${\rm UV}_{-2}$ | -6.2 | 3.5 | 1.1 | 618.5 | 0.4 | -0.8 |
| ${\rm IR}_{2}$ | 18.8 | 69.1 | 42.3 | 10973.5 | 28.4 | -28.4 |
| ${\rm IR}_{3}$ | -2.8 | -6.3 | -1.3 | 349.9 | 2.5 | -3.8 |
| ${\rm IR}_{4}$ | 1.6 | 3.1 | 0.3 | -259.0 | -1.3 | 1.7 |
| No-Pole | 62.5 | 88.6 | 26.4 | 3514.5 | 4.6 | -2.2 |
| SUM | 98.8 | 101.3 | 100.7 | 172.4 | 99.8 | 100.4 |

| | $r_7^{\beta_0}$ | $r_8^{\beta_0}$ | $r_9^{\beta_0}$ | $r_{10}^{\beta_0}$ | $r_{11}^{\beta_0}$ | $r_{12}^{\beta_0}$ |
|:---|---:|---:|---:|---:|---:|---:|
| ${\rm UV}_{-1}$ | 89.6 | 105.7 | 97.7 | 101.1 | 99.5 | 100.2 |
| ${\rm UV}_{-2}$ | 0.0 | -0.1 | 0.0 | 0.0 | 0.0 | 0.0 |
| ${\rm IR}_{2}$ | 9.0 | -4.9 | 2.1 | -1.0 | 0.4 | -0.2 |
| ${\rm IR}_{3}$ | 1.5 | -0.8 | 0.3 | -0.1 | 0.1 | 0.0 |
| ${\rm IR}_{4}$ | -0.6 | 0.3 | -0.1 | 0.0 | 0.0 | 0.0 |
| No-Pole | 0.3 | -0.1 | 0.0 | 0.0 | 0.0 | 0.0 |
| SUM | 99.9 | 100.1 | 100.0 | 100.0 | 100.0 | 100.0 |

: Contribution (in percent) of the lowest-lying ultraviolet (UV) and infrared (IR) renormalon poles as well as the no-pole term to the first 12 perturbative coefficients $r_n^{\beta_0}$ in the ${{\overline{\rm MS}}}$ scheme for both quark mass and renormalon terms.\[tab1\]

In table \[tab1\], the contributions in percent of the two lowest-lying UV renormalon poles at $u=-1,-2$ and three lowest-lying IR renormalon poles at $u=2,3,4$ as well as the no-pole term to the first 12 perturbative coefficients $r_n$ in the large-$\beta_0$
approximation and the ${{\overline{\rm MS}}}$ scheme are presented. It is observed that starting with about the 5th order, the dominance of the lowest-lying UV pole at $u=-1$ sets in. For the first two orders, the no-pole term, which does not contain a renormalon singularity, dominates. Furthermore, for the 4th order, huge cancellations between the different contributions take place. At this order, a precision of 1% on the coefficient $r_4^{\beta_0}$ is reached only when the no-pole term and the UV and IR renormalon contributions up to order $p=15$ are added.

| | $r_1^{\beta_0}$ | $r_2^{\beta_0}$ | $r_3^{\beta_0}$ | $r_4^{\beta_0}$ | $r_5^{\beta_0}$ | $r_6^{\beta_0}$ |
|:---|---:|---:|---:|---:|---:|---:|
| ${\rm UV}_{-1}$ | 66.7 | -496.0 | 43.1 | 440.0 | 68.4 | 131.0 |
| ${\rm UV}_{-2}$ | -16.7 | 31.0 | 1.5 | -18.1 | 0.5 | -0.8 |
| ${\rm IR}_{2}$ | 50.0 | 604.5 | 57.6 | -321.4 | 29.7 | -27.8 |
| ${\rm IR}_{3}$ | -7.4 | -55.1 | -1.7 | -10.2 | 2.6 | -3.7 |
| ${\rm IR}_{4}$ | 4.2 | 27.1 | 0.5 | 7.6 | -1.4 | 1.7 |
| SUM | 96.8 | 111.5 | 100.9 | 97.9 | 99.8 | 100.4 |

| | $r_7^{\beta_0}$ | $r_8^{\beta_0}$ | $r_9^{\beta_0}$ | $r_{10}^{\beta_0}$ | $r_{11}^{\beta_0}$ | $r_{12}^{\beta_0}$ |
|:---|---:|---:|---:|---:|---:|---:|
| ${\rm UV}_{-1}$ | 89.9 | 105.6 | 97.7 | 101.1 | 99.5 | 100.2 |
| ${\rm UV}_{-2}$ | 0.0 | -0.1 | 0.0 | 0.0 | 0.0 | 0.0 |
| ${\rm IR}_{2}$ | 9.1 | -4.9 | 2.1 | -1.0 | 0.4 | -0.2 |
| ${\rm IR}_{3}$ | 1.5 | -0.8 | 0.3 | -0.1 | 0.1 | 0.0 |
| ${\rm IR}_{4}$ | -0.6 | 0.3 | -0.1 | 0.0 | 0.0 | 0.0 |
| SUM | 99.9 | 100.1 | 100.0 | 100.0 | 100.0 | 100.0 |

: Contribution (in percent) of the lowest-lying ultraviolet (UV) and infrared (IR) renormalon poles to the first 12 perturbative coefficients $r_n^{\beta_0}$ in the mixed scheme with $C_m=0$ for the quark mass and ${{\overline{\rm MS}}}$ in the renormalon terms.\[tab2\]

Now, we move to the discussion of renormalisation schemes for which the mass renormalisation is taken at $C_m=0$, and thus the no-pole, logarithmic term of eq.
[(\[Psipplb0res\])]{} vanishes. Since the renormalisation scheme in the mass and in the renormalon contribution can be chosen independently, we still have the freedom to employ a different scheme in the latter case. Using the ${{\overline{\rm MS}}}$ scheme in the Borel integral, $C_a=-5/3$, the first four perturbative coefficients are found to be: $$\begin{aligned} \label{rnlb0Mix} r_1^{\beta_0}({C\!\!=\!0},{{\overline{\rm MS}}}) &=& 2 \,, \quad r_2^{\beta_0}({C\!\!=\!0},{{\overline{\rm MS}}}) \,=\, \Big(\,\frac{31}{12} - 2\zeta_3\Big) \beta_1 \,=\, 0.8065 \,, {\nonumber}\\ {\vbox{\vskip 8mm}}r_3^{\beta_0}({C\!\!=\!0},{{\overline{\rm MS}}}) &=& \Big(\,\frac{15}{4} - \frac{4}{3}\zeta_3 \Big) \beta_1^2 \,=\, 43.482 \,, \\ {\vbox{\vskip 8mm}}r_4^{\beta_0}({C\!\!=\!0},{{\overline{\rm MS}}}) \!&=&\! \Big(\,\frac{5449}{864} + \frac{5}{6}\zeta_3 - \frac{15}{2}\zeta_5 \Big) \beta_1^3 \,=\, -\,42.695 \,.{\nonumber}\end{aligned}$$ It is observed that the first two orders are substantially smaller than in eq. [(\[rnlb0MSb\])]{}, due to the fact that the no-pole term has effectively been resummed into the global prefactor. The third order is of a similar size and the 4th order turns out to be negative, which indicates that the leading UV renormalon singularity is already dominating. This is confirmed by the separated contributions of the lowest-lying IR and UV renormalons, again provided in table \[tab2\]. This time large cancellations between the lowest-lying UV and IR renormalons take place for the second and 4th order. This cancellation could be the reason for an anomalously small second order coefficient. Like in the ${{\overline{\rm MS}}}$ scheme, dominance of the leading UV renormalon at $u=-1$ sets in at about the 5th order. To conclude our discussion of the perturbative expansion of $\Psi^{''}(Q^2)$ in the large-$\beta_0$ approximation, we investigate the scheme with $C_m=C_a=0$ in both no-pole and renormalon contributions. 
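The quoted numerical values in eqs. [(\[rnlb0MSb\])]{} and [(\[rnlb0Mix\])]{} can be reproduced directly from the analytic expressions; in this sketch the $N_f=3$ value $\beta_1=9/2$ in the paper's normalisation is inferred from the quoted numbers, and the $\zeta$ constants are standard:

```python
# N_f = 3 input: beta_1 = 9/2 in the paper's normalisation (inferred from the
# quoted numbers); zeta values are standard constants
B1 = 4.5
Z3, Z5 = 1.2020569031595943, 1.0369277551433699

# MS-bar scheme for both quark mass and coupling, eq. (rnlb0MSb)
r2_MS = (143/36 - 2*Z3) * B1
r3_MS = (1465/324 - 4/3*Z3) * B1**2
r4_MS = (17597/2592 + 5/6*Z3 - 15/2*Z5) * B1**3

# mixed scheme: C_m = 0 for the quark mass, MS-bar coupling, eq. (rnlb0Mix)
r2_mix = (31/12 - 2*Z3) * B1
r3_mix = (15/4 - 4/3*Z3) * B1**2
r4_mix = (5449/864 + 5/6*Z3 - 15/2*Z5) * B1**3
```

The strong cancellation inside the bracket of $r_4^{\beta_0}$ in the ${{\overline{\rm MS}}}$ scheme is clearly visible here: three terms of order ten combine to a few times $10^{-2}$.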
The corresponding first few perturbative coefficients read $$\begin{aligned} \label{rnlb0Ce0} r_1^{\beta_0}({C\!\!=\!0},{C\!\!=\!0}) &=& 2 \,, \quad r_2^{\beta_0}({C\!\!=\!0},{C\!\!=\!0}) \,=\, \Big(\,\frac{11}{12} - 2\zeta_3 \Big) \beta_1 \,=\, -\,6.6935 \,, {\nonumber}\\ {\vbox{\vskip 8mm}}r_3^{\beta_0}({C\!\!=\!0},{C\!\!=\!0}) &=& \Big(\,\frac{5}{6} + 2\zeta_3 \Big) \beta_1^2 \,=\, 65.558 \,, \\ {\vbox{\vskip 8mm}}r_4^{\beta_0}({C\!\!=\!0},{C\!\!=\!0}) \!&=&\! \Big(\,\frac{37}{32} - \frac{15}{2}\zeta_5 \Big) \beta_1^3 \,=\, -\,603.31 \,. {\nonumber}\end{aligned}$$ In this case, the leading UV renormalon dominates already from the lowest order which is reflected in the sign-alternating behaviour of the perturbative coefficients. Also the strong growth of the coefficients that signals the asymptotic behaviour of the series is observed. As an amusing aside, we remark that in this scheme, at each order $n>1$, only the highest possible $\zeta$-function coefficients $\zeta(2\,[n/2+1]-1)$ arise, where $[x]$ denotes the integer value of $x$. In table \[tab3\], once again the contributions in percent to the first 6 perturbative coefficients are presented. As indicated above, in this scheme one finds that already the second coefficient $r_2^{\beta_0}$ is largely dominated by the leading UV renormalon at $u=-1$, and for still higher orders the series is fully dominated by this contribution. The respective behaviour is also expected from the exponential factor $\exp(-C_a u)$ in eq. [(\[BPsipp\])]{} which entails that in the scheme with $C_a=0$ the residues of the IR renormalon poles are no longer enhanced with respect to the UV ones as is the case in the ${{\overline{\rm MS}}}$ scheme. 
| | $r_1^{\beta_0}$ | $r_2^{\beta_0}$ | $r_3^{\beta_0}$ | $r_4^{\beta_0}$ | $r_5^{\beta_0}$ | $r_6^{\beta_0}$ |
|:---|---:|---:|---:|---:|---:|---:|
| ${\rm UV}_{-1}$ | 66.7 | 134.5 | 103.0 | 105.7 | 101.5 | 101.3 |
| ${\rm UV}_{-2}$ | -16.7 | -22.4 | -9.0 | -4.7 | -2.3 | -1.2 |
| ${\rm IR}_{2}$ | 50.0 | -16.8 | 3.9 | -1.4 | 0.5 | -0.2 |
| ${\rm IR}_{3}$ | -7.4 | -1.7 | 0.8 | -0.3 | 0.1 | 0.0 |
| ${\rm IR}_{4}$ | 4.2 | 1.4 | -0.4 | 0.1 | 0.0 | 0.0 |
| SUM | 96.8 | 95.0 | 98.2 | 99.4 | 99.8 | 99.9 |

: Contribution (in percent) of the lowest-lying ultraviolet (UV) and infrared (IR) renormalon poles to the first 6 perturbative coefficients $r_n^{\beta_0}$ in the scheme with $C_m=C_a=0$ for both quark mass and renormalon terms.\[tab3\]

In an analogous fashion to the derivation of eq. [(\[Psipplb0res\])]{}, we can derive an expression for the correlation function $D^L(Q^2)$ of eq. [(\[DLres\])]{} in the large-$\beta_0$ approximation, which reads $$\begin{aligned} \label{DLlb0res} D_{\beta_0}^L(Q^2) &=& -\,\frac{N_c}{8\pi^2}\, {\widehat}m^2 \,[\alpha_s^{C_m}(Q)]^{2\gamma_m^{(1)}/\beta_1} \biggl\{\, 1 - 2\,\frac{\gamma_m^{(1)}}{\beta_1} \ln\Big[ 1 + C_m {\mbox{$\frac{\beta_1}{2}$}}\, a_Q^{C_m} \Big] {\nonumber}\\ {\vbox{\vskip 8mm}}&& +\,\frac{2\pi}{\beta_1}\! \int\limits_0^\infty \!{\rm d}u\, {\rm e}^{-2u/(\beta_1 a_Q^{C_a})}\cdot \frac{3C_F}{2\pi}\,{\rm e}^{-C_a u}\, G_D(u) + \ldots \,\biggr\} \,.\end{aligned}$$ The perturbative expansion of this correlator shall only be discussed in the mixed scheme with $C_m=0$ for the quark mass and ${{\overline{\rm MS}}}$, that is $C_a=-5/3$, for the remainder. Then, the coefficients $\tilde r_n$ of eq.
[(\[DLmhat\])]{} in the large-$\beta_0$ limit are found as $$\begin{aligned} \label{rntlb0} \tilde r_1^{\beta_0}({C\!\!=\!0},{{\overline{\rm MS}}}) &=& 4 \,, \quad \tilde r_2^{\beta_0}({C\!\!=\!0},{{\overline{\rm MS}}}) \,=\, \Big(\,\frac{25}{4} - 2\zeta_3\Big) \beta_1 \,=\, 17.3065 \,, {\nonumber}\\ {\vbox{\vskip 8mm}}\tilde r_3^{\beta_0}({C\!\!=\!0},{{\overline{\rm MS}}}) &=& \Big(\,\frac{205}{18} - \frac{10}{3}\zeta_3 \Big) \beta_1^2 \,=\, 149.486 \,, \\ {\vbox{\vskip 8mm}}\tilde r_4^{\beta_0}({C\!\!=\!0},{{\overline{\rm MS}}}) &=& \Big(\,\frac{21209}{864} - \frac{25}{6}\zeta_3 - \frac{15}{2}\zeta_5 \Big) \beta_1^3 \,=\, 1071.81 \,, {\nonumber}\end{aligned}$$ where like before the numerical values have been given at $N_f=3$. It is again observed that the coefficients $\tilde r_n^{\beta_0}$ are substantially worse behaved than the coefficients $r_n^{\beta_0}$.

| | $\tilde r_1^{\beta_0}$ | $\tilde r_2^{\beta_0}$ | $\tilde r_3^{\beta_0}$ | $\tilde r_4^{\beta_0}$ | $\tilde r_5^{\beta_0}$ | $\tilde r_6^{\beta_0}$ |
|:---|---:|---:|---:|---:|---:|---:|
| ${\rm UV}_{-1}$ | 33.3 | -5.8 | 9.5 | -8.6 | 8.2 | -10.6 |
| ${\rm UV}_{-2}$ | -8.3 | -2.9 | -1.1 | -0.3 | -0.1 | 0.0 |
| ${\rm UV}_{1}$ | 100.0 | 138.7 | 109.9 | 123.1 | 99.1 | 115.3 |
| ${\rm IR}_{2}$ | -25.0 | -28.2 | -16.7 | -12.8 | -6.4 | -4.2 |
| ${\rm IR}_{3}$ | -3.7 | -4.5 | -2.8 | -2.3 | -1.1 | -0.7 |
| SUM | 96.3 | 97.3 | 98.8 | 99.2 | 99.7 | 99.8 |

| | $\tilde r_7^{\beta_0}$ | $\tilde r_8^{\beta_0}$ | $\tilde r_9^{\beta_0}$ | $\tilde r_{10}^{\beta_0}$ | $\tilde r_{11}^{\beta_0}$ | $\tilde r_{12}^{\beta_0}$ |
|:---|---:|---:|---:|---:|---:|---:|
| ${\rm UV}_{-1}$ | 9.6 | -13.2 | 11.3 | -16.2 | 13.1 | -19.4 |
| ${\rm UV}_{-2}$ | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| ${\rm UV}_{1}$ | 92.5 | 114.5 | 89.2 | 116.5 | 87.0 | 119.5 |
| ${\rm IR}_{2}$ | -1.8 | -1.2 | -0.5 | -0.3 | -0.1 | -0.1 |
| ${\rm IR}_{3}$ | -0.3 | -0.2 | -0.1 | 0.0 | 0.0 | 0.0 |
| SUM | 99.9 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |

: Contribution (in percent) of the lowest-lying ultraviolet (UV) and infrared (IR) renormalon poles to the first 12 perturbative coefficients $\tilde r_n^{\beta_0}$ in the mixed scheme with $C_m=0$ for the quark mass and ${{\overline{\rm MS}}}$ in the renormalon terms.\[tab4\]

Similarly to table \[tab2\], in table \[tab4\] the contributions in percent of the three lowest-lying UV renormalon poles at $u=1,-1,-2$ and two lowest-lying IR renormalon poles at $u=2,3$ to the first 12 perturbative coefficients $\tilde r_n$ in the large-$\beta_0$ approximation and the mixed scheme are presented. The surprising finding, which can also be inferred directly from eq. [(\[DLlb0res\])]{}, is that the function $D^L(Q^2)$ suffers from an additional, spurious renormalon pole at $u=1$. This observation was, of course, already made in ref. [@bkm00]. Because the linear $u=1$ pole has a larger residue than the UV renormalon pole at $u=-1$, it dominates the perturbative coefficients for a large number of orders, before the quadratic UV pole at $u=-1$ takes over.[^6] The origin of the renormalon pole at $u=1$ can be understood from eq. [(\[DL\])]{}. In the construction of $D^L(Q^2)$, the term $\Psi(0)/s$ is subtracted. As will be explained in more detail in appendix \[appC\], the subtraction constant $\Psi(0)$ consists of a contribution from the quark condensate and a UV-divergent perturbative term proportional to $m^4$. The subtraction of this divergent term leads to an ambiguity which results in the emergence of the additional renormalon at $u=1$, and since it is of UV origin, in table \[tab4\] we have labelled the pole accordingly. Because of this spurious renormalon pole, it generally appears advisable to avoid the correlator $D^L(Q^2)$ in phenomenological analyses. A detailed discussion of the third physical observable related to the scalar correlator, ${\mbox{\rm Im}}\Psi(s)$, in the large-$\beta_0$ limit, has been presented in ref.
[@bkm00], and therefore, we shall not repeat it here. We only remark that, like $\Psi^{''}(s)$, the spectral function does not suffer from a renormalon pole at $u=1$. In the case of $\Psi^{''}(s)$, this pole contribution, which is present in the independent perturbative coefficients $d_{n,1}$, is cancelled by the term $2d_{n,2}$ (see eq. [(\[Psippres\])]{}), which individually also receives contributions from a pole at $u=1$. In the case of ${\mbox{\rm Im}}\Psi(s)$, those $u=1$ pole contributions are cancelled by the $(i\pi)^{2l}$ terms multiplying $d_{n,2l+1}$ coefficients with $l\geq 1$ (see eq. [(\[ImPsi\])]{}). To conclude, from the investigation of the scalar correlator in the large-$\beta_0$ approximation, it appears advantageous to express at least the global prefactor proportional to $\alpha_s^{2\gamma_m^{(1)}/\beta_1}$ in terms of a scheme-invariant coupling $\hat\alpha_s$, such that the quark mass factor is fully scheme independent. In the next section, we shall investigate the options for such a definition of $\hat\alpha_s$ in full QCD and will study its implications in section \[sect5\].

Scheme variations of the QCD coupling {#sect4}
=====================================

The aim of this section is to define a class of renormalisation schemes in which the running of the QCD coupling is scheme invariant; in particular, it depends only on the two leading $\beta$-function coefficients $\beta_1$ and $\beta_2$. In addition, scheme transformations of this coupling can be parametrised by just one parameter $C$, corresponding to transformations of the QCD $\Lambda$-parameter, which sets the scale.
Our starting point for the construction of this class of couplings is the scale-invariant parameter $\Lambda$ that can be defined as $$\label{Lambda} \Lambda \,\equiv\, Q\, {\rm e}^{-\frac{1}{\beta_1 a_Q}} \,[ a_Q ]^{-\frac{\beta_2}{\beta_1^2}} \exp\Biggl\{\,\int\limits_0^{a_Q}\,\frac{{\rm d}a}{\tilde\beta(a)}\Biggr\}\,,$$ where $$\frac{1}{\tilde\beta(a)} \,\equiv\, \frac{1}{\beta(a)} - \frac{1}{\beta_1 a^2} + \frac{\beta_2}{\beta_1^2 a} \,,$$ which is free of singularities in the limit $a\to 0$. Consider a scheme transformation to a new coupling $a'$, which takes the general form $$\label{ap} a' \,\equiv\, a + c_1\,a^2 + c_2\,a^3 + c_3\,a^4 + \ldots \,.$$ The $\Lambda$-parameter in the new scheme, $\Lambda'$, depends only on $c_1$ and not on the remaining higher-order coefficients [@cg79]. The exact relation between the $\Lambda$-parameters is given by $$\label{Lambdap} \Lambda' \,=\, \Lambda\,{\rm e}^{c_1/\beta_1} \,.$$ This motivates the definition of a new coupling $\tilde a_Q$, which is scheme invariant, except for shifts in the $\Lambda$-parameter, parametrised by the constant $C$: $$\label{atilde} \frac{1}{\beta_1\tilde a_Q} \,\equiv\, \ln\frac{Q}{\Lambda} + \frac{C}{2} \,=\, \frac{1}{\beta_1 a_Q} + \frac{C}{2} + \frac{\beta_2}{\beta_1^2} \ln a_Q - \int\limits_0^{a_Q}\,\frac{{\rm d}a}{\tilde\beta(a)} \,.$$ As in the last section, we might have termed the new coupling $\tilde a_Q^C$, in order to indicate its scheme dependence, but for notational simplicity, we drop the superscript. In the large-$\beta_0$ limit and the ${{\overline{\rm MS}}}$ scheme, the value $C=-5/3$ led to the invariant construction of eq. [(\[ahatlb0\])]{}. As shall be discussed further below, in full QCD the construction of a universal scheme-invariant coupling appears not to be possible. The combination [(\[atilde\])]{} was already introduced in refs. [@byz92; @ben93], where it was noted that an unpleasant feature of $\tilde a_Q$ is the presence of the non-analytic logarithmic term.
However, we can get rid of it by an implicit construction of another coupling $\hat a$, this time defined by $$\label{ahat} \frac{1}{\hat a_Q} \,\equiv\, \beta_1 \Big( \ln\frac{Q}{\Lambda} + \frac{C}{2} \Big) - \frac{\beta_2}{\beta_1} \ln\hat a_Q \,=\, \frac{1}{a_Q} + \frac{\beta_1}{2}\,C + \frac{\beta_2}{\beta_1} \ln\frac{a_Q}{\hat a_Q} - \beta_1 \!\int\limits_0^{a_Q}\,\frac{{\rm d}a}{\tilde\beta(a)} \,,$$ which in perturbation theory should be interpreted in an iterative sense. It is a straightforward matter to deduce from eq. [(\[ahat\])]{} the perturbative relations that provide the transformations between the coupling $a$ in a particular scheme and the coupling ${\hat a}$. Up to fourth order, taking $a$ as well as the corresponding $\beta$-function coefficients in the ${{\overline{\rm MS}}}$ scheme, and for $N_f=3$, we find $$\begin{aligned} \label{ahatofa} {\hat a}(a) &=& a - {\mbox{$\frac{9}{4}$}}\,C\,a^2 - \big( {\mbox{$\frac{3397}{2592}$}} + 4 C - {\mbox{$\frac{81}{16}$}}\,C^2 \big) a^3 {\nonumber}\\ {\vbox{\vskip 6mm}}&& -\,\big( {\mbox{$\frac{741103}{186624}$}} + {\mbox{$\frac{233}{192}$}}\,C - {\mbox{$\frac{45}{2}$}}\,C^2 + {\mbox{$\frac{729}{64}$}}\,C^3 + {\mbox{$\frac{445}{144}$}}\zeta_3 \big) a^4 + {{\cal O}}(a^5) \,,\end{aligned}$$ as well as $$\begin{aligned} \label{aofahat} a({\hat a}) &=& {\hat a}+ {\mbox{$\frac{9}{4}$}}\,C\,{\hat a}^2 + \big( {\mbox{$\frac{3397}{2592}$}} + 4 C + {\mbox{$\frac{81}{16}$}}\,C^2 \big) {\hat a}^3 {\nonumber}\\ {\vbox{\vskip 6mm}}&& +\,\big( {\mbox{$\frac{741103}{186624}$}} + {\mbox{$\frac{18383}{1152}$}}\,C + {\mbox{$\frac{45}{2}$}}\,C^2 + {\mbox{$\frac{729}{64}$}}\,C^3 + {\mbox{$\frac{445}{144}$}}\zeta_3 \big) {\hat a}^4 + {{\cal O}}({\hat a}^5) \,.\end{aligned}$$ As the next step, we investigate the running of the coupling ${\hat a}$. 
To this end, we first have to derive its $\beta$-function which is found to have the simple form $$\label{betahat} -\,\mu\,\frac{{\rm d}{\hat a}}{{\rm d}\mu} \,\equiv\, \hat\beta({\hat a}) \,=\, \frac{\beta_1 \hat a^2}{\left(1 - {\mbox{$\frac{\beta_2}{\beta_1}$}}\, \hat a\right)} \,.$$ Obviously, as is seen explicitly, it only depends on the scheme-invariant $\beta$-function coefficients $\beta_1$ and $\beta_2$. However, our scheme is different from the ’t Hooft scheme for which $\beta(a)=\beta_1 a^2+\beta_2 a^3$ [@tHo77]. We also note that non-trivial zeros of $\hat\beta({\hat a})$ can only arise in the case of $\beta_1=0$. Integrating the RGE [(\[betahat\])]{}, one obtains $$\label{ahatrun} \frac{1}{{\hat a}_Q} \,=\, \frac{1}{{\hat a}_\mu} + \frac{\beta_1}{2}\ln\frac{Q^2}{\mu^2} - \frac{\beta_2}{\beta_1}\ln\frac{{\hat a}_Q}{{\hat a}_\mu} \,.$$ Again, this implicit equation for ${\hat a}_Q$ can either be solved iteratively, to provide a perturbative expansion, or, of course, numerically. In the following section, we shall investigate the phenomenological implications of re-expressing the perturbative expansion in terms of ${\hat a}$ for the scalar correlation function. Before turning to the phenomenological applications, however, we point out the possibility of defining a fully scheme-invariant coupling. Since the QCD coupling is not directly measurable, such a definition would have to be based on a particular physical observable, for example the QCD Adler function. In the past, such definitions have been discussed in the literature. (See e.g. refs. [@gru80; @gru84].) However, then the definition of the coupling is non-universal and its $\Lambda$-parameter and $\beta$-function depend on the perturbative expansion coefficients of the physical quantity. For this reason, we prefer to stick to the universal coupling ${\hat a}$ according to the definition [(\[ahat\])]{}, and study the behaviour of physical observables under variation of the parameter $C$. 
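Before moving on, the relations of this section can be checked numerically: the expansions of eqs. [(\[ahatofa\])]{} and [(\[aofahat\])]{} must be functional inverses of each other up to terms of order $a^5$, and since eq. [(\[ahatrun\])]{} is exact, running up in scale and back down must reproduce the input coupling. A minimal sketch of both checks (the $N_f=3$ values $\beta_1=9/2$, $\beta_2=8$ follow from appendix \[appA\]; the inputs $a=0.005$, $C=1$ and ${\hat a}_\mu=0.1$ are arbitrary test values, not fitted quantities):

```python
from math import log

Z3 = 1.2020569031595943        # Riemann zeta(3)
BETA1, BETA2 = 9/2, 8.0        # N_f = 3: beta1 = 11/2 - N_f/3, beta2 = 51/4 - 19*N_f/12

def ahat_of_a(a, C):
    """Truncated expansion (ahatofa) of ahat in the MS-bar coupling a (N_f = 3)."""
    return (a - 9/4*C*a**2
            - (3397/2592 + 4*C - 81/16*C**2)*a**3
            - (741103/186624 + 233/192*C - 45/2*C**2 + 729/64*C**3 + 445/144*Z3)*a**4)

def a_of_ahat(ah, C):
    """Truncated inverse expansion (aofahat)."""
    return (ah + 9/4*C*ah**2
            + (3397/2592 + 4*C + 81/16*C**2)*ah**3
            + (741103/186624 + 18383/1152*C + 45/2*C**2 + 729/64*C**3 + 445/144*Z3)*ah**4)

def run_ahat(ahat_mu, Q2_over_mu2, tol=1e-14, max_iter=200):
    """Solve the implicit running equation (ahatrun) for ahat_Q by fixed-point iteration."""
    ah = ahat_mu  # starting guess
    for _ in range(max_iter):
        rhs = 1/ahat_mu + BETA1/2*log(Q2_over_mu2) - BETA2/BETA1*log(ah/ahat_mu)
        ah_new = 1/rhs
        if abs(ah_new - ah) < tol:
            break
        ah = ah_new
    return ah_new

# check 1: composing the two truncated series leaves only an O(a^5) residual
a, C = 0.005, 1.0
residual = abs(a_of_ahat(ahat_of_a(a, C), C) - a)

# check 2: running up to Q = 2*mu and back down recovers the input
ahat_mu = 0.1
ahat_Q = run_ahat(ahat_mu, 4.0)     # coupling decreases towards larger scales
ahat_back = run_ahat(ahat_Q, 0.25)
```

The fixed-point map converges here because its contraction factor, roughly $(\beta_2/\beta_1)\,{\hat a}$, is small for perturbative values of the coupling.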
Phenomenological applications {#sect5} ============================= Let us now investigate the phenomenological implications of introducing the QCD coupling ${\widehat}\alpha_s$ of eq. [(\[ahat\])]{}. We begin by doing this on the basis of the scalar correlator $\Psi_{\rm PT}^{''}$ of eq. [(\[Psippmhat\])]{}, where, as a first step, the coupling in the prefactor, originating in the running of the quark mass, is re-expressed in terms of ${\widehat}\alpha_s$. Defining the quantity ${\widehat}\Psi^{''}({\alpha_s})$, which just contains the dependence on the coupling, $$\label{PsippSI} \Psi_{\rm PT}^{''}(Q^2) \,\equiv\, \frac{N_c}{8\pi^2}\, \frac{{\widehat}m^2}{Q^2}\; {\widehat}\Psi^{''}({\alpha_s}) \,,$$ and employing the transformation of the QCD coupling provided in eq. [(\[ahatofa\])]{}, we find: $$\begin{aligned} \label{psipphat} {\widehat}\Psi^{''}({\alpha_s}) &=& [{\widehat}\alpha_s(Q)]^{8/9} \big\{\, 1 + (5.4568 + 2\,C)\,a_Q + (25.452 + 14.469\,C - 0.25\,C^2)\,a_Q^2 {\nonumber}\\ {\vbox{\vskip 6mm}}&+& (135.29 + 74.006\,C - 6.2531\,C^2 + 0.20833\,C^3)\,a_Q^3 {\nonumber}\\ {\vbox{\vskip 6mm}}&+& (824.05 + 367.82\,C - 56.089\,C^2 + 9.2479\,C^3 - 0.24740\,C^4) \,a_Q^4 + \ldots \big\} .\end{aligned}$$ Thus far the coupling $a_Q$ within the curly brackets is left in the ${{\overline{\rm MS}}}$ scheme. We will proceed with investigating this case numerically and then, in a second step, also rewrite these contributions in terms of ${\hat a}_Q$. ![${\widehat}\Psi^{''}({\alpha_s})$ according to eq. [(\[psipphat\])]{} as a function of $C$ for ${\alpha_s}(M_\tau)=0.316$. The yellow band corresponds to either removing or doubling the ${{\cal O}}(a^4)$ correction to estimate the respective uncertainty. At the red point, where ${{\cal O}}(a^4)$ vanishes, the third order is taken as the error.
For further discussion, see the text.\[fig1\]](psipp.pdf){height="6.4cm"} To this end, figure \[fig1\] displays a numerical account of the behaviour of ${\widehat}\Psi^{''}$ as a function of the scheme parameter $C$. As we are interested in applications to hadronic $\tau$ decays in the future, for definiteness, we have chosen ${\alpha_s}(M_\tau)=0.316$ in the ${{\overline{\rm MS}}}$ scheme, which corresponds to the current PDG average ${\alpha_s}(M_Z)=0.1181$ [@pdg15]. The coupling ${\widehat}\alpha_s(Q)$ required in the prefactor has been determined by directly solving eq. [(\[ahat\])]{} numerically, not via the expansion [(\[ahatofa\])]{}. In order to estimate the uncertainty in the perturbative prediction, the fourth order term is either removed or doubled. The steepest curve in figure \[fig1\] then corresponds to setting the ${{\cal O}}(a_Q^4)$ contribution to zero and the flattest one to doubling it. The yellow band hence corresponds to the region of expected values for ${\widehat}\Psi^{''}$, depending on the parameter $C$. It is observed that at $C=-1.683$ the ${{\cal O}}(a_Q^4)$ correction vanishes. Interestingly enough, this value is surprisingly close to $C=-5/3$ in large-$\beta_0$, which enters the construction of the invariant coupling [(\[ahatlb0\])]{} in the ${{\overline{\rm MS}}}$ scheme, though, presumably, this is merely a coincidence. The red data point then indicates an estimate where the uncertainty is taken to be the size of the third-order term. At this value of $C$, the third-order correction has already turned negative and, beyond it, also the ${{\cal O}}(a_Q^4)$ contribution changes sign. This is an indication that in the respective region of $C$ the contributions from IR and UV renormalons are more balanced. To obtain a more complete picture, also the uncertainty of $\alpha_s$ should be folded in. From the PDG average ${\alpha_s}(M_Z)=0.1181(13)$ [@pdg15], we deduce ${\alpha_s}(M_\tau)=0.316(10)$. 
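The quoted zero of the ${{\cal O}}(a_Q^4)$ term can be reproduced directly from the coefficients of eq. [(\[psipphat\])]{}; a small numerical sketch using bisection (the bracketing interval is chosen by inspecting the sign change):

```python
def c4_psipp(C):
    """O(a_Q^4) coefficient inside the curly brackets of eq. (psipphat)."""
    return 824.05 + 367.82*C - 56.089*C**2 + 9.2479*C**3 - 0.24740*C**4

def bisect(f, lo, hi, n=60):
    """Locate a sign change of f on [lo, hi] by bisection."""
    for _ in range(n):
        mid = 0.5*(lo + hi)
        if f(lo)*f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

# c4_psipp(-2) < 0 while c4_psipp(-1) > 0, so a zero lies in between
C_star = bisect(c4_psipp, -2.0, -1.0)  # expected close to C = -1.683
```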
Numerically, our result at $C=-1.683$ then reads $$\label{Oa4zero} {\widehat}\Psi^{''}(C=-1.683) \,=\, 0.774 \pm 0.005^{\,+\,0.058}_{\,-\,0.052} \,=\, 0.774^{\,+\,0.058}_{\,-\,0.052} \,,$$ where the first error corresponds to the ${{\cal O}}(a_Q^3)$ correction also displayed in figure \[fig1\], while the second error results from the current uncertainty in $\alpha_s$. The total error on the right-hand side has been obtained by adding the individual uncertainties in quadrature. The value [(\[Oa4zero\])]{} can be compared to the result at $C=0$, $$\label{Ceq0} {\widehat}\Psi^{''}(C=0) \,=\, 0.715 \pm 0.030^{\,+\,0.040}_{\,-\,0.038} \,=\, 0.715^{\,+\,0.050}_{\,-\,0.048} \,.$$ The two predictions [(\[Oa4zero\])]{} and [(\[Ceq0\])]{} are found to be compatible and have similar uncertainties. At present, the error on $\alpha_s$ is dominant. While in the prediction [(\[Oa4zero\])]{} the estimated uncertainty from missing higher orders is substantially reduced, its sensitivity to $\alpha_s$, and hence to its uncertainty, is increased. This is due to the fact that at $C=-1.683$, symmetrising the error, one finds ${\widehat}\alpha_s=0.610\pm 0.045$. This increased sensitivity to $\alpha_s$ may also be seen as a virtue if one aims at an extraction of $\alpha_s$ along the lines of [@bj08; @bgjmmop12; @bgmop14; @pr16]. In this respect, further understanding of the behaviour of the perturbative series, for example, through models for the Borel transform in the spirit of ref. [@bj08], could be helpful. As a last remark, it is pointed out that at the scale of $M_\tau$, for $C< -2$, the scheme transformation ceases to be perturbative and breaks down. Therefore, such values should be discarded for phenomenology. We proceed with our second step of also expressing the coupling $a_Q$ within the curly brackets of eq. [(\[psipphat\])]{} in terms of ${\hat a}_Q$.
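The quadrature combination of the individual errors quoted in eqs. [(\[Oa4zero\])]{} and [(\[Ceq0\])]{} is elementary and can be verified directly:

```python
from math import sqrt

def in_quadrature(*errors):
    """Combine independent uncertainties in quadrature."""
    return sqrt(sum(e*e for e in errors))

# eq. (Oa4zero): series error 0.005, alpha_s-induced errors +0.058/-0.052
up_1, down_1 = in_quadrature(0.005, 0.058), in_quadrature(0.005, 0.052)
# eq. (Ceq0): series error 0.030, alpha_s-induced errors +0.040/-0.038
up_2, down_2 = in_quadrature(0.030, 0.040), in_quadrature(0.030, 0.038)
```

Rounded to three decimals, this reproduces the total errors $^{+0.058}_{-0.052}$ and $^{+0.050}_{-0.048}$ quoted above.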
As a matter of principle, we could introduce two different scheme constants $C_m$ and $C_a$, related to mass and coupling renormalisation, respectively, since the global prefactor originates from the quark mass, and the remaining expansion concerns the QCD coupling. To keep the discussion more transparent, however, we prefer to only use a single common constant $C=C_m=C_a$. Then the expansion in ${\hat a}_Q$ takes the form $$\begin{aligned} \label{psipphat2} {\widehat}\Psi^{''}({\widehat}\alpha_s) &=& [{\widehat}\alpha_s(Q)]^{8/9} \big\{\, 1 + (5.4568 + 2\,C)\,{\hat a}_Q + (25.452 + 26.747\,C + 4.25\,C^2)\,{\hat a}_Q^2 {\nonumber}\\ {\vbox{\vskip 6mm}}&+& (142.44 + 212.99\,C + 94.483\,C^2 + 9.2083\,C^3)\,{\hat a}_Q^3 {\nonumber}\\ {\vbox{\vskip 6mm}}&+& (932.71 + 1625.0\,C + 1099.8\,C^2 + 291.95\,C^3 + 20.143\,C^4) \,{\hat a}_Q^4 + \ldots \big\} .\end{aligned}$$ The corresponding graphical representation of this result is displayed in figure \[fig2\]. In this case, the order ${\hat a}^4$ correction does not vanish for any sensible value of $C$. The smallest uncertainty is assumed around $C\approx -0.9$, at which one deduces $$\label{Cmin} {\widehat}\Psi^{''}(C=-0.9) \,=\, 0.753 \pm 0.022^{\,+\,0.050}_{\,-\,0.046} \,=\, 0.753^{\,+\,0.055}_{\,-\,0.051} \,.$$ In figure \[fig2\], the first error is shown as the red data point and the second again corresponds to the uncertainty induced from the error on $\alpha_s$. In view of the large $\alpha_s$ error, the result [(\[Cmin\])]{} is again fully compatible with [(\[Oa4zero\])]{} and [(\[Ceq0\])]{}. ![${\widehat}\Psi^{''}({\widehat}\alpha_s)$ according to eq. [(\[psipphat2\])]{} as a function of $C$ for ${\alpha_s}(M_\tau)=0.316$. The yellow band corresponds to either removing or doubling the ${{\cal O}}({\hat a}^4)$ correction to estimate the respective uncertainty. At the red point, the uncertainty resulting from the ${{\cal O}}({\hat a}^4)$ contribution is minimal. 
For further discussion, see the text.\[fig2\]](psipp2.pdf){height="6.4cm"} Let us now turn to the decay of the Higgs boson into quark-antiquark pairs. The corresponding decay width is given by $$\label{GammaH} \Gamma(H\to q\bar q) \,=\, \frac{\sqrt{2}\,G_F}{M_H}\, {\mbox{\rm Im}}\Psi\big(M_H^2+i0\big) \,\equiv\, \frac{N_c\,G_F M_H}{4\sqrt{2}\pi}\, {\widehat}m_q^2\, {\widehat}R\big(\alpha_s(M_H)\big) \,,$$ which defines the function ${\widehat}R\big(\alpha_s(M_H)\big)$. We proceed in analogy to the case of $\Psi^{''}$ by first expressing only the global prefactor in terms of the coupling ${\widehat}\alpha_s$, which results in $$\begin{aligned} \label{Rhat} {\widehat}R({\alpha_s}) &=& [{\widehat}\alpha_s(Q)]^{24/23} \big\{\, 1 + (8.0176 + 2\,C)\,a_Q + (46.732 + 18.557\,C + 0.08333\,C^2)\,a_Q^2 {\nonumber}\\ {\vbox{\vskip 6mm}}&+& (142.12 + 117.09\,C - 1.5384\,C^2 - 0.05093\,C^3)\,a_Q^3 {\nonumber}\\ {\vbox{\vskip 6mm}}&-& (544.67 - 426.17\,C + 22.522\,C^2 - 2.2856\,C^3 - 0.04774\,C^4) \,a_Q^4 + \ldots \big\} .\end{aligned}$$ Here the number of flavours $N_f=5$ and $Q=M_H$. For ${\alpha_s}(M_H)=0.1127$, a graphical representation of ${\widehat}R({\alpha_s})$ as a function of $C$ is given in figure \[fig3\]. Because the coupling now is much smaller than at the $\tau$ scale, the perturbative expansion converges faster, and thus the typical ${{\cal O}}(a^4)$ term is substantially smaller than the order $a^3$ correction at $C=1.362$, where ${{\cal O}}(a^4)$ vanishes. This is obvious from the large error bar of the red point. The corresponding numerical result reads $$\label{RhOa4zero} {\widehat}R(C=1.362) \,=\, 0.1387 \pm 0.0013 \pm 0.0020 \,=\, 0.1387 \pm 0.0024 \,,$$ where the second error again results from the variation $\alpha_s(M_H)=0.1127(12)$ which has been deduced from the PDG average. Still, even though the large ${{\cal O}}(a^3)$ uncertainty has been assumed, the current error from the $\alpha_s$ input is even bigger. 
![${\widehat}R({\alpha_s})$ according to eq. [(\[Rhat\])]{} as a function of $C$ for ${\alpha_s}(M_H)=0.1127$. The yellow band corresponds to either removing or doubling the ${{\cal O}}(a^4)$ correction to estimate the respective uncertainty. At the red point, where ${{\cal O}}(a^4)$ vanishes, the third order is taken as the error. For further discussion, see the text.\[fig3\]](Rhat.pdf){height="6.4cm"} ![${\widehat}R({\widehat}\alpha_s)$ according to eq. [(\[Rhat2\])]{} as a function of $C$ for ${\alpha_s}(M_H)=0.1127$. The yellow band corresponds to either removing or doubling the ${{\cal O}}({\hat a}^4)$ correction to estimate the respective uncertainty. At the red points, where ${{\cal O}}({\hat a}^4)$ vanishes, the third order is taken as the error. For further discussion, see the text.\[fig4\]](Rhat2.pdf){height="6.4cm"} As for $\Psi^{''}$, in a second step we also express the remaining $\alpha_s$ series for the Higgs decay in powers of ${\hat a}$. This yields $$\begin{aligned} \label{Rhat2} {\widehat}R({\widehat}\alpha_s) &=& [{\widehat}\alpha_s(Q)]^{24/23} \big\{\, 1 + (8.0176 + 2\,C) \,{\hat a}_Q + (46.732 + 33.924\,C + 3.9167\,C^2)\,{\hat a}_Q^2 {\nonumber}\\ {\vbox{\vskip 6mm}}&+& (141.19 + 315.38\,C + 103.88\,C^2 + 7.6157\,C^3)\,{\hat a}_Q^3 {\nonumber}\\ {\vbox{\vskip 6mm}}&-& (524.03 - 1491.9\,C - 1353.1\,C^2 - 277.97\,C^3 - 14.756\,C^4) \,{\hat a}_Q^4 + \ldots \big\} ,\end{aligned}$$ and the corresponding behaviour as a function of $C$ is presented in figure \[fig4\]. This time, two values of $C$ are found at which the ${{\cal O}}({\hat a}^4)$ correction vanishes, and they are again displayed as the red data points. In both cases, as before, the corresponding uncertainty inferred from the size of the third order is much larger than a typical fourth-order term.
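The zeros of the fourth-order terms quoted for eqs. [(\[Rhat\])]{} and [(\[Rhat2\])]{} can likewise be reproduced from the transcribed coefficients; a sketch by bisection, with brackets chosen by inspection of the sign changes:

```python
def c4_Rhat(C):
    """Polynomial multiplying -a_Q^4 in eq. (Rhat); its zero means a vanishing O(a^4) term."""
    return 544.67 - 426.17*C + 22.522*C**2 - 2.2856*C**3 - 0.04774*C**4

def c4_Rhat2(C):
    """Polynomial multiplying -ahat_Q^4 in eq. (Rhat2)."""
    return 524.03 - 1491.9*C - 1353.1*C**2 - 277.97*C**3 - 14.756*C**4

def bisect(f, lo, hi, n=60):
    """Locate a sign change of f on [lo, hi] by bisection."""
    for _ in range(n):
        mid = 0.5*(lo + hi)
        if f(lo)*f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

root_Rhat = bisect(c4_Rhat, 1.0, 1.5)    # expected near C = 1.362
root_low = bisect(c4_Rhat2, -2.5, -2.0)  # expected near C = -2.079
root_high = bisect(c4_Rhat2, 0.0, 0.5)   # expected near C = 0.277
```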
The corresponding numerical results are given by $$\label{Rh2Oa4zero1} {\widehat}R(C=-2.079) \,=\, 0.1386 \pm 0.0012 \pm 0.0020 \,=\, 0.1386 \pm 0.0023 \,,$$ and $$\label{Rh2Oa4zero2} {\widehat}R(C=0.277) \,=\, 0.1387 \pm 0.0010 \pm 0.0020 \,=\, 0.1387 \pm 0.0022 \,,$$ where the second error once more is due to the $\alpha_s$ uncertainty and the final errors result from a quadratic average. In a situation like this, in our opinion a conservative estimate of higher-order corrections can be obtained by assuming the maximal ${{\cal O}}({\hat a}^4)$ correction between those two points and taking that as the perturbative uncertainty. This approach is shown as the blue point, and the numerical value reads $$\label{Rh2Oa4max} {\widehat}R(C=-0.94) \,=\, 0.1387 \pm 0.0002 \pm 0.0020 \,=\, 0.1387 \pm 0.0020 \,.$$ It is clear that now the higher-order uncertainty is completely negligible with respect to the present error in $\alpha_s$. To summarise, rewriting the perturbative expansion in terms of the coupling ${\widehat}\alpha_s$ of eq. [(\[ahat\])]{} introduces interesting approaches to improve the convergence of the series for the known low-order corrections, before the asymptotic behaviour sets in. We demonstrated this explicitly for the correlator $\Psi^{''}(s)$ at the scale $M_\tau$ and for the decay of the Higgs boson into quarks which is related to ${\mbox{\rm Im}}\Psi(s)$ at the scale $M_H$. In both examples, however, the parametric uncertainty induced by the error on $\alpha_s$ dominates. This is in part due to the recent increase in the $\alpha_s$ uncertainty of the PDG average [@pdg15] by more than a factor of two, in view of an earlier analysis of $\alpha_s$ determinations from lattice QCD by the FLAG collaboration [@flag14]. Hence, we expect our findings to increase in importance when the uncertainty on $\alpha_s$ again shrinks in the future. 
Still, in view of the potential to strengthen the sensitivity to $\alpha_s$, our approach could also open promising options for improved non-lattice $\alpha_s$ determinations. Conclusions {#sect6} =========== The scalar correlation function is one of the basic QCD two-point correlators with important phenomenological applications for the decay of the Higgs boson to quark-antiquark pairs [@djou08], determinations of light quark masses from QCD sum rules [@jop02; @jop06] and contributions to hadronic decays of the $\tau$ lepton [@pp98; @pp99; @gjpps03]. Presently, the perturbative expansion of the scalar correlator is known up to order $\alpha_s^4$ in the strong coupling [@bck05]. Three physical functions related to the scalar correlator play a role for phenomenological studies: ${\mbox{\rm Im}}\Psi(s)$ in Higgs decay, $\Psi^{''}(s)$ for quark-mass extractions and $D^L(s)$ in finite-energy sum rule analyses of hadronic $\tau$ decays. From the known perturbative coefficients it is observed that the renormalisation-group resummed $D^L(s)$ only depends on the independent coefficients $d_{n,1}$, and those corrections turn out to be much larger than those for $\Psi^{''}(s)$ and ${\mbox{\rm Im}}\Psi(s)$, for which combinations of the $d_{n,1}$ and $d_{n,k}$ with $k>1$ appear. The latter coefficients are calculable from the renormalisation group and only depend on lower-order $d_{n,1}$, $\beta$-function coefficients, and those of the mass anomalous dimension. In order to understand this pattern of higher-order corrections better, we reviewed the results for the scalar correlator in the large-$\beta_0$ approximation [@bkm00], and derived compact expressions for the correlators $\Psi^{''}(s)$ and $D^L(s)$ in terms of Borel transforms, which directly give access to the renormalon structure of the respective correlators.
While in the case of $\Psi^{''}(s)$ this structure is analogous to that of the Adler function (double and single IR renormalon poles for $u\geq 2$, with only a single pole at $u=2$, as well as double and single UV poles for $u\leq -1$), for the correlator $D^L(s)$ an additional single pole at $u=1$ is found. This spurious pole, which is suspected to be of UV origin, can be traced back to the divergent subtraction $\Psi(0)/s$ that is performed in the construction of $D^L(s)$. While the pole at $u=1$ is present in the coefficients $d_{n,1}$, for $\Psi^{''}(s)$ and ${\mbox{\rm Im}}\Psi(s)$ it is cancelled by corresponding contributions to the dependent coefficients $d_{n,k}$ with $k>1$. Another feature of the scalar correlator that becomes apparent from the large-$\beta_0$ approximation is the appearance of a regular contribution that is related to the renormalisation of the global mass factor $m^2$. By rewriting this prefactor in terms of the renormalisation-group invariant quark mass ${\widehat}m$, one is left with the logarithmic term in eq. [(\[Psipplb0res\])]{}, which depends on the leading-order RG coefficients $\beta_1$ and $\gamma_m^{(1)}$, as well as the renormalisation scheme of the coupling in the prefactor. Expressing this prefactor in terms of the coupling ${\widehat}\alpha_s$ of eq. [(\[ahatlb0\])]{}, which can be considered an invariant coupling in large-$\beta_0$, the regular logarithmic contribution is resummed. Improvements in the behaviour of the perturbative series were also discussed in section \[sect3\], and it was concluded that this is in part due to shifting the contribution of UV renormalon poles, in particular the lowest-lying one at $u=-1$, to lower orders. Generally, however, it has to be acknowledged that for the scalar correlator the large-$\beta_0$ limit does not provide a satisfactory representation of the full QCD case.
In order to mimic as much as possible the large-$\beta_0$ case, in section \[sect4\], we attempted to define a scheme-invariant coupling also for full QCD. Whereas it appears to be impossible to do this in a universal way, that is, independent of any observable, in eq. [(\[ahat\])]{} we presented the definition of a coupling ${\widehat}\alpha_s$ whose running is renormalisation-group invariant in the sense that it only depends on the invariant coefficients $\beta_1$ and $\beta_2$, and is given by the simple $\beta$-function of eq. [(\[betahat\])]{}. The scheme dependence of ${\widehat}\alpha_s$ is then parametrised by a single parameter $C$ which corresponds to transformations of the QCD scale parameter $\Lambda$. Phenomenological applications of re-expressing the perturbative series of $\Psi^{''}(s)$ at the $\tau$-mass scale, and ${\mbox{\rm Im}}\Psi(s)$ at $M_H$, in terms of ${\widehat}\alpha_s$, were investigated in section \[sect5\]. To this end, we considered two cases: a first, in which only the $\alpha_s$-prefactor, originating from the quark mass, is rewritten in ${\widehat}\alpha_s$, and the remaining series is kept in the ${{\overline{\rm MS}}}$ scheme, and a second case, in which the whole series is expressed in terms of the coupling ${\widehat}\alpha_s$. Generally, it can be concluded that appropriate choices of $C$ allow for an improvement of the behaviour of the perturbative series for the first few known orders. This is, however, achieved at the expense of an increase in the value of the coupling, either only in the prefactor, or also in the remaining expansion terms, which leads to an increased sensitivity to $\alpha_s$ and also its uncertainty. 
In an era in which just recently the error on the PDG average of the strong coupling [@pdg15] has increased by more than a factor of two, in view of an earlier analysis of $\alpha_s$ determinations from lattice QCD by the FLAG collaboration [@flag14], we find that in all considered cases the uncertainty of our perturbative predictions is dominated by the error on $\alpha_s$. Therefore, in the investigated examples, improvements in the perturbative series currently appear to be a secondary issue. Still, once the value of $\alpha_s$ is again known to a precision comparable to previous estimates, the uncertainty due to higher-order corrections will be of a similar size, and optimising the series by appropriate scheme choices through variation of the parameter $C$ should allow for refined perturbative predictions. On the other hand, the increased sensitivity to $\alpha_s$ for certain ranges of $C$ can also be taken as a virtue if one aims at determinations of $\alpha_s$, for example from hadronic $\tau$ decay spectra along the lines of refs. [@bj08; @bgjmmop12; @bgmop14; @pr16], as this could result in reduced equivalent uncertainties in the ${{\overline{\rm MS}}}$ coupling. A preliminary assessment of such an approach is performed in ref. [@bjm16], for the perturbative expansion of the Adler function and the total $\tau$ hadronic width, before we embark on a full-fledged analysis of the decay spectra. In this respect, also analysing models for the Borel transform in the coupling ${\widehat}\alpha_s$, along the lines of ref. [@bj08], could provide additional helpful insights. Since a substantial part of the improvements results from rewriting global prefactors of $\alpha_s$, investigating other observables which include such factors and suffer from large perturbative corrections could be rather promising.
These factors may either be explicitly present, like for example in gluonium correlation functions which carry a global factor $\alpha_s^2$, or may emerge from quark-mass factors, similarly to the scalar correlator, as in the case of the total semi-leptonic $B$-meson decay rate which is proportional to $m_b^5$. It is to be expected that also in these applications the perturbative expansion could be improvable by adequate scheme choices for the coupling ${\widehat}\alpha_s$. Helpful discussions with Martin Beneke, Diogo Boito and Antonio Pineda are gratefully acknowledged. This work has been supported in part by MINECO Grant numbers CICYT-FEDER-FPA2011-25948 and CICYT-FEDER-FPA2014-55613-P, by the Severo Ochoa excellence program of MINECO, Grant number SO-2012-0234, and Secretaria d’Universitats i Recerca del Departament d’Economia i Coneixement de la Generalitat de Catalunya under Grant number 2014 SGR 1450. Renormalisation group functions and dependent coefficients {#appA} ========================================================== In our notation, the QCD $\beta$-function and mass anomalous dimension are defined as: $$\begin{aligned} \label{bega} -\,\mu\,\frac{{\rm d}a}{{\rm d}\mu} &\equiv& \beta(a) \,=\, \beta_1\,a^2 + \beta_2\,a^3 + \beta_3\,a^4 + \beta_4\,a^5 + \ldots \,, \\ {\vbox{\vskip 6mm}}-\,\frac{\mu}{m}\,\frac{{\rm d}m}{{\rm d}\mu} &\equiv& \gamma_m(a) \,=\, \gamma_m^{(1)}\,a + \gamma_m^{(2)}\,a^2 + \gamma_m^{(3)}\,a^3 + \gamma_m^{(4)}\,a^4 + \ldots \,.\end{aligned}$$ It is assumed that we work in a mass-independent renormalisation scheme and in this study throughout the modified minimal subtraction scheme ${{\overline{\rm MS}}}$ is used. To make the presentation self-contained, below the known coefficients of the $\beta$-function and mass anomalous dimension in the given conventions shall be provided. 
Numerically, for $N_c=3$ the first five coefficients of the $\beta$-function are given by [@tvz80; @lrv97; @cza04; @bck16] $$\begin{aligned} \label{bfun} \beta_1 &=& {\mbox{$\frac{11}{2}$}} - {\mbox{$\frac{1}{3}$}}\,N_f \,, \qquad \beta_2 \,=\, {\mbox{$\frac{51}{4}$}} - {\mbox{$\frac{19}{12}$}}\,N_f \,, \qquad \beta_3 \,=\, {\mbox{$\frac{2857}{64}$}} - {\mbox{$\frac{5033}{576}$}}\,N_f + {\mbox{$\frac{325}{1728}$}}\,N_f^2 \,, {\nonumber}\\ {\vbox{\vskip 6mm}}\beta_4 &=& {\mbox{$\frac{149753}{768}$}} + {\mbox{$\frac{891}{32}$}}\,\zeta_3 - \left({\mbox{$\frac{1078361}{20736}$}} + {\mbox{$\frac{1627}{864}$}}\,\zeta_3\right) N_f + \left({\mbox{$\frac{50065}{20736}$}} + {\mbox{$\frac{809}{1296}$}}\,\zeta_3\right) N_f^2 + {\mbox{$\frac{1093}{93312}$}}\,N_f^3 \,, {\nonumber}\\ {\vbox{\vskip 6mm}}\beta_5 &=& {\mbox{$\frac{8157455}{8192}$}} + {\mbox{$\frac{621885}{1024}$}}\,\zeta_3 - {\mbox{$\frac{88209}{1024}$}}\,\zeta_4 - {\mbox{$\frac{144045}{256}$}}\,\zeta_5 {\nonumber}\\ {\vbox{\vskip 6mm}}&-& \left({\mbox{$\frac{336460813}{995328}$}} + {\mbox{$\frac{1202791}{10368}$}}\,\zeta_3 - {\mbox{$\frac{33935}{3072}$}}\,\zeta_4 - {\mbox{$\frac{1358995}{13824}$}}\,\zeta_5 \right) N_f {\nonumber}\\ {\vbox{\vskip 6mm}}&+& \left({\mbox{$\frac{25960913}{995328}$}} + {\mbox{$\frac{698531}{41472}$}}\,\zeta_3 - {\mbox{$\frac{5263}{2304}$}}\,\zeta_4 - {\mbox{$\frac{5965}{648}$}}\,\zeta_5 \right) N_f^2 {\nonumber}\\ {\vbox{\vskip 6mm}}&-& \left({\mbox{$\frac{630559}{2985984}$}} + {\mbox{$\frac{24361}{62208}$}}\,\zeta_3 - {\mbox{$\frac{809}{6912}$}}\,\zeta_4 - {\mbox{$\frac{115}{1152}$}}\,\zeta_5 \right) N_f^3 + \left({\mbox{$\frac{1205}{1492992}$}} - {\mbox{$\frac{19}{5184}$}}\,\zeta_3 \right)\,N_f^4 \,,\end{aligned}$$ and the first five for $\gamma_m(a)$ are found to be [@vlr97; @bck14] $$\begin{aligned} \label{gfun} \gamma_m^{(1)} &=& 2 \,, \qquad \gamma_m^{(2)} \,=\, {\mbox{$\frac{101}{12}$}} - {\mbox{$\frac{5}{18}$}}\,N_f \,, \qquad \gamma_m^{(3)} \,=\, {\mbox{$\frac{1249}{32}$}} -
\left({\mbox{$\frac{277}{108}$}} + {\mbox{$\frac{5}{3}$}}\,\zeta_3\right) N_f - {\mbox{$\frac{35}{648}$}}\,N_f^2 \,, {\nonumber}\\ {\vbox{\vskip 6mm}}\gamma_m^{(4)} &=& {\mbox{$\frac{4603055}{20736}$}} + {\mbox{$\frac{1060}{27}$}}\,\zeta_3 - {\mbox{$\frac{275}{4}$}}\,\zeta_5 - \left({\mbox{$\frac{91723}{3456}$}} + {\mbox{$\frac{2137}{72}$}}\, \zeta_3 - {\mbox{$\frac{55}{8}$}}\,\zeta_4 - {\mbox{$\frac{575}{36}$}}\,\zeta_5 \right) N_f {\nonumber}\\ {\vbox{\vskip 4mm}}&+& \left({\mbox{$\frac{2621}{15552}$}} + {\mbox{$\frac{25}{36}$}}\,\zeta_3 - {\mbox{$\frac{5}{12}$}}\, \zeta_4 \right) N_f^2 - \left({\mbox{$\frac{83}{7776}$}} - {\mbox{$\frac{1}{54}$}}\,\zeta_3 \right) N_f^3 \,. {\nonumber}\\ {\vbox{\vskip 6mm}}\gamma_m^{(5)} &=& {\mbox{$\frac{99512327}{82944}$}} + {\mbox{$\frac{23201233}{62208}$}}\,\zeta_3 + {\mbox{$\frac{3025}{16}$}}\,\zeta_3^2 - {\mbox{$\frac{349063}{2304}$}}\,\zeta_4 - {\mbox{$\frac{28969645}{15552}$}}\,\zeta_5 + {\mbox{$\frac{15125}{32}$}}\,\zeta_6 + {\mbox{$\frac{25795}{32}$}}\,\zeta_7 {\nonumber}\\ {\vbox{\vskip 4mm}}&-& \left({\mbox{$\frac{150736283}{746496}$}} + {\mbox{$\frac{391813}{1296}$}}\,\zeta_3 + {\mbox{$\frac{2365}{144}$}}\,\zeta_3^2 - {\mbox{$\frac{1019371}{6912}$}}\,\zeta_4 - {\mbox{$\frac{12469045}{31104}$}}\,\zeta_5 + {\mbox{$\frac{39875}{288}$}}\,\zeta_6 + {\mbox{$\frac{56875}{432}$}}\,\zeta_7 \right) N_f {\nonumber}\\ {\vbox{\vskip 4mm}}&+& \left({\mbox{$\frac{660371}{186624}$}} + {\mbox{$\frac{251353}{15552}$}}\,\zeta_3 + {\mbox{$\frac{725}{216}$}}\,\zeta_3^2 - {\mbox{$\frac{41575}{3456}$}}\,\zeta_4 - {\mbox{$\frac{33005}{5184}$}}\,\zeta_5 + {\mbox{$\frac{2875}{432}$}}\,\zeta_6 \right) N_f^2 {\nonumber}\\ {\vbox{\vskip 4mm}}&+& \left({\mbox{$\frac{91865}{746496}$}} + {\mbox{$\frac{803}{2592}$}}\,\zeta_3 + {\mbox{$\frac{7}{72}$}}\,\zeta_4 - {\mbox{$\frac{10}{27}$}}\,\zeta_5 \right) N_f^3 - \left({\mbox{$\frac{65}{31104}$}} + {\mbox{$\frac{5}{1944}$}}\,\zeta_3 - {\mbox{$\frac{1}{216}$}}\,\zeta_4 \right) N_f^4 \,.\end{aligned}$$ 
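As a small cross-check of these conventions, the leading coefficients above fix the exponents of the global quark-mass prefactors used in section \[sect5\]: $2\gamma_m^{(1)}/\beta_1$ evaluates to $8/9$ for $N_f=3$ and to $24/23$ for $N_f=5$. A sketch in exact rational arithmetic:

```python
from fractions import Fraction as F

def beta1(nf):
    # leading beta-function coefficient: 11/2 - N_f/3
    return F(11, 2) - F(nf, 3)

def beta2(nf):
    # next-to-leading beta-function coefficient: 51/4 - 19*N_f/12
    return F(51, 4) - F(19, 12)*nf

GAMMA_M1 = F(2)  # leading coefficient of the mass anomalous dimension

def prefactor_exponent(nf):
    """Exponent 2*gamma_m^(1)/beta1 of the global alpha_s prefactor."""
    return 2*GAMMA_M1 / beta1(nf)

exp_nf3 = prefactor_exponent(3)  # tau-mass scale, cf. eq. (psipphat)
exp_nf5 = prefactor_exponent(5)  # Higgs-mass scale, cf. eq. (Rhat)
```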
The dependent perturbative coefficients $d_{n,k}$ with $k>1$ can be expressed in terms of the independent coefficients $d_{n,1}$, and coefficients of the QCD $\beta$-function and mass anomalous dimension. In particular, the coefficients $d_{n,2}$, which are required in eq. [(\[Psippres\])]{}, take the form $$\label{dn2} d_{n,2} \,=\, -\,\frac{1}{2}\,\gamma_m^{(n)} d_{0,1} - \frac{1}{4} \sum\limits_{k=1}^{n-1} \big( 2\gamma_m^{(n-k)} + k\,\beta_{n-k} \big) d_{k,1} \,.$$ Explicitly, at $N_c=3$ and up to the fourth order, they read: $$\begin{aligned} \label{d12tod42} d_{1,2} &=& -\,1 \,, \qquad d_{2,2} \,=\, -\,{\mbox{$\frac{53}{3}$}} + {\mbox{$\frac{11}{18}$}}\,N_f \,, {\nonumber}\\ {\vbox{\vskip 6mm}}d_{3,2} &=& -\,{\mbox{$\frac{49349}{144}$}} + {\mbox{$\frac{585}{8}$}} \zeta_3 + \left( {\mbox{$\frac{11651}{432}$}} - {\mbox{$\frac{59}{12}$}} \zeta_3 \right) N_f - \left( {\mbox{$\frac{275}{648}$}} - {\mbox{$\frac{1}{9}$}} \zeta_3 \right) N_f^2 \,, {\nonumber}\\ {\vbox{\vskip 6mm}}d_{4,2} &=& -\,{\mbox{$\frac{49573615}{6912}$}} + {\mbox{$\frac{535759}{192}$}} \zeta_3 - {\mbox{$\frac{30115}{96}$}} \zeta_5 + \left( {\mbox{$\frac{56935973}{62208}$}} - {\mbox{$\frac{243511}{864}$}} \zeta_3 + {\mbox{$\frac{5}{6}$}} \zeta_4 + {\mbox{$\frac{1115}{48}$}} \zeta_5 \right) N_f {\nonumber}\\ && -\, \left( {\mbox{$\frac{6209245}{186624}$}} - {\mbox{$\frac{250}{27}$}} \zeta_3 + {\mbox{$\frac{25}{36}$}} \zeta_5 \right) N_f^2 + \left( {\mbox{$\frac{985}{2916}$}} - {\mbox{$\frac{5}{54}$}} \zeta_3 \right) N_f^3 \,.\end{aligned}$$ The coefficients ${{\cal D}}_n^{(1)}$ and $H_n^{(1)}$ {#appB} ============================================================ Here, we provide the coefficients ${{\cal D}}_n^{(1)}$ and $H_n^{(1)}$, required to predict the perturbative coefficients $d_{n,1}$ in the large-$\beta_0$ approximation up to fifth order.
$$\begin{aligned} {{\cal D}}_1^{(1)} &=& -\,1 \,, \quad {{\cal D}}_2^{(1)} \,=\, -\,{\mbox{$\frac{22}{3}$}} \,, \quad {{\cal D}}_3^{(1)} \,=\, -\,{\mbox{$\frac{275}{6}$}} + 12\,\zeta_3 \,, \quad {{\cal D}}_4^{(1)} \,=\, -\,{\mbox{$\frac{7880}{27}$}} + 80\,\zeta_3 \,, {\nonumber}\\ {\vbox{\vskip 6mm}}{{\cal D}}_5^{(1)} &=& -\,{\mbox{$\frac{324385}{162}$}} + {\mbox{$\frac{1000}{3}$}}\,\zeta_3 + 600\,\zeta_5 \,, \quad {{\cal D}}_6^{(1)} \,=\, -\,{\mbox{$\frac{1224355}{81}$}} + {\mbox{$\frac{10000}{9}$}}\,\zeta_3 + 6000\,\zeta_5 \,.\end{aligned}$$ $$\begin{aligned} H_2^{(1)} &=& {\mbox{$\frac{51}{2}$}} \,, \quad H_3^{(1)} \,=\, -\,{\mbox{$\frac{585}{8}$}} + 18\,\zeta_3 \,, \quad H_4^{(1)} \,=\, {\mbox{$\frac{15511}{72}$}} - 54\,\zeta_3 \,, {\nonumber}\\ {\vbox{\vskip 6mm}}H_5^{(1)} &=& -\,{\mbox{$\frac{520771}{576}$}} + {\mbox{$\frac{585}{4}$}}\,\zeta_3 + {\mbox{$\frac{27}{4}$}}\,\zeta_4 + 270\,\zeta_5 \,, {\nonumber}\\ {\vbox{\vskip 6mm}}H_6^{(1)} &=& {\mbox{$\frac{19577503}{4320}$}} - {\mbox{$\frac{2021}{6}$}}\,\zeta_3 - {\mbox{$\frac{9}{2}$}}\,\zeta_4 - {\mbox{$\frac{8946}{5}$}}\,\zeta_5 \,.\end{aligned}$$ The subtraction constant {#appC} ========================= In order to understand the structure of the subtraction constant $\Psi(0)$, examining the lowest perturbative order is sufficient. For definiteness, we consider the case of the current [(\[jtau\])]{} that plays a role in hadronic $\tau$ decays. $\Psi(0)$ receives contributions from the normal-ordered quark condensate and a perturbative term proportional to $m^4$. 
At lowest order it reads: $$\begin{aligned} \label{Psi0} \Psi(0) &=& -\,(m_u - m_s)\big[\, \langle\Omega|\!:\!\bar u u\!:\!|\Omega\rangle - \langle\Omega|\!:\!\bar s s\!:\!|\Omega\rangle \,\big] {\nonumber}\\ {\vbox{\vskip 6mm}}&& +\, 4iN_c\, (m_u - m_s) \big[\, m_u I_{m_u} - m_s I_{m_s} \,\big] \,,\end{aligned}$$ where $I_m$ is the UV divergent massive scalar vacuum-bubble integral $$I_m \,\equiv\, \mu^{2{\varepsilon}}\!\!\int\!\frac{{\rm d}^D\!k}{(2\pi)^D}\, \frac{1}{(k^2-m^2+i0)} \,=\, \frac{i}{(4\pi)^2}\,m^2\,\biggl\{\, \frac{1}{\hat{\varepsilon}} - \ln\frac{m^2}{\mu^2} + 1 + {{\cal O}}({\varepsilon}) \,\biggr\} \,.$$ The explicit expression for $I_m$ has been provided in dimensional regularisation with $D=4-2{\varepsilon}$ and $1/\hat{\varepsilon}\equiv 1/{\varepsilon}-\gamma_E+\ln(4\pi)$, but the particular regularisation scheme is inessential for our argument. Precisely the same massive scalar vacuum-bubble contribution as in the second line of eq. [(\[Psi0\])]{} also arises when rewriting the normal-ordered condensates in terms of non-normal-ordered minimally subtracted quark condensates [@sc88; @jm93]. Therefore, $\Psi(0)$ can also be expressed as $$\label{Psi0nno} \Psi(0) \,=\, -\,(m_u - m_s)\big[\, \langle\Omega|\bar u u|\Omega\rangle - \langle\Omega|\bar s s|\Omega\rangle \,\big] \,,$$ which absorbs the mass logarithms in the definition of the quark condensate. Due to a Ward identity, the condensate contribution in $\Psi(0)$ does not receive higher-order corrections, and at least at next-to-leading order, it has been checked that the perturbative term matches the vacuum-bubble structure that arises when rewriting $\langle\Omega|\!:\!\bar qq\!:\!|\Omega\rangle$ in terms of $\langle\Omega|\bar qq|\Omega\rangle$ [@gen90]. It is expected that this behaviour, and hence also the form of eq. [(\[Psi0nno\])]{}, should remain the same to all orders. 
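The $\varepsilon$-expansion of $I_m$ quoted above follows from the standard one-loop tadpole formula; for the reader's convenience (a textbook computation, written here in the conventions of the text):

```latex
% Standard one-loop tadpole in dimensional regularisation:
I_m \,=\, \mu^{2\varepsilon}\!\!\int\!\frac{{\rm d}^D\!k}{(2\pi)^D}\,
\frac{1}{(k^2-m^2+i0)}
\,=\, -\,\frac{i\,\mu^{2\varepsilon}}{(4\pi)^{D/2}}\,
\Gamma\Big(1-\frac{D}{2}\Big)\,\big(m^2\big)^{D/2-1} \,.
% Setting D = 4 - 2*eps and using
% Gamma(-1+eps) = -(1/eps - gamma_E + 1) + O(eps),
% the expansion of (4\pi)^{\varepsilon}(m^2/\mu^2)^{-\varepsilon} yields
I_m \,=\, \frac{i}{(4\pi)^2}\, m^2 \,\Big\{ \frac{1}{\hat{\varepsilon}}
  - \ln\frac{m^2}{\mu^2} + 1 + {{\cal O}}(\varepsilon) \Big\} \,.
```

The $-\gamma_E+\ln(4\pi)$ pieces are absorbed into $1/\hat{\varepsilon}$ as defined in the text.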
As an aside, it may be remarked that for the pseudoscalar channel the combination [(\[Psi0nno\])]{} with flavour sums of quark masses as well as condensates is precisely what appears in the Gell-Mann-Oakes-Renner relation [@gmor68; @mj02]. As we have seen, the subtraction constant $\Psi(0)$ suffers from a UV divergence originating from the perturbative quark-mass correction in eq. [(\[Psi0\])]{}. Even though this contribution can be absorbed in the definition of the quark condensate by rewriting normal-ordered in terms of non-normal-ordered condensates, because of the subtraction of $\Psi(0)/s$, the UV divergence manifests itself in the spurious renormalon at $u=1$ in the correlation function $D^L(Q^2)$ of eq. [(\[DLlb0res\])]{}. P.A. Baikov, K.G. Chetyrkin, and J.H. K[ü]{}hn, [*Scalar correlator at [${\cal O}(\alpha_s^4)$]{}, [Higgs]{} decay into $b$-quarks and bounds on the light quark masses*]{}, [*Phys. Rev. Lett.*]{} [**96**]{} (2006) 012003, \[[[hep-ph/0511063]{}](http://arxiv.org/abs/hep-ph/0511063)\]. K.G. Chetyrkin, [*Correlator of the quark scalar currents and [$\Gamma_{\rm tot}(H_0\to {\rm hadrons})$]{} at [${\cal O}(\alpha_s^3)$]{} in p[QCD]{}*]{}, [*Phys. Lett. B*]{} [**390**]{} (1997) 309, \[[[hep-ph/9608318]{}](http://arxiv.org/abs/hep-ph/9608318)\]. S.G. Gorishnii, A.L. Kataev, S.A. Larin, and L.R. Surguladze, [*Corrected three loop [QCD]{} correction to the correlator of the quark scalar currents and [$\Gamma_{\rm tot}(H_0\to{\rm hadrons})$]{}*]{}, [*Mod. Phys. Lett. A*]{} [**5**]{} (1990) 2703. A. Djouadi, [*The Anatomy of electro-weak symmetry breaking. I: The Higgs boson in the Standard Model*]{}, [*Phys. Rept.*]{} [**457**]{} (2008) 1–216, \[[[hep-ph/0503172]{}](http://arxiv.org/abs/hep-ph/0503172)\]. M. Jamin, J.A. Oller and A. Pich, [*Scalar $K\pi$ form factor and light quark masses*]{}, [*Phys. Rev. D*]{} [**74**]{} (2006) 074009, \[[[hep-ph/0605095]{}](http://arxiv.org/abs/hep-ph/0605095)\]. M. Jamin, J.A. Oller and A. 
Pich, [*Light quark masses from scalar sum rules*]{}, [*Eur. Phys. J. C*]{} [**24**]{} (2002) 237, \[[[hep-ph/0110194]{}](http://arxiv.org/abs/hep-ph/0110194)\]. A. Pich and J. Prades, [*Perturbative quark mass corrections to the tau hadronic width*]{}, [*JHEP*]{} [**9806**]{} (1998) 013, \[[[hep-ph/9804462]{}](http://arxiv.org/abs/hep-ph/9804462)\]. A. Pich and J. Prades, [*Strange quark mass determination from Cabibbo suppressed tau decays*]{}, [*JHEP*]{} [**9910**]{} (1999) 004, \[[[hep-ph/9909244]{}](http://arxiv.org/abs/hep-ph/9909244)\]. E. Gámiz, M. Jamin, A. Pich, J. Prades, and F. Schwab, [*Determination of $m_s$ and $|{V}_{us}|$ from hadronic $\tau$ decays*]{}, [*JHEP*]{} [**01**]{} (2003) 060, \[[[hep-ph/0212230]{}](http://arxiv.org/abs/hep-ph/0212230)\]. D.J. Broadhurst, A.L. Kataev, and C.J. Maxwell, [*Renormalons and multiloop estimates in scalar correlators, Higgs decay and quark-mass sum rule*]{}, [*Nucl. Phys. B*]{} [**592**]{} (2001) 247, \[[[hep-ph/0007152]{}](http://arxiv.org/abs/hep-ph/0007152)\]. M. Beneke, [*Large-order perturbation theory for a physical quantity*]{}, [*Nucl. Phys. B*]{} [**405**]{} (1993) 424. D.J. Broadhurst, [*Large-$N$ expansion of QED: Asymptotic photon propagator and contributions to the muon anomaly, for any number of loops*]{}, [*Z. Phys. C*]{} [**58**]{} (1993) 339. M. Beneke and V.M. Braun, [*Naive non-Abelianization and resummation of fermion bubble chains*]{}, [*Phys. Lett. B*]{} [**348**]{} (1995) 513, \[[[hep-ph/9411229]{}](http://arxiv.org/abs/hep-ph/9411229)\]. M. Beneke, [*Renormalons*]{}, [*Phys. Rept.*]{} [**317**]{} (1999) 1–142, \[[[hep-ph/9807443]{}](http://arxiv.org/abs/hep-ph/9807443)\]. W.A. Bardeen, A.J. Buras, D.W. Duke and T. Muta, [*Deep inelastic scattering beyond the leading order in asymptotically free gauge theories*]{}, [*Phys. Rev. D*]{} [**18**]{} (1978) 3998. M. Beneke and M. 
Jamin, [*$\alpha_s$ and the $\tau$ hadronic width: fixed-order, contour-improved and higher-order perturbation theory,*]{} [*JHEP*]{} [**0809**]{} (2008) 044, \[[[arXiv:0806.3156 \[hep-ph\]]{}](http://arxiv.org/abs/0806.3156)\]. J.A. Gracey, [*Quark, gluon and ghost anomalous dimensions at ${\cal O}(1/N_f)$ in quantum chromodynamics*]{}, [*Phys. Lett. B*]{} [**318**]{} (1993) 177, \[[[hep-th/9310063]{}](http://arxiv.org/abs/hep-th/9310063)\]. W. Celmaster and R.J. Gonsalves, [*Renormalization-prescription dependence of the QCD coupling constant*]{}, [*Phys. Rev. D*]{} [**20**]{} (1979) 1420. L.S. Brown, L.G. Yaffe and C.X. Zhai, [*Large-order perturbation theory for the electromagnetic current-current correlation function*]{}, [*Phys. Rev. D*]{} [**46**]{} (1992) 4712, \[[[hep-ph/9205213]{}](http://arxiv.org/abs/hep-ph/9205213)\]. M. Beneke, [*Die Struktur der Störungsreihe in hohen Ordnungen*]{}, PhD Thesis, Munich, 1993. G. ’t Hooft, [*Can we make sense out of Quantum Chromodynamics?*]{}, [*Subnucl. Ser.*]{} [**15**]{} (1979) 943. G. Grunberg, [*Renormalization group improved perturbative QCD*]{}, [*Phys. Lett. B*]{} [**95**]{} (1980) 70. G. Grunberg, [*Renormalization scheme independent QCD and QED: The method of effective charges*]{}, [*Phys. Rev. D*]{} [**29**]{} (1984) 2315. Particle Data Group, [*Review of Particle Physics*]{}, [*Chin. Phys. C*]{} [**38**]{} (2014) 090001. D. Boito, M. Golterman, M. Jamin, A. Mahdavi, K. Maltman, J. Osborne and S. Peris, [*An updated determination of $\alpha_s$ from $\tau$ decays*]{}, [*Phys. Rev. D*]{} [**85**]{} (2012) 093015, \[[[arXiv:1203.3146 \[hep-ph\]]{}](http://arxiv.org/abs/1203.3146)\]. D. Boito, M. Golterman, K. Maltman, J. Osborne and S. Peris, [*Strong coupling from the revised ALEPH data for hadronic $\tau$ decays*]{}, [*Phys. Rev. D*]{} [**91**]{} (2015) 034003, \[[[arXiv:1410.3528 \[hep-ph\]]{}](http://arxiv.org/abs/1410.3528)\]. A. Pich and A. 
Rodríguez-Sánchez, [*Determination of the QCD coupling from ALEPH $\tau$ decay data*]{}, \[[[arXiv:1605.06830 \[hep-ph\]]{}](http://arxiv.org/abs/1605.06830)\]. S. Aoki [*et al.*]{}, [*Review of lattice results concerning low-energy particle physics*]{}, [*Eur. Phys. J. C*]{} [**74**]{} (2014) 2890, \[[[arXiv:1310.8555 \[hep-lat\]]{}](http://arxiv.org/abs/1310.8555)\]. D. Boito, M. Jamin and R. Miravitllas, [*Scheme variations of the QCD coupling and hadronic $\tau$ decays*]{}, [*Phys. Rev. Lett.*]{} [**117**]{} (2016) 152001, \[[[arXiv:1606.06175 \[hep-ph\]]{}](http://arxiv.org/abs/1606.06175)\]. O.V. Tarasov, A.A. Vladimirov, and Yu.A. Zharkov, [*The Gell-Mann-Low function of QCD in the three-loop approximation*]{}, [*Phys. Lett. B*]{}, [**93**]{} (1980) 429. T. van Ritbergen, J.A.M. Vermaseren, and S.A. Larin, [*The four-loop $\beta$-function in quantum chromodynamics*]{}, [*Phys. Lett. B*]{}, [**400**]{} (1997) 379, \[[[hep-ph/9701390]{}](http://arxiv.org/abs/hep-ph/9701390)\]. M. Czakon, [*The four-loop [QCD]{} $\beta$-function and anomalous dimensions*]{}, [*Nucl. Phys. B*]{}, [**710**]{} (2005) 485, \[[[hep-ph/0411261]{}](http://arxiv.org/abs/hep-ph/0411261)\]. P.A. Baikov, K.G. Chetyrkin and J.H. K[ü]{}hn, [*Five-loop running of the QCD coupling constant*]{}, \[[arXiv:1606.08659 \[hep-ph\]](http://arxiv.org/abs/arXiv:1606.08659)\]. J.A.M. Vermaseren, and S.A. Larin, and T. van Ritbergen, [*The four-loop quark mass anomalous dimension and the invariant quark mass*]{}, [*Phys. Lett. B*]{}, [**405**]{} (1997) 327, \[[[hep-ph/9703284]{}](http://arxiv.org/abs/hep-ph/9703284)\]. P.A. Baikov, K.G. Chetyrkin and J.H. K[ü]{}hn, [*Quark mass and field anomalous dimensions to ${\cal O}(\alpha_s^5)$*]{}, [*JHEP*]{} [**1410**]{} (2014) 76, \[[arXiv:1402.6611 \[hep-ph\]](http://arxiv.org/abs/arXiv:1402.6611)\]. V.P. Spiridonov and K.G. Chetyrkin, [*Nonleading mass corrections and renormalization of the operators $m\bar\psi\psi$ and $G_{\mu\nu}^2$*]{}, [*Sov. J. Nucl. 
Phys.*]{} [**47**]{} (1988) 522, \[[*Yad. Fiz.*]{} [**47**]{} (1988) 818\]. M. Jamin and M. M[ü]{}nz, [*Current correlators to all orders in the quark masses*]{}, [*Z. Phys. C*]{} [**60**]{} (1993) 569, \[[[hep-ph/9208201]{}](http://arxiv.org/abs/hep-ph/9208201)\]. S.C. Generalis, [*QCD sum rules. 1: Perturbative results for current correlators*]{}, [*J. Phys. G*]{} [**16**]{} (1990) 785. M. Gell-Mann, R.J. Oakes and B. Renner, [*Behaviour of current divergences under SU(3)$\times$SU(3)*]{}, [*Phys. Rev.*]{} [**175**]{} (1968) 2195. M. Jamin, [*Flavour symmetry breaking of the quark condensate and chiral corrections to the Gell-Mann-Oakes-Renner relation*]{}, [*Phys. Lett. B*]{} [**538**]{} (2002) 71, \[[[hep-ph/0201174]{}](http://arxiv.org/abs/hep-ph/0201174)\]. [^1]: For historical reasons, we shall speak about the “large-$\beta_0$” approximation, although in the notation employed in this work, the leading coefficient of the $\beta$-function is termed $\beta_1$. [^2]: The $(\bar u d)$ flavour content that also arises in hadronic $\tau$ decays is obtained by simply replacing the strange with a down quark. [^3]: In the case of a flavour non-diagonal current, the so-called singlet-diagram contributions are absent, and the perturbative expansion equally applies to the pseudoscalar correlator, up to a replacement of the mass factor $(m_u-m_s)$ by $(m_u+m_s)$. [^4]: Some care has to be taken when implementing expressions from ref. [@bkm00], since our convention for the logarithm is $L=\ln(Q^2/\mu^2)$, while in [@bkm00] $\ln(\mu^2/Q^2)$ was employed instead. [^5]: The relation to the corresponding coefficients $\tilde\Delta_n$ of [@bkm00] is given by $n(n-1)\tilde\Delta_n = -\,2\,{{\cal D}}_n^{(1)}$. [^6]: In the scheme with $C_m=C_a=0$, in which the spurious pole at $u=1$ is less enhanced, large cancellations between the lowest-lying poles at $u=-1$ and $u=1$ still take place for many orders.
--- abstract: 'In this paper we develop a Gröbner bases theory for ideals of partial difference polynomials with constant or non-constant coefficients. In particular, we introduce a criterion providing the finiteness of such bases when a difference ideal contains elements with suitable linear leading monomials. This can be explained in terms of Noetherianity of the corresponding quotient algebra. Among these Noetherian quotients we find finitely generated polynomial algebras where the action of suitable finite dimensional commutative algebras and in particular finite abelian groups is defined. We therefore obtain a consistent Gröbner bases theory for ideals that possess such symmetries.' address: - '$^*$ Laboratory of Information Technologies, JINR, 141980 Dubna, Russia' - '$^{**}$ Department of Mathematics, University of Bari, via Orabona 4, 70125 Bari, Italy' author: - 'Vladimir Gerdt$^*$' - 'Roberto La Scala$^{**}$' title: Noetherian quotients of the algebra of partial difference polynomials and Gröbner bases of symmetric ideals --- [^1] Introduction ============ The theory of difference algebras (see the books [@Co; @KLMP; @Le] and references therein) was introduced in the 1930s by the mathematician Joseph Fels Ritt at the same time as the theory of differential algebras. Indeed, for quite a long time, difference algebras have attracted less interest among researchers in comparison with differential ones, despite the fact that numerical integration of differential equations relies on solving finite difference equations. The rapid development of symbolic computation and computer algebra in the last decade of the previous century gave rise to rather intensive algorithmic research in differential algebras and to the creation of sophisticated software such as the [*diffalg*]{} library [@BH], implementing the Rosenfeld–Gröbner algorithm and included in [Maple]{}, and the package [LDA]{} [@GR12]. 
At the same time, except for algorithmization and implementation in [Maple]{} of the shift algebra of linear operators [@Ch] as a part of the package [Ore\_algebra]{}, practically nothing has been developed in computer algebra in relation to difference algebras. Nevertheless, in the last few years, the number of applications of the theory and the methods of difference algebras has increased rapidly. For instance, it turned out that difference Gröbner bases may provide a very useful algorithmic tool for the reduction of multiloop Feynman integrals in high energy physics [@Ge04], for automatic generation of finite difference approximations to partial differential equations [@GBM; @LM] and for the consistency analysis of these approximations [@Ge12; @GR10]. Relevant research has been developed also in the context of linear functional systems [@LW; @Wu; @ZW]. In addition to these natural applications, another source of interest for difference algebras consists in the notion of “letterplace correspondence” [@LSL; @LSL2; @LS2] which transforms non-commutative computations for presented groups and algebras into analogous computations with ordinary difference polynomials. As a result of all this use, a number of computer algebra packages implementing involutive and Buchberger’s algorithms for computing difference Gröbner bases have been developed (see [@Ge12; @GR12; @LS] and references therein). A major drawback in these computations, as for the differential case, is that such bases may be infinite owing to non-Noetherianity of the algebra of difference polynomials. In fact, if $X$ is a finite set and $\Sigma$ denotes a multiplicative monoid isomorphic to $(\N^r,+)$ then the algebra of difference polynomials is by definition the polynomial algebra $P$ in the infinite set of variables $X\times\Sigma$. Then, to provide the termination of the procedures computing Gröbner bases in $P$ at least in some significant cases, we propose in this paper essentially two solutions. 
One consists in defining an appropriate grading for $P$ that allows finite truncated computations for difference ideals $J\subset P$ generated by a finite number of homogeneous elements. For monomial orderings of $P$ that are compatible with such a grading this implies a criterion, valid also for the non-graded case, which is able to certify the completeness of a finite Gröbner basis computed on a finite number of variables of $P$. After the algebra of partial difference polynomials and its Gröbner bases are introduced in Sections 2 and 3, this approach is described in Section 4 and an illustrative example based on the approximation of the Navier-Stokes equations is given in Section 5. A second solution to the termination problem consists in requiring that the difference ideal $J$ contains elements with suitable linear leading monomials, which corresponds to the quotient algebra $P/J$ being Noetherian. Some similar ideas appeared for the differential case in [@CF; @Zo]. One finds this second approach in Section 6. It is interesting to note that a relevant class of such Noetherian quotient algebras is given by polynomial algebras $P'$ in a finite number of variables which are under the action of a tensor product of a finite number of finite dimensional algebras generated by single elements. These finite dimensional commutative algebras include for instance group algebras of finite abelian groups and hence, as a by-product of the theory of difference Gröbner bases, one obtains a theory for Gröbner bases of ideals of $P'$ that are invariant under the action of such groups or algebras (see also [@KLMP; @St]). These ideas are presented in Section 7 and a simple application is described in Section 8. Finally, in Section 9 one finds conclusions and hints for further developments of this research. 
Algebras of difference polynomials ================================== In this section we introduce the algebras of partial difference polynomials as freely generated objects in a suitable category of commutative algebras that are invariant under the action of a monoid isomorphic to $\N^r$ (the monoid of partial shift operators). This is a natural viewpoint since in the formal theory of partial difference equations the unknown functions and their shifts are assumed to be algebraically independent. Note that one has a similar situation with the theory of algebraic equations where the algebras of polynomials are free objects in the category of commutative algebras. Let $\Sigma = \langle \sigma_1,\ldots,\sigma_r\rangle$ be a free commutative monoid which is finitely generated by the elements $\sigma_i$. We denote $\Sigma$ multiplicatively, with 1 as the identity element. Clearly, $(\Sigma,\cdot)$ is isomorphic to the additive monoid $(\N^r,+)$ by the mapping $\sigma_1^{\alpha_1}\cdots\sigma_r^{\alpha_r}\mapsto (\alpha_1,\ldots,\alpha_r)$. Let $K$ be a field and denote by $\End(K)$ the monoid of ring endomorphisms of $K$. We say that [*$\Sigma$ acts on $K$*]{} or equivalently that $K$ is a [*$\Sigma$-field*]{} if there exists a monoid homomorphism $\rho:\Sigma\to\End(K)$. In this case, for all $\sigma\in\Sigma$ and $c\in K$ we denote $\sigma\cdot c = \rho(\sigma)(c)$. From now on, we always assume that $K$ is a $\Sigma$-field. We say that $K$ is a [*field of constants*]{} if $\Sigma$ acts trivially on $K$, that is, $\sigma\cdot c = c$, for any $\sigma\in\Sigma$ and $c\in K$. Let $A$ be a commutative $K$-algebra. We say that $A$ is a [*$\Sigma$-algebra*]{} if there is a monoid homomorphism $\rho':\Sigma\to\End(A)$ extending $\rho:\Sigma\to\End(K)$, that is, $\rho'(\sigma)(c) = \rho(\sigma)(c)$, for all $\sigma\in\Sigma$ and $c\in K$. To simplify notations, for any $\sigma\in\Sigma$ and $a\in A$ we put $\sigma\cdot a = \rho'(\sigma)(a)$. 
Let $B$ be a $K$-subalgebra of a $\Sigma$-algebra $A$. We call $B$ a [*$\Sigma$-subalgebra*]{} of $A$ if $\Sigma\cdot B = \{\sigma\cdot b\mid \sigma\in\Sigma,b\in B\}\subset B$. In the same way, if $I$ is an ideal of $A$ such that $\Sigma\cdot I\subset I$ then we call $I$ a [*$\Sigma$-ideal*]{} of $A$. Let $B$ be a $K$-subalgebra of $A$ and let $X\subset B$ be a subset. If $B$ is the subalgebra generated by $\Sigma\cdot X$ then $B$ coincides clearly with the smallest $\Sigma$-subalgebra of $A$ containing $X$. In this case, we say that $B$ is the $\Sigma$-subalgebra which is [*$\Sigma$-generated by $X$*]{} and we denote it as $K[X]_\Sigma$. In a similar way, if $X\subset I\subset A$ and $I$ is the ideal generated by $\Sigma\cdot X$, then $I$ is the smallest $\Sigma$-ideal of $A$ containing $X$. Then, we say that $I$ is the $\Sigma$-ideal which is [*$\Sigma$-generated*]{} by $X$ and we use the notation $I = \langle X \rangle_\Sigma$. We also say that $X$ is a [*$\Sigma$-basis*]{} of $I$. Let $A,B$ be $\Sigma$-algebras and let $\varphi:A\to B$ be a $K$-algebra homomorphism. We call $\varphi$ a [*$\Sigma$-homomorphism*]{} if $\varphi(\sigma\cdot a) = \sigma\cdot\varphi(a)$, for all $\sigma\in\Sigma$ and $a\in A$. In the category of $\Sigma$-algebras one can define free objects as follows. Let $X$ be a set and denote by $x(\sigma)$ each element $(x,\sigma)$ of the product set $X(\Sigma) = X\times\Sigma$. Define $P = K[X(\Sigma)]$ the $K$-algebra of polynomials in the commuting variables $x(\sigma)\in X(\Sigma)$. For any element $\sigma\in\Sigma$, consider the ring endomorphism $\bar{\sigma}:P\to P$ such that $$c x(\tau)\mapsto (\sigma\cdot c) x(\sigma\tau)$$ for all $c\in K$ and $x(\tau)\in X(\Sigma)$. Clearly, we have a monoid homomorphism $\rho:\Sigma\to\End(P)$ such that $\rho(\sigma) = \bar{\sigma}$, for any $\sigma\in\Sigma$. 
By definition of $\bar{\sigma}$, one has that $\rho$ extends to $P$ the action of $\Sigma$ on the base field $K$, that is, $P$ is a $\Sigma$-algebra. Note that the homomorphism $\rho$ is in fact an injective map. The following result states that $P$ is a free object in the category of $\Sigma$-algebras. \[freeobj\] Let $A$ be a $\Sigma$-algebra and let $f:X\to A$ be any map. Then, there exists a unique $\Sigma$-algebra homomorphism $\varphi:P\to A$ such that $\varphi(x(1)) = f(x)$, for all $x\in X$. A $K$-algebra homomorphism $\varphi:P\to A$ is clearly defined by putting $\varphi(x(\sigma)) = \sigma\cdot f(x)$, for any $x\in X$ and $\sigma\in\Sigma$. Then, one has that $\varphi(\sigma\cdot c x(\tau)) = \varphi((\sigma\cdot c) x(\sigma\tau)) = (\sigma\cdot c)\varphi(x(\sigma\tau)) = (\sigma\cdot c)(\sigma\tau\cdot f(x)) = \sigma\cdot(c (\tau\cdot f(x))) = \sigma\cdot(c \varphi(x(\tau))) = \sigma\cdot\varphi(c x(\tau))$, for all $c\in K$, $x\in X$ and $\sigma,\tau\in\Sigma$. In other words, the mapping $\varphi:P\to A$ is a $\Sigma$-algebra homomorphism and owing to $x(\sigma) = \sigma\cdot x(1)$, it is clearly the unique one such that $\varphi(x(1)) = f(x)$, for all $x\in X$. We call $P = K[X(\Sigma)]$ the [*free $\Sigma$-algebra generated by $X$*]{}. In fact, $P$ is $\Sigma$-generated by the subset $X(1) = \{x(1)\mid x\in X\}$. Note that if $A$ is any $\Sigma$-algebra which is $\Sigma$-generated by $X$ one has that $A$ is isomorphic to the quotient $P/J$ where $J\subset P$ is the $\Sigma$-ideal containing all $\Sigma$-algebra relations satisfied by the elements of $X$. In other words, there is a surjective $\Sigma$-algebra homomorphism $\varphi:P\to A$ such that $x(1)\mapsto x$ ($x\in X$) and one defines $J = \Ker\varphi$. We are ready now to make the link with the formal theory of partial difference equations. Let $K$ be a field of functions in the variables $t_1,\ldots,t_r$ and fix $h_1,\ldots,h_r$ some parameters (mesh steps). 
Assume we may define the action of $\Sigma$ on $K$ by putting for all $\sigma = \prod_i \sigma_i^{\alpha_i}\in\Sigma$ and for any function $f\in K$ $$\sigma\cdot f(t_1,\ldots,t_r) = f(t_1 + \alpha_1 h_1,\ldots, t_r + \alpha_r h_r)\in K.$$ For instance, one can consider the field of rational functions $K = F(t_1,\ldots,t_r)$ over some field $F$ and $h_1,\ldots,h_r\in F$. Consider now a finite set of unknown functions $u_i = u_i(t_1,\ldots,t_r)$ ($1\leq i\leq n$) that are assumed to be $K$-algebraically independent together with the shifted functions $\sigma\cdot u_i = u_i(t_1 + \alpha_1 h_1, \ldots,t_r + \alpha_r h_r)$, for any $\sigma = \prod_i \sigma_i^{\alpha_i} \in\Sigma$. If $X = \{x_1,\ldots,x_n\}$ and if we denote $x_i(\sigma) = \sigma\cdot u_i$ then the free $\Sigma$-algebra $P = K[X(\Sigma)]$ is by definition the [*algebra of partial difference polynomials*]{}. In particular, if $K$ is a field of constants then the difference polynomials of $P$ are said to be [*with constant coefficients*]{}. Moreover, one uses the term [*ordinary difference*]{} when $r = 1$. Note that in the literature one finds the notation $P = K\{X\}$ that emphasizes the role of $X$ as (free) $\Sigma$-generating set of the algebra $P$. According to the notations we have introduced for the $\Sigma$-algebras one may write also $P = K[X]_\Sigma$. In fact, we prefer $P = K[X(\Sigma)]$ to mean that $P$ is the usual polynomial algebra defined for some special set of variables $X(\Sigma)$ which is invariant under the action of the monoid $\Sigma$, that is, $\Sigma\cdot X(\Sigma)\subset X(\Sigma)$. In the theory of algebraic equations we have that systems of algebraic equations correspond to bases of ideals of the polynomial algebra. In a similar way, one has that systems of partial difference equations correspond to $\Sigma$-bases of $\Sigma$-ideals of $P$, which are also called [*partial difference ideals*]{}. 
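In concrete computations the action of $\Sigma$ on the variables $x_i(\sigma)$ amounts to translating multi-indices. A minimal Python sketch (the representation is ad hoc: a variable is a pair (name, multi-index), a monomial a Counter of variables) illustrates $\sigma\cdot x(\tau) = x(\sigma\tau)$:

```python
from collections import Counter

def shift_var(var, sigma):
    # sigma . x(tau) = x(sigma tau): componentwise addition in N^r
    name, tau = var
    return (name, tuple(a + b for a, b in zip(tau, sigma)))

def shift_mon(mon, sigma):
    # Extend the action multiplicatively to monomials in the x_i(sigma)
    return Counter({shift_var(v, sigma): e for v, e in mon.items()})

# u(t1, t2)^2 * v(t1 + h1, t2) as a monomial in the difference variables
m = Counter({('u', (0, 0)): 2, ('v', (1, 0)): 1})
print(shift_mon(m, (2, 1)))  # sigma1^2 sigma2 shifts every multi-index by (2, 1)
```

Since the action only relabels variables, it is injective on monomials, a fact used repeatedly below.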
Note that $\Sigma$ and therefore $X(\Sigma)$ is an infinite set, which implies that $P$ is not a Noetherian algebra. Then, one has that the $\Sigma$-ideals have bases and even $\Sigma$-bases which are generally infinite. Gröbner bases of difference ideals =========================== In this section we introduce a Gröbner basis theory for the algebra of partial difference polynomials by extending what has been done in [@LS] for the case of constant coefficients. Note that the concept of difference Gröbner basis has arisen also in [@Ge12; @GR12; @LSL2]. \[monord\] Let $\prec$ be a total ordering on the set $M = \Mon(P)$ of all monomials of $P$. We call $\prec$ a [*monomial ordering of $P$*]{} if the following properties are satisfied: - $\prec$ is a multiplicatively compatible ordering, that is, if $m\prec n$ then $t m \prec t n$, for any $m,n,t\in M$; - $\prec$ is a well-ordering, that is, every non-empty subset of $M$ has a minimal element. It is clear that in this case one has also that - $1\prec m$, for all $m\in M, m\neq 1$. Even if the variables set $X(\Sigma)$ is infinite, by Higman’s Lemma [@Hi] the polynomial algebra $P = K[X(\Sigma)]$ can always be endowed with a monomial ordering. Let $\prec$ be a total ordering on $M$ which verifies the properties $(i),(iii)$ of Definition \[monord\]. If $\prec$ induces a well-ordering on the variables set $X(\Sigma)\subset M$, then $\prec$ is a well-ordering also on $M$ and hence it is a monomial ordering of $P$. Note now that the monomials set $M$ is invariant under the action of $\Sigma$, that is $\Sigma\cdot M\subset M$, because the same happens to the variables set $X(\Sigma)$. Clearly, we have to require that a monomial ordering respects this key property for defining Gröbner bases of $\Sigma$-ideals of $P$, which are ideals that are $\Sigma$-invariant. In other words, one has to introduce the following notion. Let $\prec$ be a monomial ordering of $P$. 
We call $\prec$ a [*monomial $\Sigma$-ordering of $P$*]{} if $m\prec n$ implies that $\sigma\cdot m\prec \sigma\cdot n$, for all $m,n\in M$ and $\sigma\in\Sigma$. Note that if $\prec$ is a monomial $\Sigma$-ordering of $P$ then one has immediately that $\sigma\cdot m\succeq m$, for all $m\in M$ and $\sigma\in\Sigma$. Examples of such orderings can be easily constructed in the following way. Let $Q = K[\sigma_1,\ldots,\sigma_r]$ be the polynomial algebra in the variables $\sigma_j$ and therefore $\Sigma = \Mon(Q)$. Moreover, let $K[X] = K[x_1,\ldots,x_n]$ be the polynomial algebra in the variables $x_i$. Fix a monomial ordering $<$ for $Q$ and a monomial ordering $\prec$ for $K[X]$. For any $\sigma\in\Sigma$, denote $X(\sigma) = \{x_i(\sigma)\mid x_i\in X\}$. Clearly $P(\sigma) = K[X(\sigma)]$ is a subalgebra of $P$ which is isomorphic to $K[X]$ and hence it can be endowed with the monomial ordering $\prec$. Since $X(\Sigma) = \bigcup_{\sigma\in\Sigma} X(\sigma)$, one can define a block monomial ordering for $P = K[X(\Sigma)]$ obtained by $<$ and $\prec$. \[weightord\] Let $m,n\in M$ be any pair of monomials. Clearly, we can factorize these monomials as $m = m_1\cdots m_k, n = n_1\cdots n_k$ where $m_i,$ $n_i\in M(\delta_i) = \Mon(P(\delta_i))$ $(\delta_i\in\Sigma)$ and $\delta_1 > \ldots > \delta_k$ $(k\geq 1)$. Note explicitly that some of the factors $m_i,n_i$ may be equal to 1. We define $m\prec' n$ if and only if there is $1\leq i\leq k$ such that $m_j = n_j$ when $j < i$ and $m_i\prec n_i$. Then, $\prec'$ is a monomial $\Sigma$-ordering of $P$. For all $\sigma\in\Sigma$, one has that $\sigma\cdot m = m'_1\cdots m'_k$ where $m'_i = \sigma\cdot m_i\in M(\sigma\delta_i)$ and $\sigma\delta_1 > \ldots > \sigma\delta_k$ because $<$ is a monomial ordering of $Q$. Assume $m\prec' n$, that is, $m_j = n_j$ for $j < i$ and $m_i\prec n_i$. Clearly, one has also that $m'_j = n'_j$. 
Moreover, by definition of the monomial ordering $\prec$ on all subalgebras $P(\sigma)\subset P$ we have that $m_i\prec n_i$ if and only if $m'_i\prec n'_i$. We conclude that $\sigma\cdot m\prec' \sigma\cdot n$. \[monordex\] Fix $n = 2$ and $r = 3$, that is, let $X = \{x,y\}$ and $\Sigma = \langle \sigma_1,\sigma_2,\sigma_3 \rangle$. To simplify the notation of the variables in $X(\Sigma)$, we identify $\Sigma$ with the additive monoid $\N^3$, that is, we put $X(\Sigma) = \{x(i,j,k),y(i,j,k)\mid i,j,k\geq 0\}$. By Proposition \[weightord\], a monomial $\Sigma$-ordering is defined for $P = K[X(\Sigma)]$ once two monomial orderings are given for $Q = K[\sigma_1,\sigma_2,\sigma_3]$ and $K[X] = K[x,y]$. Consider for instance the degree reverse lexicographic ordering $<$ on $Q$ ($\sigma_1 > \sigma_2 > \sigma_3$) and the lexicographic ordering $\prec$ on $K[X]$ ($x\succ y$). One has that $<$ orders the blocks of variables $X(i,j,k) = \{x(i,j,k),y(i,j,k)\}$ in the following way $$\begin{array}{l} \ldots > \{x(2, 0, 0), y(2, 0, 0)\} > \{x(1, 1, 0), y(1, 1, 0)\} > \{x(0, 2, 0), y(0, 2, 0)\} > \\ \hphantom{\ldots >\ } \{x(1, 0, 1), y(1, 0, 1)\} > \{x(0, 1, 1), y(0, 1, 1)\} > \{x(0, 0, 2), y(0, 0, 2)\} > \\ \hphantom{\ldots >\ } \{x(1, 0, 0), y(1, 0, 0)\} > \{x(0, 1, 0), y(0, 1, 0)\} > \{x(0, 0, 1), y(0, 0, 1)\} > \\ \hphantom{\ldots >\ } \{x(0, 0, 0), y(0, 0, 0)\}. \end{array}$$ Moreover, the ordering $\prec$ is defined for each subalgebra $K[x(i,j,k),y(i,j,k)]$. 
The resulting block monomial ordering for $P$ (which is a $\Sigma$-ordering by Proposition \[weightord\]) is therefore the lexicographic ordering with $$\begin{array}{l} \ldots \succ x(2, 0, 0)\succ y(2, 0, 0)\succ x(1, 1, 0)\succ y(1, 1, 0)\succ x(0, 2, 0)\succ y(0, 2, 0)\succ \\ \hphantom{\ldots \succ\ } x(1, 0, 1)\succ y(1, 0, 1)\succ x(0, 1, 1)\succ y(0, 1, 1)\succ x(0, 0, 2)\succ y(0, 0, 2)\succ \\ \hphantom{\ldots \succ\ } x(1, 0, 0)\succ y(1, 0, 0)\succ x(0, 1, 0)\succ y(0, 1, 0)\succ x(0, 0, 1)\succ y(0, 0, 1)\succ \\ \hphantom{\ldots \succ\ } x(0, 0, 0)\succ y(0, 0, 0). \end{array}$$ From now on, we assume that $P$ is endowed with a monomial $\Sigma$-ordering $\prec$. Let $f = \sum_i c_i m_i\in P$ with $m_i\in M$ and $0\neq c_i\in K$. If $m_k = \max_\prec\{m_i\}$ then we denote as usual $\lm(f) = m_k, \lc(f) = c_k$ and $\lt(f) = c_k m_k$. Since $\prec$ is a $\Sigma$-ordering, one has that $\lm(\sigma\cdot f) = \sigma\cdot\lm(f)$ and therefore $\lc(\sigma\cdot f) = \sigma\cdot\lc(f),\lt(\sigma\cdot f) = \sigma\cdot\lt(f)$, for all $\sigma\in\Sigma$. If $G\subset P$ then we denote $\langle G \rangle = \{\sum_i f_i g_i\mid f_i\in P, g_i\in G\}$, that is, $\langle G \rangle$ is the ideal of $P$ generated by $G$. Moreover, recall that $\langle G \rangle_\Sigma = \langle \Sigma\cdot G \rangle = \{\sum_i f_i (\delta_i\cdot g_i)\mid \delta_i\in\Sigma, f_i\in P, g_i\in G\}$ is the $\Sigma$-ideal which is $\Sigma$-generated by $G$, that is, it is the smallest $\Sigma$-ideal of $P$ containing $G$. We call $G$ a $\Sigma$-basis of $\langle G \rangle_\Sigma$. Finally, we put $\lm(G) = \{\lm(f) \mid f\in G,f\neq 0\}$ and we denote $\LM(G) = \langle \lm(G) \rangle$. Let $G\subset P$. Then $\lm(\Sigma\cdot G) = \Sigma\cdot \lm(G)$. In particular, if $I$ is a $\Sigma$-ideal of $P$ then $\LM(I)$ is also a $\Sigma$-ideal. Since $P$ is endowed with a $\Sigma$-ordering, one has that $\lm(\sigma\cdot f) = \sigma\cdot\lm(f)$, for any $f\in P,f\neq 0$ and $\sigma\in\Sigma$. 
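The block ordering of this example is easy to reproduce mechanically. The following Python sketch (the helper `var_key` is ad hoc) encodes degree-reverse-lexicographic comparison of the shift exponents ($\sigma_1 > \sigma_2 > \sigma_3$) followed by $x \succ y$ inside each block, and recovers the sequence displayed above:

```python
from itertools import product

def var_key(v):
    # Degrevlex on the shift exponents: compare total degree first, then
    # the reversed, negated exponent vector; ties are broken by x > y
    # inside each block, matching the lex ordering on K[x, y].
    name, t = v
    return (sum(t), tuple(-e for e in reversed(t)), 1 if name == 'x' else 0)

# All variables x(i,j,k), y(i,j,k) with i + j + k <= 2
shifts = [t for t in product(range(3), repeat=3) if sum(t) <= 2]
vars_ = [(n, t) for t in shifts for n in ('x', 'y')]
ordered = sorted(vars_, key=var_key, reverse=True)
print(ordered[:6])
```

Because degrevlex is multiplicatively compatible on $Q$, translating every shift exponent by a fixed $\sigma$ leaves the relative order unchanged, as required of a $\Sigma$-ordering.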
Then, $\Sigma\cdot\lm(I) = \lm(\Sigma\cdot I)\subset \lm(I)$ and therefore $\LM(I) = \langle \lm(I) \rangle$ is a $\Sigma$-ideal. Let $I\subset P$ be a $\Sigma$-ideal and $G\subset I$. We call $G$ a [*Gröbner $\Sigma$-basis*]{} of $I$ if $\lm(G)$ is a $\Sigma$-basis of $\LM(I)$. In other words, $\lm(\Sigma\cdot G) = \Sigma\cdot\lm(G)$ is a basis of $\LM(I)$, that is, $\Sigma\cdot G$ is a Gröbner basis of $I$ as an ideal of $P$. Since $P$ is not a Noetherian algebra, in general its $\Sigma$-ideals have infinite (Gröbner) $\Sigma$-bases. Note that one has a similar situation for the free associative algebra and its ideals, and this case is closely related to the algebra of ordinary difference polynomials owing to the notion of “letterplace correspondence” [@LSL; @LSL2; @LS2]. See also Bergman’s comprehensive paper [@Be] where the theory of Gröbner bases (he did not use this name) is provided for both commutative and non-commutative algebras in full generality, that is, without any assumption of Noetherianity. In Section 6 we will in fact prove the existence of a class of $\Sigma$-ideals admitting finite Gröbner $\Sigma$-bases. According to [@Ge12; @GR12], such finite bases are also called “difference Gröbner bases”. Let now $f,g\in P,f,g\neq 0$ and put $\lt(f) = c m, \lt(g) = d n$ with $m,n\in M$ and $c,d\in K$. If $l = \lcm(m,n)$ one defines the [*S-polynomial*]{} $\spoly(f,g) = (l/c m) f - (l/d n) g$. \[sigmaspoly\] For all $f,g\in P,f,g\neq 0$ and for any $\sigma\in\Sigma$ one has that $\sigma\cdot\spoly(f,g) = \spoly(\sigma\cdot f,\sigma\cdot g)$. Note that $\lt(\sigma\cdot f) = (\sigma\cdot c)(\sigma\cdot m), \lt(\sigma\cdot g) = (\sigma\cdot d)(\sigma\cdot n)$ with $\sigma\cdot m, \sigma\cdot n\in M$ and $\sigma\cdot c,\sigma\cdot d\in K$.
Since $\Sigma$ acts on the variables set $X(\Sigma)$ by injective maps, if $l = \lcm(m,n)$ then $\sigma\cdot l = \lcm(\sigma\cdot m,\sigma\cdot n)$ and therefore we have $$\begin{gathered} \sigma\cdot \spoly(f,g) = \sigma\cdot( \frac{l}{c m} f - \frac{l}{d n} g ) = \\ \frac{\sigma\cdot l}{(\sigma\cdot c)(\sigma\cdot m)}\sigma\cdot f - \frac{\sigma\cdot l}{(\sigma\cdot d)(\sigma\cdot n)}\sigma\cdot g = \spoly(\sigma\cdot f,\sigma\cdot g). \end{gathered}$$ In the theory of Gröbner bases one has the following important notion. Let $f\in P,f\neq 0$ and $G\subset P$. If $f = \sum_i f_i g_i$ with $f_i\in P,g_i\in G$ and $\lm(f)\succeq\lm(f_i)\lm(g_i)$ for all $i$, we say that [*$f$ has a Gröbner representation with respect to $G$*]{}. Note that if $f = \sum_i f_i g_i$ is a Gröbner representation then $\sigma\cdot f = \sum_i (\sigma\cdot f_i)(\sigma\cdot g_i)$ is also a Gröbner representation, for any $\sigma\in\Sigma$. In fact, from $\lm(f)\succeq\lm(f_i)\lm(g_i)$ it follows that $\lm(\sigma\cdot f) = \sigma\cdot \lm(f)\succeq (\sigma\cdot \lm(f_i))(\sigma\cdot \lm(g_i)) = \lm(\sigma\cdot f_i) \lm(\sigma\cdot g_i)$, for all indices $i$. Finally, if $\sigma = \prod_i \sigma_i^{\alpha_i}, \tau = \prod_i \sigma_i^{\beta_i} \in\Sigma = \langle \sigma_1,\ldots,\sigma_r\rangle$ we define $\gcd(\sigma,\tau) = \prod_i \sigma_i^{\gamma_i}$ where $\gamma_i = \min(\alpha_i,\beta_i)$. For the Gröbner $\Sigma$-bases of $P$ we have the following characterization. \[sigmacrit\] Let $G$ be a $\Sigma$-basis of a $\Sigma$-ideal $I\subset P$. Then, $G$ is a Gröbner $\Sigma$-basis of $I$ if and only if for all $f,g\in G,f,g\neq 0$ and for any $\sigma,\tau\in\Sigma$ such that $\gcd(\sigma,\tau) = 1$ and $\gcd(\sigma\cdot\lm(f),\tau\cdot\lm(g))\neq 1$, the S-polynomial $\spoly(\sigma\cdot f, \tau\cdot g)$ has a Gröbner representation with respect to $\Sigma\cdot G$. Recall that $G$ is a Gröbner $\Sigma$-basis if and only if $\Sigma\cdot G$ is a Gröbner basis of $I$.
By Buchberger’s criterion [@Bu] or by Bergman’s diamond lemma [@Be] this happens if and only if the S-polynomials $\spoly(\sigma\cdot f, \tau\cdot g)$ have a Gröbner representation with respect to $\Sigma\cdot G$, for all $f,g\in G,f,g\neq 0$ and $\sigma,\tau\in\Sigma$. By the product criterion (see for instance [@GP]) we may restrict ourselves to considering only S-polynomials such that $\gcd(\sigma\cdot\lm(f),\tau\cdot\lm(g))\neq 1$ since $\lm(\sigma\cdot f) = \sigma\cdot\lm(f)$ and $\lm(\tau\cdot g) = \tau\cdot\lm(g)$. Then, let $\spoly(\sigma\cdot f, \tau\cdot g)$ be any such S-polynomial and put $\delta = \gcd(\sigma,\tau)$, so that $\sigma = \delta \sigma', \tau = \delta \tau'$ with $\sigma',\tau'\in\Sigma, \gcd(\sigma',\tau') = 1$. One has that $\spoly(\sigma\cdot f, \tau\cdot g) = \delta\cdot \spoly(\sigma'\cdot f, \tau'\cdot g)$ owing to Proposition \[sigmaspoly\]. Note now that if $\spoly(\sigma'\cdot f, \tau'\cdot g) = h = \sum_\nu f_\nu (\nu\cdot g_\nu)$ ($\nu\in\Sigma,f_\nu\in P,g_\nu\in G$) is a Gröbner representation with respect to $\Sigma\cdot G$ then $\spoly(\sigma\cdot f, \tau\cdot g) = \delta\cdot h = \sum_\nu (\delta\cdot f_\nu) (\delta\nu\cdot g_\nu)$ is also a Gröbner representation because $\prec$ is a $\Sigma$-ordering of $P$. We conclude that the S-polynomials to be checked for Gröbner representations may be restricted to the ones satisfying both the conditions $\gcd(\sigma\cdot\lm(f),\tau\cdot\lm(g))\neq 1$ and $\gcd(\sigma,\tau) = 1$. From the above result one obtains a variant of Buchberger’s procedure, based on the “$\Sigma$-criterion” $\gcd(\sigma,\tau) = 1$, which is able to compute Gröbner $\Sigma$-bases. A standard routine that one needs in this method is the following one. {\it Input:} $G\subset P$ and $f\in P$. {\it Output:} $h\in P$ such that $f - h\in\langle G\rangle$ and $h = 0$ or $\lm(h)\notin\LM(G)$. Set $h:= f$; while there exists $g\in G,g\neq 0$ such that $\lm(g)$ divides $\lm(h)$, put $h:= h - (\lt(h)/\lt(g))\, g$; finally, return $h$.
Note that even if $G$ may consist of an infinite number of polynomials, the set of their leading monomials dividing $\lm(h)$ is always a finite one. In other words, the “choose” instruction in the above routine can actually be performed. Moreover, although the polynomial algebra $P = K[X(\Sigma)]$ is infinitely generated, the existence of monomial orderings for $P$ clearly provides termination. By Proposition \[sigmacrit\] one obtains the correctness of the following procedure for enumerating a Gröbner $\Sigma$-basis of a $\Sigma$-ideal having a finite $\Sigma$-basis. {\it Input:} $H$, a finite $\Sigma$-basis of a $\Sigma$-ideal $I\subset P$. {\it Output:} $G$, a Gröbner $\Sigma$-basis of $I$. Set $G:= \{g\in H\mid g\neq 0\}$ and $B:= \{(f,g) \mid f,g\in G\}$. While $B\neq\emptyset$, choose a pair $(f,g)\in B$ and put $B:= B\setminus \{(f,g)\}$; for all $\sigma,\tau\in\Sigma$ such that $\gcd(\sigma,\tau) = 1$ and $\gcd(\sigma\cdot\lm(f),\tau\cdot\lm(g))\neq 1$, compute $h:= \Reduce(\spoly(\sigma\cdot f,\tau\cdot g), \Sigma\cdot G)$ and, if $h\neq 0$, put $B:= B\cup\{(g,h),(h,h) \mid g\in G\}$ and $G:= G\cup\{h\}$. Finally, return $G$. For this procedure we do not have general termination owing to the non-Noetherianity of the algebra $P$. In fact, even if we assume that the $\Sigma$-ideal $I\subset P$ has a finite $\Sigma$-basis, this may not be true for its initial $\Sigma$-ideal $\LM(I)$, that is, $I$ may have no finite Gröbner $\Sigma$-basis. In the next section, after introducing suitable monomial $\Sigma$-orderings of $P$, we will give an algorithm which is able to compute in a finite number of steps a finite Gröbner $\Sigma$-basis whenever one exists. Note anyway that in the above procedure all instructions can actually be performed. In particular, for any pair of elements $f,g\in G$ and for all $\sigma,\tau\in\Sigma$ there are only a finite number of S-polynomials $\spoly(\sigma\cdot f,\tau\cdot g)$ satisfying both the criteria $\gcd(\sigma,\tau) = 1$ and $\gcd(\sigma\cdot\lm(f),\tau\cdot\lm(g))\neq 1$. A proof is given by the arguments contained in Proposition \[finsigmacrit\] of the next section. Observe that the case $f = g$ has to be considered whenever $\sigma\neq\tau$.
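The pair filter based on the $\Sigma$-criterion can be made concrete in a few lines. The Python sketch below is illustrative only: monomials of $K[X(\Sigma)]$ are encoded as dictionaries mapping a pair (variable index, shift tuple) to an exponent, a convention of ours rather than of the text.

```python
def coprime_shifts(sigma, tau):
    """Sigma-criterion gcd(sigma, tau) = 1: the shifts, encoded as
    exponent tuples over N^r, have disjoint supports."""
    return all(min(s, t) == 0 for s, t in zip(sigma, tau))

def shift_mono(sigma, mono):
    """Apply sigma to a monomial {(i, alpha): exp}: every variable
    x_i(alpha) is moved to x_i(sigma + alpha)."""
    return {(i, tuple(s + a for s, a in zip(sigma, alpha))): e
            for (i, alpha), e in mono.items()}

def overlap(m, n):
    """gcd(m, n) != 1: the two monomials share at least one variable."""
    return any(v in n for v in m)

# r = 2; lm(f) = x_0(0,0) x_0(1,0) and lm(g) = x_0(0,1)^2.
m = {(0, (0, 0)): 1, (0, (1, 0)): 1}
n = {(0, (0, 1)): 2}
sigma, tau = (0, 1), (1, 0)
# This pair passes both filters: the shifts are coprime and the shifted
# leading monomials overlap (both contain the variable x_0(1,1)).
assert coprime_shifts(sigma, tau)
assert overlap(shift_mono(sigma, m), shift_mono(tau, n))
```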
Finally, note that the chain criterion (see for instance [@GP]) can be added to $\SigmaGBasis$ to reduce the number of S-polynomials that have to be reduced. In fact, we can view this procedure as a variant of the classical Buchberger procedure applied to the basis $\Sigma\cdot H$ of the ideal $I$, where Proposition \[sigmacrit\] provides the additional “$\Sigma$-criterion” to avoid useless pairs. In other words, this is one way to actually implement the procedure of [@LS] in any commutative computer algebra system. In the following sections we propose two possible solutions for providing termination to $\SigmaGBasis$. First, we introduce a grading on $P$ that is compatible with the action of $\Sigma$, which implies that the truncated variant of this procedure with homogeneous input stops in a finite number of steps. Another approach consists in obtaining finite Gröbner $\Sigma$-bases when elements with suitable linear leading monomials belong to the given $\Sigma$-ideal $I$. More precisely, we obtain the Noetherian property for a certain class of (quotient) $\Sigma$-algebras $P/I$.

Grading and truncation
======================

A useful grading for the free $\Sigma$-algebra $P$ can be introduced in the following way. Consider the set $\hN = \N\cup\{-\infty\}$ endowed with the binary operations $\max$ and $+$. Clearly $(\hN,\max,+)$ is a commutative semiring which is also idempotent since $\max(d,d) = d$, for all $d\in\hN$. Moreover, for any $\sigma = \prod_i \sigma_i^{\alpha_i}\in\Sigma$ we put $\deg(\sigma) = \sum_i \alpha_i$. \[deford\] Let $\ord:M\to\hN$ be the unique mapping such that

- $\ord(1) = -\infty$;

- $\ord(m n) = \max(\ord(m),\ord(n))$, for all $m,n\in M$;

- $\ord(x_i(\sigma)) = \deg(\sigma)$, for any variable $x_i(\sigma)\in X(\Sigma)$.

Then, the map $\ord$ is a monoid homomorphism from $(M,\cdot)$ to $(\hN,\max)$. We call $\ord$ the [*order function*]{} of $P$.
More explicitly, if $m = x_{i_1}(\delta_1)^{\alpha_1}\cdots x_{i_k}(\delta_k)^{\alpha_k} \in M = \Mon(P)$ is any monomial different from 1 ($x_{i_l}(\delta_l)\in X(\Sigma)$ and $\alpha_l > 0$, for each $1\leq l\leq k$) we have that $$\ord(m) = \max(\deg(\delta_1),\ldots,\deg(\delta_k)).$$ Let $X = \{x,y\}$ and $\Sigma = \langle \sigma_1,\sigma_2,\sigma_3 \rangle$. As in Example \[monordex\], denote $X(\Sigma) = \{x(i,j,k),y(i,j,k)\mid i,j,k\geq 0\}$. If we consider the monomial $$m = y(1,1,0)^2 x(1,0,1) x(1,0,0)^3 y(0,0,0)^4$$ then $\ord(m) = 2$. Let $P_d = \langle\, m\in M \mid \ord(m) = d\, \rangle_K\subset P$, that is, $P_d$ is the $K$-subspace of $P$ generated by all monomials having order equal to $d$. A polynomial $f\in P_d$ is called [*ord-homogeneous*]{} and we denote $\ord(f) = d$. By property (ii) of Definition \[deford\] one has clearly that $P = \bigoplus_{d\in\hN} P_d$ is a grading of the algebra $P$ over the commutative monoid $(\hN,\max)$. \[ordgood\] The following properties hold for the order function:

- $\ord(\sigma\cdot m) = \deg(\sigma) + \ord(m)$, for any $\sigma\in\Sigma$ and $m\in M$;

- $\ord(\lcm(m,n)) = \ord(m n) = \max(\ord(m),\ord(n))$, for all $m,n\in M$. Therefore, if $m\mid n$ then $\ord(m)\leq \ord(n)$.

If $m = 1$ then $\ord(\sigma\cdot m) = \ord(m) = -\infty = \deg(\sigma) + \ord(m)$. If otherwise $m = x_{i_1}(\delta_1)^{\alpha_1}\cdots x_{i_k}(\delta_k)^{\alpha_k}$ then $\sigma\cdot m = x_{i_1}(\sigma\delta_1)^{\alpha_1}\cdots x_{i_k}(\sigma\delta_k)^{\alpha_k}$ and hence $\ord(\sigma\cdot m) = \max(\deg(\sigma\delta_1),\ldots,\deg(\sigma\delta_k)) = \deg(\sigma) + \max(\deg(\delta_1),\ldots,\deg(\delta_k)) = \deg(\sigma) + \ord(m)$. To prove (ii) it is sufficient to note that the order of a monomial does not depend on the exponents of the variables occurring in it. An ideal $I\subset P$ is called [*$\ord$-graded*]{} if $I = \sum_d I_d$ with $I_d = I\cap P_d$.
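The order function is straightforward to compute on a concrete encoding. In the Python sketch below (the dictionary encoding of monomials by variable name and shift tuple is our own convention), the worked example above and property (i) of Proposition \[ordgood\] are checked.

```python
def ord_mono(mono):
    """Order of a monomial of K[X(Sigma)], encoded as a dict
    {(name, shift): exponent}; ord(1) = -infinity, otherwise the
    maximal total degree deg(delta) of a shift delta occurring in it."""
    if not mono:
        return float('-inf')
    return max(sum(shift) for (_, shift) in mono)

# m = y(1,1,0)^2 x(1,0,1) x(1,0,0)^3 y(0,0,0)^4 has order 2.
m = {('y', (1, 1, 0)): 2, ('x', (1, 0, 1)): 1,
     ('x', (1, 0, 0)): 3, ('y', (0, 0, 0)): 4}
assert ord_mono(m) == 2

# ord(sigma . m) = deg(sigma) + ord(m), here with sigma = (0, 2, 1).
sigma = (0, 2, 1)
shifted = {(name, tuple(s + a for s, a in zip(sigma, shift))): e
           for (name, shift), e in m.items()}
assert ord_mono(shifted) == sum(sigma) + ord_mono(m)
```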
Note that if $I$ is in addition a $\Sigma$-ideal then by (i) of Proposition \[ordgood\] one has that $\sigma\cdot I_d\subset I_{\deg(\sigma) + d}$, for any $\sigma\in\Sigma$ and $d\in\hN$. Let $f,g\in P$, $f,g\neq 0$, $f\neq g$, be any pair of $\ord$-homogeneous elements. Then, the S-polynomial $h = \spoly(f,g)$ is also $\ord$-homogeneous and by (ii) of Proposition \[ordgood\] one has that $\ord(h) = \max(\ord(f),\ord(g))$. If $\ord(f),\ord(g)\leq d$ for some $d\in\N$, we have therefore that $\ord(h)\leq d$, which implies the following result. \[ordtermin\] Let $I\subset P$ be an $\ord$-graded $\Sigma$-ideal and let $d\in\N$. Assume there is an $\ord$-homogeneous $\Sigma$-basis $H\subset I$ such that $H_d = \{f\in H\mid \ord(f)\leq d\}$ is a finite set. Then, there exists also an $\ord$-homogeneous Gröbner $\Sigma$-basis $G$ of $I$ such that $G_d$ is a finite set. In other words, if one uses for $\SigmaGBasis$ a selection strategy of the S-polynomials based on their orders then the $d$-truncated variant of $\SigmaGBasis$ with input $H_d$ terminates in a finite number of steps. In the procedure $\SigmaGBasis$ one computes a subset $G$ of a Gröbner basis $G' = \Sigma\cdot G$ obtained by applying Buchberger’s procedure to the basis $H' = \Sigma\cdot H$ of the ideal $I$. Moreover, Proposition \[ordgood\] implies that the set $H'$ and hence $G'$ consists of $\ord$-homogeneous elements. Define hence $H'_d = \{\sigma\cdot f\mid\sigma\in\Sigma,f\in H, \deg(\sigma) + \ord(f)\leq d\}$. Note that $\Sigma_d = \{\sigma\in\Sigma\mid \deg(\sigma)\leq d\}$ is clearly a finite set and by hypothesis we have that $H_d$ is also a finite one. We conclude that $H'_d\subset \Sigma_d\cdot H_d$ is a finite set. Denote now by $Y_d$ the finite set of variables of $P$ occurring in the elements of $H'_d$ and define the subalgebra $P_{(d)} = K[Y_d]\subset P$. In fact, the $d$-truncated variant of $\SigmaGBasis$ computes a subset of a Gröbner basis of the ideal $I_{(d)} \subset P_{(d)}$ generated by $H'_d$.
The Noetherianity of the finitely generated polynomial algebra $P_{(d)}$ then provides termination. Note that this result implies an algorithmic solution to the ideal membership problem for finitely generated $\ord$-graded $\Sigma$-ideals. Another consequence of the grading defined by the order function is that one has a criterion, also in the non-graded case, for verifying that a $\Sigma$-basis computed by the procedure $\SigmaGBasis$ using a finite number of variables of $P$ is a complete finite Gröbner $\Sigma$-basis, whenever such a basis exists. This is of course important because actual computations can only be performed over a finite number of variables. Let $\prec$ be a monomial $\Sigma$-ordering of $P$. We say that $\prec$ is [*compatible with the order function*]{} if $\ord(m) < \ord(n)$ implies that $m\prec n$, for all $m,n\in M$. Denote by $\prec$ the monomial $\Sigma$-ordering of $P$ defined in Proposition \[weightord\] and let $<$ be the monomial ordering of $Q = K[\sigma_1,\ldots,\sigma_r]$ which is used to define $\prec$. Assume that $<$ is compatible with the function $\deg$, that is, $\deg(\sigma) < \deg(\tau)$ implies that $\sigma < \tau$, for any $\sigma,\tau\in\Sigma$. Then, one has that $\prec$ is compatible with the function $\ord$. Let $m = m_1\cdots m_k,n = n_1\cdots n_k$ be any pair of monomials of $P$, where $m_i,n_i\in M(\delta_i)$ $(\delta_i\in\Sigma)$ and $\delta_1 > \ldots > \delta_k$ (hence $\deg(\delta_1)\geq\ldots\geq\deg(\delta_k)$). Assume $m\prec n$, that is, there is $1\leq i\leq k$ such that $m_j = n_j$ when $j < i$ and $m_i\prec n_i$. If $i > 1$ or $m_i\neq 1$ one has clearly $\ord(m) = \ord(n) = \deg(\delta_1)$. Otherwise, we conclude that $\ord(m) \leq \deg(\delta_1) = \ord(n)$. As before, we denote $\Sigma_d = \{\sigma\in\Sigma \mid \deg(\sigma)\leq d\}$. \[finsigmacrit\] Assume that $P$ is endowed with a monomial $\Sigma$-ordering compatible with the order function.
Let $G\subset P$ be a finite set and define the $\Sigma$-ideal $I = \langle G \rangle_\Sigma$. Moreover, denote $d = \max\{\ord(\lm(g))\mid g\in G,g\neq 0\}$. Then, $G$ is a Gröbner $\Sigma$-basis of $I$ if and only if for all $f,g\in G,f,g\neq 0$ and for any $\sigma,\tau\in\Sigma$ such that $\gcd(\sigma,\tau) = 1$ and $\gcd(\sigma\cdot\lm(f),\tau\cdot\lm(g))\neq 1$, the S-polynomial $\spoly(\sigma\cdot f,\tau\cdot g)$ has a Gröbner representation with respect to the finite set $\Sigma_{2d}\cdot G$. Let $\spoly(\sigma\cdot f,\tau\cdot g) = h = \sum_\nu f_\nu (\nu\cdot g_\nu)$ be a Gröbner representation with respect to $\Sigma\cdot G$, that is, $\lm(h)\succeq \lm(f_\nu)(\nu\cdot \lm(g_\nu))$, for all $\nu$. We want to bound the degree of the elements $\nu\in\Sigma$ occurring in this representation. Put $m = \lm(f),n = \lm(g)$ and hence $\lm(\sigma\cdot f) = \sigma\cdot m,\lm(\tau\cdot g) = \tau\cdot n$. By the product criterion one has that $u = \gcd(\sigma\cdot m,\tau\cdot n)\neq 1$, that is, there is a common variable $x_i(\sigma \alpha) = x_i(\tau \beta)$ dividing $u$ where $x_i(\alpha)$ divides $m$ and $x_i(\beta)$ divides $n$. Therefore $\sigma \alpha = \tau \beta$ and we have that $\deg(\alpha)\leq\ord(m)\leq d$ and $\deg(\beta)\leq\ord(n)\leq d$. From $\sigma \alpha = \tau \beta$ and the $\Sigma$-criterion $\gcd(\sigma,\tau) = 1$ it follows that $\sigma\mid \beta,\tau\mid \alpha$ and hence $\deg(\sigma),\deg(\tau)\leq d$. If $v = \lcm(\sigma\cdot m,\tau\cdot n)$ then we have that $\ord(v) = \max(\deg(\sigma) + \ord(m),\deg(\tau) + \ord(n))\leq 2d$. Clearly $v\succ \lm(h)\succeq \nu\cdot\lm(g_\nu)$ and therefore $2d\geq\ord(v) \geq \deg(\nu) + \ord(\lm(g_\nu))\geq \deg(\nu)$. In other words, we have that all elements $\nu$ belong to $\Sigma_{2d}$, that is, $\spoly(\sigma\cdot f,\tau\cdot g) = \sum_\nu f_\nu (\nu\cdot g_\nu)$ is in fact a Gröbner representation with respect to the set $\Sigma_{2d}\cdot G$.
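The finiteness of the set $\Sigma_{2d}$ appearing in the criterion is easy to make explicit. A small sketch (the enumeration strategy is ours): for $\Sigma\cong\N^r$ the set $\Sigma_d$ has exactly $\binom{d+r}{r}$ elements.

```python
from itertools import product
from math import comb

def sigma_up_to(d, r):
    """Enumerate Sigma_d = {sigma in N^r : deg(sigma) <= d}, with
    shifts encoded as exponent tuples."""
    return [s for s in product(range(d + 1), repeat=r) if sum(s) <= d]

# |Sigma_d| = C(d + r, r), so only finitely many shifted generators
# nu . g have to be considered in the criterion.
assert len(sigma_up_to(2, 3)) == comb(5, 3)   # d = 2, r = 3: 10 shifts
assert len(sigma_up_to(4, 3)) == comb(7, 3)   # the bound 2d for d = 2
```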
Under the assumption of a $\Sigma$-ordering compatible with the order function and for $\Sigma$-ideals that admit finite Gröbner $\Sigma$-bases, by the above criterion one obtains an algorithm to compute such a basis in a finite number of steps. In fact, this can be obtained as an adaptive procedure that keeps the bound $2d$ for the degree of the elements of $\Sigma$ applied to the generators constantly updated with respect to the maximal order $d$ of the leading monomials of the current generators. In other words, if we denote by $\SigmaGBasis(H,d)$ the variant of the procedure $\SigmaGBasis(H)$ where one substitutes $\Sigma$ with $\Sigma_d$, then we have the following algorithm. {\it Input:} $H$, a finite $\Sigma$-basis of a $\Sigma$-ideal $I\subset P$ such that $\LM(I)$ has also a finite $\Sigma$-basis. {\it Output:} $G$, a finite Gröbner $\Sigma$-basis of $I$. Set $G:= \{g\in H\mid g\neq 0\}$, $d':= -\infty$ and $d:= \max\{\ord(\lm(g))\mid g\in G\}$. While $d' < 2d$, put $d':= 2d$, $G:= \SigmaGBasis(G,d')$ and $d:= \max\{\ord(\lm(g))\mid g\in G\}$. Finally, return $G$. Of course, the above algorithm may be refined to avoid a complete recomputation at each step.

An illustrative example
=======================

In this section we apply the procedure $\SigmaGBasis$ to an example arising from the discretization of a well-known system of partial differential equations. Consider the unsteady two-dimensional motion of an incompressible viscous liquid of constant viscosity which is governed by the following system $$\left\{ \begin{array}{l} \displaystyle u_x + v_y = 0, \\ \vspace{-8pt} \\ \displaystyle u_t + u u_x + v u_y + p_x - \frac{1}{\rho}(u_{xx} + u_{yy}) = 0, \\ \vspace{-8pt} \\ \displaystyle v_t + u v_x + v v_y + p_y - \frac{1}{\rho}(v_{xx} + v_{yy}) = 0. \end{array} \right.$$ The last two nonlinear equations are the Navier-Stokes equations and the first linear equation is the continuity equation. The equations are given in dimensionless form, where $(u,v)$ represents the velocity field and the function $p$ is the pressure.
The parameter $\rho$ denotes the Reynolds number. For defining a finite difference approximation of this system one has therefore to fix $X = \{u,v,p\}$ and $\Sigma = \langle \sigma_1,\sigma_2,\sigma_3 \rangle$ since all functions are trivariate ones. To simplify the notation of the variables in $X(\Sigma)$, we identify $\Sigma$ with the additive monoid $\N^3$ and we denote $P = K[X(\Sigma)] = K[u(i,j,k),v(i,j,k),p(i,j,k)\mid i,j,k\geq 0]$. The base field $K$ is the field of rational numbers. The approximation of the derivatives of the function $u$ is given by the following formulas (forward differences) $$\begin{gathered} u_x \approx \frac{u(x + h,y,t) - u(x,y,t)}{h} = \frac{u(1,0,0) - u(0,0,0)}{h}, \\ u_y \approx \frac{u(x,y + h,t) - u(x,y,t)}{h} = \frac{u(0,1,0) - u(0,0,0)}{h}, \\ u_t \approx \frac{u(x,y,t + h) - u(x,y,t)}{h} = \frac{u(0,0,1) - u(0,0,0)}{h}, \\ u_{xx} \approx \frac{u(x + 2h,y,t) - 2u(x + h,y,t) + u(x,y,t)}{h^2} = \frac{u(2,0,0) - 2 u(1,0,0) + u(0,0,0)}{h^2}, \\ u_{yy} \approx \frac{u(x,y + 2h,t) - 2u(x,y + h,t) + u(x,y,t)}{h^2} = \frac{u(0,2,0) - 2 u(0,1,0) + u(0,0,0)}{h^2} \\ \end{gathered}$$ where $h$ is a parameter (mesh step). One has similar approximations for the derivatives of the functions $v,p$. 
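The first-order accuracy of these forward differences can be verified numerically. The following sketch is only a sanity check, with an arbitrary test function and step sizes of our choosing:

```python
import math

def forward_diff(f, x, h):
    """First forward difference, approximating f'(x) with O(h) error."""
    return (f(x + h) - f(x)) / h

def forward_diff2(f, x, h):
    """Forward second difference, approximating f''(x) with O(h) error."""
    return (f(x + 2 * h) - 2 * f(x + h) + f(x)) / h ** 2

x = 0.7
for h in (1e-2, 1e-3):
    # sin' = cos and sin'' = -sin; the errors shrink linearly with h.
    assert abs(forward_diff(math.sin, x, h) - math.cos(x)) < 2 * h
    assert abs(forward_diff2(math.sin, x, h) + math.sin(x)) < 4 * h
```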
If we put $H = \rho h$ then the Navier-Stokes system is approximated by the following system of partial difference equations $$\left\{ \begin{array}{l} \displaystyle f_1 := u(1,0,0) + v(0,1,0) - u(0,0,0) - v(0,0,0) = 0, \\ \vspace{-8pt} \\ \displaystyle f_2 := (-u(2,0,0) -u(0,2,0) +2u(1,0,0) +2u(0,1,0) -2u(0,0,0)) \\ \vspace{-8pt} \\ \displaystyle \quad +\,H(p(1,0,0) +u(0,0,1) -p(0,0,0) -u(0,0,0)^2 \\ \vspace{-8pt} \\ \displaystyle \quad -\,(1 +v(0,0,0) -u(1,0,0)) u(0,0,0) +u(0,1,0) v(0,0,0)) = 0, \\ \vspace{-8pt} \\ \displaystyle f_3 := (-v(2,0,0) -v(0,2,0) +2v(1,0,0) +2v(0,1,0) -2v(0,0,0)) \\ \vspace{-8pt} \\ \displaystyle \quad +\,H(p(0,1,0) +v(0,0,1) -p(0,0,0) -v(0,0,0)^2 \\ \vspace{-8pt} \\ \displaystyle \quad +(v(1,0,0) -v(0,0,0)) u(0,0,0) -(1 -v(0,1,0)) v(0,0,0)) = 0. \end{array} \right.$$ We encode this system as the $\Sigma$-ideal $I = \langle f_1,f_2,f_3 \rangle_\Sigma\subset P$ and we want to compute a (hopefully finite) Gröbner $\Sigma$-basis of $I$. We may want such a basis in order to check the “strong consistency” [@Ge12] of the finite difference approximation that we are using. In fact, this property is necessary for the inheritance, at the discrete level, of the algebraic properties of the differential equations. For instance, in [@ABGLS] we have compared the numerical behavior of three different finite difference approximations of the Navier-Stokes equations where just one of them is strongly consistent. The computational experiments have confirmed the superiority of the strongly consistent approximation. In the limit when the mesh steps go to zero, the elements in the difference Gröbner basis of the finite difference approximation under consideration become differential polynomials. Then, strong consistency holds if and only if the latter polynomials belong to the radical differential ideal generated by the polynomials in the input differential equations.
Note that this membership test can be performed algorithmically by using the [*diffalg*]{} library [@BH] or the differential Thomas decomposition [@BGLHR]. To perform $\SigmaGBasis$, we now fix the degree reverse lexicographic ordering on the polynomial algebra $K[\sigma_1,\sigma_2,\sigma_3]$ ($\sigma_1 > \sigma_2 > \sigma_3$) and the lexicographic ordering on $K[u,v,p]$ ($u\succ v\succ p$). By Proposition \[weightord\] one then obtains a (block) monomial $\Sigma$-ordering for $P$ which is in fact the lexicographic ordering such that $$\begin{array}{l} \ldots \succ u(2, 0, 0)\succ v(2, 0, 0)\succ p(2, 0, 0)\succ u(1, 1, 0)\succ v(1, 1, 0)\succ p(1, 1, 0)\succ \\ \hphantom{\ldots \succ\ } u(0, 2, 0)\succ v(0, 2, 0)\succ p(0, 2, 0)\succ u(1, 0, 1)\succ v(1, 0, 1)\succ p(1, 0, 1)\succ \\ \hphantom{\ldots \succ\ } u(0, 1, 1)\succ v(0, 1, 1)\succ p(0, 1, 1)\succ u(0, 0, 2)\succ v(0, 0, 2)\succ p(0, 0, 2)\succ \\ \hphantom{\ldots \succ\ } u(1, 0, 0)\succ v(1, 0, 0)\succ p(1, 0, 0)\succ u(0, 1, 0)\succ v(0, 1, 0)\succ p(0, 1, 0)\succ \\ \hphantom{\ldots \succ\ } u(0, 0, 1)\succ v(0, 0, 1)\succ p(0, 0, 1)\succ u(0, 0, 0)\succ v(0, 0, 0)\succ p(0, 0, 0). \end{array}$$ Note that this ordering is compatible with the order function and hence Proposition \[finsigmacrit\] is applicable to certify the completeness of a Gröbner $\Sigma$-basis computed over some finite set of variables $\{u(i,j,k),v(i,j,k),p(i,j,k)\mid i + j + k\leq d\}$. With respect to the monomial ordering assigned to $P$, the leading monomials of the $\Sigma$-generators of $I$ are $\lm(f_1) = u(1,0,0), \lm(f_2) = u(2,0,0), \lm(f_3) = v(2,0,0)$.
Since $\sigma_1\cdot \lm(f_1) = \lm(f_2)$, by interreducing $f_2$ with respect to the set $\Sigma\cdot \{f_1,f_3\}$ we obtain the element $$\begin{array}{l} \displaystyle f'_2 := v(1,1,0) -u(0,2,0) -v(1,0,0) \\ \vspace{-8pt} \\ \displaystyle \quad +\,2u(0,1,0) -v(0,1,0) -u(0,0,0) +v(0,0,0) \\ \vspace{-8pt} \\ \displaystyle \quad +\,H(p(1,0,0) +u(0,0,1) -p(0,0,0) \\ \vspace{-8pt} \\ \displaystyle \quad -\,(1 +v(0,1,0)) u(0,0,0) +u(0,1,0) v(0,0,0)) \end{array}$$ whose leading monomial is $\lm(f'_2) = v(1,1,0)$. Owing to the $\Sigma$-criterion, the only S-polynomial to consider is then $\spoly(\sigma_1\cdot f'_2,\sigma_2\cdot f_3)$ whose reduction with respect to $\Sigma\cdot\{f_1,f'_2,f_3\}$ leads to the new element $$\begin{array}{l} \displaystyle f_4:= p(2,0,0) +p(0,2,0) -2(p(1,0,0) + p(0,1,0) - p(0,0,0)) \\ \vspace{-8pt} \\ \displaystyle \quad -\,2u(0,1,0)^2 -v(0,2,0) v(1,0,0) -u(0,0,0)^2 +2v(0,0,0)^2 \\ \vspace{-8pt} \\ \displaystyle \quad +\,(3u(0,1,0) -2v(1,0,0) +v(0,1,0) -u(0,2,0) +v(0,0,0)) u(0,0,0) \\ \vspace{-8pt} \\ \displaystyle \quad -\,(3v(0,1,0) +u(0,2,0) +v(1,0,0)) v(0,0,0) \\ \vspace{-8pt} \\ \displaystyle \quad +\,(2v(1,0,0) -2v(0,1,0) + u(0,2,0)) u(0,1,0) \\ \vspace{-8pt} \\ \displaystyle \quad +\,(2v(1,0,0) + u(0,2,0) + v(0,2,0)) v(0,1,0) \\ \vspace{-8pt} \\ \displaystyle \quad +\,H( (u(0,1,0) +v(0,1,0)) p(0,0,0) -(u(0,1,0) +v(0,1,0)) u(0,0,1) \\ \vspace{-8pt} \\ \displaystyle \quad -\,p(1,0,0) v(0,1,0) -p(1,0,0) u(0,1,0) -(v(0,1,0) + 1) u(0,0,0)^2 \\ \vspace{-8pt} \\ \displaystyle \quad +\,(p(1,0,0) -p(0,0,0) +u(0,0,1) +v(0,1,0) \\ \vspace{-8pt} \\ \displaystyle \quad +(u(0,1,0)-v(0,1,0)-1) v(0,0,0) \\ \vspace{-8pt} \\ \displaystyle \quad +\,(v(0,1,0)+1) u(0,1,0) +v(0,1,0)^2) u(0,0,0) +u(0,1,0) v(0,0,0)^2 \\ \vspace{-8pt} \\ \displaystyle \quad +\,(p(1,0,0) -p(0,0,0) +u(0,0,1) -u(0,1,0) v(0,1,0) -u(0,1,0)^2) v(0,0,0)). 
\end{array}$$ The leading monomial of this difference polynomial is $\lm(f_4) = p(2,0,0)$ and no more S-polynomials have to be considered. We conclude that the set $\{f_1,f'_2,f_3,f_4\}$ is a (finite) Gröbner $\Sigma$-basis of the $\Sigma$-ideal $I\subset P$. Since we make use of a monomial $\Sigma$-ordering for $P$, this is equivalent to saying that $\Sigma\cdot\{f_1,f'_2,f_3,f_4\}$ is a Gröbner basis of the ideal $I$, and this can also be verified by applying the classical Gröbner bases routines to a proper truncation of the basis $\Sigma\cdot \{f_1,f_2,f_3\}$. In fact, because the maximal order in the input generators is 2, by Proposition \[finsigmacrit\] it is reasonable to initially bound the order of the variables of $P$ to 4 or 5. Even though it is not the case in this example, observe that the maximal order in the elements of a Gröbner $\Sigma$-basis may grow during the computation. Therefore, as a general strategy, we suggest bounding the variables order to a value which is reasonably greater than double the maximal order of the input. The computing time for obtaining a Gröbner basis of $I$ with the Maple implementation of Faugère’s F4 algorithm amounts to 20 seconds for order 4 and 5 hours for order 5 on our laptop Intel Core 2 Duo at 2.10 GHz with 8 GB RAM. By the procedure $\SigmaGBasis$ that we implemented in the Maple language as a variant of Buchberger’s procedure (see [@LS]), the computing time for a Gröbner $\Sigma$-basis of $I$ is instead 0 seconds for order 4 and 3 seconds for order 5, since just two reductions are needed. In other words, this speed-up is due to the $\Sigma$-criterion, which drastically decreases the number of S-polynomial reductions, which are sometimes very time-consuming. Note finally that the verification method for the property of strong consistency applied to the computed difference Gröbner basis shows that the finite difference approximation $\{f_1,f_2,f_3\}$ of the Navier-Stokes equations satisfies this property.
A Noetherianity criterion
=========================

As already noted, a critical feature of the algebra of partial difference polynomials $P = K[X(\Sigma)]$ is that some of its $\Sigma$-ideals are not only infinitely generated as ideals but also infinitely $\Sigma$-generated. One finds an immediate counterexample for $\Sigma = \langle \sigma \rangle$, that is, in the ordinary difference case. In fact, for some fixed variable $x_i\in X$ one has clearly that the ideal $I = \langle x_i(1)x_i(\sigma), x_i(1)x_i(\sigma^2),\ldots \rangle_\Sigma$ has no finite $\Sigma$-basis. For any $x_i\in X$ and for all $\sigma^j,\sigma^k\in\Sigma$ we have that $\sigma^k\cdot x_i(\sigma^j) = x_i(\sigma^{k+j})$ and one can identify $\sigma^k$ with the shift map $f_k:\N\to\N$ such that $f_k(j) = k + j$, which is strictly increasing. It is interesting to note that if we consider the larger monoid $\Inc(\N)$ of all strictly increasing maps $f:\N\to\N$ acting on $P$ as $f\cdot x_i(\sigma^j) = x_i(\sigma^{f(j)})$ then one has that $P$ is $\Inc(\N)$-Noetherian [@AH]. In other words, any $\Inc(\N)$-ideal of $P$ has a finite $\Inc(\N)$-basis. We may hence say that the monoid $\Sigma$ is “too small” to provide $\Sigma$-Noetherianity. One way to solve this problem is to consider suitable quotients of the algebra of partial difference polynomials where Noetherianity, and a fortiori $\Sigma$-Noetherianity, is restored. A similar approach is used for the free associative algebra, which is also non-Noetherian, where the concepts of “algebras of solvable type”, “PBW algebras”, “G-algebras”, etc. naturally arise (see for instance [@Lev]).

Countably generated algebras {#countalg}
----------------------------

We now start with a general discussion for (commutative) algebras generated by a countable set of elements. Let $Y = \{y_1,y_2,\ldots\}$ be a countable set and denote by $P = K[Y]$ the polynomial algebra with variables set $Y$.
Since $P$ is a free algebra, all algebras generated by a countable set of elements are clearly isomorphic to quotients $P' = P/J$, where $J$ is some ideal of $P$. To control the cosets in $P'$, a standard approach consists in defining a normal form modulo $J$ associated to a monomial ordering of $P$. In what follows, let $\prec$ be a monomial ordering of $P$ such that $y_1\prec y_2\prec\ldots$. Put $M = \Mon(P)$ and denote $M'' = M\setminus\lm(J)$. Moreover, define the $K$-subspace $P'' = \langle M'' \rangle_K\subset P$. The elements of $M''$ are called [*normal monomials modulo $J$ (with respect to $\prec$)*]{}. The polynomials in $P''$ are said to be [*in normal form modulo $J$*]{}. Since $P$ is endowed with a monomial ordering, by a standard argument based on the algorithm $\Reduce$ applied to the set $J$ one obtains the following result. \[macaulay\] A $K$-linear basis of the algebra $P'$ is given by the set $M' = \{m + J\mid m\in M''\}$. Let $f\in P$. Denote by $\NF(f)$ the unique element of $P''$ such that $f - \NF(f)\in J$. In other words, one has $\NF(f) = \Reduce(f,J)$. We call $\NF(f)$ the [*normal form of $f$ modulo $J$ (with respect to $\prec$)*]{}. By Proposition \[macaulay\], one has that the mapping $f + J\mapsto \NF(f)$ defines a linear isomorphism between $P' = P/J$ and $P''= \langle M'' \rangle_K$. An algebra structure is hence defined for $P''$ by imposing that such a mapping is also an algebra isomorphism, that is, we define $f\cdot g = \NF(f g)$, for all $f,g\in P''$. Then, we have a complete identification of $M'$ with $M''$ and $P'$ with $P''$, that is, we identify cosets with normal forms together with their algebra structures. We will make use of this from now on. We hence define the set of [*normal variables*]{} $$Y' = Y\cap M' = Y\setminus\lm(J).$$ Clearly, normal variables depend strictly on the monomial ordering one uses in $P$. \[noethcrit\] Let $P$ be endowed with a monomial ordering.
If the set of normal variables $Y'$ is finite then $P'$ is a Noetherian algebra. It is sufficient to note that all normal monomials are products of normal variables and therefore the quotient algebra $P' = P/J$ is in fact generated by the set $Y'$. If $Y'$ is finite then $P'$ is a finitely generated (commutative) algebra and hence it satisfies the Noetherian property. We now need to introduce the notion of Gröbner basis for the ideals of $P' = P/J$. After the identification of cosets with normal forms, recall that $M' = M\setminus\lm(J)$ and $P' = \langle M' \rangle_K$ is a subspace of $P$ endowed with multiplication $f\cdot g = \NF(f g)$, for all $f, g\in P'$. Then, all ideals $I'\subset P'$ have the form $I' = I/J = \{ \NF(f)\mid f\in I\}$, for some ideal $J\subset I\subset P$. Note that $\NF(f)\in I$ for any $f\in I$, which implies that in fact $I' = I\cap P'$. Since the quotient algebra $P'/I'$ is isomorphic to $P/I$ and Gröbner bases give rise to $K$-linear bases of normal monomials for the quotients, one introduces the following definition. \[quogb\] Let $I' = I\cap P'$ be an ideal of $P'$ where $I$ is an ideal of $P$ containing $J$. Moreover, consider $G'\subset I'$. We call $G'$ a [*Gröbner basis*]{} of $I'$ if $G'\cup J$ is a Gröbner basis of $I$. Let $G\subset P$. Recall that $\LM(G)$ denotes the ideal of $P$ generated by the set $\lm(G) = \{\lm(g)\mid g\in G, g\neq 0\}$. \[quogbchar\] Let $I'$ be an ideal of $P'$ and let $G'\subset I'$. Then, the set $G'$ is a Gröbner basis of $I'$ if and only if $\LM(G') = \LM(I')$. Let $J\subset I\subset P$ be an ideal such that $I' = I\cap P'$. Assume $\LM(G') = \LM(I')$. Let $f\in I$ and denote $f' = \NF(f)$. If $\lm(f)\notin\LM(J)$ then clearly $\lm(f) = \lm(f')$. Moreover, since $\lm(f')\in\LM(I') \subset \LM(G')$ one has that $\lm(f) = \lm(f') = m \lm(g')$, for some $m\in M, g'\in G'$. We conclude that $G'\cup J$ is a Gröbner basis of $I$. Suppose now that the latter condition holds. Since $G'\subset I'$, we have clearly that $\LM(G')\subset \LM(I')$.
Let now $f'\in I'\subset I$. Then, there are $m\in M, g\in G'\cup J$ such that $\lm(f') = m \lm(g)$. Since $\lm(f')\in M'$ then also $\lm(g)\in M'$ and hence $g\in G'$. We conclude that $\LM(G') = \LM(I')$. \[normonid\] Assume that the set of normal variables $Y' = Y\cap M'$ is finite. Then, any monomial ideal $I = \langle I\cap M' \rangle\subset P$ has a finite basis. It is sufficient to invoke Dickson’s Lemma (see for instance [@CLO]) for the ideal $I$, which is generated by normal monomials that are products of a finite number of normal variables. \[quofingb\] If $Y'$ is a finite set then any ideal $I'\subset P'$ has a finite Gröbner basis. According to Proposition \[quogbchar\], consider the ideal $\LM(I')\subset P$, which is generated by the set of normal monomials $\lm(I')$. Then, it is sufficient to apply Proposition \[normonid\] to this ideal. It is clear that if $G$ is any Gröbner basis of an ideal $J\neq P$ then $Y' = Y\setminus\lm(G)$. Note that $Y$ is a countable set. Thus, if $Y'$ is finite and hence $P' = K[Y']$ is a Noetherian algebra, then $G$ has to be an infinite set. In general, such a Gröbner basis cannot be computed, but this may be possible when $P'$ is a $\Sigma$-algebra owing to the notion of Gröbner $\Sigma$-basis. $\Sigma$-algebras ----------------- From now on, we assume again that $P = K[X(\Sigma)]$ is the algebra of partial difference polynomials. Let $J\subset P$ be a $\Sigma$-ideal and define the quotient $\Sigma$-algebra $P' = P/J$. As an algebra, we clearly have that $P'$ is generated by the cosets $x_i(\sigma) + J$, for all $x_i(\sigma)\in X(\Sigma)$. Moreover, $P'$ is a $\Sigma$-algebra which is $\Sigma$-generated by the cosets $x_i(1) + J$, for any $x_i(1)\in X(1)$. In fact, $J$ is the $\Sigma$-ideal containing all $\Sigma$-algebra relations satisfied by such generators. 
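In the ordinary commutative setting, the notions above can be experimented with using any Gröbner-basis engine. The following sketch (an illustration under the assumption that SymPy's `groebner` and `reduced` are available; it is not the Σ-setting of this paper) computes a Gröbner basis of a toy ideal, lists the normal monomials $M''$, and evaluates a normal form $\NF(f) = \Reduce(f, J)$.

```python
from sympy import symbols, groebner, reduced
from itertools import product

# Toy commutative example: P = Q[y1, y2], J = <y1^2 - y2, y2^3 - 1>,
# lex ordering with y2 < y1.
y1, y2 = symbols('y1 y2')
G = groebner([y1**2 - y2, y2**3 - 1], y1, y2, order='lex')

# The leading monomials are y1^2 and y2^3, hence the normal monomials
# M'' are y1^a * y2^b with a < 2, b < 3: a K-linear basis of P' = P/J.
normal_monomials = [y1**a * y2**b for a, b in product(range(2), range(3))]

# NF(f) = Reduce(f, J): the unique representative of f + J in <M''>_K.
f = y1**5 + y2**4
_, nf = reduced(f, list(G), y1, y2, order='lex')
print(nf)  # y1^5 = y1*(y1^2)^2 ≡ y1*y2^2 and y2^4 ≡ y2 modulo J
```

Note that the two leading monomials are coprime, so the two generators already form a Gröbner basis by the product criterion, mirroring the criterion invoked later in the text.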
Let $P$ be endowed with a monomial $\Sigma$-ordering $\prec$ and define, as in Subsection \[countalg\], the set $M'\subset M = \Mon(P)$ of all normal monomials and the set $X(\Sigma)' = X(\Sigma)\cap M'$ of all normal variables. After the identification of cosets with normal forms, we have that $P'$ is an algebra generated by $X(\Sigma)'$ because normal monomials are products of normal variables. One also has the following result. The $\Sigma$-algebra $P'$ is $\Sigma$-generated by $X(1)' = X(1)\cap M'$. It is sufficient to show that $X(\Sigma)'\subset \Sigma\cdot X(1)'$. The set of non-normal variables $X(\Sigma)\setminus X(\Sigma)' = X(\Sigma)\cap\lm(J)$ is clearly invariant under the action of $\Sigma$. Therefore, if $x_i(1)$ is not a normal variable then $x_i(\sigma) = \sigma\cdot x_i(1)$ is not a normal variable either. In other words, if $x_i(\sigma)$ is a normal variable then $x_i(1)$ is also such a variable and one has that $x_i(\sigma) = \sigma\cdot x_i(1)$. To provide the quotient algebra $P' = P/J$ with the Noetherian property by means of Proposition \[noethcrit\], one has the following key result. \[normfincrit\] The set of normal variables $X(\Sigma)'$ is finite if and only if for all $1\leq i\leq n, 1\leq j\leq r$ one has that $x_i(\sigma_j^{d_{ij}})\in\lm(J)$, for some integers $d_{ij}\geq 0$. Put $x_i(\Sigma) = \{x_i(\sigma)\mid \sigma\in\Sigma\}$ and denote $x_i(\Sigma)' = x_i(\Sigma)\cap X(\Sigma)'$, for any $i=1,2,\ldots,n$. We then have to characterize when $x_i(\Sigma)'$ is a finite set. Consider the polynomial algebra $Q = K[\sigma_1,\ldots,\sigma_r]$ and a monomial ideal $I\subset Q$. It is well-known (see for instance [@CLO], Ch. 5, §3, Th. 6) that the quotient algebra $Q/I$ is finite dimensional if and only if there are integers $d_j\geq 0$ such that $\sigma_j^{d_j}\in I$, for all $j=1,2,\ldots,r$. 
It follows that $x_i(\Sigma)'$ is a finite set if and only if there exist integers $d_{ij}\geq 0$ such that $x_i(\sigma_j^{d_{ij}})\in\lm(J)$, for all indices $i,j$. \[fingb\] Let $J\subset P$ be a $\Sigma$-ideal such that for all $1\leq i\leq n, 1\leq j\leq r$ there are integers $d_{ij}\geq 0$ such that $x_i(\sigma_j^{d_{ij}})\in\lm(J)$. Then $J$ has a finite Gröbner $\Sigma$-basis. Denote $I = \langle x_i(\sigma_j^{d_{ij}})\mid 1\leq i\leq n, 1\leq j\leq r \rangle_\Sigma$ and $L = \LM(J)$. Then, we have that $I\subset L$ and the ideal $L/I\subset P/I$ has a finite basis owing to Proposition \[normonid\] and Proposition \[normfincrit\]. In other words, the $\Sigma$-ideal $L$ has a finite $\Sigma$-basis given by the finite $\Sigma$-basis of $I$ together with the finite basis of $L/I$. Note that the above result is not a necessary condition for the finiteness of Gröbner $\Sigma$-bases. Consider for instance the example presented in Section 5 of [@LS]. Nevertheless, Corollary \[fingb\] guarantees termination of the procedure $\SigmaGBasis$ when a complete set of variables $x_i(\sigma_j^{d_{ij}})$, for all $i,j$, occurs as leading monomials of some elements of the Gröbner $\Sigma$-basis at some intermediate step of the computation. In other words, reaching this condition ensures that $\SigmaGBasis$ will definitely stop at some later step. Of course, if the elements $f_{ij}\in P$ such that $\lm(f_{ij}) = x_i(\sigma_j^{d_{ij}})$ belong to the input $\Sigma$-basis of a $\Sigma$-ideal $J\subset P$ then we know in advance that Noetherianity and termination are both provided for the quotient $P' = P/J$. It may happen that such polynomials are themselves a Gröbner $\Sigma$-basis of $J$; this happens in particular in the monomial case, that is, when $J = \langle x_i(\sigma_j^{d_{ij}})\mid 1\leq i\leq n, 1\leq j\leq r \rangle_\Sigma$, for some $d_{ij}\geq 0$. 
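The quoted finite-dimensionality criterion from [@CLO] can be checked mechanically in the monomial case. Below is a minimal pure-Python sketch (the ideal chosen is hypothetical, for illustration only): monomials are encoded as exponent tuples, and the presence of pure powers $\sigma_1^2, \sigma_2^3$ in the ideal bounds the exponents of the normal monomials, forcing their number to be finite.

```python
from itertools import product

# Monomial ideal I = <s1^2, s1*s2, s2^3> in Q = K[s1, s2], with
# monomials encoded as exponent tuples.  The pure powers s1^2 and
# s2^3 lie in I, so Q/I is finite dimensional; its K-linear basis is
# the set of normal monomials.
I = [(2, 0), (1, 1), (0, 3)]

def in_ideal(m):
    # m lies in I iff some generator divides it componentwise
    return any(all(g <= x for g, x in zip(gen, m)) for gen in I)

# The pure powers bound every exponent, so a finite box suffices.
d1 = min(g[0] for g in I if g[1] == 0)   # 2
d2 = min(g[1] for g in I if g[0] == 0)   # 3
normal = [m for m in product(range(d1), range(d2)) if not in_ideal(m)]
print(len(normal), sorted(normal))  # 4 [(0, 0), (0, 1), (0, 2), (1, 0)]
```

Without the two pure powers the enumeration box would be unbounded, which is exactly the "only if" direction of the criterion.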
For all $d\geq 0$, define therefore $$J^{(d)} = \langle x_i(\sigma)\mid 1\leq i\leq n, \deg(\sigma) = d + 1 \rangle_\Sigma \supset \langle x_i(\sigma_j^{d+1})\mid 1\leq i\leq n, 1\leq j\leq r \rangle_\Sigma$$ and put $J^{(-\infty)} = \langle X(1) \rangle_\Sigma = \langle X(\Sigma) \rangle$. If $P = \bigoplus_{d\in\hN} P_d$ is the grading of $P$ defined by the order function then the subalgebra $P^{(d)} = \bigoplus_{i\leq d} P_i\subset P$ is clearly isomorphic to the quotient $P/J^{(d)}$ and hence it can be endowed with the structure of a $\Sigma$-algebra. Then, making use of the filtration of subalgebras $$K = P^{(-\infty)}\subset P^{(0)}\subset P^{(1)}\subset \ldots \subset P$$ to perform concrete computations with Gröbner $\Sigma$-bases, as explained in Section 4, corresponds to working progressively modulo the $\Sigma$-ideals $$\langle X(\Sigma) \rangle = J^{(-\infty)}\supset J^{(0)}\supset J^{(1)}\supset \ldots\supset 0$$ which provide the finite set of normal variables $X(\Sigma_d) = \{x_i(\sigma)\mid 1\leq i\leq n,\deg(\sigma)\leq d\}$ and hence the Noetherian property for each quotient $P/J^{(d)}$ isomorphic to $P^{(d)}$. In other words, termination by truncation is essentially a special instance of termination by membership. Another interesting case is the ordinary one, that is, when $\Sigma = \langle \sigma \rangle$. In this case, any set of polynomials $f_1,\ldots,f_n\in P$ such that $\lm(f_i) = x_i(\sigma^{d_i})$ ($d_i\geq 0$) is a Gröbner $\Sigma$-basis, since all S-polynomials trivially reduce to zero according to the product criterion. To motivate the last result of this section, let us consider the following problem. Assume that $K$ is a field of constants and let $V$ be a finite dimensional $K$-vector space. Denote by $\END_K(V)$ the algebra of $K$-linear endomorphisms of $V$ and let $Q'\subset\END_K(V)$ be a subalgebra generated by $r$ commuting endomorphisms. 
Since $Q = K[\sigma_1,\ldots,\sigma_r]$ is the free commutative algebra with $r$ generators, one has a $K$-algebra homomorphism $Q\to \END_K(V)$ sending the $\sigma_i$ onto the generators of $Q'$, that is, $V$ is a $Q$-module. Consider now the (Noetherian) polynomial algebra $R$ whose variables are a $K$-linear basis of $V$. In other words, $V$ is the subspace of linear forms of $R$ or, equivalently, $R$ is the symmetric algebra on $V$. Denote by $\End_K(R)$ the monoid of $K$-algebra endomorphisms of $R$. Since $\Sigma = \Mon(Q)$, we can extend the action of $\Sigma$ on $V$ to a monoid homomorphism $\Sigma\to \End_K(R)$, that is, $R$ is a $\Sigma$-algebra. Because $P = K[X(\Sigma)]$ is a free $\Sigma$-algebra, there is a suitable set $X = \{x_1,\ldots,x_n\}$ and a $\Sigma$-ideal $J\subset P$ such that $R$ is isomorphic to the quotient $\Sigma$-algebra $P' = P/J$. Since $Q$ acts linearly over $V$, one has that $J$ is $\Sigma$-generated by linear polynomials. Then, in the following result we analyze, from the perspective of Proposition \[noethcrit\] and Proposition \[normfincrit\], the simplest case of a linear $\Sigma$-ideal providing the quotient $\Sigma$-algebra with the Noetherian property. In Section 7 we will show that this case corresponds to having the finite dimensional commutative algebra $Q'$ decomposable as the tensor product of $r$ cyclic subalgebras. This happens in particular if $Q'$ is the group algebra of a finite abelian group, and one application of this specific case is given in Section 8. \[finlinact\] Let $K$ be a field of constants and consider the linear polynomials $f_{ij} = \sum_{0\leq k\leq d_{ij}} c_{ijk} x_i(\sigma_j^k)\in P$ where $c_{ijk}\in K$ and $c_{ijd_{ij}} = 1$, for all $1\leq i\leq n, 1\leq j\leq r$. Then $\lm(f_{ij}) = x_i(\sigma_j^{d_{ij}})$ and the set $\{f_{ij}\}$ is a Gröbner $\Sigma$-basis. 
Since $X(\Sigma)$ is endowed with a $\Sigma$-ordering, one has that $x_i(\sigma_j^k)\prec x_i(\sigma_j^l)$ if $k < l$ and hence $\lm(f_{ij}) = x_i(\sigma_j^{d_{ij}})$. Then, the only S-polynomials to be considered are $$s = \spoly(\sigma_q^{d_{iq}}\cdot f_{ip}, \sigma_p^{d_{ip}}\cdot f_{iq}) = \sum_{0\leq k<d_{ip}} c_{ipk} x_i(\sigma_q^{d_{iq}}\sigma_p^k) - \sum_{0\leq l<d_{iq}} c_{iql} x_i(\sigma_p^{d_{ip}}\sigma_q^l),$$ for all $1\leq i\leq n$ and $1\leq p<q\leq r$. By reducing $s$ with the polynomials $\sigma_p^k\cdot f_{iq}$ and $\sigma_q^l\cdot f_{ip}$ one obtains $$s' = - \sum_{0\leq k<d_{ip},0\leq l<d_{iq}} c_{ipk} c_{iql} x_i(\sigma_q^l\sigma_p^k) + \sum_{0\leq l<d_{iq},0\leq k<d_{ip}} c_{iql} c_{ipk} x_i(\sigma_p^k\sigma_q^l) = 0.$$ Note explicitly that the assumption that $K$ is a field of constants is necessary in the above result. In fact, if $\Sigma$ acts on $K$ in a non-trivial way then generally $$\begin{gathered} s' = - \sum_{0\leq k<d_{ip},0\leq l<d_{iq}} (\sigma_q^{d_{iq}}\cdot c_{ipk}) (\sigma_p^k\cdot c_{iql}) x_i(\sigma_q^l\sigma_p^k) \\ \qquad \,+ \sum_{0\leq l<d_{iq},0\leq k<d_{ip}} (\sigma_p^{d_{ip}}\cdot c_{iql}) (\sigma_q^l\cdot c_{ipk}) x_i(\sigma_p^k\sigma_q^l)\neq 0. \end{gathered}$$ A Noetherian $\Sigma$-algebra of special interest ================================================= From now on we assume that $K$ is a field of constants. We define the ideal $J = \langle f_{ij} \rangle_\Sigma\subset P$ where $f_{ij} =\sum_{0\leq k\leq d_{ij}} c_{ijk} x_i(\sigma_j^k)$ ($c_{ijk}\in K, c_{ijd_{ij}} = 1$), for any $1\leq i\leq n,1\leq j\leq r$. We want to describe the (Noetherian) $\Sigma$-algebra $P' = P/J$. To simplify notation, and since they are interesting in themselves, we consider separately the cases $r = 1$ and $n = 1$. 
First assume that $r = 1$, that is, $\Sigma = \langle \sigma \rangle$ and hence $P' = P/J$ where $J = \langle f_1,\ldots,f_n \rangle_\Sigma$ with $f_i = \sum_{0\leq k\leq d_i} c_{ik} x_i(\sigma^k)$ ($c_{ik}\in K, c_{id_i} = 1$). Define $Q = K[\sigma]$, the algebra of polynomials in the single variable $\sigma$, and denote $g_i = \sum_{0\leq k\leq d_i} c_{ik} \sigma^k\in Q$. Moreover, put $d = \sum_i d_i$ and let $V = K^d$. Finally, consider the $d\times d$ block-diagonal matrix $$A = A_1 \oplus\ldots\oplus A_n = \left( \begin{array}{cccc} A_1 & 0 & \ldots & 0 \\ 0 & A_2 & \ldots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \ldots & A_n \\ \end{array} \right)$$ where each block $A_i$ is the companion matrix of the polynomial $g_i$, that is, $$A_i = \left( \begin{array}{cccccc} 0 & 0 & \ldots & 0 & - c_{i0} \\ 1 & 0 & \ldots & 0 & - c_{i1} \\ 0 & 1 & \ldots & 0 & - c_{i2} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \ldots & 1 & - c_{i\,d_i-1} \\ \end{array} \right).$$ Note that $A$ has all entries in the base field $K$ and it can be considered as the Frobenius normal form of a $d\times d$ matrix provided that $g_1\mid\ldots\mid g_n$. Recall that any square matrix is similar over the base field to its Frobenius normal form, that is, we are in fact considering an arbitrary $K$-linear endomorphism of $V$. Then, the monoid $\Sigma$, or equivalently the algebra $Q$, acts linearly over the vector space $V$ by means of the representation $\sigma^k\mapsto A^k$. If $\{v_q\}_{1\leq q\leq d}$ is the canonical basis of $V$, we denote $x_i(\sigma^k) = v_q$ where $q = \sum_{j < i} d_j + k + 1$ for all $1\leq i\leq n, 0\leq k < d_i$. We hence have $x_i(\sigma^k) = A^k x_i(1) = \sigma^k\cdot x_i(1)$. In other words, for the $Q$-module $V$ one has the decomposition $V = \bigoplus_i V_i$ where $V_i$ is the cyclic submodule generated by $x_i(1)$ and annihilated by the ideal $\langle g_i \rangle\subset Q$. 
Denote now by $R$ the (Noetherian) polynomial algebra generated by the finite set of variables $X(\Sigma)' = \{x_i(\sigma^k)\mid 1\leq i\leq n, 0\leq k < d_i\}$, that is, $V$ coincides with the subspace of linear forms of $R$. Then, one extends the action of the monoid $\Sigma = \langle \sigma \rangle$ to the polynomial algebra $R$ in the natural way, that is, by putting, for all $k\geq 0$ and $x_i(\sigma^j)\in X(\Sigma)'$ $$\sigma^k\cdot x_i(\sigma^j) = A^k x_i(\sigma^j).$$ Denote by $\END_K(P)$ the algebra of all $K$-linear mappings $P\to P$ and by $\End_K(P)$ the monoid of $K$-algebra endomorphisms of $P$. Note that the representation $\rho:\Sigma\to\End_K(P)$ can be extended linearly to $\bar{\rho}:Q\to\END_K(P)$. Then, one has that $f_i = \sum_k c_{ik} x_i(\sigma^k) = \sum_k c_{ik} \sigma^k\cdot x_i(1) = g_i\cdot x_i(1)$, for all $i = 1,2,\ldots,n$. \[iso1\] If $\Sigma = \langle \sigma \rangle$ then the $\Sigma$-algebras $P',R$ are $\Sigma$-isomorphic. By Proposition \[finlinact\] we have that the set $\{f_i\}$ is a Gröbner $\Sigma$-basis of the $\Sigma$-ideal $J\subset P$ and it is clear that the set of normal variables modulo $J$ is exactly $X(\Sigma)' = \{x_i(\sigma^k)\mid 1\leq i\leq n, 0\leq k < d_i\}$. Moreover, since $R\subset P$ and $f_i = g_i\cdot x_i(1)$, one has that $\NF(x_i(\sigma^k)) = \NF(\sigma^k\cdot x_i(1)) = A^k x_i(1)$, for all $k\geq 0$ and $x_i(1)\in X(1)'$. Note that $R$ is $\Sigma$-generated by the set $X(1)' = \{x_i(1)\mid 1\leq i\leq n, d_i > 0\}$. Since $P$ is a free $\Sigma$-algebra, a surjective $\Sigma$-algebra homomorphism $\varphi:P\to R$ is defined such that $$x_i(1)\mapsto \left\{ \begin{array}{cl} x_i(1) & \mbox{if}\ d_i > 0, \\ 0 & \mbox{otherwise}. \end{array} \right.$$ Then, the above result states that the $\Sigma$-ideal $\Ker\varphi\subset P$ of all $\Sigma$-algebra relations satisfied by the generating set $X(1)'\cup \{0\}$ of $R$ is exactly $J$. 
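The ordinary case can be made concrete with a small numerical sketch. Take the hypothetical choice $n = 1$ and $g = \sigma^2 - \sigma - 1$, so that $f = x(\sigma^2) - x(\sigma) - x(1)$: the rewriting $x(\sigma^{k+2})\to x(\sigma^{k+1}) + x(\sigma^k)$ is the Fibonacci recurrence, and $\NF(x(\sigma^k)) = A^k x(1)$ is computed by powers of the companion matrix.

```python
# Ordinary case r = 1, n = 1 with g = sigma^2 - sigma - 1, that is,
# f = x(sigma^2) - x(sigma) - x(1).  Normal variables: x(1), x(sigma).
# The companion matrix of g (c0 = c1 = -1) is A = [[0, 1], [1, 1]]
# and NF(x(sigma^k)) = A^k x(1) in the basis {x(1), x(sigma)}.

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(A, k):
    R = [[1, 0], [0, 1]]
    for _ in range(k):
        R = mat_mul(R, A)
    return R

A = [[0, 1], [1, 1]]

def nf(k):
    """Coefficients (a, b) with NF(x(sigma^k)) = a*x(1) + b*x(sigma)."""
    Ak = mat_pow(A, k)
    return (Ak[0][0], Ak[1][0])  # A^k applied to e1 = x(1)

# The coefficients are consecutive Fibonacci numbers: nf(k) = (F_{k-1}, F_k).
print([nf(k) for k in range(5)])
# [(1, 0), (0, 1), (1, 1), (1, 2), (2, 3)]
```

This matches the statement of Proposition \[iso1\]: every variable $x(\sigma^k)$ of $P$ reduces to a linear form in the two normal variables of $R$.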
Assume now that $n = 1$, that is, $X = \{x\}$ and $\Sigma = \langle \sigma_1,\ldots,\sigma_r \rangle$. Then $P' = P/J$ where $J = \langle f_1,\ldots,f_r \rangle_\Sigma$ with $f_j = \sum_{0\leq k\leq d_j} c_{jk} x(\sigma_j^k)$ ($c_{jk}\in K, c_{jd_j} = 1$). Define $Q = K[\sigma_1,\ldots,\sigma_r]$, the algebra of polynomials in the variables $\sigma_j$, and denote $g_j = \sum_{0\leq k\leq d_j} c_{jk} \sigma_j^k\in Q$. One clearly has that $f_j = g_j\cdot x(1)$. As before, we consider the companion matrix $A_j$ of the polynomial $g_j$ in the single variable $\sigma_j$. Put $d = \prod_j d_j$. Then the monoid $\Sigma = \Sigma_1\times\cdots\times\Sigma_r$ ($\Sigma_j = \langle \sigma_j \rangle$), that is, the algebra $Q = Q_1 \otimes\cdots\otimes Q_r$ ($Q_j = K[\sigma_j]$), acts linearly over the space $V = K^d$ by means of the representation $$\sigma_1^{k_1} \cdots \sigma_r^{k_r}\mapsto A_1^{k_1} \otimes \cdots \otimes A_r^{k_r},$$ where $A_1^{k_1} \otimes \cdots \otimes A_r^{k_r}$ denotes the Kronecker product of the matrices $A_j^{k_j}$. In other words, the $Q$-module $V$ is the tensor product $V = V_1 \otimes\cdots\otimes V_r$ where $V_j$ is the cyclic $Q_j$-module defined by the representation $\sigma_j^k\mapsto A_j^k$. If $\{v_{k_1} \otimes\cdots\otimes v_{k_r}\}_{1\leq k_j\leq d_j}$ is the canonical basis of $V$, we put $x(\sigma_1^{k_1}\cdots\sigma_r^{k_r}) = v_{k_1+1} \otimes\cdots\otimes v_{k_r+1}$, for all $1\leq j\leq r, 0\leq k_j< d_j$. One then has $$x(\sigma_1^{k_1}\cdots\sigma_r^{k_r}) = (A_1^{k_1} \otimes\ldots\otimes A_r^{k_r}) x(1) = (\sigma_1^{k_1}\cdots\sigma_r^{k_r})\cdot x(1),$$ that is, $V$ is a cyclic module generated by $x(1)$. Denote now by $R$ the polynomial algebra generated by the finite set of variables $X(\Sigma)' = \{x(\sigma_1^{k_1}\cdots\sigma_r^{k_r})\mid 1\leq j\leq r, 0\leq k_j < d_j\}$, that is, $V$ is the subspace of linear forms of $R$. 
Again, we extend the action of the monoid $\Sigma = \langle \sigma_1,\ldots,\sigma_r \rangle$ to the polynomial algebra $R$ by putting, for all $k_1,\ldots,k_r\geq 0$ and $x(\sigma)\in X(\Sigma)'$ $$(\sigma_1^{k_1} \cdots \sigma_r^{k_r})\cdot x(\sigma) = (A_1^{k_1} \otimes \cdots \otimes A_r^{k_r}) x(\sigma).$$ If $X = \{x\}$ then $P',R$ are $\Sigma$-isomorphic. Assume $d\neq 0$, that is, $d_j\neq 0$ for all $j$. Again, by Proposition \[finlinact\] one has that the set $\{f_j\}$ is a Gröbner $\Sigma$-basis of $J\subset P$ and the set of normal variables modulo $J$ is clearly $X(\Sigma)' = \{x(\sigma_1^{k_1}\cdots\sigma_r^{k_r})\mid 1\leq j\leq r, 0\leq k_j < d_j\}$. Moreover, because $R\subset P$ and $f_j = g_j\cdot x(1)$ we obtain that, for all $k_1,\ldots,k_r\geq 0$ $$\NF(x(\sigma_1^{k_1} \cdots \sigma_r^{k_r})) = \NF((\sigma_1^{k_1} \cdots \sigma_r^{k_r})\cdot x(1)) = (A_1^{k_1} \otimes\ldots\otimes A_r^{k_r}) x(1).$$ Finally, if $d = 0$ then $P' = R = K$. Note that for $d\neq 0$ one has that $R$ is $\Sigma$-generated by the element $x(1)$. Then, the above result implies that the $\Sigma$-ideal $J\subset P$ coincides with the ideal of $\Sigma$-algebra relations satisfied by the generator $x(1)$, that is, it is the kernel of the $\Sigma$-algebra epimorphism $P\to R$ such that $x(1)\mapsto x(1)$. Consider finally the general case for the $\Sigma$-algebra $P' = P/J$ where $J = \langle f_{ij} \rangle_\Sigma$ and $f_{ij} = \sum_{0\leq k\leq d_{ij}} c_{ijk} x_i(\sigma_j^k)$ with $c_{ijk}\in K, c_{ijd_{ij}} = 1$, for all $1\leq i\leq n$ and $1\leq j\leq r$. By combining the previous results, one may conclude that such a structure arises from the $Q$-module $V = K^d$ where $d = \sum_{1\leq i\leq n} \prod_{1\leq j\leq r} d_{ij}$ and the representation is given by the mapping $$\prod_j \sigma_j^{k_j} \mapsto \bigoplus_i \bigotimes_j A_{ij}^{k_j}$$ where $A_{ij}$ is the companion matrix of the polynomial $g_{ij} = \sum_{0\leq k\leq d_{ij}} c_{ijk} \sigma_j^k$. 
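That the assignment $\sigma_1^{k_1}\cdots\sigma_r^{k_r}\mapsto A_1^{k_1}\otimes\cdots\otimes A_r^{k_r}$ is a representation of the commutative monoid $\Sigma$ rests on the mixed-product property of the Kronecker product, $(A\otimes B)(C\otimes D) = (AC)\otimes(BD)$. The following pure-Python sketch checks this on two small companion matrices chosen purely for illustration.

```python
# Mixed-product check for the Kronecker-product representation,
# using the companion matrices of g1 = s^2 - s - 1 and g2 = s^2 + 1.

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(m)) for j in range(p)]
            for i in range(n)]

def kron(A, B):
    # kron(A, B)[i*p + k][j*q + l] = A[i][j] * B[k][l]
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(q)]
            for i in range(len(A)) for k in range(p)]

A1 = [[0, 1], [1, 1]]    # companion matrix of s^2 - s - 1
A2 = [[0, -1], [1, 0]]   # companion matrix of s^2 + 1
I2 = [[1, 0], [0, 1]]

# sigma_1 and sigma_2 act by A1 (x) I and I (x) A2; these commute and
# their product represents sigma_1 * sigma_2:
assert mat_mul(kron(A1, I2), kron(I2, A2)) == kron(A1, A2)
assert mat_mul(kron(I2, A2), kron(A1, I2)) == kron(A1, A2)
# (sigma_1 * sigma_2)^2 is represented by A1^2 (x) A2^2:
assert mat_mul(kron(A1, A2), kron(A1, A2)) == \
       kron(mat_mul(A1, A1), mat_mul(A2, A2))
```

The same mixed-product identity justifies the block-diagonal-of-Kronecker-products representation in the general case above.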
In other words, we have that $V = \bigoplus_i \bigotimes_j V_{ij}$ where $V_{ij}$ is the cyclic $Q_j$-module annihilated by the ideal $\langle g_{ij} \rangle\subset Q_j$. Denoting by $x_i(1)$ the generator of the $Q$-module $\bigotimes_j V_{ij}$, we obtain that $P'$ is isomorphic to the $\Sigma$-algebra $R = K[X(\Sigma)']$ where $X(\Sigma)' = \{x_i(\sigma_1^{k_{i1}}\cdots\sigma_r^{k_{ir}})\mid 1\leq i\leq n, 1\leq j\leq r, 0\leq k_{ij} < d_{ij}\}$ is the canonical basis of the space $V$. Then, one has that $J = \langle f_{ij} \rangle_\Sigma$ is exactly the $\Sigma$-ideal of $\Sigma$-algebra relations satisfied by the generating set $X(1)'\cup \{0\}$ of $R$. Another example =============== A long-standing problem in Gröbner bases theory concerns the possibility of adapting the definition and the computation of such bases to some form of symmetry, typically defined by groups, which one may have on the generators or on the ideal itself of some polynomial algebra (see for instance [@BF; @Ga]). The main objection against this possibility is that monomial orderings cannot be defined consistently with the group action, which implies that the symmetry disappears in the Gröbner basis. In fact, if the symmetry is defined by a monoid $\Sigma$ isomorphic to $\N^r$, we have found that the notion of $\Sigma$-ideal perfectly accords with monomial orderings and Gröbner bases. Moreover, in the previous section we have shown that by means of the notion of quotient $\Sigma$-algebra and the corresponding Gröbner bases tools one can deal with symmetries defined by suitable finite dimensional commutative algebras. Among these one finds the group algebras of finite abelian groups, and therefore this section is devoted to such a case. In other words, we will show that Gröbner bases of ideals having a finite abelian group symmetry can be “tamed” by means of $\Sigma$-algebras and their quotients. We fix now a setting that has been recently considered in [@St]. 
Note that in our approach all computations can be performed over any field (of constants) but in [@St] the base field is required to contain roots of unity. Fix $r = 1$, that is, $\Sigma = \langle \sigma \rangle$ and $Q = K[\sigma]$. Consider $\Sym_d$ the symmetric group on $d$ elements and let $\gamma\in\Sym_d$ be any permutation. Denote $\Gamma = \langle \gamma \rangle \subset\Sym_d$ the cyclic subgroup generated by $\gamma$. Moreover, let $\gamma = \gamma_1\cdots\gamma_n$ be the cycle decomposition of $\gamma$ and denote by $d_i$ the length of the cycle $\gamma_i$. Consider the polynomial algebra $R = K[x_i(\sigma^j)\mid 1\leq i\leq n, 0\leq j < d_i]$ and identify the subset $\{x_i(1),\ldots,x_i(\sigma^{d_i-1})\}$ with the support of the cycle $\gamma_i$. Define $\Aut_K(R)$ the group of $K$-algebra automorphisms of $R$. Clearly $R$ is a $\Gamma$-algebra, that is, there is a (faithful) group representation $\rho':\Gamma\to\Aut_K(R)$. Consider now the polynomials $g_i = \sigma^{d_i} - 1\in Q$ and define the $d\times d$ block-diagonal matrix $$A = \left( \begin{array}{cccc} A_1 & 0 & \ldots & 0 \\ 0 & A_2 & \ldots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \ldots & A_n \\ \end{array} \right)$$ where each block $A_i$ is the companion matrix of the polynomial $g_i$ which is the permutation matrix $$A_i = \left( \begin{array}{cccccc} 0 & 0 & \ldots & 0 & 1 \\ 1 & 0 & \ldots & 0 & 0 \\ 0 & 1 & \ldots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \ldots & 1 & 0 \\ \end{array} \right).$$ If we order the variables of $R$ as $x_1(1),\ldots,x_1(\sigma^{d_1-1}),\ldots, x_n(1),\ldots,x_n(\sigma^{d_n-1})$ then the representation $\rho'$ is defined as $\gamma^k\cdot x_i(\sigma^j) = A^k x_i(\sigma^j)$, for all $i,j,k$. In other words, by Proposition \[iso1\] one has that $R$ is a $\Sigma$-algebra isomorphic to $P' = P/J$ where $J = \langle f_1,\ldots,f_n \rangle_\Sigma$ and $f_i = g_i\cdot x_i(1) = x_i(\sigma^{d_i}) - x_i(1)\in P$. 
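As a quick sanity check, the companion matrix of $g_i = \sigma^{d_i} - 1$ is precisely the cyclic permutation matrix displayed above, and the block-diagonal matrix $A$ has finite order $\mathrm{lcm}(d_1,\ldots,d_n)$, matching the order of $\gamma$. A pure-Python sketch for hypothetical cycle lengths $d_1 = 3$, $d_2 = 2$:

```python
# Companion matrix of sigma^d - 1 = cyclic permutation matrix of a
# d-cycle; the block-diagonal A then has order lcm of the cycle
# lengths (here lcm(3, 2) = 6).

def cyclic_block(d):
    """d x d companion matrix of sigma^d - 1: subdiagonal 1s plus a 1
    in the top-right corner."""
    return [[1 if (i - j) % d == 1 else 0 for j in range(d)]
            for i in range(d)]

def block_diag(*blocks):
    n = sum(len(b) for b in blocks)
    A = [[0] * n for _ in range(n)]
    off = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, v in enumerate(row):
                A[off + i][off + j] = v
        off += len(b)
    return A

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

A = block_diag(cyclic_block(3), cyclic_block(2))
identity = [[1 if i == j else 0 for j in range(5)] for i in range(5)]
P6 = identity
for _ in range(6):          # gamma^6 = identity
    P6 = mat_mul(P6, A)
assert P6 == identity
```

This confirms that the representation $\gamma^k\mapsto A^k$ factors through the cyclic group $\Gamma$.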
Consider now a $\Gamma$-ideal (equivalently, a $\Sigma$-ideal) $L' = \langle h_1,\ldots,h_m \rangle_\Gamma\subset R$ and define the $\Sigma$-ideal $L = \langle h_1,\ldots,h_m, f_1,\ldots,f_n \rangle_\Sigma\subset P$. Note that $\Gamma$-ideals are called “symmetric ideals” in [@St]. According to Definition \[quogb\] and the identification of $R$ with the quotient $P'$, one has that $G'\subset L'$ is a Gröbner $\Gamma$-basis (equivalently, $\Sigma$-basis) of $L'$ if, by definition, $G'\cup \{f_1,\ldots,f_n\}$ is a Gröbner $\Sigma$-basis of $L$. In practice, the computation of $G'$ is obtained by the algorithm $\SigmaGBasis$, which terminates owing to Corollary \[fingb\]. To illustrate the method, we now fix $\gamma = (1 2 3 4 5 6 7 8)\in\Sym_8$ and $K = \Q$. To simplify the variable notation we identify $\Sigma$ with $\N$, that is, $R = K[x(0),x(1),\ldots,x(7)]$. Consider the following $\Gamma$-ideal of $R$ $$\begin{gathered} L' = \langle x(0)x(2) - x(1)^2, x(0)x(3) - x(1)x(2) \rangle_\Gamma = \\ \langle x(0)x(2) - x(1)^2, x(1)x(3) - x(2)^2, x(2)x(4) - x(3)^2, x(3)x(5) - x(4)^2, \\ x(4)x(6) - x(5)^2, x(5)x(7) - x(6)^2, x(7)^2 - x(0)x(6), x(1)x(7) - x(0)^2, \\ x(0)x(3) - x(1)x(2), x(1)x(4) - x(2)x(3), x(2)x(5) - x(3)x(4), \\ x(3)x(6) - x(4)x(5), x(4)x(7) - x(5)x(6), x(6)x(7) - x(0)x(5), \\ x(0)x(7) - x(1)x(6), x(2)x(7) - x(0)x(1) \rangle. \end{gathered}$$ Note that $x(0)x(2) - x(1)^2, x(1)x(3) - x(2)^2, x(0)x(3) - x(1)x(2)$ are the well-known equations of the twisted cubic in $\P^3$. Define now $f = x(8) - x(0)\in P$ and hence $R = P' = P/J$ where $J = \langle f \rangle_\Sigma$. Then, a Gröbner $\Gamma$-basis (or $\Sigma$-basis) of $L'$ is obtained by computing a Gröbner $\Sigma$-basis of the ideal $$L = \langle x(0)x(2) - x(1)^2, x(0)x(3) - x(1)x(2), f \rangle_\Sigma\subset P.$$ Fix for instance the lexicographic monomial ordering on $P$ (hence on $R$) with $x(0)\prec x(1)\prec\ldots$, which is clearly a $\Sigma$-ordering. 
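The expansion of the two $\Gamma$-generators into the sixteen listed quadrics can be reproduced mechanically. The following sketch (using SymPy for symbolic polynomials, an assumption of this illustration) applies the shift $x(i)\mapsto x(i+1 \bmod 8)$ induced by $\gamma$ and recovers the listed generators up to sign.

```python
from sympy import symbols

# Expand the Gamma-generating set of L' under the cyclic shift
# x(i) |-> x(i + 1 mod 8) induced by gamma = (1 2 3 4 5 6 7 8).
x = symbols('x0:8')

def shift(p):
    return p.subs({x[i]: x[(i + 1) % 8] for i in range(8)},
                  simultaneous=True)

gens = [x[0]*x[2] - x[1]**2, x[0]*x[3] - x[1]*x[2]]
orbit = []
for g in gens:
    p = g
    for _ in range(8):
        orbit.append(p)
        p = shift(p)

# 16 distinct quadrics, matching the displayed generating set of L'
assert len(set(orbit)) == 16
# e.g. the 7-fold shift of x(0)x(2) - x(1)^2 is x(1)x(7) - x(0)^2:
assert any(p == x[1]*x[7] - x[0]**2 for p in orbit)
```

Only the two orbit representatives (together with $f$) need to be supplied to $\SigmaGBasis$; the remaining fourteen quadrics are generated by the $\Sigma$-action.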
The usual minimal Gröbner basis of $L'$ consists of 54 elements whose leading monomials are $$\begin{gathered} x(7)^2, x(6)x(7), \\ x(0)x(2)\to x(1)x(3)\to x(2)x(4)\to x(3)x(5)\to x(4)x(6)\to x(5)x(7), \\ x(0)x(3)\to x(1)x(4)\to x(2)x(5)\to x(3)x(6)\to x(4)x(7), x(2)x(7), \\ x(1)x(7), x(0)x(7), x(6)^3, x(0)x(4)^2\to x(1)x(5)^2\to x(2)x(6)^2, \\ x(0)^2x(4)\to x(1)^2x(5)\to x(2)^2x(6)\to x(3)^2x(7), x(0)^2x(6), x(0)x(6)^2, \\ x(1)x(6)^2, x(1)^2x(6), x(3)^2x(4)\to x(4)^2x(5)\to x(5)^2x(6), \\ x(4)x(5)^2\to x(5)x(6)^2, x(0)x(1)x(6), x(0)x(4)x(5)\to x(1)x(5)x(6), \\ x(0)x(5)x(6), x(1)x(2)x(6), x(2)^4\to x(3)^4\to x(4)^4\to x(5)^4, x(0)^3x(5), \\ x(0)x(5)^3, x(2)^3x(3), x(2)x(3)^3\to x(3)x(4)^3, x(0)^2x(5)^2, x(0)^2x(1)x(5), \\ x(2)^2x(3)^2, x(1)^2x(2)^3, x(1)^4x(2)^2, x(1)^6x(2), x(1)^8. \end{gathered}$$ An arrow between two monomials means that a monomial can be obtained from the previous one by means of the $\Sigma$-action. Then, the minimal Gröbner $\Gamma$-basis of $L'$ has just 32 elements and their leading monomials are $$\begin{gathered} x(7)^2, x(6)x(7), x(0)x(2), x(0)x(3), x(2)x(7), x(1)x(7), x(0)x(7), x(6)^3, \\ x(0)x(4)^2, x(0)^2x(4), x(0)^2x(6), x(0)x(6)^2, x(1)x(6)^2, x(1)^2x(6), x(3)^2x(4), \\ x(4)x(5)^2, x(0)x(1)x(6), x(0)x(4)x(5), x(0)x(5)x(6), x(1)x(2)x(6), x(2)^4, \\ x(0)^3x(5), x(0)x(5)^3, x(2)^3x(3), x(2)x(3)^3, x(0)^2x(5)^2, x(0)^2x(1)x(5), \\ x(2)^2x(3)^2, x(1)^2x(2)^3, x(1)^4x(2)^2, x(1)^6x(2), x(1)^8. \end{gathered}$$ In other words, our approach based on $\Sigma$-compatible structures is able to appropriately define a Gröbner basis that generates a group-invariant ideal up to the group action, and this basis is actually more compact than the usual Gröbner basis. 
The elements of the minimal Gröbner $\Gamma$-basis of $L'$ are the following: $$\begin{gathered} x(7)^2 - x(0)x(6), x(6)x(7) - x(0)x(5), x(0)x(2) - x(1)^2, x(0)x(3) - x(1)x(2), \\ x(2)x(7) - x(0)x(1), x(1)x(7) - x(0)^2, x(0)x(7) - x(1)x(6), x(6)^3 - x(0)x(5)^2, \\ x(0)x(4)^2 - x(2)x(3)^2, x(0)^2x(4) - x(1)^2x(2), x(0)^2x(6) - x(0)x(1)x(5), \\ x(0)x(6)^2 - x(1)x(5)x(6), x(1)x(6)^2 - x(0)^2x(5), x(1)^2x(6) - x(0)^3, \\ x(3)^2x(4) - x(0)x(1)^2, x(4)x(5)^2 - x(0)x(1)x(5), x(0)x(1)x(6) - x(2)^2x(3), \\ x(0)x(4)x(5) - x(0)^2x(1), x(0)x(5)x(6) - x(3)x(4)^2, \\ x(1)x(2)x(6) - x(0)x(4)x(5), x(2)^4 - x(0)^4, x(0)^3x(5) - x(3)^3x(4), \\ x(0)x(5)^3 - x(3)x(4)^3, x(2)^3x(3) - x(0)^3x(1), x(2)x(3)^3 - x(0)x(1)^3, \\ x(0)^2x(5)^2 - x(2)^2x(3)^2, x(0)^2x(1)x(5) - x(3)^2x(4)^2, x(2)^2x(3)^2 - x(0)^2x(1)^2, \\ x(1)^2x(2)^3 - x(0)^5, x(1)^4x(2)^2 - x(0)^6, x(1)^6x(2) - x(0)^7, x(1)^8 - x(0)^8. \end{gathered}$$ We have computed these elements by applying the algorithm $\SigmaGBasis$ to the $\Sigma$-ideal $L\subset P$ in the same way as for the example in Section 5. For details about different strategies to implement this method we refer to [@LS]. Conclusions and further directions ================================== In this paper we showed that a viable theory of Gröbner bases exists for the algebra of partial difference polynomials, which implies that one can perform symbolic (formal) computations for systems of partial difference equations. In fact, we proved that such Gröbner bases can be computed in a finite number of steps when truncated with respect to an appropriate grading or when they contain elements with suitable linear leading monomials. Precisely, since the algebras of difference polynomials are free objects in the category of $\Sigma$-algebras, where $\Sigma$ is a monoid isomorphic to $\N^r$, we obtained the latter result as a Noetherianity criterion for a class of finitely generated $\Sigma$-algebras. 
Among such Noetherian $\Sigma$-algebras one finds polynomial algebras in a finite number of variables where a tensor product of a finite number of algebras generated by single matrices acts over the subspace of linear forms. Considering that such commutative tensor algebras include the group algebras of finite abelian groups, one obtains that there exists a consistent Gröbner basis theory for ideals of finitely generated polynomial algebras that are invariant under such groups. In our opinion, this represents an interesting step in the direction of developing computational methods for ideals or algebras that are subject to group or algebra symmetries. As for further developments, we suggest that the study of important structures related to Gröbner bases, like Hilbert series and free resolutions, should be developed from the perspective that their definition and computation have to be consistent with whatever symmetry one defines on a polynomial algebra. An important work in this direction is contained in [@KLMP]. Finally, the problem of studying conditions providing $\Sigma$-Noetherianity (instead of simple Noetherianity) for finitely generated $\Sigma$-algebras is also an intriguing subject. Acknowledgments {#acknowledgments .unnumbered} =============== The authors would like to express their gratitude to the reviewers for all valuable remarks that have helped to make the paper more readable. [00]{} Amodio, P.; Blinkov, Yu.A.; Gerdt, V.P.; La Scala, R., On Consistency of Finite Difference Approximations to the Navier-Stokes Equations. In Gerdt, V.P. et al. (Eds.), Computer Algebra in Scientific Computing - CASC 2013, [*Lect. Notes Comput. Sc.*]{}, 8136, Springer-Verlag, Berlin, 2013, 46–60. Aschenbrenner, M.; Hillar, C.J., Finite generation of symmetric ideals. [*Trans. Amer. Math. Soc.*]{}, [**359**]{} (2007), no. 11, 5171–5192. Bächler, T.; Gerdt, V.; Lange-Hegermann, M.; Robertz, D., Algorithmic Thomas Decomposition of Algebraic and Differential Systems. [*J. Sym. 
Comput.*]{}, [**47**]{} (2012), 1233–1266. Bergman, G. M., The diamond lemma for ring theory. [*Adv. in Math.*]{}, [**29**]{} (1978), no. 2, 178–218. Björck, G.; Fröberg, R.: A Faster Way to Count the Solution of Inhomogeneous Systems of Algebraic Equations, with Applications to Cyclic n-Roots. [*J. Sym. Comput.*]{}, [**12**]{} (1991), 329–336. Boulier, F.; Hubert, E.: — Package for differential elimination and analysis of differential systems, (1996–2004). . Buchberger, B., Ein algorithmisches Kriterium für die Lösbarkeit eines algebraischen Gleichungssystems. (German), [*Aequationes Math.*]{}, [**4**]{} (1970), 374–383. Carrà Ferro, G., Differential Gröbner bases in one variable and in the partial case. Algorithms and software for symbolic analysis of nonlinear systems. [*Math. Comput. Modelling*]{}, [**25**]{} (1997), 1–10. Chyzak, F., Gröbner bases, symbolic summation and symbolic integration. Gröbner bases and applications (Linz, 1998), [*London Math. Soc. Lecture Note Ser.*]{}, 251, Cambridge Univ. Press, Cambridge, 1998, 32–60. Cohn, R.M., Difference algebra. Interscience Publishers John Wiley & Sons, New York-London-Sydney, 1965. Cox, D.; Little, J.; O’Shea, D., Ideals, varieties, and algorithms. An introduction to computational algebraic geometry and commutative algebra. Undergraduate Texts in Mathematics. Springer, New York, 2007. Gatermann, K., Computer algebra methods for equivariant dynamical systems. [*Lect. Notes Math.*]{}, 1728, Springer-Verlag, Berlin, 2000. Gerdt, V.P., Gröbner Bases in Perturbative Calculations, [*Nucl. Phys. B (Proc. Suppl.)*]{}, [**135**]{}, (2004), 232–237. Gerdt, V.P., Consistency Analysis of Finite Difference Approximations to PDE Systems. In: Proc. of Mathematical Modeling and Computational Physics. MMCP 2011, [*Lect. Notes Comput. Sci.*]{}, 7125, Springer, Heidelberg, 2012, 28–43. Gerdt, V.P.; Blinkov, Y.A.; Mozzhilkin, V.V., Gröbner bases and generation of difference schemes for partial differential equations. 
[*SIGMA Symmetry Integrability Geom. Methods Appl.*]{}, [**2**]{}, (2006), Paper 051, 26 pp. Gerdt, V.P.; Robertz, D., Consistency of Finite Difference Approximations for Linear PDE Systems and its Algorithmic Verification. In: Watt, S.M. (Ed.), Proceedings of ISSAC 2010 (München), ACM, New York, 2010, 53–59. Gerdt, V.P.; Robertz D., Computation of Difference Gröbner Bases, [*Computer Science Journal of Moldova*]{}, [**20**]{} (2012), 203–226. Greuel, G.-M.; Pfister, G., [*A Singular introduction to commutative algebra. Second, extended edition*]{}. With contributions by O. Bachmann, C. Lossen and H. Schönemann. Springer, Berlin, 2008. Higman, G., Ordering by divisibility in abstract algebras. [*Proc. London Math. Soc. (3)*]{}, [**2**]{} (1952), 326–336. Kondratieva M., Levin A., Mikhalev, A., Pankratiev E., [*Differential and Difference Dimension Polynomials*]{}. Mathematics and Its Applications, Kluwer, Dordrecht, 1999. La Scala, R.; Levandovskyy, V., Letterplace ideals and non-commutative Gröbner bases. [*J. Sym. Comput.*]{}, [**44**]{} (2009), 1374–1393. La Scala, R.; Levandovskyy, V., Skew polynomial rings, Gröbner bases and the letterplace embedding of the free associative algebra. [*J. Sym. Comput.*]{}, [**48**]{} (2013), 110–131. La Scala, R., Gröbner bases and gradings for partial difference ideals. [*Math. Comp.*]{}, to appear, (2014), 1–28. http://dx.doi.org/10.1090/S0025-5718-2014-02859-7 La Scala, R., Extended letterplace correspondence for nongraded noncommutative ideals and related algorithms. preprint (2012), 1–22. arXiv:1206.6027 Levandovskyy, V., PBW Bases, Non-Degeneracy Conditions and Applications. In: Buchweitz, R.O.; Lenzing, H. (eds.): Proceedings of ICRA X, (Toronto 2002), [*Fields Institute Communications*]{}, 45, AMS, 2005, 229–246. Levandovskyy V.; Martin B., A Symbolic Approach to Generation and Analysis of Finite Difference Schemes of Partial Differential Equations. In: Langer U. et al. 
(Eds.), Numerical and Symbolic Scientific Computing: Progress and Prospects. Springer (2012), 123–156. Levin, A., Difference algebra. [*Algebra and Applications*]{}, 8. Springer, New York, 2008. Li, Z.; Wu, M., Transforming linear functional systems into fully integrable systems. [*J. Sym. Comput.*]{}, [**47**]{} (2012), 711–732. Steidel, S., Gröbner bases of symmetric ideals. [*J. Sym. Comput.*]{}, [**54**]{} (2013), 72–86. Wu, M., On Solutions of Linear Functional Systems and Factorization of Modules over Laurent-Ore Algebras, PhD thesis, Chinese Academy of Sciences and Université de Nice-Sophia Antipolis. Zhou, M.; Winkler, F., Computing difference-differential dimension polynomials by relative Gröbner bases in difference-differential modules. [*J. Sym. Comput.*]{}, [**43**]{} (2008), 726–745. Zobnin, A.I., Admissible orderings and finiteness criteria for differential standard bases. ISSAC 2005 (Beijing), 365–372 (electronic), ACM, New York, 2005. [^1]: Both authors acknowledge the support of the University of Bari and of the visiting program of Istituto Nazionale di Alta Matematica. The author V.G. was also supported by the grant 13-01-00668 from the Russian Foundation for Basic Research and by grant 3802.2012.2 from the Ministry of Education and Science of the Russian Federation.
--- abstract: 'We study electron transport through a semiconductor superlattice subject to an electric field parallel to and a magnetic field perpendicular to the growth axis. Using a single miniband, semiclassical balance equation model with both elastic and inelastic scattering, we find that (1) the current-voltage characteristic becomes multistable in a large magnetic field; and (2) “hot” electrons display novel features in their current-voltage characteristics, including absolute negative conductivity (ANC) and, for sufficiently strong magnetic fields, a spontaneous dc current at zero bias. We discuss possible experimental situations providing the necessary hot electrons to observe the predicted ANC and spontaneous dc current generation.' address: | $^1$Department of Physics, University of Illinois at Urbana-Champaign, 1110 West Green St., Urbana, IL 61801\ $^2$Department of Physics, Loughborough University, Loughborough LE11 3TU, UK\ $^3$Theory of Nonlinear Processes Laboratory, Kirensky Institute of Physics, Krasnoyarsk 660036, Russia\ $^*$ Current address: Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556 author: - 'Ethan H. Cannon$^{1*}$, Feodor V. Kusmartsev$^{2}$, Kirill N. Alekseev$^{3}$, and David K. 
Campbell$^1$' title: Absolute Negative Conductivity and Spontaneous Current Generation in Semiconductor Superlattices with Hot Electrons --- As first realized by Esaki and Tsu [@esaki70], semiconductor superlattices (SSLs) are excellent systems for exploring nonlinear transport effects, since their long spatial periodicity implies that SSLs have small Brillouin zones and very narrow “minibands.” Applied fields accelerate Bloch electrons in a band according to Bloch’s acceleration theorem, $\dot{\mathbf{k}}=-(e/\hbar)[\mathbf{E}+(\mathbf{v}\times\mathbf{B})/c]$, where $\mathbf{k}$ is the crystal momentum of the electron, $-e$ its charge, $\mathbf{E}$ the electric field, $\mathbf{B}$ the magnetic field, $\mathbf{v}$ the electron’s velocity, and $c$ the speed of light. In SSLs, both the velocity and effective mass depend on the crystal momentum; in fact, the effective mass is negative above the band’s inflection point, corresponding to the fact that electrons slow down to zero velocity as they reach the edge of the Brillouin zone. The acceleration due to the external fields is balanced by scattering processes that limit the crystal momentum of electrons. In clean SSLs with only modest fields, electrons can reach the negative effective mass (NEM) portion of the miniband before scattering. For an electric field oriented along the SSL growth axis, the current-voltage characteristic exhibits a peak followed by negative differential conductivity (NDC) when a significant fraction of electrons explore the NEM region of the miniband [@esaki70]; with an additional magnetic field perpendicular to the growth axis, NDC occurs at a [*larger*]{} bias because the magnetic field impedes the increase of crystal momentum along the growth axis [@ivexpt].
In this letter, we study electron transport through a single miniband of a spatially homogeneous SSL with growth axis in the $z$-direction in the presence of a constant magnetic field, $B$, in the $x$-direction and an electric field, $E$, in the $z$-direction. We assume a tight-binding dispersion relation for the SSL miniband, $\epsilon({\mathbf{k}})=\hbar^2k_y^2/2m^*+\Delta/2[1-\cos(k_z a)]$, where $\epsilon$ is the energy of an electron with crystal momentum $\mathbf{k}$, $m^*$ is the effective mass within the plane of the quantum wells (QWs) that form the SSL, $\Delta$ is the miniband width, and $a$ the SSL period. Generalizing the approach of [@bal_eqns] to include the effects of the magnetic field, we obtain the following balance equations [@future] $$\begin{aligned} \dot{V_y}&=&-\frac{eB}{m^*c}V_z-{\gamma_{vy}}V_y \label{eq:be2} \\ \dot{V_z}&=&-\frac{e}{m(\varepsilon_z)}[E-\frac{B V_y}{c}]-{\gamma_{vz}}V_z \label{eq:be3} \\ \dot{\varepsilon_z}&=&-eEV_z+\frac{eB}{c}V_y V_z-{\gamma_\varepsilon}[\varepsilon_z-\varepsilon_{eq,z}]. \label{eq:be4}\end{aligned}$$ The average electron velocity, ${\mathbf{V}}=(V_y,V_z)$, is obtained by integrating the distribution function satisfying the Boltzmann transport equation over the Brillouin zone; and ${\gamma_{vy}}$ and ${\gamma_{vz}}$ are the relaxation rates for the corresponding components of $\mathbf{V}$ following from elastic impurity, interface roughness and disorder scattering, and inelastic phonon scattering. It is convenient to separate the total energy of the electrons into parts associated with longitudinal and transverse motion. 
Doing so, $\varepsilon_z$ is the average energy of motion along the growth axis with equilibrium value $\varepsilon_{eq,z}$; ${\gamma_\varepsilon}$ represents its relaxation rate due mainly to inelastic phonon scattering (elastic scattering that reduces the energy of motion along the superlattice growth axis and increases the energy of (transverse) motion within the QWs also contributes). Note that the balance equations contain an effective mass term dependent on $\varepsilon_z$, $m(\varepsilon_z)=m_0/(1-2\varepsilon_z/\Delta)$, which follows from the crystal momentum dependence of the effective mass tensor; in this expression, $m_0=2\hbar^2/\Delta a^2$ is the effective mass at the bottom of the SSL miniband. Because of the constant effective mass for motion within the plane of the QWs, the energy of this motion does not enter the balance equations. While the magnetic field does not change the total electron energy, it does transfer energy between in-plane motion and $\varepsilon_z$, hence Eq. (\[eq:be4\]) contains the magnetic field-dependent term. For an intuitive understanding of the balance equations, we can consider them as describing an “average" electron whose velocity changes according to Newton’s second law, $\dot{\mathbf{V}}={\mathbf{F}}/{\mathbf{m(\varepsilon)}}$, with $\mathbf{F}$ representing electric, magnetic and damping forces. The mass tensor ${\mathbf{m(\varepsilon)}}$ is diagonal and $m_{zz}$ depends on the energy of motion in the $z$-direction; this component of the energy evolves according to $\dot{\varepsilon_z}=F_z V_z-P_{damp}$. Inelastic scattering to the average energy $\varepsilon_{eq,z}$ (which may not be the bottom of the miniband) leads to the damping term, $P_{damp}$. This gratifyingly intuitive picture should not obscure the result that our balance equations have been [*derived*]{} systematically from the full Boltzmann transport equation. 
For numerical simulations, we introduce the scalings $v_y=((m_0m^*)^{1/2}a/\hbar)V_y$, $v_z=(m_0 a/\hbar)V_z$, $w=(\varepsilon_z-\Delta/2)/(\Delta/2)$, $w_0=(\varepsilon_{eq,z}-\Delta/2)/(\Delta/2)$, ${{\cal B}}=eB/(m^*m_0)^{1/2}c$ and ${\omega_B}=eEa/\hbar$ (the Bloch frequency of the electric field). Note that the average electron energy is scaled such that $-1$ ($+1$) corresponds to the bottom (top) of the miniband. In terms of the scaled variables, the balance equations read $$\begin{aligned} \dot{v_y}&=&-{{\cal B}}v_z-{\gamma_{vy}}v_y \label{eq:sbe2} \\ \dot{v_z}&=&{\omega_B}w-{{\cal B}}v_yw-{\gamma_{vz}}v_z \label{eq:sbe3} \\ \dot{w}&=&-{\omega_B}v_z+{{\cal B}}v_yv_z-{\gamma_\varepsilon}(w-w_0) \label{eq:sbe4}.\end{aligned}$$ The current across the superlattice is $I=-eNA(\Delta a/2\hbar)v_{z,ss}$, where $N$ is the carrier concentration, $A$ the cross-sectional area and $v_{z,ss}$ the steady-state solution to Eq. (\[eq:sbe3\]). By setting the time derivatives in Eqs. (\[eq:sbe2\])-(\[eq:sbe4\]) to zero, we obtain the following equation relating $v_{z,ss}$ and hence the SSL current to the applied bias, $$C^2v_{z,ss}^3+2C{\omega_B}v_{z,ss}^2+[{\gamma_{vz}}{\gamma_\varepsilon}+{\omega_B}^2-{\gamma_\varepsilon}w_0C]v_{z,ss}-{\gamma_\varepsilon}w_0{\omega_B}=0, \label{eq:IV}$$ where $C={{\cal B}}^2/{\gamma_{vy}}$. This cubic equation for $v_{z,ss}$ implies that there may be up to three steady-state current values for a given bias [@epshtein]. In figure 1, we plot $-v_{z,ss}$, which is proportional to the current across the SSL, as a function of scaled voltage, ${\omega_B}$, for various scaled magnetic field strengths, $C$, with equal momentum and energy relaxation rates, ${\gamma_{vz}}={\gamma_\varepsilon}$. With no magnetic field (Fig. 1a), the current exhibits a peak followed by negative differential conductance (NDC) and satisfies the well-known expression $-v_{z,ss}=(-w_0/{\gamma_{vz}}){\omega_B}/(1+{\omega_B}^2/{\gamma_{vz}}{\gamma_\varepsilon})$ [@ktitorov72].
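As a numerical sanity check, Eq. (\[eq:IV\]) can be solved with `numpy.roots` and compared against a direct forward-Euler integration of Eqs. (\[eq:sbe2\])-(\[eq:sbe4\]). This is only a sketch: the parameter values below are illustrative, not fitted to any experiment.

```python
import numpy as np

# Illustrative scaled parameters (not taken from the paper).
g_vy = g_vz = g_eps = 1.0   # relaxation rates gamma_vy, gamma_vz, gamma_eps
B = 0.5                     # scaled magnetic field {\cal B}
w0 = -0.9                   # equilibrium energy near the miniband bottom
C = B**2 / g_vy

def vz_steady_states(wB):
    """Real roots of the cubic eq:IV for a given scaled bias wB = omega_B."""
    coeffs = [C**2, 2*C*wB, g_vz*g_eps + wB**2 - g_eps*w0*C, -g_eps*w0*wB]
    return sorted(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9)

def integrate(wB, T=200.0, dt=1e-3):
    """Forward-Euler integration of the scaled balance equations sbe2-sbe4."""
    vy, vz, w = 0.0, 0.0, w0
    for _ in range(int(T / dt)):
        dvy = -B*vz - g_vy*vy
        dvz = wB*w - B*vy*w - g_vz*vz
        dw = -wB*vz + B*vy*vz - g_eps*(w - w0)
        vy, vz, w = vy + dt*dvy, vz + dt*dvz, w + dt*dw
    return vz
```

For these parameters the cubic has a single real root and the integrated trajectory relaxes to it; the SSL current is then recovered as $I=-eNA(\Delta a/2\hbar)v_{z,ss}$, so a negative $v_{z,ss}$ corresponds to a positive current under positive bias.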
A magnetic field increases the value of the electric field at which the current reaches its maximum value (Fig. 1b), as has been observed in recent experiments [@ivexpt]. Finally, for larger magnetic fields (Fig. 1c), the current-voltage characteristic from the balance equations has a region of multistability with three possible currents. For SSL parameters of $\Delta=23$ meV, $a=84$Å, and ${\gamma_{vy}}={\gamma_{vz}}={\gamma_\varepsilon}=1.5\times 10^{13}$sec$^{-1}$ [@ivexpt], multistability requires a magnetic field of 21 T, but the semiclassical balance equations are not applicable at such large fields. However, for SSL parameters of $\Delta=22$ meV, $a=90$Å, and ${\gamma_{vy}}={\gamma_{vz}}={\gamma_\varepsilon}=10^{12}$sec$^{-1}$ [@rauch98], multistability should occur for a modest magnetic field of 1.4 T. Let us now consider the situation of “hot” electrons. In this case the electron distribution is highly non-thermal, even without the applied fields. The electrons do not have time to relax to the bottom of the miniband before leaving the SSL. We can effectively describe these hot carriers as relaxing to the top half of the miniband, [*i.e.,*]{} as having $w_0>0$. This may happen in a very clean SSL, at very low temperatures, when the inelastic mean free path is comparable with the SSL size. The hot electrons may be obtained by injection [@rauch97; @rauch98; @rauch99] or by an optical excitation. Below we will discuss how to achieve this situation experimentally. For zero or small magnetic fields (Fig. 2a), absolute negative conductivity (ANC) occurs: the current flows in the opposite direction to the applied bias. Then, in larger magnetic fields (Fig. 2b), a region of multistability appears around zero bias; a linear stability analysis shows the zero current solution becomes unstable as soon as the nonzero current solutions emerge.
[*In other words, the SSL will spontaneously develop a current across it at zero bias.*]{} The three possible steady-state velocities at zero bias are $$v_{z,ss}=0,\pm(\frac{{\gamma_\varepsilon}w_0 C-{\gamma_{vz}}{\gamma_\varepsilon}}{C^2})^{1/2}, \label{eq:Izb}$$ so a spontaneous current will appear when the condition $w_0C>{\gamma_{vz}}$ (in other words, $w_0{{\cal B}}^2/{\gamma_{vy}}>{\gamma_{vz}}$) is satisfied. Since $C$ and ${\gamma_{vz}}$ are always positive, this requires that $w_0$ be positive; neither thermal effects nor doping can fulfill the necessary condition for a zero-bias current: hot electrons are required. Physically, one clearly needs energy to create the spontaneous current, and this energy is supplied by hot electrons. These two results for hot electrons, [*i.e.*]{} ANC and spontaneous current generation, follow from their negative effective mass in the top half of the miniband. To understand the origin of the ANC, consider a one-dimensional SSL with electrons at their equilibrium position at the bottom of the band, $w_0=-1$, and no electric field; when a positive bias is applied, ${\omega_B}>0$, the electrons move through the band according to $\dot{k_z}a=-{\omega_B}$ until a scattering event occurs. Elastic scattering conserves energy, sending an electron across the band in this one-dimensional case. Inelastic scattering changes the electron energy to $w_0$, [*i.e.*]{} $k_z=0$. (Hot electrons inelastically scatter to $w_0>0$, possibly gaining energy.) In Fig. 3, the electric field accelerates the electrons from their equilibrium position at point A; inelastic scattering prevents many electrons from passing point B, so electrons are found mainly in the segment AB. Elastic scattering sends electrons into the segment AC, which contains fewer electrons than the segment AB.
In this tight-binding miniband, the velocity of an electron with crystal momentum $k_z$ is ${\cal V}(k_z)\equiv\hbar^{-1}\partial\epsilon/\partial k_z=(\Delta a/2\hbar)\sin(k_za)$; because the segment AB has more electrons, there is a net negative velocity, or a positive current, as expected for a positive voltage. In the presence of hot electrons, when $w_0>0$, the two points labeled D1 and D2 initially are occupied with equal numbers of electrons and no current flows. Once applied, the electric field accelerates electrons such that they occupy the segments D1E1 and D2E2, as inelastic (non-energy-conserving) scattering returns them to their quasi-equilibrium energy at points D1 and D2; elastic scattering leads to a smaller number of electrons in the segments D1F1 and D2F2. The speed of electrons above the inflection point of the miniband decreases as the magnitude of their crystal momentum increases towards the edge of the Brillouin zone, thus the electrons in the segment D2E2 have a larger speed than those in the segment D1E1. A positive net velocity or, in other words, a negative current results; this is the absolute negative conductivity shown in figure 2a. An intuitive picture of the spontaneous current generation also follows from the miniband structure of an SSL in an external magnetic field. Consider a small, positive current fluctuation across the SSL resulting from extra electrons at the initial energy $w_0>0$, point D2 in Fig. 3. The crystal momentum evolves according to $\dot{\mathbf{k}}=-(e/\hbar c)\mathbf{\cal V}\times\mathbf{B}$; with $B_x>0$, initially $\dot{k_y}>0$, hence $\dot{k_z}<0$. The electron moves from point D2 towards E2 with increasing speed, until inelastic scattering returns the electron to its quasi-equilibrium initial position or elastic scattering sends it across the band. 
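The tight-binding kinematics used in this argument are easy to tabulate. A quick sketch in scaled units with $\hbar=a=1$ and $\Delta/2=1$ (our choice, for illustration only):

```python
import numpy as np

# Tight-binding miniband: eps(k) = 1 - cos(k), velocity V(k) = sin(k),
# inverse effective mass proportional to d^2 eps / dk^2 = cos(k).
k = np.linspace(-np.pi, np.pi, 2001)
velocity = np.sin(k)
inv_mass = np.cos(k)

# NEM region: above the inflection points |k| = pi/2 the inverse mass is
# negative, and the speed decreases toward zero at the Brillouin-zone edge.
nem = np.abs(k) > np.pi / 2
```

This makes the asymmetry between the segments quantitative: within the NEM region, crystal momenta closer to the zone edge carry smaller speeds, which is what produces the net positive velocity (negative current) described above.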
For a large enough magnetic field, small enough elastic scattering and electrons far enough into the NEM region of the miniband (as specified by the requirement $w_0{{\cal B}}^2>{\gamma_{vy}}{\gamma_{vz}}$), the initial current fluctuation will increase, the zero current state will be unstable to such small fluctuations, and the SSL will develop a spontaneous current. Experimentally, it is possible to obtain these hot electrons with $w_0>0$ by injecting electrons into the NEM portion of the miniband, as was described recently in references [@rauch97; @rauch98; @rauch99]. In this injection geometry, two mechanisms contribute to the current through the SSL: first, coherent tunneling through the whole SSL, and, second, incoherent transport of scattered electrons that do not maintain phase information [@rauch98]. The balance equations describe these latter electrons. While the electrons in the NEM region can support a current instability, those that have scattered to the bottom of the miniband cannot, so it is vitally important to keep the miniband width below the LO phonon energy of 36 meV in order to limit phonon scattering. In this case, the balance equations describe the behavior of electrons that have scattered elastically, primarily because of disorder. When the injection energy is in the forbidden region below the miniband, there is no appreciable current through the SSL; as the injection energy is swept through the miniband, the current increases, since electrons incident at the miniband energy can traverse the SSL. The current then decreases again when the injection energy passes through the top half of the miniband. The sharpness of this decrease depends on the miniband width because phonon replicas emerge when electrons having undergone LO phonon scattering are at the miniband position. For a sharp feature, a narrow miniband is important (see Fig. 
3 in reference [@rauch97]), such that the width of the incident wavepacket (about 17 meV) plus the miniband width is less than the LO phonon energy (36 meV). To observe the hot electron effects we predict, the transmitted current-injection energy characteristic must first be measured with no external fields; the experiment must then be repeated with a magnetic field in the plane of the quantum well. While such a field reduces the coherent current [@rauch99], if sufficiently strong, it can lead to spontaneous current generation for electrons incident near the top of the miniband, [*i.e.*]{} between the peaks in the current-injection energy curve. This current instability would cause the current to flatten, or even increase, between the main peak and its phonon replica. To observe the ANC, the current-injection energy curves must be measured for positive and negative voltages; it is known that under a positive or negative bias the location of the current peak shifts due to the voltage drop across the drift region and the coherent current decreases [@rauch98]. When the phonon replica current is small, it may also be possible to observe a change in the shape of the current peak. For positive bias, the current below the peak, at injection energies in the lower half of the miniband, increases. Meanwhile the current above the peak, for injection energy in the top half of the miniband, decreases due to ANC; the current drops off more rapidly on the high-injection energy side of the current peak. Just the opposite occurs for a negative bias: since ANC causes the current to increase for injection energies in the top half of the miniband, the peak drops less sharply. Finally, we note that recently Kempa and coworkers have studied the possibility of generating non-equilibrium plasma instabilities through a similar selective energy injection scheme [@kempa]. The other possibility to create the hot electrons is an optical excitation of electron-hole pairs.
As far as we are aware, this approach has not yet been used specifically as a method of injecting hot electrons into an SSL. In summary, we have described new physical effects—incoherent current flow opposite to the direction of the applied electric bias and spontaneous current generation for hot electrons in a transverse magnetic field—in an SSL with nonequilibrium electron excitations and have suggested how they might be observed in experiments. We hope our experimental colleagues will search for these effects. We are grateful to Lawrence Eaves for stimulating discussions. F.V.K. thanks the Department of Physics at the University of Illinois at Urbana-Champaign for its hospitality. This work was partially supported by NATO Linkage Grant NATO LG 931602 and INTAS. E.H.C. acknowledges support by a graduate traineeship under NSF GER93-54978. L. Esaki and R. Tsu, IBM J. Res. Dev. [**14**]{}, 61 (1970). F. Aristone [*et al.*]{}, Appl. Phys. Lett. [**67**]{}, 2916 (1995); L. Canali [*et al.*]{}, Superlattices Microstruct. [**22**]{}, 155 (1997). A. A. Ignatov and V. I. Shashkin, Phys. Lett. A [**94**]{}, 169 (1983). E. H. Cannon, K. N. Alekseev, D. K. Campbell, F. V. Kusmartsev, unpublished. . M. [É]{}pshtein, Radiophysics and Quantum Electronics (Consultant’s Bureau) [**22**]{}, 259 (1979), Sov. Phys. Semicond. [**25**]{}, 216 (1991). In these references, [É]{}pshtein discussed a similar effect—namely, the Hall field across a current-biased superlattice to lowest order in the magnetic field—and found that the Hall voltage becomes multivalued for sufficient current; in contrast, we discuss the current across a voltage-biased superlattice in a magnetic field of arbitrary strength (although our semiclassical model breaks down in a quantizing magnetic field). S. A. Ktitorov, G. S. Simin, and V. Ya. Sindalovskii, Sov. Phys. Solid State [**13**]{}, 1872 (1972); A. A. Ignatov, E. P. Dodin, and V. I. Shashkin, Mod. Phys. Lett. B [**5**]{}, 1087 (1991). C.
Rauch [*et al.*]{}, Phys. Rev. Lett. [**81**]{}, 3495 (1998). C. Rauch [*et al.*]{}, Appl. Phys. Lett. [**70**]{}, 649 (1997). C. Rauch [*et al.*]{}, Superlattices Microstruct. [**25**]{}, 47 (1999). K. Kempa, P. Bakshi, and E. Gornik, Phys. Rev. B [**54**]{}, 8231 (1996); K. Kempa [*et al.*]{}, J. Appl. Phys. [**85**]{}, 3708 (1999).
--- abstract: 'We address the problem of algorithmic fairness: ensuring that sensitive variables do not unfairly influence the outcome of a classifier. We present an approach based on empirical risk minimization, which incorporates a fairness constraint into the learning problem. It encourages the conditional risk of the learned classifier to be approximately constant with respect to the sensitive variable. We derive both risk and fairness bounds that support the statistical consistency of our approach. We specify our approach to kernel methods and observe that the fairness requirement implies an orthogonality constraint which can be easily added to these methods. We further observe that for linear models the constraint translates into a simple data preprocessing step. Experiments indicate that the method is empirically effective and performs favorably against state-of-the-art approaches.' author: - | Michele Donini\ CSML\ Istituto Italiano di Tecnologia\ Genoa, Italy\ `[email protected]` Luca Oneto\ DIBRIS\ University of Genova\ Genoa, Italy\ `[email protected]` Shai Ben-David\ School of Computer Science\ University of Waterloo\ Waterloo, Ontario, Canada\ `[email protected]` John Shawe-Taylor\ Department of Computer Science\ University College London\ London, UK\ `[email protected]` Massimiliano Pontil\ CSML\ Istituto Italiano di Tecnologia\ Genoa, Italy\ `[email protected]` bibliography: - 'biblio.bib' title: | Empirical Risk Minimization\ Under Fairness Constraints --- Introduction {#sec:intro} ============ In recent years there has been a lot of interest in algorithmic fairness in machine learning; see, e.g., [@dwork2018decoupled; @hardt2016equality; @zafar2017fairness; @zemel2013learning; @kilbertus2017avoiding; @kusner2017counterfactual; @calmon2017optimized; @joseph2016fairness; @chierichetti2017fair; @jabbari2016fair; @yao2017beyond; @lum2016statistical; @zliobaite2015relation] and references therein.
The central question is how to enhance supervised learning algorithms with fairness requirements, namely ensuring that sensitive information (e.g. knowledge about the ethnic group of an individual) does not ‘unfairly’ influence the outcome of a learning algorithm. For example, if the learning problem is to decide whether a person should be offered a loan based on her previous credit card scores, we would like to build a model which does not unfairly use additional sensitive information such as race or sex. Several notions of fairness and associated learning methods have been introduced in machine learning in the past few years, including Demographic Parity [@calders2009building], Equal Odds and Equal Opportunities [@hardt2016equality], and Disparate Treatment, Disparate Impact, and Disparate Mistreatment [@zafar2017fairness]. The underlying idea behind such notions is to balance the decisions of a classifier among the different sensitive groups and label sets. In this paper, we build upon the notion of Equal Opportunity (EO), which defines fairness as the requirement that the true positive rate of the classifier is the same across the sensitive groups. In Section \[sec:luca:th:Fairness\] we introduce a generalization of this notion of fairness which constrains the conditional risk of a classifier, associated with positive labeled samples of a group, to be approximately constant with respect to group membership. The risk is measured according to a prescribed loss function, up to an approximation parameter $\epsilon$. When the loss is the misclassification error and $\epsilon = 0$ we recover the notion of EO above. We study the problem of minimizing the expected risk within a prescribed class of functions subject to the fairness constraint. As a natural estimator associated with this problem, we consider a modified version of Empirical Risk Minimization (ERM) which we call Fair ERM (FERM).
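The EO requirement mentioned above, equality of true positive rates across the sensitive groups, can be estimated directly from a sample. A minimal sketch (the function names are ours, purely illustrative):

```python
import numpy as np

def tpr(scores, y, s, g):
    """Empirical true positive rate of group g: P{f(x) > 0 | y = 1, s = g}."""
    mask = (y == 1) & (s == g)
    return (scores[mask] > 0).mean()

def deo(scores, y, s, groups=('a', 'b')):
    """Difference of equal opportunity, |TPR_a - TPR_b|; zero means exact EO."""
    return abs(tpr(scores, y, s, groups[0]) - tpr(scores, y, s, groups[1]))
```

With the misclassification (hard) loss, $1-\mathrm{TPR}_g$ is the conditional risk on positive samples of group $g$, so `deo` is exactly the quantity that the generalized definition bounds by $\epsilon$.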
We derive both risk and fairness bounds, which support that FERM is statistically consistent, in a sense which we explain in Section \[sec:luca:th:FERM\]. Since the FERM approach is impractical due to the non-convex nature of the constraint, we propose, still in Section \[sec:luca:th:FERM\], a surrogate convex FERM problem which relates, under a natural condition, to the original goal of minimizing the misclassification error subject to a relaxed EO constraint. We further observe that our condition can be empirically verified to judge the quality of the approximation in practice. As a concrete example of the framework, in Section \[sec:luca:th:FK\] we describe how kernel methods such as support vector machines (SVMs) can be enhanced to satisfy the fairness constraint. We observe that a particular instance of the fairness constraint for $\epsilon=0$ reduces to an orthogonality constraint. Moreover, in the linear case, the constraint translates into a preprocessing step that implicitly imposes the fairness requirement on the data, making any linear model learned from them fair. We report numerical experiments using both linear and nonlinear kernels, which indicate that our method improves on the state-of-the-art in four out of five datasets and is competitive on the fifth dataset[^1]. In summary, the contributions of this paper are twofold. First, we outline a general framework for empirical risk minimization under fairness constraints. The framework can be used as a starting point to develop specific algorithms for learning under fairness constraints. As a second contribution, we show how a linear fairness constraint arises naturally in the framework and allows us to develop a novel convex learning method that is supported by consistency properties both in terms of EO and risk of the selected model, performing favorably against state-of-the-art alternatives on a series of benchmark datasets.
[**Previous Work.**]{} Work on algorithmic fairness can be divided into three families. Methods in the first family modify a pretrained classifier in order to increase its fairness properties while maintaining as much as possible the classification performance: [@pleiss2017fairness; @beutel2017data; @hardt2016equality; @feldman2015certifying] are examples of these methods, but no consistency properties or comparisons with state-of-the-art proposals are provided. Methods in the second family enforce fairness directly during the training step: [@agarwal2017reductions; @agarwal2018reductions; @woodworth2017learning; @zafar2017fairness; @menon2018cost; @zafar2017parity; @bechavod2018Penalizing; @zafar2017fairnessARXIV; @kamishima2011fairness; @kearns2017preventing] are examples of this approach; they either take non-convex approaches to the solution of the problem or derive consistency results only for the non-convex formulation, later resorting to a convex approach which is not theoretically grounded. [@Prez-Suay2017Fair; @dwork2018decoupled; @berk2017convex; @alabi2018optimizing] are other examples of convex approaches which neither compare with other state-of-the-art solutions nor provide consistency properties, except for [@dwork2018decoupled], which, contrary to our proposal, does not enforce a fairness constraint directly in the learning phase, and [@olfat2018spectral], which proposes a computationally tractable fair SVM starting from a constraint on the covariance matrices; specifically, the latter leads to a non-convex constraint which is imposed iteratively with a sequence of relaxations exploiting spectral decompositions.
Finally, the third family of methods implements fairness by modifying the data representation and then employs standard machine learning methods: [@adebayo2016iterative; @calmon2017optimized; @kamiran2009classifying; @zemel2013learning; @kamiran2012data; @kamiran2010classification] are examples of these methods but, again, no consistency properties or comparisons with state-of-the-art proposals are provided. Our method belongs to the second family of methods, in that it directly optimizes a fairness constraint related to the notion of EO discussed above. Furthermore, in the case of linear models, our method translates to an efficient preprocessing of the input data, as with methods in the third family. As we shall see, our approach is theoretically grounded and performs favorably against the state-of-the-art[^2]. Fair Empirical Risk Minimization {#sec:luca:th:Fairness} ================================ In this section, we present our approach to learning with fairness. We begin by introducing our notation. We let $\mathcal{D} = \left\{ (\boldsymbol{x}_1,s_1,y_1),\dots, (\boldsymbol{x}_n,s_n,y_n) \right\}$ be a sequence of $n$ samples drawn independently from an unknown probability distribution $\mu$ over $\mathcal{X} \times \mathcal{S} \times \mathcal{Y}$, where $\mathcal{Y} = \{ -1, +1 \}$ is the set of binary output labels, $\mathcal{S} = \{a,b\}$ represents group membership among two groups[^3] (e.g. ‘female’ or ‘male’), and $\mathcal{X}$ is the input space. We note that $\boldsymbol{x} \in \mathcal{X}$ may or may not further contain the sensitive feature $s \in \mathcal{S}$. We also denote by $\mathcal{D}^{+,g} {=}\{(\boldsymbol{x}_i,s_i,y_i) : y_i {=} 1,s_i {=} g \}$ for $g \in \{ a,b \}$ and $n^{+,g} = |\mathcal{D}^{+,g}|$. Let us consider a function (or model) $f: \mathcal{X} \rightarrow \mathbb{R}$ chosen from a set $\mathcal{F}$ of possible models.
The error (risk) of $f$ in approximating $\mu$ is measured by a prescribed loss function $\ell:\mathbb{R} \times \mathcal{Y} \rightarrow \mathbb{R}$. The risk of $f$ is defined as ${L}(f) = \mathbb{E} \left[ \ell(f(\boldsymbol{x}),y) \right]$. When necessary we will indicate with a subscript the particular loss function used, i.e. ${L}_p(f) = \mathbb{E} \left[ \ell_p(f(\boldsymbol{x}),y) \right]$. The purpose of a learning procedure is to find a model that minimizes the risk. Since the probability measure $\mu$ is usually unknown, the risk cannot be computed, however we can compute the empirical risk $\hat{L}(f) = \hat{\mathbb{E}} [\ell(f(\boldsymbol{x}),y)]$, where $\hat{\mathbb{E}}$ denotes the empirical expectation. A natural learning strategy, called Empirical Risk Minimization (ERM), is then to minimize the empirical risk within a prescribed set of functions. Fairness Definitions {#sec:luca:th:Definitions} -------------------- In the literature there are different definitions of fairness of a model or learning algorithm [@hardt2016equality; @dwork2018decoupled; @zafar2017fairness; @zafar2017fairness], but there is not yet a consensus on which definition is most appropriate. In this paper, we introduce a general notion of fairness which encompasses some previously used notions and it allows to introduce new ones by specifying the loss function used below. \[def:fairness\] Let ${L}^{+,g}(f) {=} \mathbb{E} [ \ell(f(\boldsymbol{x}),y) | y {=} 1, s {=} g ]$ be the risk of the positive labeled samples in the $g$-th group, and let $\epsilon \in [0,1]$. We say that a function $f$ is $\epsilon$-fair if  $| {L}^{+,a}(f) - {L}^{+,b}(f)| \leq \epsilon$. This definition says that a model is fair if it commits approximately the same error on the positive class independently of the group membership. That is, the conditional risk $L^{+,g}$ is approximately constant across the two groups. 
Note that if $\epsilon = 0$ and we use the hard loss function, $\ell_h(f(\boldsymbol{x}),y) = \mathds{1}_{\{y f(\boldsymbol{x}) \leq 0\}}$, then Definition \[def:fairness\] is equivalent to definition of EO proposed by [@hardt2016equality], namely $$\begin{aligned} \mathbb{P}\left\{ f(\boldsymbol{x}) > 0 ~|~ y = 1, s = a \right\} = \mathbb{P}\left\{ f(\boldsymbol{x}) > 0 ~|~ y = 1, s = b \right\}. \label{eq:DEO}\end{aligned}$$ This equation means that the true positive rate is the same across the two groups. Furthermore, if we use the linear loss function $\ell_l(f(\boldsymbol{x}),y) = (1 - y f(\boldsymbol{x}))/2 $ and set $\epsilon = 0$, then Definition \[def:fairness\] gives $$\begin{aligned} \mathbb{E}[f(\boldsymbol{x}) ~|~ y = 1, s = a] = \mathbb{E}[f(\boldsymbol{x}) ~|~ y = 1, s = b ]. \label{eq:lollo}\end{aligned}$$ By reformulating this expression we obtain a notion of fairness that has been proposed by [@dwork2018decoupled] $$\begin{aligned} \sum_{g \in \{a,b\}} \big| \mathbb{E}[f(\boldsymbol{x}) ~|~ y = 1, s = g] - \mathbb{E}[f(\boldsymbol{x}) ~|~ y = 1] \big| = 0. \nonumber\end{aligned}$$ Yet another implication of Eq.  is that the output of the model is uncorrelated with respect to the group membership conditioned on the label being positive, that is, for every $g {\in} \{ a, b \}$, we have $$\begin{aligned} \mathbb{E}\big[ f(\boldsymbol{x}) \mathds{1}_{\{s{=}g\}}~|~y=1 \big] = \mathbb{E} \big[f(\boldsymbol{x})|y=1\big] \mathbb{E} \big[\mathds{1}_{\{s=g\}}~|~y=1\big]. \nonumber\end{aligned}$$ Finally, we observe that our approach naturally generalizes to other fairness measures, e.g. equal odds [@hardt2016equality], which could be subject of future work. Specifically, we would require in Definition \[def:fairness\] that $| {L}^{y,a}(f) - {L}^{y,b}(f)| \leq \epsilon$ for both $y \in \{-1,1\}$. Fair Empirical Risk Minimization {#sec:luca:th:FERM} -------------------------------- In this paper, we aim at minimizing the risk subject to a fairness constraint. 
Specifically, we consider the problem $$\begin{aligned} \min\Big\{L(f) : f {\in} \mathcal{F} ,~ \big| {L}^{+,a}(f) - {L}^{+,b}(f)\big| \leq \epsilon\Big\} \label{eq:alg:deterministic},\end{aligned}$$ where $\epsilon \in [0,1]$ is the amount of unfairness that we are willing to bear. Since the measure $\mu$ is unknown we replace the deterministic quantities with their empirical counterparts. That is, we replace Problem \[eq:alg:deterministic\] with $$\begin{aligned} \min\Big\{\hat{L}(f) : f {\in} \mathcal{F} ,~ \big| \hat{L}^{+,a}(f) - \hat{L}^{+,b}(f)\big| \leq \hat{\epsilon}\Big\} \label{eq:alg:empirical},\end{aligned}$$ where $\hat{\epsilon} \in [0,1]$. We will refer to Problem \[eq:alg:empirical\] as FERM. We denote by $f^*$ a solution of Problem \[eq:alg:deterministic\], and by $\hat{f}$ a solution of Problem \[eq:alg:empirical\]. In this section we will show that these solutions are linked one to another. In particular, if the parameter $\hat{\epsilon}$ is chosen appropriately, we will show that, in a certain sense, the estimator $\hat{f}$ is consistent. In order to present our observations, we require that it holds with probability at least $1-\delta$ that $$\begin{aligned} \sup_{f \in \mathcal{F}} \big|L(f) - \hat{L}(f)\big| \leq B(\delta,n,\mathcal{F}) \label{eq:bartlett}\end{aligned}$$ where the bound $B(\delta,n,\mathcal{F})$ goes to zero as $n$ grows to infinity if the class $\mathcal{F}$ is learnable with respect to the loss [see e.g. @shalev2014understanding and references therein]. For example, if $\mathcal{F}$ is a compact subset of linear separators in a Reproducing Kernel Hilbert Space (RKHS), and the loss is Lipschitz in its first argument, then $B(\delta,n,\mathcal{F})$ can be obtained via Rademacher bounds [see e.g. @bartlett2002rademacher]. In this case $B(\delta,n,\mathcal{F})$ goes to zero at least as ${\sqrt{1/n}}$ as $n$ grows to infinity, where $n = |\mathcal{D}|$. 
We are ready to state the first result of this section (proof is reported in supplementary materials). \[thm:mainresult1\] Let $\mathcal{F}$ be a learnable set of functions with respect to the loss function $\ell: \mathbb{R} \times {\cal Y} \rightarrow \mathbb{R}$, let $f^*$ be a solution of Problem (\[eq:alg:deterministic\]) and let $\hat{f}$ be a solution of Problem (\[eq:alg:empirical\]) with $$\begin{aligned} \textstyle \hat{\epsilon} = \epsilon + \sum_{g \in \{a,b\}} B(\delta,n^{+,g},\mathcal{F}).\end{aligned}$$ With probability at least $1-6 \delta$ it holds simultaneously that $$\begin{aligned} \textstyle L(\hat{f}) - L(f^*) \leq 2 B(\delta,n,\mathcal{F}) \quad \text{and} \quad \textstyle \Big| L^{+,a}(\hat{f}) - L^{+,b}(\hat{f}) \Big| \leq \epsilon + 2 \sum_{g \in \{a,b\}} B(\delta,n^{+,g},\mathcal{F}). \nonumber\end{aligned}$$ A consequence of the first statement of Theorem \[thm:mainresult1\] is that as $n$ tends to infinity $L(\hat{f})$ tends to a value which is not larger than $L(f^*)$, that is, FERM is consistent with respect to the risk of the selected model. The second statement of Theorem \[thm:mainresult1\], instead, implies that as $n$ tends to infinity we have that $\hat{f}$ tends to be $\epsilon$-fair. In other words, FERM is consistent with respect to the fairness of the selected model. Thanks to Theorem \[thm:mainresult1\] we can state that $f^{*}$ is close to $\hat{f}$ both in term of its risk and its fairness. Nevertheless, our final goal is to find an $f^*_h$ which solves the following problem $$\begin{aligned} \label{eq:problemHard} \min\Big\{L_h(f) : f {\in} \mathcal{F} ,~ \big| {L}^{+,a}_h(f) - {L}^{+,b}_h(f)\big| \leq \epsilon\Big\}.\end{aligned}$$ Note that the objective function in Problem \[eq:problemHard\] is the misclassification error of the classifier $f$, whereas the fairness constraint is a relaxation of the EO constraint in Eq. . 
Indeed, the quantity $\big| {L}^{+,a}_h(f) - {L}^{+,b}_h(f)\big|$ is equal to $$\begin{aligned} \!\!\big| \mathbb{P}\left\{ f(\boldsymbol{x}) > 0 ~|~ y = 1,\! s = a \right\}\! - \mathbb{P}\left\{ f(\boldsymbol{x}) > 0 ~|~ y = 1,\! s = b \right\}\! \big|. \label{def:DEOmichele}\end{aligned}$$ We refer to this quantity as difference of EO (DEO). Although Problem \[eq:problemHard\] cannot be solved, by exploiting Theorem \[thm:mainresult1\] we can safely search for a solution $\hat{f}_h$ of its empirical counterpart $$\begin{aligned} \label{eq:problemHardempirical} \min\Big\{\hat{L}_h(f) : f {\in} \mathcal{F} ,~ \big| \hat{L}^{+,a}_h(f) - \hat{L}^{+,b}_h(f)\big| \leq \hat{\epsilon}\Big\}.\end{aligned}$$ Unfortunately Problem \[eq:problemHardempirical\] is a difficult nonconvex nonsmooth problem, and for this reason it is more convenient to solve a convex relaxation. That is, we replace the hard loss in the risk with a convex loss function $\ell_c$ (e.g. the Hinge loss $\ell_{c} = \max\{0, \ell_l \}$) and the hard loss in the constraint with the linear loss $\ell_l$. In this way, we look for a solution $\hat{f}_c$ of the convex FERM problem $$\begin{aligned} \label{eq:problemSoft} \min\Big\{\hat{L}_c(f) : f {\in} \mathcal{F} ,~ \big| \hat{L}^{+,a}_l(f) - \hat{L}^{+,b}_l(f)\big| \leq \hat{\epsilon}\Big\}.\end{aligned}$$ The questions that arise here are whether, and how close, $\hat{f}_c$ is to $\hat{f}_h$, how much, and under which assumptions. The following theorem sheds some lights on these issues (proof is reported in supplementary materials, Section \[sec:SMproofs\]). \[thm:mainresult2\] If $\ell_c$ is the Hinge loss then $ \hat{L}_{h}(f) \leq \hat{L}_{c}(f)$. 
Moreover, if for $f: \mathcal{X} \rightarrow \mathbb{R}$ the following condition is true $$\begin{aligned} \label{eq:hp1} \textstyle \frac{1}{2} \sum_{g \in \{a,b\}} \left| \hat{\mathbb{E}} \left[ \operatorname{sign}\big(f(\boldsymbol{x})\big)- f(\boldsymbol{x}) ~\big|~ y = 1, s = g \right] \right| \leq \hat{\Delta},\end{aligned}$$ then it also holds that $$\begin{aligned} \textstyle \big| \hat{L}^{+,a}_h(f) - \hat{L}^{+,b}_h(f) \big| \leq \big| \hat{L}^{+,a}_l(f) - \hat{L}^{+,b}_l(f)\big| + \hat{\Delta}. \nonumber\end{aligned}$$ The first statement of Proposition \[thm:mainresult2\] tells us that exploiting $\ell_c$ instead of $\ell_h$ is a good approximation if $\hat{L}_{c}(\hat{f}_c)$ is small. The second statement of Proposition \[thm:mainresult2\], instead, tells us that if the hypothesis of inequality (\[eq:hp1\]) holds, then the linear loss based fairness is close to the EO. Obviously the smaller $\hat{\Delta}$ is, the closer they are. Inequality (\[eq:hp1\]) says that the functions $\operatorname{sign}\big(f(\boldsymbol{x})\big)$ and $ f(\boldsymbol{x})$ distribute, on average, in a similar way. This condition is quite natural and it has been exploited in previous work [see e.g. @maurer2004note]. Moreover, in Section \[sec:exps\] we present experiments showing that $\hat{\Delta}$ is small. The bound in Proposition \[thm:mainresult2\] may be tighten by using different nonlinear approximations of EO [see e.g. @calmon2017optimized]. However, the linear approximation proposed in this work gives a convex problem, and as we shall see in Section 5, works well in practice. In summary, the combination of Theorem \[thm:mainresult1\] and Proposition \[thm:mainresult2\] provides conditions under which a solution $\hat{f}_c$ of Problem \[eq:alg:empirical\], which is convex, is close, [*both in terms of classification accuracy and fairness*]{}, to a solution $f^*_h$ of Problem \[eq:problemHard\], which is our final goal. 
Fair Learning with Kernels {#sec:luca:th:FK} ========================== In this section, we specify the FERM framework to the case that the underlying space of models is a reproducing kernel Hilbert space (RKHS) [see e.g. @shawe2004kernel; @smola2001 and references therein]. We let $\kappa: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ be a positive definite kernel and let $\boldsymbol{\phi}: \mathcal{X} \rightarrow \mathbb{H}$ be an induced feature mapping such that $\kappa(\boldsymbol{x},\boldsymbol{x}') = \langle \boldsymbol{\phi}(\boldsymbol{x}),\boldsymbol{\phi}(\boldsymbol{x}')\rangle$, for all $\boldsymbol{x},\boldsymbol{x}' \in \mathcal{X}$, where $\mathbb{H}$ is the Hilbert space of square summable sequences. Functions in the RKHS can be parametrized as $$f(\boldsymbol{x}) = \langle \boldsymbol{w} , \boldsymbol{\phi}(\boldsymbol{x})\rangle,~~~\boldsymbol{x} \in \mathcal{X}, \label{eq:222}$$ for some vector of parameters $\boldsymbol{w} \in \mathbb{H}$. In practice a bias term (threshold) can be added to $f$ but to ease our presentation we do not include it here. We solve Problem  with $\mathcal{F}$ a ball in the RKHS and employ a convex loss function $\ell$. As for the fairness constraint we use the linear loss function, which implies the constraint to be convex. Let $\boldsymbol{u}_g$ be the barycenter in the feature space of the positively labelled points in the group $g\in \{a,b\}$, that is $$\begin{aligned} \textstyle \boldsymbol{u}_g= \frac{1}{n^{+,g}} \sum_{ i \in \mathcal{I}^{+,g}} \boldsymbol{\phi}(\boldsymbol{x}_i),\end{aligned}$$ where $\mathcal{I}^{+,g} = \{i: y_i {=} 1, x_{i,1} {=} g \}$. Then using Eq.  the constraint in Problem  takes the form $\big|\langle \boldsymbol{w},\boldsymbol{u}_a-\boldsymbol{u}_b\rangle\big| \leq \epsilon$. 
In practice, we solve the Tikhonov regularization problem $$\begin{aligned} \textstyle \min\limits_{\boldsymbol{w} \in \mathbb{H}} \ \sum_{i =1}^n \ell(\langle \boldsymbol{w},\boldsymbol{\phi}(\boldsymbol{x}_i)\rangle ,y_i) + \lambda \|\boldsymbol{w}\|^2 \quad \text{s.t.}\ \big|\langle \boldsymbol{w},\boldsymbol{u}\rangle\big| \leq \epsilon \label{prob:ker} \end{aligned}$$ where $\boldsymbol{u} = \boldsymbol{u}_a - \boldsymbol{u}_b$ and $\lambda$ is a positive parameter which controls model complexity. In particular, if $\epsilon = 0$ the constraint in Problem  reduces to an orthogonality constraint that has a simple geometric interpretation. Specifically, the vector $\boldsymbol{w}$ is required to be orthogonal to the vector formed by the difference between the barycenters of the positive labelled input samples in the two groups. By the representer theorem [@scholkopf2001generalized], the solution to Problem  is a linear combination of the feature vectors $\boldsymbol{\phi}(\boldsymbol{x}_1),\dots,\boldsymbol{\phi}(\boldsymbol{x}_n)$ and the vector $\boldsymbol{u}$. However, in our case $\boldsymbol{ u}$ is itself a linear combination of the feature vectors (in fact only those corresponding to the subset of positive labeled points) hence $\boldsymbol{w}$ is a linear combination of the input points, that is $\boldsymbol{ w}=\sum_{i=1}^n \alpha_i\phi(\boldsymbol{x}_i)$. The corresponding function used to make predictions is then given by $f(\boldsymbol{x}) = \sum_{i=1}^n \alpha_i \kappa(\boldsymbol{x}_i,\boldsymbol{x})$. Let $K$ be the Gram matrix. The vector of coefficients $\boldsymbol{\alpha}$ can then be found by solving $$\begin{aligned} \min_{\boldsymbol{\alpha} \in \mathbb{R}^n} \hspace{-.04truecm}\Bigg\{\hspace{-.04truecm} \sum_{i=1}^n \ell\bigg(\sum_{j=1}^n K_{ij}\alpha_j,y_i\bigg) {+} \lambda \! \! \sum_{i,j=1}^n\alpha_i \alpha_j K_{ij} \quad \hspace{-.04truecm} \text{s.t.} \ & \bigg| \hspace{-.04truecm}\sum_{i=1}^n \alpha_i \bigg[ \frac{1}{n^{+,a}} \!\!\! 
\hspace{-.04truecm}\sum_{j \in \mathcal{I}^{+,a}} \!\!\! K_{ij} {-} \frac{1}{n^{+,b}} \!\!\! \sum_{j \in \mathcal{I}^{+,b}} \!\!\! K_{ij} \bigg] \bigg| \leq \epsilon\Bigg\}. \nonumber\end{aligned}$$ In our experiments below we consider this particular case of Problem  and furthermore choose the loss function $\ell_c$ to be the Hinge loss. The resulting method is an extension of SVM. The fairness constraint and, in particular, the orthogonality constraint when $\epsilon = 0$, can be easily added within standard SVM solvers[^4] It is instructive to consider Problem  when $\boldsymbol{\phi}$ is the identity mapping (i.e. $\kappa$ is the linear kernel on $\mathbb{R}^d$) and $\epsilon=0$. In this special case we can solve the orthogonality constraint $\langle \boldsymbol{w},\boldsymbol{u}\rangle = 0$ for $w_i$, where the index $i$ is such that $| u_i | = \|\boldsymbol{u}\|_\infty$, obtaining that $w_{i} = - \sum_{j=1, j \neq i}^d w_j \frac{u_j}{u_i}$. Consequently the linear model rewrites as $\sum_{j=1}^{d} w_j x_j = \sum_{j=1, j \neq i}^d w_j (x_j - x_i \frac{u_i}{u_j})$. In this way, we then see the fairness constraint is implicitly enforced by making the change of representation $\boldsymbol{x} \mapsto \boldsymbol{\tilde{x}} \in \mathbb{R}^{d-1}$, with $$\textstyle \tilde{x}_j = x_j - x_i \frac{u_i}{u_j}, \quad j \in \{ 1, \dots, i-1, i+1, \dots, d \}. \label{eq:gggg}$$ In other words, we are able to obtain a fair linear model without any other constraint and by using a representation that has one feature fewer than the original one[^5] Experiments {#sec:exps} =========== In this section, we present numerical experiments with the proposed method on one synthetic and five real datasets. The aim of the experiments is threefold. First, we show that our approach is effective in selecting a fair model, incurring only a moderate loss in accuracy. 
Second, we provide an empirical study of the properties of the method, which supports our theoretical observations in  Section \[sec:luca:th:Fairness\]. Third, we highlight the generality of our approach by showing that it can be used effectively within other linear models such as Lasso. We use our approach with $\epsilon {=} 0$ in order to simplify the hyperparameter selection procedure. For the sake of completeness, a set of results for different values of $\epsilon$ is presented in the supplementary material and briefly we comment on these below. In all the experiments, we collect statistics concerning the classification accuracy and DEO of the selected model. We recall that the DEO is defined in Eq.  and is the absolute difference of the true positive rate of the classifier applied to the two groups. In all experiments, we performed a 10-fold cross validation (CV) to select the best hyperparameters[^6]. For the Arrhythmia, COMPAS, German and Drug datasets, this procedure is repeated $10$ times, and we reported the average performance on the test set alongside its standard deviation. For the Adult dataset, we used the provided split of train and test sets. Unless otherwise stated, we employ two steps in the 10-fold CV procedure. In the first step, the value of the hyperparameters with highest accuracy is identified. In the second step, we shortlist all the hyperparameters with accuracy close to the best one (in our case, above $90 \%$ of the best accuracy). Finally, from this list, we select the hyperparameters with the lowest DEO. This novel validation procedure, that we wil call NVP, is a sanity-check to ensure that fairness cannot be achieved by a simple modification of hyperparameter selection procedure. [**The code of our method is available at:**]{} <https://github.com/jmikko/fair_ERM>. 
**Synthetic Experiment.** The aim of this experiment is to study the behavior of our method, in terms of both DEO and classification accuracy, in comparison to standard SVM (with our novel validation procedure). To this end, we generated a synthetic binary classification dataset with two sensitive groups in the following manner. For each group in the class $-1$ and for the group $a$ in the class $+1$, we generated $1000$ examples for training and the same amount for testing. For the group $b$ in the class $+1$, we generated $200$ examples for training and the same number for testing. Each set of examples is sampled from a $2$-dimensional isotropic Gaussian distribution with different mean $\mu$ and variance $\sigma^2$: (i) Group $a$, Label $+1$: $\mu=(-1, -1)$, $\sigma^2=0.8$; (ii) Group $a$, Label $-1$: $\mu=(1, 1)$, $\sigma^2=0.8$; (iii) Group $b$, Label $+1$: $\mu=(0.5, -0.5)$, $\sigma^2=0.5$; (iv) Group $b$, Label $-1$: $\mu=(0.5, 0.5)$, $\sigma^2=0.5$. When a standard machine learning method is applied to this toy dataset, the generated model is unfair with respect to the group $b$, in that the classifier tends to negatively classify the examples in this group. We trained different models, varying the value of the hyperparameter $C$, and using the standard linear SVM and our linear method. Figure \[fig:toydistrib\] (Left) shows the performance of the various generated models with respect to the classification error and DEO on the test set. Note that our method generated models that have an higher level of fairness, maintaining a good level of accuracy. The grid in the plots emphasizes the fact that both the error and DEO have to be simultaneously considered in the evaluation of a method. Figure \[fig:toydistrib\] (Center and Left) depicts the histogram of the values of $\langle \boldsymbol{w}, \boldsymbol{x}\rangle $ (where $\boldsymbol{w}$ is the generated model) for test examples with true label equal to $+1$ for each of the two groups. 
The results are reported both for our method (Right) and standard SVM (Center). Note that our method generates a model with a similar true positive rate among the two groups (i.e. the areas of the value when the horizontal axis is greater than zero are similar for groups $a$ and $b$). Moreover, due to the simplicity of the toy test, the distribution with respect to the two different groups is also very similar when our model is used. **Real Data Experiments.** We next compare the performance of our model to set of different methods on $5$ publicly available datasets: Arrhythmia, COMPAS, Adult, German, and Drug. A description of the datasets is provided in the supplementary material. These datasets have been selected from the standard databases of datasets (UCI, mldata and Fairness-Measures[^7]). We considered only datasets with a DEO higher than $0.1$, when the model is generated by an SVM validated with the NVP. For this reason, some of the commonly used datasets have been discarded (e.g. Diabetes, Heart, SAT, PSU-Chile, and SOEP). We compared our method both in the linear and not linear case against: (i) Naïve SVM, validated with a standard nested 10-fold CV procedure. This method ignores fairness in the validation procedure, simply trying to optimize accuracy; (ii) SVM with the NVP. As noted above, this baseline is the simplest way to inject the fairness into the model; (iii) Hardt method [@hardt2016equality] applied to the best SVM; (iv) Zafar method [@zafar2017fairness], implemented with the code provided by the authors for the linear case[^8]. Concerning our method, in the linear case, it exploits the preprocessing presented in Section \[sec:luca:th:FK\]. [|l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|]{} & & & & &\ Method & ACC & DEO & ACC & DEO & ACC & DEO & ACC & DEO & ACC & DEO\ \ Naïve Lin. 
SVM & $0.75 {\pm} 0.04$ & $0.11 {\pm} 0.03$ & $0.73 {\pm} 0.01$ & $0.13 {\pm} 0.02$ & $0.78$ & $0.10$ & $0.71 {\pm} 0.06$ & $0.16 {\pm} 0.04$ & $0.79 {\pm} 0.02$ & $0.25 {\pm} 0.03$\ Lin. SVM & $0.71 {\pm} 0.05$ & $0.10 {\pm} 0.03$ & $0.72 {\pm} 0.01$ & $0.12 {\pm} 0.02$ & $0.78$ & $0.09$ & $0.69 {\pm} 0.04$ & $0.11 {\pm} 0.10$ & $0.79 {\pm} 0.02$ & $0.25 {\pm} 0.04$\ Hardt & - & - & - & - & - & - & - & - & - & -\ Zafar & $0.67 {\pm} 0.03$ & $0.05 {\pm} 0.02$ & $0.69 {\pm} 0.01$ & $0.10 {\pm} 0.08$ & $0.76$ & $0.05$ & $0.62 {\pm} 0.09$ & $0.13 {\pm} 0.10$ & $0.66 {\pm} 0.03$ & $0.06 {\pm} 0.06$\ Lin. Ours & $0.75 {\pm} 0.05$ & $0.05 {\pm} 0.02$ & $0.73 {\pm} 0.01$ & $0.07 {\pm} 0.02$ & $0.75$ & $0.01$ & $0.69 {\pm} 0.04$ & $0.06 {\pm} 0.03$ & $0.79 {\pm} 0.02$ & $0.10 {\pm} 0.06$\ Naïve SVM & $0.75 {\pm} 0.04$ & $0.11 {\pm} 0.03$ & $0.72 {\pm} 0.01$ & $0.14 {\pm} 0.02$ & $0.80$ & $0.09$ & $0.74 {\pm} 0.05$ & $0.12 {\pm} 0.05$ & $0.81 {\pm} 0.02$ & $0.22 {\pm} 0.04$\ SVM & $0.71 {\pm} 0.05$ & $0.10 {\pm} 0.03$ & $0.73 {\pm} 0.01$ & $0.11 {\pm} 0.02$ & $0.79$ & $0.08$ & $0.74 {\pm} 0.03$ & $0.10 {\pm} 0.06$ & $0.81 {\pm} 0.02$ & $0.22 {\pm} 0.03$\ Hardt & - & - & - & - & - & - & - & - & - & -\ Ours & $0.75 {\pm} 0.05$ & $0.05 {\pm} 0.02$ & $0.72 {\pm} 0.01$ & $0.08 {\pm} 0.02$ & $0.77$ & $0.01$ & $0.73 {\pm} 0.04$ & $0.05 {\pm} 0.03$ & $0.79 {\pm} 0.03$ & $0.10 {\pm} 0.05$\ \ Naïve Lin. SVM & $0.79 {\pm} 0.06$ & $0.14 {\pm} 0.03$ & $0.76 {\pm} 0.01$ & $0.17 {\pm} 0.02$ & $0.81$ & $0.14$ & $0.71 {\pm} 0.06$ & $0.17 {\pm} 0.05$ & $0.81 {\pm} 0.02$ & $0.44 {\pm} 0.03$\ Lin. 
SVM & $0.78 {\pm} 0.07$ & $0.13 {\pm} 0.04$& $0.75 {\pm} 0.01$ & $0.15 {\pm} 0.02$ & $0.80$ & $0.13$ & $0.69 {\pm} 0.04$ & $0.11 {\pm} 0.10$ & $0.81 {\pm} 0.02$ & $0.41 {\pm} 0.06$\ Hardt & $0.74 {\pm} 0.06$ & $0.07 {\pm} 0.04$& $0.67 {\pm} 0.03$ & $0.21 {\pm} 0.09$ & $0.80$ & $0.10$ & $0.61 {\pm} 0.15$ & $0.15 {\pm} 0.13$ & $0.77 {\pm} 0.02$ & $0.22 {\pm} 0.09$\ Zafar & $0.71 {\pm} 0.03$ & $0.03 {\pm} 0.02$ & $0.69 {\pm} 0.02$ & $0.10 {\pm} 0.06$ & $0.78$ & $0.05$ & $0.62 {\pm} 0.09$ & $0.13 {\pm} 0.11$ & $0.69 {\pm} 0.03$ & $0.02 {\pm} 0.07$\ Lin. Ours & $0.79 {\pm} 0.07$ & $0.04 {\pm} 0.03$ & $0.76 {\pm} 0.01$ & $0.04 {\pm} 0.03$ & $0.77$ & $0.01$ & $0.69 {\pm} 0.04$ & $0.05 {\pm} 0.03$ & $0.79 {\pm} 0.02$ & $0.05 {\pm} 0.03$\ Naïve SVM & $0.79 {\pm} 0.06$ & $0.14 {\pm} 0.04$& $0.76 {\pm} 0.01$ & $0.18 {\pm} 0.02$ & $0.84$ & $0.18$ & $0.74 {\pm} 0.05$ & $0.12 {\pm} 0.05$ & $0.82 {\pm} 0.02$ & $0.45 {\pm} 0.04$\ SVM & $0.78 {\pm} 0.06$ & $0.13 {\pm} 0.04$& $0.73 {\pm} 0.01$ & $0.14 {\pm} 0.02$ & $0.82$ & $0.14$ & $0.74 {\pm} 0.03$ & $0.10 {\pm} 0.06$ & $0.81 {\pm} 0.02$ & $0.38 {\pm} 0.03$\ Hardt & $0.74 {\pm} 0.06$ & $0.07 {\pm} 0.04$ & $0.71 {\pm} 0.01$ & $0.08 {\pm} 0.01$ & $0.82$ & $0.11$ & $0.71 {\pm} 0.03$ & $0.11 {\pm} 0.18$ & $0.75 {\pm} 0.11$ & $0.14 {\pm} 0.08$\ Ours & $0.79 {\pm} 0.09$ & $0.03 {\pm} 0.02$& $0.73 {\pm} 0.01$ & $0.05 {\pm} 0.03$ & $0.81$ & $0.01$ & $0.73 {\pm} 0.04$ & $0.05 {\pm} 0.03$ & $0.80 {\pm} 0.03$ & $0.07 {\pm} 0.05$\ \[tab:results\] Table \[tab:results\] shows our experimental results for all the datasets and methods both when $s$ is inside $\boldsymbol{x}$ or not. This result suggests that our method performs favorably over the competitors in that it decreases DEO substantially with only a moderate loss in accuracy. Moreover having $s$ inside $\boldsymbol{x}$ increases the accuracy but - for the methods without the specific purpose of producing fairness models - decreases the fairness. 
On the other hand, having $s$ inside $\boldsymbol{x}$ ensures to our method the ability of improve the fairness by exploiting the value of $s$ also in the prediction phase. This is to be expected, since knowing the group membership increases our information but also leads to behaviours able to influence the fairness of the predictive model. In order to quantify this effect, we present in Figure \[fig:tableexplanation\] the results of Table \[tab:results\] of linear (left) and nonlinear (right) methods, when the error (one minus accuracy) and the DEO are normalized in $[0,1]$ column-wise and when the $s$ is inside $\boldsymbol{x}$[^9]. In the figure, different symbols and colors refer to different datasets and methods, respectively. The closer a point is to the origin, the better the result is. The best accuracy is, in general, reached by using the Naïve SVM (in red) both for the linear and nonlinear case. This behavior is expected due to the absence of any fairness constraint. On the other hand, Naïve SVM has unsatisfactory levels of fairness. Hardt [@hardt2016equality] (in blue) and Zafar [@zafar2017fairness] (in cyan, for the linear case) methods are able to obtain a good level of fairness but the price of this fair model is a strong decrease in accuracy. Our method (in magenta) obtains similar or better results concerning the DEO preserving the performance in accuracy. In particular in the nonlinear case, our method reaches the lowest levels of DEO with respect to all the methods. For the sake of completeness, in the nonlinear (bottom) part of Figure \[fig:tableexplanation\], we show our method when the parameter $\epsilon$ is set to $0.1$ (in brown) instead of $0$ (in magenta). As expected, the generated models are less fair with a (small) improvement in the accuracy. An in depth analysis of the role of $\epsilon$ is presented in supplementary materials. 
![[Results of Table \[tab:results\] of linear (left) and nonlinear (right) methods, when the error and the DEO are normalized in $[0,1]$ column-wise and when $s$ is inside $\boldsymbol{x}$. Different symbols and colors refer to different datasets and method respectively. The closer a point is to the origin, the better the result is.]{}](LI.png "fig:"){width="0.45\columnwidth"} ![[Results of Table \[tab:results\] of linear (left) and nonlinear (right) methods, when the error and the DEO are normalized in $[0,1]$ column-wise and when $s$ is inside $\boldsymbol{x}$. Different symbols and colors refer to different datasets and method respectively. The closer a point is to the origin, the better the result is.]{}](NL.png "fig:"){width="0.45\columnwidth"} \[fig:tableexplanation\] **Application to Lasso.** Due to the particular proposed methodology, we are able in principle to apply our method to any learning algorithm. In particular, when the algorithm generates a linear model we can exploit the data preprocessing in Eq. , to directly impose fairness in the model. Here, we show how it is possible to obtain a sparse and fair model by exploiting the standard Lasso algorithm in synergy with this preprocessing step. For this purpose, we selected the Arrhythmia dataset as the Lasso works well in a high dimensional / small sample setting. We performed the same experiment described above, where we used the Lasso algorithm in place of the SVM. In this case, by Naïve Lasso, we refer to the Lasso when it is validated with a standard nested 10-fold CV procedure, whereas by Lasso we refer to the standard Lasso with the NVP outlined above. The method of [@hardt2016equality] has been applied to the best Lasso model. Moreover, we reported the results obtained using Naïve Linear SVM and Linear SVM. We also repeated the experiment by using a reduced training set in order to highlight the effect of the sparsity. 
Table \[tab:results\_lasso\] reported in the supplementary material shows the results. It is possible to note that, reducing the training sets, the generated models become less fair (i.e. the DEO increases). Using our method, we are able to maintain a fair model reaching satisfactory accuracy results. **The Value of $\hat{\Delta}$.** Finally, we show experimental results to highlight how the hypothesis of Proposition \[thm:mainresult2\] (Section \[sec:luca:th:FERM\]) are reasonable in the real cases. We know that, if the hypothesis of inequality (\[eq:hp1\]) are satisfied, the linear loss based fairness is close to the EO. Specifically, these two quantities are closer when $\hat{\Delta}$ is small. We evaluated $\hat{\Delta}$ for benchmark and toy datasets. The obtained results are in Table \[tab:delta\] of supplementary material, where $\hat{\Delta}$ has the order of magnitude of $10^{-2}$ in all the datasets. Consequently, our method is able to obtain a good approximation of the DEO. Conclusion and Future Work {#sec:conc} ========================== We have presented a generalized notion of fairness, which encompasses previously introduced notion and can be used to constrain ERM, in order to learn fair classifiers. The framework is appealing both theoretically and practically. Our theoretical observations provide a statistical justification for this approach and our algorithmic observations suggest a way to implement it efficiently in the setting of kernel methods. Experimental results suggest that our approach is promising for applications, generating models with improved fairness properties while maintaining classification accuracy. We close by mentioning directions of future research. On the algorithmic side, it would be interesting to study whether our method can be improved by other relaxations of the fairness constraint beyond the linear loss used here. 
Applications of the fairness constraint to multi-class classification or to regression tasks would also be valuable. On the theory side, it would be interesting to study how the choice of the parameter $\epsilon$ affects the statistical performance of our method and derive optimal accuracy-fairness trade-off as a function of this parameter. Supplementary Material {#supplementary-material .unnumbered} ====================== Proofs {#sec:SMproofs} ====== \[Proof of Theorem \[thm:mainresult1\]\] We first use Eq.  to conclude that, with probability at least $1- 2 \delta$, $$\begin{aligned} \textstyle \sup_{f \in \mathcal{F}} \Big| \big| L^{+,a}(f) - L^{+,b}(f) \big| - \big| \hat{L}^{+,a}(f) - \hat{L}^{+,b}(f) \big| \Big| \leq \sum\limits_{g \in \{a,b\}} B(\delta,n^{+,g},\mathcal{F}). \label{eq:proof1_eq2}\end{aligned}$$ This inequality in turn implies that, with probability at least $1-2\delta$, it holds that $$\begin{aligned} \left\{ f: f \in \mathcal{F}, \big| L^{+,a}(f) - L^{+,b}(f) \big| \leq \epsilon \right\} \subseteq \left\{ f: f \in \mathcal{F}, \big| \hat{L}^{+,a}(f) - \hat{L}^{+,b}(f) \big| \leq \hat{\epsilon} \right\}. \label{eq:proof1_eq3}\end{aligned}$$ Now, in order to prove the first statement of the theorem, let us decompose the excess risk as $$\begin{aligned} L(\hat{f}) {-} L(f^*\!) = L(\hat{f}) - \hat{L}(\hat{f}) + \hat{L}(\hat{f}) - \hat{L}(f^*\!) + \hat{L}(f^*\!) - L(f^*\!). \nonumber \end{aligned}$$ Inequality (\[eq:proof1\_eq3\]) implies that $\hat{L}(\hat{f}) - \hat{L}(f^*) \leq 0$ with probability at least $1 -2\delta$ and consequently with probability at least $1 - 2\delta$ it holds that $$\begin{aligned} L(\hat{f}) - L(f^*) \leq L(\hat{f}) - \hat{L}(\hat{f}) + \hat{L}(f^*) - L(f^*). \nonumber \end{aligned}$$ The first statement now follows by Eq. . As for the second statement, its proof consists in exploiting the results of Eqns.  and  together with a union bound. 
The proof of the first statement follows directly from the inequality $\ell_h(f(\boldsymbol{x}),y) \leq \ell_c(f(\boldsymbol{x}),y)$. In order to prove the second statement, we first note that $$\begin{aligned} \textstyle \big| \hat{L}^{+,a}_l(f) - \hat{L}^{+,b}_l(f)\big| = \frac{1}{2} \left| \hat{\mathbb{E}} \left[ f(\boldsymbol{x}) | y=1,s=a \right] - \hat{\mathbb{E}} \left[ f(\boldsymbol{x}) | y=1,s=b \right] \right|. \nonumber\end{aligned}$$ By applying the same reasoning to $\big| \hat{L}^{+,a}_h(f) {-} \hat{L}^{+,b}_h(f)\big| $ and by exploiting inequality (\[eq:hp1\]), the result follows. Literature Review of Fairness Methods {#sec:appreview} ===================================== In this section, we provide a brief analysis of the different existing methods concerning fairness. We show our findings in Table \[tab:review\], where the rows represent properties, characteristics and experimental results of the different fairness methods. The columns represent the different algorithms; specifically, the first column is our approach. We think that, at this stage of development of fairness in machine learning, a clear understanding of the differences and similarities among the currently available algorithms is a fundamental step. Table \[tab:review\] describes, in the first row, the family of each method, following the taxonomy defined in this paper (see Section \[sec:intro\]). The following $8$ rows describe general properties of the methods, such as the convexity of the approach, the convergence of the learning phase, or the consistency with respect to the risk and the fairness notion. The next $9$ rows describe whether a specific comparison between methods is present and, finally, the last row reports whether code is available online. 
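The key identity in the proof above, namely that the group difference of empirical linear losses equals half the difference of the groups' conditional means of $f$, is easy to check numerically. A minimal sketch with synthetic scores, assuming the linear loss $\ell_l(f(\boldsymbol{x}),y) = (1 - y f(\boldsymbol{x}))/2$; all variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic scores f(x_i) on the positively labeled examples of each group
fa = rng.normal(0.3, 1.0, size=50)    # group a, y_i = +1
fb = rng.normal(-0.2, 1.0, size=70)   # group b, y_i = +1

def linear_loss(f, y):
    # linear loss l_l(f(x), y) = (1 - y f(x)) / 2
    return (1.0 - y * f) / 2.0

La = linear_loss(fa, 1.0).mean()      # empirical linear risk, group a
Lb = linear_loss(fb, 1.0).mean()      # empirical linear risk, group b

lhs = abs(La - Lb)
rhs = 0.5 * abs(fa.mean() - fb.mean())
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```

Since the linear loss is affine in $f(\boldsymbol{x})$, the group risks depend only on the group means of the scores, which is exactly what the displayed identity states.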
**Ref.** **Ours** [@adebayo2016iterative] [@calmon2017optimized] [@agarwal2017reductions; @agarwal2018reductions] [@woodworth2017learning] [@zafar2017fairness] [@kamiran2009classifying] [@Prez-Suay2017Fair] [@zemel2013learning] [@menon2018cost] [@dwork2018decoupled] [@zafar2017parity] [@pleiss2017fairness] [@beutel2017data] [@bechavod2018Penalizing] [@hardt2016equality] [@zafar2017fairnessARXIV] [@berk2017convex] [@kamishima2011fairness] [@feldman2015certifying] [@kamiran2012data] [@kamiran2010classification] [@alabi2018optimizing] [@olfat2018spectral]
---------------------------------------------- ---------- ------------------------- ------------------------ -------------------------------------------------- -------------------------- ---------------------- --------------------------- ---------------------- ---------------------- ------------------ ----------------------- -------------------- ----------------------- ------------------- --------------------------- ---------------------- --------------------------- ------------------- -------------------------- -------------------------- -------------------- ------------------------------ ------------------------ ----------------------
Method Family 2&3 3 3 2 2 2 3 2 3 2 2 2 1 - 2 1 2 2 2 1 3 3 2 2
Classification x x x x x x x x x x x x x x x x x x x x x x
New Fairness Notions x x x x x x x x x x x x x x x x x x x x x x
Use of EO x x x x x x x
Convex Approach x x x$^{*}$ x x x x x x x x
Convergence Learning x x x x x x x x
Consistency Risk-Fairness x x x x x
Experimental Results x x x x x x x x x x x x x x x x x x x x x x
Epsilon validate x
Exp. w.r.t. [@hardt2016equality] x x x x x
Exp. w.r.t. [@zafar2017fairness] x x x x
Exp. w.r.t. [@kamiran2012data] x
Exp. w.r.t. Baseline in [@zafar2017fairness] x x
Exp. w.r.t. [@kamiran2009classifying] x x x
Exp. w.r.t. [@kamishima2011fairness] x x x
Exp. w.r.t. [@kamiran2010classification] x
Exp. w.r.t. [@zemel2013learning] x
Code Available x x x x x x x

: A summary of the characteristics of the different methods concerning fairness. The symbol ’x’ indicates the presence of a property (row) for a specific method (column). x$^{*}$: the theoretical results, however, do not correspond to their convex method.[]{data-label="tab:review"}

Datasets {#app:dataset} ======== In the following, the datasets used in Section \[sec:exps\] are presented, outlining their tasks, types of features and sources of data. Table \[tab:datasets\] provides a summary of the dataset statistics. - *Arrhythmia*: from the UCI repository, this database contains 279 attributes from the study of H. Altay Guvenir. The aim is to distinguish between the presence and absence of cardiac arrhythmia and to classify it into one of 16 groups. In our case, we changed the task to the binary classification of “Class 01” (i.e. “Normal”) against the other 15 classes (different classes of arrhythmia). - *COMPAS* (Correctional Offender Management Profiling for Alternative Sanctions): a popular commercial algorithm used by judges and parole officers to score a criminal defendant’s likelihood of reoffending (recidivism). It has been shown, on the basis of a two-year follow-up study, that the algorithm is biased in favor of white defendants. This dataset contains the variables used by the COMPAS algorithm to score defendants, along with their outcomes within two years of the decision, for over 10000 criminal defendants in Broward County, Florida. The original data provide three subsets; we concentrate on the one that includes only violent recidivism[^10]. - *Adult*: from the UCI repository, this database contains 14 features describing demographic characteristics of $45222$ instances ($32561$ for training and $12661$ for test). The task is to predict whether a person’s yearly income is above (or below) $50000\,\$$. For the Adult dataset we used the provided training and test splits. 
- *German*: a dataset in which the task is to classify people, described by a set of 20 features (7 numerical, 13 categorical), as good or bad credit risks. The features relate to the economic situation of the person, for example credit history and amount, savings account and bonds, years in present employment, and property. Moreover, a set of features concerns personal information, e.g. age, gender, whether the person is a foreigner, and personal status. - *Drug*: this dataset contains records for 1885 respondents. Each respondent is described by 12 features: personality measurements, which include NEO-FFI-R (neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness), BIS-11 (impulsivity), and ImpSS (sensation seeking), level of education, age, gender, country of residence and ethnicity. All input attributes are originally categorical and were quantified; after quantification, all input features can be treated as real-valued. In addition, participants were questioned concerning their use of 17 legal and illegal drugs and one fictitious drug (Semeron), introduced to identify over-claimers. For each drug, respondents selected one of the answers: never used the drug, used it over a decade ago, or in the last decade, year, month, week, or day. In this sense, this dataset contains 18 classification problems, each with seven classes: “Never Used”, “Used over a Decade Ago”, “Used in Last Decade”, “Used in Last Year”, “Used in Last Month”, “Used in Last Week”, and “Used in Last Day”. We turn problem number $16$ (concerning heroin) into a binary problem: “Never Used” versus the rest (i.e. “Used”). 
Dataset      Examples       Features   Sensitive Variable
------------ -------------- ---------- --------------------
Arrhythmia   452            279        Gender
COMPAS       6172           10         Ethnicity
Adult        32561, 12661   12         Gender
German       1700           20         Foreign
Drug         1885           11         Ethnicity

Varying the Value of $\epsilon$ {#app:epsilon} =============================== In this section we present a set of experiments, as a proof of concept, showing that our selection of $\epsilon = 0$ is reasonable, and we study the impact that different values of $\epsilon$ have on the DEO and on accuracy. ![Results concerning the Drug dataset for Naïve SVM, the Hard method and our method with different values of $\epsilon$.[]{data-label="fig:drugeps"}](drugeps.png){width="0.50\columnwidth"} We follow the same experimental setting presented in Section \[sec:exps\] for the Drug dataset, implementing our nonlinear method with $\epsilon$ equal to $0, 0.01, 0.1, 0.2, 0.3$. The results of this experiment are presented in Figure \[fig:drugeps\], where we also show the results for the Naïve SVM and the Hard method. Note that, as the value of $\epsilon$ increases, our model attains smaller error but stronger unfairness (i.e. higher DEO). Visualization of the results of Table \[tab:results\] ===================================================== In Figure \[fig:tableexplanation2\] we report the equivalent of Figure \[fig:tableexplanation\] for the case when $s$ is not inside $\boldsymbol{x}$. Note that we can reach the same conclusions drawn for Table \[tab:results\] and Figure \[fig:tableexplanation\]. ![[Results of Table \[tab:results\] of linear (left) and nonlinear (right) methods, when the error and the DEO are normalized in $[0,1]$ column-wise and when $s$ is not inside $\boldsymbol{x}$. Different symbols and colors refer to different datasets and methods, respectively. 
The closer a point is to the origin, the better the result is.]{}](LI2.png "fig:"){width="0.45\columnwidth"} ![[Results of Table \[tab:results\] of linear (left) and nonlinear (right) methods, when the error and the DEO are normalized in $[0,1]$ column-wise and when $s$ is not inside $\boldsymbol{x}$. Different symbols and colors refer to different datasets and methods, respectively. The closer a point is to the origin, the better the result is.]{}](NL2.png "fig:"){width="0.45\columnwidth"} \[fig:tableexplanation2\] Approximation of the DEO {#app:exp_approximation_DEO} ======================== In this section, we numerically show the difference between the DEO and our approximation of it. Figure \[fig:approxdeo\] compares the DEO with our approximation of the DEO and the classification error. We collected these results for the German dataset on the validation set, varying the two hyperparameters $C$ and $\gamma$ (in the nonlinear case). Note that our approximation of the DEO is empirically close to the original DEO. It is worth highlighting that an accurate approximation of the DEO is particularly important where the error is low. Dual Problem for SVM with Fairness Constraint {#app:dual} ============================================= We follow the usual approach to deriving the dual problem for SVMs, which uses the method of Lagrange multipliers [@vapnik1998statistical]. 
We define the Lagrangian function $$\begin{aligned} \nonumber {\mathcal L}(\boldsymbol{w},\boldsymbol{\xi},\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\rho}) {=} \frac{1}{2} \langle \boldsymbol{w},\boldsymbol{w} \rangle + C \sum_{i=1}^n \xi_i - \sum_{i=1}^n \alpha_i (y_i \langle \boldsymbol{\phi}(\boldsymbol{x}_i), \boldsymbol{w} \rangle - 1 + \xi_i) - \sum_{i=1}^n \beta_i \xi_i +~~~~~~~~ \\ ~~~~~~~~~~~\rho_1(\langle \boldsymbol{w},\boldsymbol{u} \rangle - \epsilon) -\rho_2(\langle \boldsymbol{w},\boldsymbol{u} \rangle + \epsilon) \label{eq:Lag}\end{aligned}$$ where $\boldsymbol{\alpha},\boldsymbol{\beta}$ and $\boldsymbol{\rho}$ are the vectors of Lagrange multipliers, constrained to be nonnegative. We set the derivative of the Lagrangian with respect to the primal variables $\boldsymbol{w}$ and $\boldsymbol{\xi}$ equal to zero. In the latter case we obtain that $$C - \alpha_i - \beta_i = 0 \label{eq:d11}$$ which, together with $\beta_i \geq 0$, lets us eliminate the variable $\beta_i$ in favor of the constraint $\alpha_i \leq C$. In the former case we obtain the expression for $\boldsymbol{w}$, $$\boldsymbol{w} = \sum_{i=1}^n \alpha_i y_i \boldsymbol{\phi}(\boldsymbol{x}_i) + (\rho_1-\rho_2) \boldsymbol{u}. \label{eq:d22}$$ Using  and  in  we obtain the expression $$- \frac{1}{2} \Big\|\sum_{i=1}^n \alpha_i y_i \boldsymbol{\phi}(\boldsymbol{x}_i) + (\rho_1-\rho_2) \boldsymbol{u} \Big\|^2 + \sum_{i=1}^n \alpha_i - \epsilon (\rho_1+ \rho_2). \label{eq:Dual}$$ The dual problem is then to maximize this quantity subject to the constraints that $\alpha_i \in [0,C]$ and $\rho_1,\rho_2 \geq 0$. 
The KKT conditions are $$\begin{aligned} \alpha_i ( y_i \langle \boldsymbol{w},\boldsymbol{\phi}(\boldsymbol{x}_i) \rangle - 1 + \xi_i) & = & 0 \\ (C-\alpha_i)\, \xi_i & = & 0 \\ \rho_1(\langle \boldsymbol{w}, \boldsymbol{u} \rangle - \epsilon) & = & 0 \\ \rho_2(\langle \boldsymbol{w}, \boldsymbol{u} \rangle + \epsilon) & = & 0.\end{aligned}$$ Clearly at most one of the variables $\rho_1$ and $\rho_2$ can be strictly positive. We may then let $\rho = \rho_1-\rho_2$ and rewrite the objective function as $$- \frac{1}{2} \Big\|\sum_{i=1}^n \alpha_i y_i \boldsymbol{\phi}(\boldsymbol{x}_i) + \rho \boldsymbol{u} \Big\|^2 + \sum_{i=1}^n \alpha_i - \epsilon |\rho| \label{eq:Dual2}$$ and optimize over $\boldsymbol{\alpha} \in [0,C]^n$ and $\rho\in \mathbb{R}$. It is interesting to study this problem when $\epsilon = 0$. In this case we can easily solve for $\rho$, obtaining the simplified objective $$- \frac{1}{2} \Big\| \sum_{i=1}^n \alpha_i y_i (I-P) \boldsymbol{\phi}(\boldsymbol{x}_i) \Big\|^2 + \sum_{i=1}^n \alpha_i$$ where $P$ is the orthogonal projection onto the direction of $\boldsymbol{u}$, that is $P= \frac{\boldsymbol{u}}{\| \boldsymbol{u} \|} \otimes \frac{\boldsymbol{u}}{\| \boldsymbol{u} \|}$. 
This is equivalent to using the standard SVM with the kernel $${\widetilde \kappa}(\boldsymbol{x},\boldsymbol{t}) = \langle \boldsymbol{\phi}(\boldsymbol{x}), (I-P) \boldsymbol{\phi}(\boldsymbol{t})\rangle = \kappa(\boldsymbol{x},\boldsymbol{t}) -\frac{\langle \boldsymbol{\phi}(\boldsymbol{x}), \boldsymbol{u} \rangle \langle \boldsymbol{\phi}(\boldsymbol{t}), \boldsymbol{u} \rangle}{\| \boldsymbol{u} \|^2}.$$ In particular if $\displaystyle \boldsymbol{u} = \frac{1}{n_a} \sum_{i\in \mathcal{I}^{+,a}} \boldsymbol{\phi}(\boldsymbol{x}_i) -\frac{1}{n_b} \sum_{i\in \mathcal{I}^{+,b}} \boldsymbol{\phi}(\boldsymbol{x}_i)$, we obtain $${\widetilde \kappa}(\boldsymbol{x},\boldsymbol{t}) =\kappa(\boldsymbol{x},\boldsymbol{t}) -\frac{\Bigl(\frac{1}{n_a}\sum\limits_{i\in \mathcal{I}^{+,a}} \kappa(\boldsymbol{x},\boldsymbol{x}_i) -\frac{1}{n_b} \sum\limits_{i\in \mathcal{I}^{+,b}} \kappa(\boldsymbol{x},\boldsymbol{x}_i)\Bigr)\Bigl(\frac{1}{n_a}\sum\limits_{i\in \mathcal{I}^{+,a}} \kappa(\boldsymbol{t},\boldsymbol{x}_i) -\frac{1}{n_b} \sum\limits_{i\in \mathcal{I}^{+,b}} \kappa(\boldsymbol{t},\boldsymbol{x}_i)\Bigr)}{\frac{1}{n_a^2}\sum\limits_{i,j \in \mathcal{I}^{+,a}} \kappa(\boldsymbol{x}_i,\boldsymbol{x}_j) + \frac{1}{n_b^2}\sum\limits_{i,j \in \mathcal{I}^{+,b}} \kappa(\boldsymbol{x}_i,\boldsymbol{x}_j) - \frac{2}{n_an_b}\sum\limits_{i \in \mathcal{I}^{+,a}}\sum\limits_{j \in \mathcal{I}^{+,b}} \kappa(\boldsymbol{x}_i,\boldsymbol{x}_j)}.$$ This new kernel can then be interpreted as a change of feature mapping $\boldsymbol{x} \mapsto (I-P) \boldsymbol{\phi}(\boldsymbol{x}) = \boldsymbol{\phi}(\boldsymbol{x}) - \langle \boldsymbol{\phi}(\boldsymbol{x}), \frac{\boldsymbol{u}}{\| \boldsymbol{u} \|} \rangle \frac{\boldsymbol{u}}{\| \boldsymbol{u} \|}$. As a final remark, we note that for other proper convex loss functions (e.g. square loss or logistic loss) the dual problem can be derived via Fenchel duality [see e.g. @Rockafellar1970]. We leave the full details to a future occasion. 
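The equivalence between the definition ${\widetilde \kappa}(\boldsymbol{x},\boldsymbol{t}) = \langle \boldsymbol{\phi}(\boldsymbol{x}), (I-P)\boldsymbol{\phi}(\boldsymbol{t})\rangle$, its closed form, and the group-mean expansion can be checked numerically in the linear case $\boldsymbol{\phi}(\boldsymbol{x})=\boldsymbol{x}$. A minimal sketch with synthetic groups; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
Xa = rng.normal(size=(8, d))   # positively labeled points, group a
Xb = rng.normal(size=(6, d))   # positively labeled points, group b

u = Xa.mean(axis=0) - Xb.mean(axis=0)      # difference of barycenters
P = np.outer(u, u) / (u @ u)               # orthogonal projection onto span(u)

def k(x, t):                               # linear kernel: phi = identity
    return x @ t

def k_tilde_direct(x, t):                  # definition: <phi(x), (I - P) phi(t)>
    return x @ (np.eye(d) - P) @ t

def k_tilde_formula(x, t):                 # k(x,t) - <phi(x),u><phi(t),u>/||u||^2
    return k(x, t) - (x @ u) * (t @ u) / (u @ u)

def k_tilde_expanded(x, t):                # the group-mean expansion
    num = (np.mean([k(x, xi) for xi in Xa]) - np.mean([k(x, xi) for xi in Xb])) \
        * (np.mean([k(t, xi) for xi in Xa]) - np.mean([k(t, xi) for xi in Xb]))
    den = np.mean([[k(xi, xj) for xj in Xa] for xi in Xa]) \
        + np.mean([[k(xi, xj) for xj in Xb] for xi in Xb]) \
        - 2 * np.mean([[k(xi, xj) for xj in Xb] for xi in Xa])
    return k(x, t) - num / den

x, t = rng.normal(size=d), rng.normal(size=d)
assert np.isclose(k_tilde_direct(x, t), k_tilde_formula(x, t))
assert np.isclose(k_tilde_formula(x, t), k_tilde_expanded(x, t))
assert abs(k_tilde_direct(u, t)) < 1e-9    # the direction u is annihilated
print("fair kernel checks passed")
```

The last assertion confirms that the modified kernel kills the component along $\boldsymbol{u}$, which is what makes the $\epsilon=0$ fairness constraint hold automatically for any predictor built from ${\widetilde \kappa}$.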
Multiple Valued Sensitive Features {#sec:luca:th:FK:MMF} ================================== Our method presented in Section \[sec:luca:th:FK\] can be naturally extended to the case in which the sensitive variable takes multiple categorical values, that is $s \in \{g_1,\dots,g_k\}$ for some $k \geq 2$. In particular, when $\epsilon = 0$, the fairness constraint in Problem  requires that $$\begin{aligned} \hat{L}^{+,g_1}(f) = \hat{L}^{+,g_2}(f) = \cdots = \hat{L}^{+,g_k}(f).\end{aligned}$$ Furthermore, if the linear loss function is used, these constraints become $$\langle \mathbf{w} , \mathbf{u}_{1} - \mathbf{u}_g \rangle = 0 , \quad \forall g \in \{g_2, \cdots, g_k\}$$ where we defined, for $g\in \{g_1,\dots,g_k\}$, $$\mathbf{u}_g= \frac{1}{n^{+,g}} \sum_{ i \in \mathcal{I}^{+,g}} \boldsymbol{\phi}(\boldsymbol{x}_i),$$ with $\mathcal{I}^{+,g} = \{i: y_i = 1, s_i = g \}$ and $n^{+,g} = |\mathcal{I}^{+,g}|$. Thus, we need to satisfy $k-1$ orthogonality constraints, which enforce a balance between the different sensitive groups as measured by the barycenters of the positively labeled points within each group. Similar considerations apply when dealing with multiple sensitive features. [|l|c|c|c|]{}\ Method & Accuracy & DEO & Selected Features\ Naïve Lin. SVM & $0.79 \pm 0.06$ & $0.14 \pm 0.03$ & -\ Linear SVM & $0.78 \pm 0.07$ & $0.13 \pm 0.04$ & -\ Naïve Lasso & $0.79 \pm 0.07$ & $0.11 \pm 0.04$ & $ 22.7 \pm 9.1$\ Lasso & $0.74 \pm 0.04$ & $0.07 \pm 0.04$ & $ 5.2 \pm 3.7$\ Hardt & $0.71 \pm 0.05$ & $0.04 \pm 0.06$ & $ 5.2 \pm 3.7$\ Our Lasso & $0.77 \pm 0.02$ & $0.03 \pm 0.02$ & $ 7.5 \pm 2.0$\ \ \ Method & Accuracy & DEO & Selected Features\ Naïve Lin. 
SVM & $0.69 \pm 0.03$ & $0.16 \pm 0.03$ & -\ Linear SVM & $0.68 \pm 0.03$ & $0.15 \pm 0.03$ & -\ Naïve Lasso & $0.73 \pm 0.04$ & $0.15 \pm 0.06$ & $ 14.1 \pm 6.6$\ Lasso & $0.70 \pm 0.04$ & $0.09 \pm 0.05$ & $ 7.9 \pm 8.0$\ Hardt & $0.67 \pm 0.06$ & $0.08 \pm 0.07$ & $ 7.9 \pm 8.0$\ Our Lasso & $0.71 \pm 0.04$ & $0.03 \pm 0.04$ & $ 9.0 \pm 7.3$\

Dataset         $\hat{\Delta}$
--------------- ----------------
Toytest         0.03
Toytest Lasso   0.02
Arrhythmia      0.03
COMPAS          0.04
Adult           0.06
German          0.05
Drug            0.03

: The $\hat{\Delta}$ for the datasets used. A smaller $\hat{\Delta}$ means a better approximation of the DEO in our method.[]{data-label="tab:delta"}

[^1]: Additional technical steps and experiments are presented in the supplementary material.

[^2]: A detailed comparison between our proposal and the state of the art is reported in the supplementary material.

[^3]: The extension to multiple groups (e.g. ethnic groups) is briefly discussed in the supplementary material.

[^4]: In the supplementary material we derive the dual of Problem  when $\ell_c$ is the Hinge loss.

[^5]: The supplementary material reports the generalization of this argument to kernels for SVM.

[^6]: The regularization parameter $C$ (for both SVM and our method) was selected among $30$ values, equally spaced in logarithmic scale between $10^{-4}$ and $10^{4}$; we used both the linear and RBF kernels (i.e. for two examples $\boldsymbol{x}$ and $\boldsymbol{z}$, the RBF kernel is $e^{-\gamma ||\boldsymbol{x}-\boldsymbol{z}||^2}$) with $\gamma \in \{0.001, 0.01, 0.1, 1\}$. In our case, $C=\frac{1}{2 \lambda}$ of Eq. .

[^7]: Fairness-Measures website: [fairness-measures.org](fairness-measures.org)

[^8]: Python code for [@zafar2017fairness]: <https://github.com/mbilalzafar/fair-classification>

[^9]: The case when $s$ is not inside $\boldsymbol{x}$ is reported in the supplementary material.

[^10]: Analysis of the recidivism COMPAS dataset: www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
--- abstract: 'We give asymptotics for the left and right tails of the limiting Quicksort distribution. The results agree with, but are less precise than, earlier non-rigorous results by Knessl and Szpankowski.' address: 'Department of Mathematics, Uppsala University, PO Box 480, SE-751 06 Uppsala, Sweden' author: - Svante Janson date: '28 August, 2015; minor revision 26 September, 2015' --- Introduction {#S:intro} ============ Let $X_n$ be the number of comparisons used by the algorithm Quicksort when sorting $n$ distinct numbers, initially in a uniformly random order. Equivalently, $X_n$ is the internal pathlength in a random binary search tree with $n$ nodes. (See [e.g[.=1000]{}]{} @KnuthIII [Sections 5.2.2 and 6.2.2] or @Drmota [Chapter 8 and Section 1.4.1] for a description of the algorithm and of binary search trees.) It follows that $X_n$ satisfies the distributional recurrence relation $$\label{Xn} X_n {\overset{\mathrm{d}}{=}}X_{U_n - 1} + X^*_{n - U_n} + n - 1,\qquad n \geq 1,$$ where ${\overset{\mathrm{d}}{=}}$ denotes equality in distribution, and, on the right, $U_n$ is distributed uniformly on the set $\{1, \ldots, n\}$, $X_j^* {\overset{\mathrm{d}}{=}}X_j$, $X_0=0$, and $U_n, X_0, \dots,\allowbreak X_{n - 1},\allowbreak X^*_0,\allowbreak \dots, X^*_{n - 1}$ are all independent. (Thus, can be regarded as a definition of $X_n$.) It is well-known, and easy to show from , that $${\operatorname{\mathbb E{}}}X_n=2(n+1)H_n-4n\sim 2n\ln n,$$ where $H_n:=\sum_{k=1}^n k{^{-1}}$ is the $n$th harmonic number. Moreover, it was proved by Régnier [@Reg] and Rösler [@Roesler], using different methods, that the normalized variables $$Z_n := \frac{X_n - {\operatorname{\mathbb E{}}}X_n}n$$ converge in distribution to some limiting random variable $Z$, as [${n\to\infty}$]{}. There is no simple description of the distribution of $Z$, but various results have been shown by several different authors. 
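Taking expectations in the recurrence gives ${\operatorname{\mathbb E{}}}X_n = n-1+\frac{2}{n}\sum_{k=0}^{n-1}{\operatorname{\mathbb E{}}}X_k$, which can be checked against the closed form $2(n+1)H_n-4n$ in exact arithmetic. A small sketch (our own naming):

```python
from fractions import Fraction

def expected_comparisons(nmax):
    # Taking expectations in the distributional recurrence gives
    # E X_n = n - 1 + (2/n) * sum_{k=0}^{n-1} E X_k, with E X_0 = 0.
    E = [Fraction(0)]
    for n in range(1, nmax + 1):
        E.append(n - 1 + Fraction(2, n) * sum(E))
    return E

def closed_form(n):
    # E X_n = 2 (n+1) H_n - 4 n, with H_n the n-th harmonic number
    H = sum(Fraction(1, k) for k in range(1, n + 1))
    return 2 * (n + 1) * H - 4 * n

E = expected_comparisons(30)
assert all(E[n] == closed_form(n) for n in range(1, 31))
print(float(E[30]))  # exact rational, converted only for display
```

Using `Fraction` makes the comparison exact rather than a floating-point approximation.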
For example, $Z$ has an everywhere finite moment generating function, and thus all moments are finite [@Roesler], with ${\operatorname{\mathbb E{}}}Z=0$ and ${\operatorname{Var}}Z=7-\frac{2}3\pi^2$; furthermore, $Z$ has a density which is infinitely differentiable [@TanH; @SJ131]. Moreover, the recurrence relation yields in the limit a distributional identity, which can be written as $$\label{recZ} Z{\overset{\mathrm{d}}{=}}UZ'+(1-U)Z''+g(U),$$ where $U$, $Z'$ and $Z''$ are independent, $U\sim {\mathsf{U}}(0,1)$ is uniform, $Z',Z''{\overset{\mathrm{d}}{=}}Z$, and $g$ is the deterministic function $$\label{g} g(u):=2u\ln u+2(1-u)\ln(1-u)+1.$$ Furthermore, @Roesler showed that together with ${\operatorname{\mathbb E{}}}Z=0$ and ${\operatorname{Var}}Z<\infty$ determines the distribution of $Z$ uniquely; see further [@SJ134]. The identity is the basis of much of the study of $Z$, including the present work. In the present paper we study the asymptotics of the tail probabilities ${\operatorname{\mathbb P{}}}(Z{\leqslant}-x)$ and ${\operatorname{\mathbb P{}}}(Z{\geqslant}x)$ as [${x\to\infty}$]{}. Using non-rigorous methods from applied mathematics (assuming an as yet unverified regularity hypothesis), @KnSz found very precise asymptotics of both the left tail and the right tail. Their result for the left tail is that, as [${x\to\infty}$]{}, with ${\gamma}=(2-\frac{1}{\ln2}){^{-1}}$, $$\begin{aligned} \label{knsz-} {\operatorname{\mathbb P{}}}(Z{\leqslant}-x) =({\stepcounter{cc}{c_{\arabic{cc}}}}+o(1)){\xdef\cco{{c_{\arabic{cc}}}}}\exp{\bigl(-{\stepcounter{cc}{c_{\arabic{cc}}}}e^{{\gamma}x}\bigr)} {\xdef\ccz{{c_{\arabic{cc}}}}} =\exp{\bigl(-e^{{\gamma}x +{\stepcounter{cc}{c_{\arabic{cc}}}}+o(1)}\bigr)},\end{aligned}$$ where $\cco,\ccz,{c_{\arabic{cc}}}$ are some constants ($\cco$ is explicit in [@KnSz], but not $\ccz$). 
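The fixed-point identity above also suggests a quick Monte Carlo approximation of the law of $Z$: start a sample population at $0$ and repeatedly apply the map $Z \mapsto UZ'+(1-U)Z''+g(U)$, resampling the copies. A rough sketch (the sample size and iteration count are our choices), which should reproduce ${\operatorname{\mathbb E{}}}Z=0$ and ${\operatorname{Var}}Z=7-\frac{2}3\pi^2\approx 0.4203$:

```python
import numpy as np

def g(u):
    # g(u) = 2u ln u + 2(1-u) ln(1-u) + 1, as in the text
    return 2 * u * np.log(u) + 2 * (1 - u) * np.log(1 - u) + 1

rng = np.random.default_rng(0)
m = 200_000                          # population size (our choice)
Z = np.zeros(m)                      # start from the point mass at 0
for _ in range(30):                  # iterate Z <- U Z' + (1-U) Z'' + g(U)
    U = rng.uniform(1e-12, 1 - 1e-12, size=m)  # avoid log(0)
    Zp = rng.permutation(Z)          # approximately independent copies
    Zpp = rng.permutation(Z)
    Z = U * Zp + (1 - U) * Zpp + g(U)

print(Z.mean(), Z.var())             # should approach 0 and 7 - 2*pi^2/3
```

The iteration contracts in the Wasserstein sense (the contraction factor of the variance recursion is ${\operatorname{\mathbb E{}}}[U^2+(1-U)^2]=2/3$), so a few dozen rounds suffice.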
For the right tail, they give a more complicated expression, which by ignoring higher order terms implies, for example, $$\begin{aligned} \label{knsz+} {\operatorname{\mathbb P{}}}(Z{\geqslant}x) =\exp{\bigl(-x\ln x-x\ln\ln x+(1+\ln 2)x+o(x)\bigr)}.\end{aligned}$$ It has been a challenge to justify these asymptotics rigorously, and so far very little progress has been made. Some rigorous upper bounds were given by @SJ138, in particular $$\begin{aligned} \label{fj+} {\operatorname{\mathbb P{}}}(Z{\geqslant}x) {\leqslant}\exp{\bigl(-x\ln x +(1+\ln2)x\bigr)}, \qquad x{\geqslant}303,\end{aligned}$$ with the same leading term (in the exponent) as , and for the left tail $$\begin{aligned} \label{fj-} {\operatorname{\mathbb P{}}}(Z{\leqslant}-x) {\leqslant}\exp(-x^2/5), \qquad x{\geqslant}0,\end{aligned}$$ which is much weaker than . Also the present paper falls short of the (non-rigorous) asymptotics – from [@KnSz], but we show, by simple methods, the following results, which at least show that the leading terms in the top exponents in – are correct. Let ${\gamma}:=(2-{\frac{1}{\ln2}}){^{-1}}$. As ${\ensuremath{{x\to\infty}}}$, $$\label{sj-} \exp{\bigl(-e^{{\gamma}x+\ln\ln x+O(1)}\bigr)} {\leqslant}{\operatorname{\mathbb P{}}}(Z{\leqslant}-x) {\leqslant}\exp{\bigl(-e^{{\gamma}x+O(1)}\bigr)}$$ As ${\ensuremath{{x\to\infty}}}$, $$\label{sj+} \exp{\bigl(-x\ln x-x\ln\ln x+O(x)\bigr)} {\leqslant}{\operatorname{\mathbb P{}}}(Z{\geqslant}x) {\leqslant}\exp{\bigl(-x\ln x+O(x)\bigr)}.$$ We show the lower bounds in Sections \[Slowerleft\] and \[Slowerright\], and the upper bounds in Sections \[Supperleft\] and \[Supperright\]. The lower bounds are proved by direct arguments using the identity ; the upper bounds are proved by the standard method of first estimating the moment generating function. The right inequality in follows from the more precise , where an explicit value is given for the implicit constant; we include this part of for completeness. 
(The proof in [Section \[Supperright\]]{} actually yields a better constant than for large $x$, see .) We expect that, similarly, the implicit constants in the other parts of – could be replaced by explicit bounds, using more careful versions of the arguments and estimates below. However, in order to keep the proofs simple, we have not attempted this. We consider only the limiting random variable $Z$, and not $Z_n$ or $X_n$ for finite $n$. Of course, the results for $Z$ imply corresponding results for the tails ${\operatorname{\mathbb P{}}}(Z_n{\leqslant}-x)$ and ${\operatorname{\mathbb P{}}}(Z_n{\geqslant}x)$ for $n$ sufficiently large (depending on $x$), but we do not attempt to give any explicit results for finite $n$. For some bounds for finite $n$, see [@SJ141] and (for large deviations) [@McDH]. Although we do not work with $Z_n$ for finite $n$, the proofs below of the lower bounds can be interpreted for finite $n$, saying that we can obtain $Z_n{\leqslant}-x$ with roughly the given probability (for large $n$) by considering the event that in the first $\Theta(x)$ generations, all splits are close to balanced (with proportions $\frac12\pm x{^{-1/2}}$, say); similarly, to obtain $Z_n{\geqslant}x$ we let there be one branch of length $\Theta(x)$ where all splits are extremely unbalanced (with at most a fraction $(x\ln x){^{-1}}$ on the other side). The fact that we require an exponential number of splits to be extreme for the lower tail, but only a linear number for the right tail, can be seen as an explanation of the difference between the two tails, with the left tail doubly exponential and the right tail roughly exponential. Preliminaries ============= Note that $g$ in is a continuous convex function on ${[0,1]}$, with maximum $g(0)=g(1)=1$ and minimum $g(1/2)=1-2\ln2=-(2\ln2-1)<0$. Let $\psi(t):={\operatorname{\mathbb E{}}}e^{tZ}$ be the moment generating function of $Z$. As said above, Rösler [@Roesler] showed that $\psi(t)$ is finite for every real $t$. 
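The stated properties of $g$, together with the quadratic lower bound $h(u)\geqslant -\ln 2+2(u-\frac12)^2$ for $h(u):=u\ln u+(1-u)\ln(1-u)=(g(u)-1)/2$ used in the upper-bound proofs below, are easy to confirm on a grid (a sketch):

```python
import math

def h(u):
    # h(u) = u ln u + (1-u) ln(1-u); note g(u) = 2 h(u) + 1
    return u * math.log(u) + (1 - u) * math.log(1 - u)

def g(u):
    return 2 * h(u) + 1

# minimum of g at u = 1/2, and it is negative: g(1/2) = 1 - 2 ln 2
assert abs(g(0.5) - (1 - 2 * math.log(2))) < 1e-12
assert g(0.5) < 0

grid = [i / 1000 for i in range(1, 1000)]
# supremum of g on [0,1] is the endpoint value g(0) = g(1) = 1
assert all(g(u) <= 1 for u in grid)
# convexity/Taylor bound used later: h(u) >= -ln 2 + 2 (u - 1/2)^2
assert all(h(u) >= -math.log(2) + 2 * (u - 0.5) ** 2 - 1e-12 for u in grid)
print("all checks on g and h passed")
```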
The distributional identity yields, by conditioning on $U$, the functional equation $$\label{psi} \psi(t):={\operatorname{\mathbb E{}}}e^{tZ}={\int_0^1}\psi(ut)\psi((1-u)t)e^{tg(u)}{\,\mathrm{d}}u.$$ We may replace $Z$ by the right-hand side of ; hence we may without loss of generality assume the equality (not just in distribution) $$\label{recZ=} Z= UZ'+(1-U)Z''+g(U).$$ Left tail, lower bound {#Slowerleft} ====================== Let ${\varepsilon}>0$ be so small that $g(\frac12+{\varepsilon})<0$, and let $a:=-g(\frac12+{\varepsilon})>0$. For any $z$, on the event [{$Z'{\leqslant}-z$, $Z''{\leqslant}-z$, and $|U-\frac12|{\leqslant}{\varepsilon}$}]{}, yields $$Z{\leqslant}-Uz-(1-U)z+g(U)=-z+g(U){\leqslant}-z-a .$$ Hence, for any real $z$, $${\operatorname{\mathbb P{}}}(Z{\leqslant}-z-a) {\geqslant}2{\varepsilon}{\operatorname{\mathbb P{}}}(Z{\leqslant}-z)^2.$$ It follows by induction that $${\operatorname{\mathbb P{}}}(Z{\leqslant}-na) {\geqslant}(2{\varepsilon})^{2^n-1}{\operatorname{\mathbb P{}}}(Z{\leqslant}0)^{2^n}, \qquad n{\geqslant}0.$$ Consequently, using $2{\varepsilon}{\leqslant}1$, $ {\operatorname{\mathbb P{}}}(Z{\leqslant}-na) {\geqslant}(2{\varepsilon}{\operatorname{\mathbb P{}}}(Z{\leqslant}0))^{2^n}$, and thus, with $c:=\ln(2{\operatorname{\mathbb P{}}}(Z{\leqslant}0))>-\infty$, $$\ln{\operatorname{\mathbb P{}}}(Z{\leqslant}-na){\geqslant}2^n{\bigl(\ln{\varepsilon}+c\bigr)}, \qquad n{\geqslant}0.$$ If $x>0$, we take $n={\lceilx/a\rceil}$ and obtain $$\label{lukas} \ln{\operatorname{\mathbb P{}}}(Z{\leqslant}-x){\geqslant}2^{x/a+1}{\bigl(\ln{\varepsilon}+c\bigr)}.$$ We choose (for large $x$) ${\varepsilon}=x{^{-1/2}}$, so, using Taylor’s formula, $$a=-g{\bigl(\tfrac12+{\varepsilon}\bigr)}=-g{\bigl(\tfrac12\bigr)} +O{\bigl({\varepsilon}^2\bigr)} =2\ln2-1+O{\bigl(x{^{-1}}\bigr)}$$ and thus $$a{^{-1}}=(2\ln2-1){^{-1}}+O{\bigl(x{^{-1}}\bigr)}.$$ Consequently, yields $$\ln{\operatorname{\mathbb P{}}}(Z{\leqslant}-x){\geqslant}2^{x/(2\ln2-1)+O(1)}{\bigl(\ln 
x{^{-1/2}}+c\bigr)} =-e^{{\gamma}x+O(1)+\ln\ln x}.$$ Right tail, lower bound {#Slowerright} ======================= Let $0<{\delta}<\frac12$. If $0<U{\leqslant}{\delta}$, then $$\label{pyret} g(U){\geqslant}g({\delta})=1+2{\delta}\ln{\delta}+O({\delta}) {\geqslant}1+3{\delta}\ln{\delta},$$ with the last inequality holding provided ${\delta}$ is small enough. Assume that holds, and assume that $Z'{\geqslant}0$, $Z''{\geqslant}z{\geqslant}0$ and $U{\leqslant}{\delta}$. Then yields $$Z{\geqslant}(1-{\delta})z+g({\delta}){\geqslant}z-{\delta}z+1-3{\delta}\ln{\delta}{^{-1}}.$$ Consequently, $${\operatorname{\mathbb P{}}}( Z{\geqslant}z+1-{\delta}z-3{\delta}\ln{\delta}{^{-1}}) {\geqslant}{\delta}{\operatorname{\mathbb P{}}}(Z{\geqslant}0){\operatorname{\mathbb P{}}}(Z{\geqslant}z).$$ Let $x$ be sufficiently large and choose ${\delta}=1/(x\ln x)$. Then, for $0{\leqslant}z{\leqslant}x$, $$z+1-{\delta}z-3{\delta}\ln{\delta}{^{-1}}{\geqslant}z+1-\frac{1}{\ln x}-3\frac{\ln (x\ln x)}{x\ln x} {\geqslant}z+1-\frac{2}{\ln x},$$ provided $x$ is large enough. Hence, if $b:=1-\frac{2}{\ln x}$ and $c:={\operatorname{\mathbb P{}}}(Z{\geqslant}0)>0$, then for $0{\leqslant}z{\leqslant}x$ we have $${\operatorname{\mathbb P{}}}(Z{\geqslant}z+b ) {\geqslant}c{\delta}{\operatorname{\mathbb P{}}}(Z{\geqslant}z).$$ By induction, we find for $0{\leqslant}n{\leqslant}x/b+1$, $${\operatorname{\mathbb P{}}}(Z{\geqslant}nb){\geqslant}c^n{\delta}^n {\operatorname{\mathbb P{}}}(Z{\geqslant}0)=c^{n+1}{\delta}^n > (c{\delta})^{n+1}.$$ Consequently, taking $n:={\lceilx/b\rceil}$, $$\begin{split} \ln{\operatorname{\mathbb P{}}}(Z{\geqslant}x) &{\geqslant}(n+1)(\ln c+ \ln {\delta}) {\geqslant}(x/b+2)(\ln c+\ln {\delta}) \\& ={\bigl(x+O(x/\ln x)\bigr)}{\bigl(-\ln x-\ln\ln x+O(1)\bigr)} \\& =-x\ln x-x\ln\ln x+O(x). 
\end{split}$$ Left tail, upper bound {#Supperleft} ====================== \[L-\] There exists $a{\geqslant}0$ such that for all $t>0$, with ${\kappa}:={\gamma}{^{-1}}=2-{\frac{1}{\ln2}}$, $$\label{6a} \psi(-t)< \exp{\bigl({\kappa}t\ln t+at+1\bigr)}.$$ We note that $t\ln t{\geqslant}-e{^{-1}}$ for $t>0$, and thus ${\kappa}t\ln t+at+1{\geqslant}-{\kappa}e{^{-1}}+1> 0$. Since $\psi(t)$ is continuous and $\psi(0)=1$, there exists $t_1>0$ such that $\psi(-t)< \exp{\bigl(1-{\kappa}e{^{-1}}\bigr)}$ for $0{\leqslant}t{\leqslant}t_1$, and thus holds for all such $t$, and any $a{\geqslant}0$. Next, let $t_2:=\pi e^{2}$. We may choose $a>0$ such that holds for $t\in[t_1,t_2]$. Before proceeding to larger $t$, define $$\label{h} h(u):=u\ln u+(1-u)\ln(1-u)$$ and note that $g(u)=2h(u)+1$ by . Now suppose that fails for some $t>0$ and let $T:=\inf{\ensuremath{\{t>0:\text{\eqref{6a} fails}\}}}$. Then $T{\geqslant}t_2$, and, by continuity, $$\label{7a} \psi(-T)= \exp{\bigl({\kappa}T\ln T+aT+1\bigr)}.$$ Furthermore, if $0<u<1$, then holds for $t=uT$ and $t=(1-u)T$, and thus, recalling , [ $$\begin{gathered} \psi(-uT)\psi{\bigl(-(1-u)T\bigr)} \\ \begin{aligned} &<\exp{\bigl({\kappa}uT\ln(uT)+{\kappa}(1-u)T\ln((1-u)T)+auT+a(1-u)T+2\bigr)} \\& =\exp{\bigl({\kappa}T\ln T+{\kappa}{\bigl(u\ln u+(1-u)\ln(1-u)\bigr)}T+aT+2\bigr)}. \\& =\exp{\bigl({\kappa}T\ln T+{\kappa}h(u)T+aT+2\bigr)}. \end{aligned} \end{gathered}$$]{} Furthermore, $g(u)=1+2h(u)$, and thus we obtain $$\label{winston} \psi(-uT)\psi{\bigl(-(1-u)T\bigr)}e^{-Tg(u)} \\ {\leqslant}\exp{\bigl({\kappa}T\ln T-((2-{\kappa}) h(u)+1)T+aT+2\bigr)}.$$ By , $h(u)$ is a convex function with $h(\frac12)=-\ln2$, $h'(\frac12)=0$ and $h''(u)=u{^{-1}}+(1-u){^{-1}}{\geqslant}4$, and thus by Taylor’s formula, $h(u){\geqslant}-\ln2+2(u-\frac12)^2$. 
Furthermore, $2-{\kappa}=1/\ln2$, and thus $$\label{jw} (2-{\kappa})h(u)+1{\geqslant}\frac{2}{\ln2}(u-\tfrac12)^2 {\geqslant}(u-\tfrac12)^2 .$$ Combining , , and , we obtain $$\begin{split} \psi(-T) &{\leqslant}{\int_0^1}\exp{\Bigl({\kappa}T\ln T + aT + 2 -(u-\tfrac12)^2T\Bigr)}{\,\mathrm{d}}u \\& < \exp{\bigl({\kappa}T\ln T + aT + 2\bigr)}{\int_{-\infty}^\infty}e^{ -(u-\frac12)^2T}{\,\mathrm{d}}u \\& = \sqrt{\frac{\pi}{T}} \exp{\bigl({\kappa}T\ln T + aT + 2\bigr)}. \end{split}$$ Since $T{\geqslant}t_2=\pi e^{2}$, this yields $\psi(-T)< \exp{\bigl({\kappa}T\ln T + aT + 1\bigr)}$, which contradicts . This contradiction shows that no such $T$ exists, and thus holds for all $t>0$. For $x{\geqslant}0$ and any $t{\geqslant}0$, by [Lemma \[L-\]]{}, $${\operatorname{\mathbb P{}}}(Z{\leqslant}-x){\leqslant}e^{-tx}{\operatorname{\mathbb E{}}}e^{-tZ}=e^{-tx}\psi(-t) <\exp{\bigl(-tx+{\kappa}t\ln t+at +1\bigr)}.$$ We optimize by taking $t=\exp({\kappa}{^{-1}}(x-a)-1)$ and obtain $$\ln {\operatorname{\mathbb P{}}}(Z{\leqslant}-x) <t({\kappa}\ln t+a-x) +1 =-{\kappa}t+1=-e^{{\kappa}{^{-1}}x+O(1)},$$ which is the upper bound in because ${\kappa}{^{-1}}={\gamma}$. Right tail, upper bound {#Supperright} ======================= As said in the introduction, was proved in [@SJ138]. Nevertheless, we give for completeness a proof of the upper bound in , similar to the proof in [Section \[Supperleft\]]{}. (It is also similar to the proof in [@SJ138] but simpler, partly because we do not keep track of all constants and do not try to optimize; nevertheless, it yields a slight improvement of for large $x$, see below.) \[L+\] There exists $a{\geqslant}0$ such that for all $t{\geqslant}0$, $$\label{2a} \psi(t){\leqslant}\exp{\bigl(e^t+at\bigr)}.$$ Note that [@SJ138 Corollary 4.3] shows the bound $\psi(t){\leqslant}\exp(2e^t)$ for $t{\geqslant}5.02$, which is explicit, but weaker for large $t$. 
Since $\psi(0)=1<e$, it follows by continuity that there exists $t_1>0$ such that $\psi(t){\leqslant}e$ for $t\in[0,t_1]$, and thus holds for $t\in[0,t_1]$ and any $a{\geqslant}0$. Let $t_2:=100$, and choose $a$ so that holds for $t\in[t_1,t_2]$. Assume that fails for some $t>0$, and let $T:=\inf{\ensuremath{\{t>0:\text{\eqref{2a} fails}\}}}$. Then $T{\geqslant}t_2$, and, by continuity, $$\label{2c} \psi(T)= \exp{\bigl(e^T+aT\bigr)}.$$ Furthermore, if $0<u<1$, then holds for $t=uT$ and $t=(1-u)T$, and thus, using and the symmetry $u\leftrightarrow1-u$ there, and $g(u){\leqslant}1$, $$\label{2b} \begin{split} \psi(T)& {\leqslant}2\int_0^{1/2} \exp{\Bigl(e^{uT}+auT+e^{(1-u)T}+a(1-u)T+Tg(u)\Bigr)}{\,\mathrm{d}}u \\& {\leqslant}2\int_0^{1/2} \exp{\Bigl(e^{uT}+e^{T-uT}+aT+T\Bigr)}{\,\mathrm{d}}u. \end{split}$$ We consider two cases. \(i) If $uT{\leqslant}1$, then $e^{-uT}{\leqslant}1-\frac12uT$, and thus $$e^{uT}+e^{T-uT}+aT+T {\leqslant}e+e^T(1-\tfrac12uT)+(a+1)T.$$ Hence, the contribution to for $u{\leqslant}1/T$ is no more than $$\begin{split} \label{3a} 2\int_0^{1/T} \exp{\Bigl(e^{T}+(a+1)T&+e-\tfrac12 Te^T u\Bigr)}{\,\mathrm{d}}u \\ &<2 \exp{\Bigl(e^{T}+(a+1)T+e\Bigr)}\frac{1}{\frac12 Te^T} \\& =\frac{4e^e}{T} \exp{\bigl(e^T+aT\bigr)} {\leqslant}0.7 \psi(T), \end{split}$$ by and $T{\geqslant}t_2=100$, since $4e^e\doteq 60.62$. \(ii) For $uT>1$ and $u<\frac12$, recalling $T{\geqslant}t_2=100$, $$\begin{split} e^{uT}+e^{T-uT}+aT+T & {\leqslant}2e^{T-uT}+aT+T {\leqslant}2e{^{-1}}e^{T}+aT+T \\& {\leqslant}0.8 e^{T}+T +aT {\leqslant}0.9 e^T+aT \\& = e^T+aT-0.1 e^{T} {\leqslant}e^T+aT-100. \end{split}$$ Hence, the contribution to for $uT>1$ is less than, recalling , $$\label{4a} \exp{\bigl(e^T+aT-100\bigr)} =e^{-100}\psi(T) <0.1 \psi(T).$$ Using and in , we find $$\psi(T)<0.7\psi(T)+0.1\psi(T),$$ a contradiction. Hence $T$ cannot exist and holds for all $t{\geqslant}0$. 
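The elementary estimates invoked in cases (i) and (ii), namely $e^{-s}\leqslant 1-\frac12 s$ for $0\leqslant s\leqslant 1$, the constant $4e^e\doteq 60.62$ behind the factor $0.7$, and $e^{-100}<0.1$, are easy to spot-check numerically; a small sketch, again with Python as a calculator only:

```python
import math

# case (i): e^{-s} <= 1 - s/2 on [0, 1]
grid = [i / 1000 for i in range(0, 1001)]
assert all(math.exp(-s) <= 1 - s / 2 + 1e-12 for s in grid)

# constant in (3a): 4e^e ~ 60.62, so 4e^e / T <= 0.7 once T >= 100
c = 4 * math.exp(math.e)
assert 60.6 < c < 60.7 and c / 100 <= 0.7

# case (ii): 2e^{-1} <= 0.8, T <= 0.1 e^T at T = 100, and e^{-100} < 0.1
assert 2 / math.e <= 0.8
assert 100 <= 0.1 * math.exp(100)
assert math.exp(-100) < 0.1
```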
For $x{\geqslant}0$ and any $t{\geqslant}0$, by [Lemma \[L+\]]{}, $${\operatorname{\mathbb P{}}}(Z{\geqslant}x) {\leqslant}e^{-tx}{\operatorname{\mathbb E{}}}e^{tZ} =e^{-tx}\psi(t) {\leqslant}\exp{\bigl(-tx+e^t+at\bigr)}.$$ We take $t=\ln x$ (assuming $x{\geqslant}1$) and obtain $$\label{ql} {\operatorname{\mathbb P{}}}(Z{\geqslant}x) {\leqslant}\exp{\bigl(-x\ln x+x+O(\ln x)\bigr)}, \qquad x{\geqslant}1.$$ (The optimal choice of $t$ is actually $\ln(x-a)$, but this leads to the same result up to $o(1)$ in the exponent, which is absorbed by the error term $O(\ln x)$.) I thank David Belius and Jim Fill for helpful comments. [99]{} Michael Drmota, *Random Trees*. Springer, Vienna, 2009. James Allen Fill and Svante Janson, Smoothness and decay properties of the limiting Quicksort density function. *Mathematics and Computer Science (Proceedings, Colloquium on Mathematics and Computer Science, Versailles 2000)*, eds. D. Gardy and A. Mokkadem, Birkhäuser, Basel, 2000. James Allen Fill and Svante Janson, A characterization of the set of fixed points of the Quicksort transformation. *Electronic Comm. Probab.* 5 (2000), no. 9, 77–84. James Allen Fill and Svante Janson, Approximating the limiting Quicksort distribution. *Random Structures Algorithms* [19]{} (2001), no. 3-4, 376–406. James Allen Fill and Svante Janson, Quicksort asymptotics. *J. Algorithms* [44]{} (2002), no. 1, 4–28. Charles Knessl and Wojciech Szpankowski, Quicksort algorithm again revisited. *Discrete Math. Theor. Comput. Sci.* 3 (1999), 43–64. Donald E. Knuth, *The Art of Computer Programming. Vol. 3: Sorting and Searching*. 2nd ed., Addison-Wesley, Reading, Mass., 1998. C. J. H. McDiarmid and R. B. Hayward, Large deviations for Quicksort. *J. Algorithms* [21]{} (1996), no. 3, 476–507. Mireille Régnier, A limiting distribution for quicksort. *RAIRO Inform. Théor. Appl.* [23]{} (1989), no. 3, 335–343. Uwe Rösler, A limit theorem for “Quicksort”. *RAIRO Inform. Théor. 
Appl.* [25]{} (1991), no. 1, 85–100. Kok Hooi Tan and Petros Hadjicostas, Some properties of a limiting distribution in Quicksort. *Statist. Probab. Lett.* [25]{} (1995), 87–94.
--- abstract: 'We have determined the proper motion (PM) of the Large Magellanic Cloud (LMC) relative to four background quasi-stellar objects, combining data from two previous studies made by our group, and new observations carried out in four epochs not included in the original investigations. The new observations provided a significant increase in the time base and in the number of frames, relative to what was available in our previous studies. We have derived a total LMC PM of $\mu$ = ($+$2.0$\pm$0.1) mas yr$^{-1}$, with a position angle of $\theta$ = (62.4$\pm$3.1)$^\circ$. Our new values agree well with most results obtained by other authors, and we believe we have clarified the large discrepancy between previous results from our group. Using published values of the radial velocity for the center of the LMC, in combination with the transverse velocity vector derived from our measured PM, we have calculated the absolute space velocity of the LMC. This value, along with some assumptions regarding the mass distribution of the Galaxy, has in turn been used to calculate the mass of the Milky Way. Our measured PM also indicates that the LMC is not a member of a proposed stream of galaxies with similar orbits around our galaxy.' author: - 'Mario H. Pedreros' - 'Edgardo Costa and René A. Méndez' title: 'The Proper Motion of the Large Magellanic Cloud: A Reanalysis' --- INTRODUCTION ============ The present study is a follow-up of the works by Anguita, Loyola & Pedreros (2000, hereafter ALP) and Pedreros, Anguita & Maza (2002, hereafter PAM) in which the PM of the LMC was determined using the “quasar method”. This method, fully described in ALP and PAM, consists in using quasi-stellar objects (QSOs) in the background field of the LMC as fiducial reference points to determine its PM. In this method, the position of the background QSOs is measured at different epochs with respect to bona-fide field stars of the LMC which define a local reference system (hereafter LRS). 
Because a QSO can be considered a fiducial reference point, any motion detected will be a reflection of the motion of the LRS of LMC stars. As shown in Table 1, there is a rather large discrepancy, particularly in Decl., between the PM of the LMC derived by ALP and that derived by PAM, with ALP$-$PAM differences of $-$0.3 mas yr$^{-1}$ (1.5 $\sigma$) in R.A., and 2.5 mas yr$^{-1}$ (12.5 $\sigma$) in Decl. This difference prompted us to add new epochs to our database (using the same equipment and set-up used by ALP and PAM) and to make a full reanalysis of the entire data set. Here we report the results obtained combining data from previous studies by our group, with new observations carried out in three additional epochs (not included in the original investigation), for the LMC quasar fields Q0459-6427, Q0557-6713, Q0558-6707, and Q0615-6615 (in the same nomenclature used by ALP and PAM). The original study of field Q0459-6427 was reported in PAM, and those of Q0557-6713, Q0558-6707 and Q0615-6615 in ALP. As can be seen in Table 2, which summarizes the total observational material used in the present paper, our new data provide a significant increase in time base and in the number of frames, relative to what was available in ALP and PAM. The increase in time base for the fields Q0459-6427, Q0557-6713, Q0558-6707 and Q0615-6615 was 19%, 65%, 126% and 65%, respectively. The corresponding increase in data points was 7%, 18%, 59% and 56%, respectively. OBSERVATIONS AND REDUCTIONS =========================== The new observations were carried out with a Tektronix 1024x1024 CCD detector with 24 $\mu$m pixels, attached to the Cassegrain focus of the CTIO 1.5 m telescope in its f/13.5 configuration (scale: 0.24 $\arcsec$/pixel). Only astrometric observations were secured. Because for each QSO field we adopted the same LRS used by ALP or PAM, there was no need for additional photometric observations. 
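The percentage increases quoted above follow directly from the epoch ranges and frame counts of Table 2; a quick arithmetic check (Python; the numbers are transcribed from the table, with 2001.96 as the last new epoch for every field):

```python
# (old_start, old_end, new_end, old_frames, new_frames) per field, from Table 2
fields = {
    "Q0459-6427": (1989.91, 2000.01, 2001.96, 44, 3),
    "Q0557-6713": (1989.02, 1996.86, 2001.96, 61, 11),
    "Q0558-6707": (1992.81, 1996.86, 2001.96, 32, 19),
    "Q0615-6615": (1989.90, 1997.19, 2001.96, 32, 18),
}

def increases(t0, t1, t2, n_old, n_new):
    # percentage growth of the time base and of the number of data points
    time_gain = 100 * (t2 - t1) / (t1 - t0)
    frame_gain = 100 * n_new / n_old
    return round(time_gain), round(frame_gain)

gains = {f: increases(*v) for f, v in fields.items()}
# gains reproduces the (19%, 7%), (65%, 18%), (126%, 59%), (65%, 56%) figures
```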
Finding charts for the reference stars and the background QSO in each field can be found in ALP or PAM. As was done in our previous studies, the astrometric observations were made using a Kron-Cousins [*R*]{}-band filter, in order to minimize differential color refraction effects. The method used for the determination of the LMC’s PM is the same as that explained in ALP and PAM. Only data not included in those two previous studies went through the full reduction procedure. For data already included in those studies, we used the available raw coordinates for the centroids of the reference stars and background QSOs. Both the existing and the newly determined raw coordinates were treated by means of the same custom programs used in PAM. In brief, the (x,y) coordinates of the QSO and the LMC field reference stars in each image were determined using the DAOPHOT package (Stetson 1987), and then corrected for differential color refraction and transformed to barycentric coordinates. Then, by averaging the barycentric coordinates of the best set of consecutive images taken of each QSO field throughout our program, a standard reference frame (SRF) was defined for every field. All images of each field, taken at different epochs, were then referred to its corresponding SRF. This was done through multiple regression analysis by fitting both sets of coordinates to quadratic equations of the form: $X = a_0 + a_1x + a_2y + a_3x^2;~~~~Y = b_0 + b_1x + b_2y + b_3x^2$; where ($X,Y$) are the coordinates on the SRF system and ($x,y$) are the observed barycentric coordinates. It was found that the above transformation equations yielded the best results for the registration into the SRF, showing no remaining systematic trends in the data. 
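The registration step just described amounts to an ordinary least-squares fit of the quadratic model above. A minimal sketch with synthetic data (NumPy; the coefficients, noise level, and frame size below are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "observed" barycentric coordinates (x, y) for 40 stars on a
# 1024x1024 frame, and a known mapping standing in for one epoch's plate
# solution onto the SRF
x = rng.uniform(0, 1024, 40)
y = rng.uniform(0, 1024, 40)
a_true = [0.5, 1.0001, 2e-5, 1e-9]            # a0 + a1*x + a2*y + a3*x^2
X_obs = a_true[0] + a_true[1] * x + a_true[2] * y + a_true[3] * x**2
X_obs += rng.normal(0, 1e-3, x.size)          # centroiding noise (pixels)

# design matrix for X = a0 + a1*x + a2*y + a3*x^2 (same form as in the text)
A = np.column_stack([np.ones_like(x), x, y, x**2])
coef, *_ = np.linalg.lstsq(A, X_obs, rcond=None)
residuals = X_obs - A @ coef                  # should show no systematic trend
```

The $Y$ equation is fitted the same way with its own coefficients $b_i$; in practice one would also inspect `residuals` against $x$, $y$, magnitude, and color for leftover trends.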
Star IDs are the same as those in PAM and ALP, for the corresponding fields. The PM uncertainties correspond to the error in the determination of the slope of the best-fit line. Inspection of these tables shows that the PM uncertainty of most of the reference stars is comparable to, or larger than, their derived PM value, implying that these PM do not represent internal motions in the LMC. In Figure 1 we present the PM maps for the reference stars listed in Tables 3-6. The dispersion around the mean turned out to be $\pm$0.34, $\pm$0.79, $\pm$0.54, and $\pm$0.41 mas yr$^{-1}$ in R.A., and $\pm$0.52, $\pm$0.71, $\pm$0.58, $\pm$0.62 mas yr$^{-1}$ in Decl., for Q0459-6427, Q0557-6713, Q0558-6707 and Q0615-6615, respectively. Based on the above argument, the scatter seen in the plots probably stems entirely from the random errors in the measurements, and does not represent the actual velocity dispersion in the LMC. In Figure 2 we present position $vs.$ epoch diagrams for the QSO fields in R.A. ($\Delta\alpha$cos$\delta$) and Decl. ($\Delta\delta$), where $\Delta\alpha$cos$\delta$ and $\Delta\delta$ represent the positions of the QSOs on different CCD frames, relative to the barycenter of the SRF. These diagrams were constructed using individual position data for the QSO in each CCD image as a function of epoch. In Table 7 we give, for each epoch, the mean barycentric positions of the QSOs along with their mean errors, the number of points used to calculate the mean for each coordinate, and the CCD detectors used. Symbol sizes in Figure 2 are proportional to the number of times the measurements yielded the same coordinate value for a particular epoch. The best-fit straight lines resulting from simple linear regression analysis on the data points are also shown. The negative values of the line slopes correspond to the measured PM of the barycenter of the LRS, in each QSO field, relative to the SRF. Table 8 summarizes our results for the measured PM of the LMC. 
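Each tabulated PM is, in essence, the slope of such a least-squares line, with the quoted uncertainty taken from the fit. A schematic version of this step (synthetic positions with an illustrative drift of $-$2.0 mas yr$^{-1}$ and 1 mas scatter; the epochs mimic the observing campaign but none of these numbers are data from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# epochs (years) and relative QSO positions (mas); the drift stands in for
# the reflex of the LRS motion, plus per-frame measurement scatter
t = np.array([1989.9, 1991.8, 1993.0, 1994.9, 1996.9, 1998.9, 2000.0, 2001.96])
pos = -2.0 * (t - t[0]) + rng.normal(0, 1.0, t.size)

# least-squares straight line; cov gives the variance of the fitted slope
coef, cov = np.polyfit(t, pos, 1, cov=True)
slope, slope_err = coef[0], float(np.sqrt(cov[0, 0]))

# the measured PM of the LRS barycenter is minus the fitted slope (mas/yr)
pm = -slope
```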
Column (1) gives the quasar identification, columns (2) and (3) the R.A. and Decl. components of the LMC’s PM (together with their standard deviations) respectively, and, finally, columns (4), (5) and (6) the number of frames, the number of epochs, and the observation period, respectively. It should be noted that the rather small quoted errors for the PM come out directly from what the least-square fit yields as the uncertainty in the determination of the slope of the best fit line. COMPARISON TO OTHER PROPER MOTION WORK ====================================== Table 9 lists the results of all available measurements of the LMC’s PM having uncertainties smaller than 1 mas yr$^{-1}$ in both components, as well as the reference system used in each case. With the exception of those cases noted as “Field” in the first column, all the PM listed in Table 9 are relative to the LMC’s center. To facilitate comparisons, we present our current results in both ways. As explained in the next section, our PM values relative to the LMC’s center were obtained correcting the field PM for the rotation of the plane of the LMC. Our results are in reasonable agreement with most of the available data. They agree particularly well with those of Kroupa et al. (1994), who used the Positions and Proper Motions Star Catalog (PPM, Röser et al., 1993) as reference system, and also with the HST unpublished result of Kallivayalil et al. (2005), who used QSOs as reference system. On the other hand, there still is a significant discrepancy with ALP’s result in Decl. We will further discuss this issue in §6. In Table 9 we have not included a recent determination of the LMC’s PM by Momany & Zaggia (2005) using the USNO CCD Astrograph all-sky Catalog (UCAC2, Zacharias et al. 2004), because, as confirmed by the errors declared by the authors themselves ($\sim$3 mas in both coordinates), the internal accuracy of their methodology is not comparable with ours. 
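The comparisons in Table 9 involve converting between equatorial PM components and a total PM with position angle, via $\mu = [(\mu_{\alpha}\cos\delta)^2 + \mu_{\delta}^2]^{1/2}$ and $\theta = \arctan(\mu_{\alpha}\cos\delta \, / \, \mu_{\delta})$, measured east of north. A sketch of the conversion (the components used here are back-substituted from the quoted totals purely for illustration; the actual measured components are those of Table 9):

```python
import math

# illustrative components (mas/yr): mu_alpha*cos(delta), mu_delta,
# chosen to be consistent with mu = 2.0 mas/yr, theta = 62.4 deg
mu_a, mu_d = 1.77, 0.93

mu = math.hypot(mu_a, mu_d)                    # total PM, mas/yr
theta = math.degrees(math.atan2(mu_a, mu_d))   # position angle, east of north
```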
Numerous tests carried out by our group favor the use of fiducial reference points in combination with an LRS defined by relatively few, well studied (bona-fide members, free of contamination from neighboring stars, good signal-to-noise, etc.) LMC stars, to determine a PM of this nature. Interestingly, their result \[$\mu_{\alpha}$cos$\delta$,$\mu_{\delta}$\] $\sim$ \[+0.84,+4.32\] mas yr$^{-1}$ is in reasonable agreement with that of ALP. Combining the components given in the last entry of Table 9, we derive a total LMC PM of $\mu$ = ($+$2.0$\pm$0.1) mas yr$^{-1}$, with a position angle of $\theta$ = (62.4$\pm$3.1)$^\circ$, measured eastward from the meridian joining the center of the LMC to the north celestial pole. This result is compatible (particularly the PM’s absolute value) with theoretical models (Gardiner et al. 1994), which predict a PM for the LMC in the range 1.5$-$2.0 mas yr$^{-1}$, with a position angle of . SPATIAL VELOCITY OF THE LMC AND MASS OF THE GALAXY ================================================== Using the PM of the LMC determined in §3, and the radial velocity of the center of the LMC (adopted from the literature), we can calculate the radial and transverse components of the velocity for the LMC, as seen from the center of the Galaxy, along with other parameters described below. To do this we basically followed the procedure outlined by Jones et al. (1994). In the calculations we used as basic LMC parameters those given in Table 8 of ALP, and assumed a rotational velocity v$_{\Phi}$ = 50 km s$^{-1}$ and a radial velocity V$_r$ = 250 km s$^{-1}$ for the LMC.\ In order to determine, from our measured PM values, the space velocity components of the LMC, and its PM with respect to the Galactic Rest Frame (GRF), a series of steps was required. These include: 1. A correction to our measured PM values to account for the rotation of the plane of the LMC; 2. 
A transformation of the corrected PM into transverse velocity components with respect to the center of the LMC, the Sun, the LSR and the center of the Galaxy; both in the equatorial and galactic coordinate systems. These transverse velocities, in combination with the radial velocity of the center of the LMC (adopted from the literature), allowed us to derive the components of the space velocity of the LMC corrected for the Sun’s peculiar motion relative to the LSR, and also corrected for the velocity of the LSR itself, relative to the center of the Galaxy. The above calculations were made using an $ad-hoc$ computer program, developed by one of the authors (MHP), which generates results consistent with those from independent software (Piatek, 2005; private communication).\ The results of the above procedure applied to our four quasar fields are presented in Table 10. In rows 1-2 we list the R.A. and Decl. corrections to our measured PM to account for the rotation of the plane of the LMC, and in rows 3-4, the corresponding corrected PM values, in equatorial coordinates, as viewed by an observer located at the center of the LMC. In rows 5-8 we give calculated PM values relative to the GRF, both in equatorial and galactic coordinates. These values correspond to the LMC’s PM as seen by an observer located at the Sun, with the contributions to the PM, from the peculiar solar motion and from the LSR’s motion, removed. In rows 9-11 we give the $\Pi$, $\Theta$ and $Z$ components of the space velocity in a rectangular Cartesian coordinate system centered on the LMC (as defined by Schweitzer et al., 1995, for the Sculptor dSph). The $\Pi$ component is parallel to the projection onto the Galactic plane of the radius vector from the center of the Galaxy to the center of the LMC, and is positive when it points radially away from the Galactic center. 
The $\Theta$ component is perpendicular to the $\Pi$ component, parallel to the Galactic plane, and points in the direction of rotation of the Galactic disk. The $Z$ component points in the direction of the Galactic north pole. These three components are free from the Sun’s peculiar motion and LSR motion. In rows 12-13 we give the LMC’s radial and transverse space velocities, as seen by a hypothetical observer located at the center of the Galaxy, and at rest with respect to the Galactic center.\ All of the above calculations were carried out assuming a distance of 50.1 kpc of the LMC from the Sun, a distance of 8.5 kpc of the Sun from the Galactic center, a 220 km s$^{-1}$ circular velocity of the LSR and a peculiar velocity of the Sun relative to the LSR of (u$_{\sun}$,v$_{\sun}$,w$_{\sun}$) = ($-$10,5.25,7.17) km s$^{-1}$ (Dehnen & Binney 1998). These components are positive if u$_{\sun}$ points radially away from the Galactic center, v$_{\sun}$ points in the direction of Galactic rotation and w$_{\sun}$ is directed towards the Galactic north pole.\ Although the matter was not addressed here, the values presented in Table 10 can be used to determine the orbit of the LMC and therefore study possible past and future interactions of the LMC with other Local Group galaxies.\ If we assume that the LMC is gravitationally bound to, and in an elliptical orbit around, the Galaxy, and that the mass of the Galaxy is contained within 50 kpc of the Galactic center, we can make an estimate of the lower limit of its mass through the expression: $${\rm M_{G}} = ({\rm r_{LMC}} / 2 {\rm G})[{\rm V^2_{gc,~r}} + {\rm V^2_{gc,~t}}~(1 - {\rm r}_{\rm LMC}^2 / {\rm r^2_a})] / (1-{\rm r_{LMC}} / {\rm r_a})$$ where r$_{\rm a}$ is the LMC’s apogalacticon distance and r$_{\rm LMC}$ its present distance. For r$_{\rm a}$ = 300 kpc (Lin et al. 
1995) we obtain $\rm M_{G}$ values of (8.2 $\pm$ 1.3), (9.9 $\pm$ 1.6), (3.0 $\pm$ 0.8) and (12 $\pm$ 2) $\times 10^{11} \cal M_{\sun}$, for the fields Q0459-6427, Q0557-6713, Q0558-6707 and Q0615-6615, respectively. The above values result in a weighted average of $\langle M_{G}\rangle = (5.9 \pm 0.6) \times 10^{11} \cal M_{\sun}$ for the estimated mass of our Galaxy enclosed within 50 kpc.\ To evaluate the effect of the rotational velocity of the LMC on the determination of the mass of our galaxy, we also carried out calculations using the extreme values v$_{\Phi}$ = 0 km s$^{-1}$ (zero rotation) and v$_{\Phi}$ = 90 km s$^{-1}$. The weighted mass averages for 0 and 90 km s$^{-1}$ turned out to be $(5.6 \pm 0.6)\times 10^{11}$ and $(6.3\pm 0.6)\times 10^{11} \cal M_{\sun}$, respectively. Our results are summarized in Table 11.\ It should be noted that, although slightly larger, all our values for $\rm M_{G}$ are compatible with the recent theoretical $5.5 \times 10^{11} \cal M_{\sun}$ upper mass limit of the Galaxy given by Sakamoto et al. (2003). They are also compatible with the assumption that the LMC is bound to the Galaxy. DISCUSSION ========== The ALP-PAM Discrepancy ----------------------- Given the implications of the result obtained by ALP for the PM of the LMC, in relation to our understanding of the interactions between the Galaxy and the Magellanic Clouds (see, e.g., Momany & Zaggia, 2005), and the reality of streams of galaxies with similar orbits around the Galaxy (see, e.g., Piatek et al., 2005), the main objective of the present work was to clarify the discrepancy between the previous determinations of the PM of the LMC by our group: the “ALP-PAM Discrepancy”. In this section, we further elaborate on some of the thoughts originally proposed in PAM, in order to explain the discrepancy of ALP, originally with PAM, and now also with the new result presented in this paper. 
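The weighted average quoted in §5 can be reproduced from the four per-field estimates by inverse-variance weighting; a quick check (Python; masses in units of $10^{11} \cal M_{\sun}$, transcribed from the text):

```python
import math

# per-field Galaxy-mass estimates and uncertainties (1e11 M_sun), in the
# order Q0459-6427, Q0557-6713, Q0558-6707, Q0615-6615
m = [8.2, 9.9, 3.0, 12.0]
s = [1.3, 1.6, 0.8, 2.0]

# inverse-variance weights; weighted mean and its formal error
w = [1 / si**2 for si in s]
mean = sum(wi * mi for wi, mi in zip(w, m)) / sum(w)
err = 1 / math.sqrt(sum(w))
# recovers <M_G> = (5.9 +/- 0.6) x 1e11 M_sun
```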
First, the fact that the observations used here were made with essentially the same equipment and instrumental set-up as those by ALP precludes any arguments relating the observed discrepancy to the existence of systematic errors in the observational data. Such errors would affect our data in the same way as those of ALP. Second (as explained in §2), in the reduction process of the ALP and PAM data incorporated in the present work we adopted the same QSO and reference star centroid coordinates (x,y) used in those works. Furthermore, the new data included in the present calculations was processed using the same procedure used in ALP to obtain the (x,y) coordinates. Therefore, the centroid coordinates should not be a source of a systematic error either. The subsequent procedures to obtain the PM were also basically the same, the sole exception being the inclusion of a quadratic term in the transformation equations used for the registration (also included in PAM’s equations, but not in ALP’s). Tests carried out using ALP’s data alone showed, however, that the effect of including quadratic terms is marginal (as was suspected), and does not account for the observed discrepancy. Considering that our current result (which includes re-processed data from ALP) agrees quite well with measurements by other groups, we conclude that ALP’s results might be affected by an unidentified systematic error in Decl. Since in the present work we used ALP’s unmodified (x,y) coordinates, we believe that this error could have originated in the processing of the Decl. PM instead of the coordinates themselves. It should be pointed out that the UCAC2-based result from Momany & Zaggia (2005), which is consistent with that from ALP, is currently also considered to be affected by an as yet unidentified systematic error (Momany & Zaggia, 2005; Kallivayalil et al., 2005). We would finally like to note that our new result for field Q0459-6427 is consistent with PAM. 
Membership of the LMC to a Stream --------------------------------- Lynden-Bell & Lynden-Bell (1995) have proposed that the LMC, together with the SMC, Draco and Ursa Minor, and possibly Carina and Sculptor, define a stream of galaxies with similar orbits around our galaxy. Their models predict a PM for each member of the stream, which can be compared to their measured PM to evaluate the reality of the stream. For the LMC they predict PM components of \[$\mu_{\alpha}$cos$\delta$,$\mu_{\delta}$\] = \[+1.5,0\] mas yr$^{-1}$, giving a total PM of $\mu$ = +1.5 mas yr$^{-1}$, with a position angle of $\theta$ = 90$^\circ$. A comparison of this prediction with our result \[$\mu$ = ($+$2.0$\pm$0.1) mas yr$^{-1}$, $\theta$ = (62.4$\pm$3.1)$^\circ$\], shows that our measured values of $\mu$ and $\theta$ are, respectively, 5.1$\sigma$ and 8.9$\sigma$ away from the predicted values. This result indicates that the LMC does not seem to be a member of the above stream (it is worth mentioning that Piatek et al. (2005), using HST data, have concluded that Ursa Minor is not a member of this stream).\ MHP is grateful for the support of the Universidad de Tarapacá research fund (project \# 4722-02). EC and RAM acknowledge support by the Fondo Nacional de Investigación Científica y Tecnológica (proyecto No. 1050718, Fondecyt) and by the Chilean Centro de Astrofísica FONDAP (No. 15010003). It is also a pleasure to thank T. Martínez for helping with data processing. We would like to thank the referee, Dr. S. Piatek, for his constructive comments. Anguita, C., Loyola, P., & Pedreros, M. H. 2000, , 120, 845 Dehnen, W., & Binney, J. J.  1998, , 298, 387 Drake, A. J., Cook, K. H., Alcock, C., Axelrod, T. S., Geha, M., & MACHO Collaboration 2001, Bulletin of the American Astronomical Society, 33, 1379 Gardiner, L. T., Sawa, T., & Fujimoto, M.  1994, , 266, 567 Jones, B. F., Klemola, A. R., & Lin, D. N. C. 1994, , 107, 1333 Kallivayalil, N., van der Marel, R. 
P., Alcock, C., Axelrod, T., Cook, K. H., Drake, A. J., & Geha, M. 2005, ArXiv Astrophysics e-prints, arXiv:astro-ph/0508457 Kroupa, P., & Bastian, U. 1997, New Astronomy, 2, 77 Kroupa, P., Röser, S., & Bastian, U. 1994, , 266, 412 Lin, D. N. C., Klemola, A. R., & Jones, B. F. 1995, , 439, 652 Lynden-Bell, D., & Lynden-Bell, R. M. 1995, , 275, 429 Momany, Y., & Zaggia, S. 2005, , 437, 339 Pedreros, M. H., Anguita, C., & Maza, J. 2002, , 123, 1971 Piatek, S., Pryor, C., Bristow, P., Olszewski, E. W., Harris, H. C., Mateo, M., Minniti, D., & Tinney, C. G. 2005, , 130, 95 Röser, S., & Bastian, U. 1993, Heidelberg: Spektrum Akademischer Verlag, c1993 Sakamoto, T., Chiba, M., & Beers, T. C. 2003, , 397, 889 Stetson, P. B. 1987, , 99, 191 Schweitzer, A. E., Cudworth, K. M., Majewski, S. R., & Suntzeff, N. B. 1995, , 110, 2747 Zacharias, N., Urban, S. E., Zacharias, M. I., Wycoff, G. L., Hall, D. M., Monet, D. G., & Rafferty, T. J. 2004, , 127, 3043 [**Figure Captions**]{} [**Figure 1.**]{} Residual proper motion maps for the reference stars listed in Tables 3-6. The dispersion around the mean is $\pm$ 0.34, $\pm$ 0.79, $\pm$ 0.54, and $\pm$ 0.41 mas yr$^{-1}$ in R.A., and $\pm$ 0.52, $\pm$ 0.71, $\pm$ 0.58, $\pm$ 0.62 mas yr$^{-1}$ in Decl., for Q0459-6427, Q0557-6713, Q0558-6707 and Q0615-6615, respectively.\ [**Figure 2a.**]{} Relative positions in Right Ascension ($\Delta\alpha$cos$\delta$) $vs.$ epoch of observation for the studied fields. The values of $\Delta\alpha$cos$\delta$ represent the individual positions of the QSO on different CCD frames relative to the barycenter of the SRF. Symbol sizes are proportional to the number of times the measurements yielded the same coordinate value for a particular epoch (extra small, small, medium, large, and extra large sizes indicate 1 through 5 measurements per epoch, respectively). 
The best-fit straight lines from linear regression analyses on the data are also shown.

[**Figure 2b.**]{} Relative positions in declination ($\Delta\delta$) $vs.$ epoch of observation for the studied fields. The values of $\Delta\delta$ represent the individual positions of the QSO on different CCD frames relative to the barycenter of the SRF. Symbol sizes and best-fit straight lines as described in Fig. 2a.

Source & $\mu_{\alpha}$cos$\delta$ (mas yr$^{-1}$) & $\mu_{\delta}$ (mas yr$^{-1}$) & Weighted Mean from
ALP (LMC center) & +1.7 ± 0.2 & +2.9 ± 0.2 & 3 fields
PAM (LMC center) & +2.0 ± 0.2 & +0.4 ± 0.2 & 1 field

Field & Source & Epochs (old) & \# Frames (old) & Epoch Range (old) & Epochs (new) & \# Frames (new) & Epoch Range (new)
Q0459-6427 & PAM & 8 & 44 & 1989.91$-$2000.01 & 1 & 3 & 2001.96
Q0557-6713 & ALP & 11 & 61 & 1989.02$-$1996.86 & 2 & 11 & 1998.88$-$2001.96
Q0558-6707 & ALP & 6 & 32 & 1992.81$-$1996.86 & 3 & 19 & 1998.88$-$2001.96
Q0615-6615 & ALP & 8 & 32 & 1989.90$-$1997.19 & 3 & 18 & 1998.88$-$2001.96

Star$^{(a)}$ ID & $\mu_{\alpha}$cos$\delta$ (mas yr$^{-1}$) & $\sigma$ (mas yr$^{-1}$) & $\mu_{\delta}$ (mas yr$^{-1}$) & $\sigma$ (mas yr$^{-1}$) & V (mag) & B$-$V (mag) & V$-$R (mag)
1 & 0.0 & 0.3 & +0.9 & 0.2 & 18.71 & 0.95 & 0.52
2 & -0.7 & 0.3 & -0.3 & 0.4 & 19.01 & 0.67 & 0.38
3 & +0.6 & 0.2 & -0.4 & 0.2 & 19.02 & 0.86 & 0.47
4 & 0.0 & 0.3 & 0.0 & 0.3 & 18.88 & 0.96 & 0.52
5 & 0.0 & 0.3 & +0.6 & 0.3 & 18.71 & 0.98 & 0.54
6 & -0.1 & 0.2 & -0.4 & 0.2 & 18.22 & 1.03 & 0.58
7 & +0.1 & 0.2 & +0.1 & 0.2 & 18.08 & 1.03 & 0.57
8 & 0.0 & 0.5 & -0.1 & 0.4 & 17.98 & 0.89 & 0.52
9 & +0.1 & 0.3 & -0.6 & 0.4 & 19.18 & 0.84 & 0.43
10 & +0.3 & 0.1 & 0.0 & 0.2 & 17.94 & 1.15 & 0.63
11 & 0.0 & 0.3 & +0.4 & 0.3 & 18.64 & 0.91 & 0.50
12 & -0.3 & 0.3 & -1.2 & 0.3 & 19.03 & 0.88 & 0.48
13 & +0.4 & 0.3 & +0.6 & 0.3 & 18.98 & 0.86 & 0.48
14 & -0.7 & 0.3 & +0.1 & 0.3 & 18.66 & 0.23 & 0.03
15 & +0.2 & 0.1 & -0.1 & 0.2 & 17.70 & 1.08 & 0.59
16 & +0.4 & 0.2 & +0.3 & 0.2 & 16.70 & 1.43 & 0.82
17 & -0.3 & 0.2 & +0.3 & 0.2 & 19.17 & 0.95 & 0.51

Star$^{(a)}$ ID & $\mu_{\alpha}$cos$\delta$ (mas yr$^{-1}$) & $\sigma$ (mas yr$^{-1}$) & $\mu_{\delta}$ (mas yr$^{-1}$) & $\sigma$ (mas yr$^{-1}$) & V (mag) & B$-$V (mag) & V$-$R (mag)
1 & -0.1 & 0.1 & -0.1 & 0.2 & 17.07 & -0.07 & -0.04
2 & +0.5 & 0.2 & 0.0 & 0.1 & 17.75 & 1.14 & 0.56
3 & -0.1 & 0.2 & +0.7 & 0.2 & 18.35 & 0.84 & 0.45
4 & +0.2 & 0.3 & +2.0 & 0.3 & 18.64 & 0.68 & 0.38
5 & -0.2 & 0.4 & -0.9 & 0.2 & 16.93 & 1.13 & 0.55
6 & +0.6 & 0.2 & -0.5 & 0.2 & 17.72 & 1.22 & 0.62
7 & -0.9 & 0.3 & -0.3 & 0.3 & 18.73 & 1.09 & 0.48
8 & -1.2 & 0.2 & -0.3 & 0.2 & 17.29 & 0.83 & 0.46
9 & +1.5 & 0.3 & +0.4 & 0.6 & 18.52 & 1.00 & 0.52
10 & -1.7 & 0.6 & -1.2 & 0.3 & 18.28 & 0.00 & -0.05
11 & +0.4 & 0.2 & -0.4 & 0.2 & 17.34 & 1.17 & 0.56
12 & -0.6 & 0.3 & +0.3 & 0.2 & 18.66 & 1.00 & 0.48
13 & -0.4 & 0.2 & -0.4 & 0.2 & 18.23 & 0.75 & 0.37
14 & +1.7 & 0.3 & +0.8 & 0.3 & 18.13 & 0.82 & 0.42
15 & +0.8 & 0.3 & -0.4 & 0.2 & 18.48 & 0.80 & 0.43
16 & +0.4 & 0.2 & -0.1 & 0.2 & 18.26 & 1.09 & 0.53
17 & +0.1 & 0.2 & -0.6 & 0.2 & 17.78 & 0.95 & 0.51
18 & +0.5 & 0.4 & -0.4 & 0.3 & 17.57 & -0.12 & -0.06
19 & +0.1 & 0.1 & -0.3 & 0.2 & 17.21 & 1.19 & 0.63
20 & -0.9 & 0.3 & +0.1 & 0.3 & 18.69 & 0.99 & 0.50
21 & -0.3 & 0.1 & +0.4 & 0.2 & 17.30 & 0.77 & 0.38
22 & 0.0 & 0.3 & +1.2 & 0.3 & 18.05 & -0.08 & -0.08
23 & 0.0 & 0.1 & +0.4 & 0.2 & 16.23 & -0.17 & -0.09

Star$^{(a)}$ ID & $\mu_{\alpha}$cos$\delta$ (mas yr$^{-1}$) & $\sigma$ (mas yr$^{-1}$) & $\mu_{\delta}$ (mas yr$^{-1}$) & $\sigma$ (mas yr$^{-1}$) & V (mag) & B$-$V (mag) & V$-$R (mag)
1 & +0.8 & 0.5 & -0.8 & 0.7 & 18.94 & 0.84 & 0.44
2 & -0.7 & 0.4 & -1.1 & 0.4 & 16.44 & 1.78 & 0.91
3 & +0.2 & 0.3 & -1.2 & 0.4 & 17.88 & 0.90 & 0.46
4 & +0.8 & 0.5 & -0.4 & 0.6 & 18.94 & 0.85 & 0.46
5 & -0.9 & 0.3 & -1.0 & 0.7 & 19.01 & 0.90 & 0.44
6 & -0.7 & 0.2 & +0.6 & 0.2 & 18.30 & 0.88 & 0.49
7 & +0.5 & 0.2 & +0.1 & 0.3 & 17.78 & 1.18 & 0.62
8 & +1.1 & 0.4 & +0.8 & 0.4 & 18.36 & .... & -0.11
9 & +0.2 & 0.2 & -0.4 & 0.2 & 17.39 & 1.34 & 0.70
10 & +0.5 & 0.2 & +0.1 & 0.3 & 18.43 & 0.86 & 0.46
11 & -0.8 & 0.2 & +0.8 & 0.2 & 17.79 & 1.13 & 0.59
12 & 0.0 & 0.2 & 0.0 & 0.3 & 18.59 & 0.88 & 0.45
13 & -0.2 & 0.3 & -0.4 & 0.4 & 18.34 & -0.02 & 0.00
14 & 0.0 & 0.4 & +0.7 & 0.4 & 18.20 & 0.01 & -0.01
15 & -0.6 & 0.2 & +0.6 & 0.2 & 17.44 & 1.26 & 0.66
16 & -0.7 & 0.4 & +0.7 & 0.5 & 19.00 & 0.91 & 0.49
17 & -0.7 & 0.3 & +1.1 & 0.3 & 18.48 & 0.69 & 0.40
18 & -0.1 & 0.5 & +0.6 & 0.6 & 18.98 & 0.90 & 0.48
19 & +0.2 & 0.3 & +0.7 & 0.4 & 18.32 & -0.13 & -0.02
20 & -0.4 & 0.4 & +0.8 & 0.5 & 19.00 & 0.87 & 0.49
21 & +0.4 & 0.3 & +0.4 & 0.5 & 18.84 & 0.91 & 0.48
22 & -0.3 & 0.4 & +0.3 & 0.4 & 18.83 & 0.91 & 0.48
23 & -0.4 & 0.4 & -0.2 & 0.2 & 16.29 & 0.02 & 0.17
24 & 0.0 & 0.2 & -0.2 & 0.2 & 17.56 & 1.27 & 0.67
25 & +0.2 & 0.2 & -0.1 & 0.3 & 17.69 & 1.15 & 0.60
26 & +0.2 & 0.2 & +0.1 & 0.4 & 18.72 & 1.20 & 0.57
27 & -0.1 & 0.2 & -0.3 & 0.3 & 18.66 & 1.00 & 0.54
28 & -0.6 & 0.2 & +0.3 & 0.2 & 17.31 & 1.25 & 0.64
29 & +0.6 & 0.4 & -0.2 & 0.4 & 18.92 & 0.89 & 0.47
30 & -0.4 & 0.3 & -0.1 & 0.3 & 18.18 & 1.25 & 0.59
31 & +0.8 & 0.3 & -0.3 & 0.3 & 18.55 & 1.01 & 0.54
32 & +0.2 & 0.3 & -0.4 & 0.3 & 18.07 & 1.12 & 0.61
33 & +0.6 & 0.2 & +0.2 & 0.2 & 17.12 & 1.46 & 0.76
34 & +0.9 & 0.8 & +0.3 & 0.8 & 18.68 & 0.84 & 0.48
35 & +0.2 & 0.2 & -0.6 & 0.2 & 17.42 & 0.90 & 0.47
36 & +0.1 & 0.3 & -0.7 & 0.4 & 18.75 & 0.83 & 0.45
37 & -0.4 & 0.3 & -0.8 & 0.2 & 18.65 & 1.27 & 0.56
38 & 0.0 & 0.2 & -0.5 & 0.2 & 17.89 & 1.18 & 0.60
39 & +0.1 & 0.4 & -0.1 & 0.4 & 19.12 & 0.92 & 0.50
40 & -0.7 & 0.4 & -0.1 & 0.4 & 19.05 & 0.88 & 0.50
41 & +0.3 & 0.2 & -0.2 & 0.4 & 18.35 & 0.01 & 0.02
42 & -0.2 & 0.2 & -0.9 & 0.5 & 18.58 & 0.89 & 0.50
43 & +0.1 & 0.2 & 0.0 & 0.2 & 17.45 & 1.32 & 0.68
44 & +0.3 & 0.5 & 0.0 & 0.6 & 19.01 & 0.85 & 0.49
45 & 0.0 & 0.3 & +1.3 & 0.3 & 18.46 & 0.66 & 0.43
46 & -0.8 & 0.3 & +0.5 & 0.4 & 19.05 & 0.86 & 0.52
47 & -0.4 & 0.4 & +0.7 & 0.6 & 19.04 & 1.01 & 0.54
48 & +0.5 & 0.2 & +0.6 & 0.3 & 16.81 & 0.08 & 0.06
49 & +1.2 & 0.4 & 0.0 & 0.3 & 19.04 & 0.83 & 0.49
50 & -1.0 & 0.3 & -0.5 & 0.3 & 17.76 & 1.07 & 0.57
51 & -0.2 & 0.3 & -0.8 & 0.3 & 18.16 & 0.83 & 0.48
52 & -0.3 & 0.4 & 0.0 & 0.5 & 18.93 & 0.89 & 0.46

Star$^{(a)}$ ID & $\mu_{\alpha}$cos$\delta$ (mas yr$^{-1}$) & $\sigma$ (mas yr$^{-1}$) & $\mu_{\delta}$ (mas yr$^{-1}$) & $\sigma$ (mas yr$^{-1}$) & V (mag) & B$-$V (mag) & V$-$R (mag)
1 & 0.0 & 0.2 & +0.1 & 0.4 & 18.95 & 0.87 & 0.53
2 & 0.0 & 0.2 & +0.7 & 0.3 & 18.29 & 0.83 & 0.47
3 & -0.1 & 0.2 & -0.4 & 0.3 & 17.46 & 0.75 & 0.43
4 & 0.0 & 0.3 & -1.0 & 0.4 & 19.14 & 0.61 & 0.41
5 & -0.2 & 0.2 & +0.3 & 0.2 & 18.23 & 0.76 & 0.45
6 & +0.4 & 0.3 & +0.9 & 0.3 & 19.00 & 0.98 & 0.56
7 & -0.6 & 0.2 & -0.4 & 0.2 & 19.07 & 0.65 & 0.42
8 & +0.7 & 0.2 & +0.1 & 0.2 & 18.37 & 0.89 & 0.53
9 & -0.2 & 0.4 & +0.7 & 0.5 & 18.98 & 0.84 & 0.49
10 & -0.2 & 0.3 & -0.9 & 0.4 & 18.85 & 0.85 & 0.48
11 & 0.0 & 0.3 & -0.3 & 0.4 & 18.97 & .... & 0.73
12 & 0.0 & 0.2 & -0.7 & 0.2 & 18.25 & 1.07 & 0.53
13 & -0.1 & 0.2 & 0.0 & 0.3 & 17.59 & 1.04 & 0.65
14 & +0.7 & 0.2 & +0.8 & 0.2 & 18.33 & 1.00 & 0.57
15 & +0.4 & 0.4 & -0.6 & 0.5 & 19.36 & 0.90 & 0.50
16 & -0.9 & 0.4 & +0.6 & 0.5 & 19.29 & 0.81 & 0.47

Epoch & (arcsec) & $\sigma$ (mas) & (arcsec) & $\sigma$ (mas) & N & CCD chip

Q0459-6427:
1989.907 & 8.443 & 1.4 & -7.614 & 1.1 & 4 & RCA No.5
1990.872 & 8.434 & 0.3 & -7.615 & 2.7 & 3 & Tek No. 4
1990.878 & 8.438 & 5.8 & -7.624 & 2.8 & 2 & RCA No.5
1993.800 & 8.423 & 2.0 & -7.615 & 2.6 & 3 & Tek1024 No.1
1993.953 & 8.432 & 1.4 & -7.611 & 0.7 & 9 & Tek1024 No.2
1994.916 & 8.429 & 2.0 & -7.615 & 1.1 & 3 & Tek1024 No.2
1996.860 & 8.421 & 1.9 & -7.618 & 2.0 & 5 & Tek 2048 No.4
1998.881 & 8.422 & 0.8 & -7.615 & 0.8 & 6 & Tek1024 No.2
2000.010 & 8.422 & 0.4 & -7.618 & 1.2 & 9 & Tek1024 No.2
2001.961 & 8.416 & 2.1 & -7.612 & 2.9 & 3 & Tek1024 No.2

Q0557-6713:
1989.024 & 0.045 & 1.6 & -2.768 & 1.6 & 5 & RCA No.5
1989.905 & 0.039 & 1.8 & -2.768 & 1.9 & 8 & RCA No.5
1990.872 & 0.037 & 1.6 & -2.772 & 1.0 & 4 & Tek No. 4
1990.878 & 0.040 & 2.7 & -2.766 & 0.5 & 3 & RCA No.5
1991.938 & 0.046 & 2.5 & -2.769 & 0.7 & 6 & Tek1024 No.1
1992.812 & 0.042 & 0.7 & -2.774 & 0.6 & 5 & Tek2048 No.1
1993.055 & 0.040 & 2.2 & -2.776 & 1.3 & 4 & Tek1024 No.1
1993.800 & 0.039 & 1.8 & -2.775 & 0.7 & 3 & Tek1024 No.1
1993.953 & 0.039 & 1.5 & -2.777 & 1.1 & 9 & Tek1024 No.2
1994.119 & 0.033 & 1.2 & -2.778 & 0.9 & 5 & Tek1024 No.2
1994.918 & 0.036 & 0.8 & -2.783 & 0.7 & 8 & Tek1024 No.2
1996.862 & 0.033 & 0.9 & -2.780 & 0.3 & 3 & Tek 2048 No.4
1998.883 & 0.031 & 0.3 & -2.786 & 0.7 & 6 & Tek1024 No.2
2001.961 & 0.030 & 0.3 & -2.785 & 0.9 & 3 & Tek1024 No.2

Q0558-6707:
1992.813 & -12.148 & 1.3 & -15.542 & 1.3 & 4 & Tek2048 No.1
1993.058 & -12.145 & 0.8 & -15.534 & 1.8 & 4 & Tek1024 No.1
1993.953 & -12.148 & 1.2 & -15.542 & 3.4 & 9 & Tek1024 No.2
1994.118 & -12.154 & 1.6 & -15.541 & 1.0 & 6 & Tek1024 No.2
1994.918 & -12.149 & 0.6 & -15.547 & 0.9 & 7 & Tek1024 No.2
1996.863 & -12.150 & 1.6 & -15.547 & 2.0 & 6 & Tek 2048 No.4
1998.886 & -12.152 & 0.7 & -15.540 & 0.5 & 3 & Tek1024 No.2
1999.942 & -12.159 & 1.3 & -15.552 & 1.3 & 6 & Tek1024 No.2
2001.958 & -12.158 & 1.1 & -15.541 & 1.1 & 6 & Tek1024 No.2

Q0615-6615:
1989.908 & 7.248 & 1.3 & -8.255 & 0.9 & 3 & RCA No.5
1993.058 & 7.242 & & -8.266 & & 1 & Tek1024 No.1
1993.953 & 7.244 & 0.8 & -8.270 & 2.1 & 7 & Tek1024 No.2
1994.120 & 7.238 & 1.3 & -8.269 & 3.9 & 5 & Tek1024 No.2
1994.920 & 7.236 & 1.2 & -8.269 & 1.4 & 5 & Tek1024 No.2
1995.178 & 7.234 & 0.2 & -8.275 & 3.3 & 3 & Tek1024 No.2
1996.864 & 7.234 & 2.9 & -8.272 & 1.5 & 3 & Tek 2048 No.4
1997.194 & 7.233 & 1.7 & -8.274 & 2.2 & 5 & Tek1024 No.2
1998.886 & 7.230 & 1.5 & -8.276 & 1.0 & 3 & Tek1024 No.2
1999.942 & 7.227 & 1.2 & -8.278 & 0.8 & 3 & Tek1024 No.2
2001.960 & 7.226 & 1.3 & -8.277 & 1.4 & 12 & Tek1024 No.2

Field ID & $\mu_{\alpha}$cos$\delta$ (mas yr$^{-1}$) & $\mu_{\delta}$ (mas yr$^{-1}$) & \# Frames & Epochs & Epoch Range
Q0459-6427 & 1.8 ± 0.2 & 0.1 ± 0.2 & 47 & 9 & 1989.91$-$2001.96
Q0557-6713 & 1.1 ± 0.2 & 1.9 ± 0.1 & 72 & 13 & 1989.02$-$2001.96
Q0558-6707 & 1.2 ± 0.2 & 0.6 ± 0.3 & 51 & 9 & 1992.81$-$2001.96
Q0615-6615 & 1.9 ± 0.2 & 1.4 ± 0.2 & 50 & 11 & 1989.90$-$2001.96

Source & $\mu_{\alpha}$cos$\delta$ (mas yr$^{-1}$) & $\mu_{\delta}$ (mas yr$^{-1}$) & Proper Motion System
Kroupa, Röser & Bastian 1994 (Field) & +1.3 ± 0.6 & +1.1 ± 0.7 & PPM
Jones et al. 1994 & +1.37 ± 0.28 & -0.18 ± 0.27 & Galaxies
Kroupa & Bastian 1997 (Field) & +1.94 ± 0.29 & -0.14 ± 0.36 & Hipparcos
ALP & +1.7 ± 0.2 & +2.9 ± 0.2 & Quasars
PAM & +2.0 ± 0.2 & +0.4 ± 0.2 & Quasars
Drake et al. 2001 & +1.4 ± 0.4 & +0.38 ± 0.25 & Quasars
Kallivayalil et al. 2005 & +2.03 ± 0.08 & +0.44 ± 0.05 & Quasars
This work (Field) & +1.5 ± 0.1 & +1.4 ± 0.1 & Quasars
This work & +1.8 ± 0.1 & +0.9 ± 0.1 & Quasars

Parameter & Q0459-6427 & Q0557-6713 & Q0558-6707 & Q0615-6615
$\Delta \mu_{\alpha}\cos{\delta}$, rotation correction (mas yr$^{-1}$) & +0.17 & +0.11 & +0.11 & +0.12
$\Delta \mu_{\delta}$, rotation correction (mas yr$^{-1}$) & +0.09 & -0.18 & -0.18 & -0.18
$\mu^{LMC}_{\alpha}\cos{\delta}$, LMC centered (mas yr$^{-1}$) & 1.9 ± 0.2 & 1.5 ± 0.2 & 1.4 ± 0.2 & 2.2 ± 0.2
$\mu^{LMC}_{\delta}$, LMC centered (mas yr$^{-1}$) & 0.5 ± 0.2 & 1.5 ± 0.1 & 0.2 ± 0.2 & 0.7 ± 0.2
$\mu^{GRF}_{\alpha}\cos{\delta}$ (mas yr$^{-1}$) & 1.4 ± 0.1 & 1.0 ± 0.1 & 0.9 ± 0.1 & 1.7 ± 0.1
$\mu^{GRF}_{\delta}$ (mas yr$^{-1}$) & 0.4 ± 0.2 & 1.3 ± 0.1 & 0.1 ± 0.3 & 0.6 ± 0.2
$\mu^{GRF}_{l}\cos{b}$ (mas yr$^{-1}$) & -0.6 ± 0.2 & -1.5 ± 0.1 & -0.3 ± 0.3 & -0.9 ± 0.2
$\mu^{GRF}_{b}$ (mas yr$^{-1}$) & 1.4 ± 0.1 & 0.7 ± 0.1 & 0.8 ± 0.1 & 1.5 ± 0.1
$\Pi$, velocity component (km s$^{-1}$) & 252 ± 25 & 215 ± 23 & 171 ± 28 & 292 ± 23
$\Theta$, velocity component (km s$^{-1}$) & 93 ± 41 & 319 ± 31 & 27 ± 63 & 160 ± 45
$Z$, velocity component (km s$^{-1}$) & 234 ± 25 & 109 ± 24 & 135 ± 26 & 274 ± 22
V$_{\rm {gc, r}}$, radial velocity (km s$^{-1}$) & 80 ± 23 & 118 ± 22 & 68 ± 24 & 92 ± 20
V$_{\rm {gc, t}}$, transverse velocity (km s$^{-1}$) & 347 ± 27 & 382 ± 30 & 209 ± 27 & 421 ± 27

 & Q0459-6427 & Q0557-6713 & Q0558-6707 & Q0615-6615

v$_{\Phi}$ = 50 km s$^{-1}$:
V$_{\rm {gc, r}}$, radial velocity (km s$^{-1}$) & 80 ± 23 & 118 ± 22 & 68 ± 24 & 92 ± 20
V$_{\rm {gc, t}}$, transverse velocity (km s$^{-1}$) & 347 ± 27 & 382 ± 30 & 209 ± 27 & 421 ± 27
M$_{\rm G}$, mass of the Galaxy in $10^{11}\times \cal M_{\sun}$ & (8.2 ± 1.3) & (9.9 ± 1.6) & (3.0 ± 0.8) & (12 ± 2)

v$_{\Phi}$ = 0 km s$^{-1}$:
V$_{\rm {gc, r}}$, radial velocity (km s$^{-1}$) & 75 ± 23 & 126 ± 22 & 75 ± 24 & 99 ± 20
V$_{\rm {gc, t}}$, transverse velocity (km s$^{-1}$) & 305 ± 26 & 408 ± 30 & 198 ± 33 & 420 ± 30
M$_{\rm G}$, mass of the Galaxy in $10^{11}\times \cal M_{\sun}$ & (6.3 ± 1.1) & (11 ± 2) & (2.7 ± 0.9) & (12 ± 2)

v$_{\Phi}$ = 90 km s$^{-1}$:
V$_{\rm {gc, r}}$, radial velocity (km s$^{-1}$) & 83 ± 23 & 112 ± 22 & 62 ± 24 & 86 ± 20
V$_{\rm {gc, t}}$, transverse velocity (km s$^{-1}$) & 381 ± 27 & 364 ± 29 & 225 ± 26 & 425 ± 25
M$_{\rm G}$, mass of the Galaxy in $10^{11}\times \cal M_{\sun}$ & (9.9 ± 1.4) & (9.0 ± 1.5) & (3.4 ± 0.8) & (12 ± 2)
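The "This work (Field)" entries in the comparison table are, up to the corrections applied in the paper, inverse-variance weighted means of the four per-field proper motions listed above. A minimal sketch of that combination (variable names are ours, not the paper's):

```python
import math

# Per-field QSO proper motions (mas/yr) from the results table: (value, sigma)
mu_alpha = [(1.8, 0.2), (1.1, 0.2), (1.2, 0.2), (1.9, 0.2)]  # mu_alpha * cos(delta)
mu_delta = [(0.1, 0.2), (1.9, 0.1), (0.6, 0.3), (1.4, 0.2)]

def weighted_mean(values):
    # inverse-variance weighting: w_i = 1 / sigma_i^2
    w = [1.0 / s**2 for _, s in values]
    mean = sum(wi * v for wi, (v, _) in zip(w, values)) / sum(w)
    sigma = 1.0 / math.sqrt(sum(w))
    return mean, sigma

ma, sa = weighted_mean(mu_alpha)  # -> 1.5 ± 0.1, as in the comparison table
md, sd = weighted_mean(mu_delta)
```

With equal uncertainties the $\alpha$ component reduces to a plain mean (1.5 ± 0.1, matching the table); the $\delta$ component comes out ≈ 1.45 ± 0.08, consistent with the quoted +1.4 ± 0.1.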
---
abstract: 'In this paper, we give a basis for the derivation module of the cone over the Shi arrangement of the type $D_\ell$ explicitly.'
title: 'The Shi arrangement of the type $D_\ell$'
---

Introduction.
=============

Let $V$ be an $\ell$-dimensional vector space. An [*affine arrangement of hyperplanes*]{} $\mathcal{A}$ is a finite collection of affine hyperplanes in $V$. If every hyperplane $H\in\mathcal{A}$ goes through the origin, then $\mathcal{A}$ is said to be [*central*]{}. When $\mathcal{A}$ is central, for each $H\in\mathcal{A}$, choose $\alpha_H\in V^*$ with $\ker(\alpha_H)=H$. Let $S$ be the algebra of polynomial functions on $V$ and let $\mathrm{Der}_{S}$ be the module of derivations $$\begin{aligned} &\mathrm{Der}_{S}:=\{\theta:S\rightarrow S\mid\theta(fg)=f\theta(g)+g\theta(f),~f,g\in S,\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~\theta~\mbox{is}~\mathbb{R}\mbox{-linear}\}.\end{aligned}$$ For a central arrangement $\mathcal{A}$, recall $$\begin{aligned} D(\mathcal{A}):=\{\theta\in\mathrm{Der}_{S}\mid\theta(\alpha_{H}) \in\alpha_{H}S \mbox{~for all} ~H\in\mathcal{A}\}.\end{aligned}$$ We say that $\mathcal{A}$ is a [*free arrangement*]{} if $D(\mathcal{A})$ is a free $S$-module. The freeness was defined in [@Ter1]. The Factorization Theorem [@Ter2] states that, for any free arrangement $\mathcal{A}$, the characteristic polynomial of $\mathcal{A}$ factors completely over the integers. Let $E=\mathbb{R}^\ell$ be an $\ell$-dimensional Euclidean space with a coordinate system $x_1,\ldots,x_\ell$, and let $\Phi$ be a crystallographic irreducible root system. Fix a positive root system $\Phi^{+}\subset\Phi$. For each positive root $\alpha\in\Phi^{+}$ and $k\in\mathbb{Z}$, we define an affine hyperplane $$\begin{aligned} H_{\alpha,k}:=\{v\in E\mid (\alpha, v)=k\}.\end{aligned}$$ In [@Shi1], J.-Y. Shi introduced the [*Shi arrangement*]{} $${\mathcal S}(A_{\ell}) := \{H_{\alpha,k}\mid\alpha\in\Phi^{+},\ 0\leq k \leq 1\}$$ when the root system is of the type $A_{\ell}$.
This definition was later extended to the [*generalized Shi arrangement*]{} (e.g., [@Edel]) $$\begin{aligned} {{\cal S}}(\Phi):= \{H_{\alpha,k}\mid\alpha\in\Phi^{+},\ 0\leq k \leq 1\}.\end{aligned}$$ Embed $E$ into $V=\mathbb{R}^{\ell+1}$ by adding a new coordinate $z$ such that $E$ is defined by the equation $z = 1$ in $V$. Then, as in [@OT], we have the cone $ \mathbf{c} {{\cal S}}(\Phi) $ of $ {{\cal S}}(\Phi) $: $$\begin{aligned} \mathbf{c} {{\cal S}}(\Phi) :=\{\mathbf{c}H_{\alpha,k} \mid\alpha\in\Phi^{+},\ 0\leq k \leq 1\}\cup\{\{z=0\}\}.\end{aligned}$$ In [@Yo], M. Yoshinaga proved that the cone $\mathbf{c} {{\cal S}}(\Phi) $ is a free arrangement with exponents $(1, h, \dots, h)$ ($h$ appears $\ell$ times), where $h$ is the Coxeter number of $\Phi$. (He actually verified the conjecture by P. Edelman and V. Reiner in [@Edel], which is far more general.) He proved the freeness without finding a basis. In [@Su1], for the first time, the authors gave an explicit construction of a basis for $D(\mathbf{c}{{\cal S}}(A_{\ell}))$. Then D. Suyama constructed bases for $D(\mathbf{c}{{\cal S}}(B_{\ell}))$ and $D(\mathbf{c}{{\cal S}}(C_{\ell}))$ in [@Su2]. In this paper, we will give an explicit construction of a basis for $D(\mathbf{c}{\cal S}(D_{\ell}))$. A defining polynomial of the cone over the Shi arrangement of the type $D_{\ell}$ is given by $$\begin{aligned} Q := z \prod_{1\leq s<t\leq \ell} \prod_{\epsilon\in\{-1, 1\}} (x_s + \epsilon x_t - z) (x_s + \epsilon x_t).\end{aligned}$$ Note that the number of hyperplanes in $\mathbf{c}{\cal S}(D_{\ell})$ is equal to $2\ell(\ell-1)+1$. Our construction is similar to the construction in the case of the type $B_{\ell}$. The essential ingredients of the recipe are the Bernoulli polynomials and their relatives.

The basis construction.
=======

\[Prop2.1\] For $(p,q)\in\mathbb{Z}_{\geq -1}\times\mathbb{Z}_{\geq 0}$, consider the following two conditions for a rational function $B_{p,q}(x)$: $$\begin{aligned} &1.~B_{p,q}(x+1)-B_{p,q}(x)\\ &=\frac{(x+1)^p-(-x)^p}{(x+1)-(-x)}(x+1)^q(-x)^q,\\ &2.~B_{p,q}(-x)=-B_{p,q}(x).\end{aligned}$$ Then such a rational function $B_{p, q}(x)$ exists uniquely. Moreover, $B_{p, q}(x)$ is a polynomial unless $(p, q) = (-1, 0)$, and $B_{-1, 0}(x) = -(1/x)$. Suppose $(p, q) \neq (-1, 0)$. Since the right-hand side of the first condition is a polynomial in $x$, there exists a polynomial $B_{p, q}(x)$ satisfying the first condition. Note that $B_{p,q}(x)$ is unique up to a constant term. Define a polynomial $ F(x)=B_{p,q}(x)+B_{p,q}(-x)$. Since $$\begin{aligned} &~~~~B_{p,q}(-x)-B_{p,q}(-x-1)\\ &=\frac{(-x)^p-(x+1)^p}{(-x)-(x+1)}(-x)^q(x+1)^q\\ &=\frac{(x+1)^p-(-x)^p}{(x+1)-(-x)}(x+1)^q(-x)^q\\ &=B_{p,q}(x+1)-B_{p,q}(x),\end{aligned}$$ we have $ F(x+1)=F(x) $ for any $x$. Therefore $F(x)$ is a constant function. Then the polynomial $B_{p, q}(x) - \left(F(0)/2\right)$ is the unique solution satisfying both conditions. Next we suppose $(p,q)=(-1, 0)$. Then we compute $$\begin{aligned} &~~~~B_{-1, 0}(x+1) - B_{-1, 0}(x)\\ &= \frac{(x+1)^{-1} -(-x)^{-1} }{(x+1)-(-x)} =-\frac{1}{x+1}+\frac{1}{x}.\end{aligned}$$ Thus $ B_{-1, 0}(x)=-(1/x) $ is the unique solution satisfying both conditions. Define a rational function $\overline{B}_{p,q}(x,z)$ in $x$ and $z$ by $$\overline{B}_{p,q}(x,z):=z^{p+2q}B_{p,q}(x/z).$$ Then $\overline{B}_{p,q}(x,z)$ is a homogeneous polynomial of degree $p+2q$ except in the two cases $\overline{B}_{-1, 0} (x, z) = -(1/x)$ and $\overline{B}_{0, q} (x, z) = 0$. For a set $I := \{y_{1} , \dots, y_{m} \}$ of variables, let $$\sigma^{I}_{n} := \sigma_{n} (y_{1} , \dots, y_{m} ), \,\, \tau^{I}_{2n} := \sigma_{n} (y^{2}_{1} , \dots, y^{2}_{m} ),$$ where $\sigma_{n}$ stands for the elementary symmetric function of degree $n$.
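As a quick sanity check of the count $2\ell(\ell-1)+1$ of hyperplanes in $\mathbf{c}{\cal S}(D_{\ell})$ stated in the Introduction, one can simply enumerate the linear forms appearing in $Q$. A small Python sketch (the function name is ours):

```python
def shi_D_cone_hyperplanes(l):
    # linear forms of the cone c S(D_l): z, and x_s + eps*x_t - k*z
    # for 1 <= s < t <= l, eps in {+1, -1}, k in {0, 1}
    forms = [("z",)]
    for s in range(1, l + 1):
        for t in range(s + 1, l + 1):
            for eps in (+1, -1):
                for k in (0, 1):
                    forms.append((s, t, eps, k))
    return forms

# 4 * C(l, 2) + 1 = 2*l*(l-1) + 1
for l in range(2, 8):
    assert len(shi_D_cone_hyperplanes(l)) == 2 * l * (l - 1) + 1
```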
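Proposition \[Prop2.1\] can also be checked numerically: anchoring at $B_{p,q}(0)=0$ (forced by oddness when $B_{p,q}$ is a polynomial) and telescoping condition 1 gives the values of the unique odd solution at the integers. A minimal sketch with exact rational arithmetic, assuming $p\geq 0$ (function names are ours):

```python
from fractions import Fraction as F

def rhs(p, q, x):
    # right-hand side of condition 1 at an integer x (for p >= 0):
    # ((x+1)^p - (-x)^p) / ((x+1) - (-x)) * (x+1)^q * (-x)^q
    return (F(x + 1) ** p - F(-x) ** p) / (2 * x + 1) * F(x + 1) ** q * F(-x) ** q

def B(p, q, n):
    # values of the solution of condition 1 anchored at B(0) = 0
    if n >= 0:
        return sum((rhs(p, q, x) for x in range(n)), F(0))
    return B(p, q, n + 1) - rhs(p, q, n)

# condition 2 (oddness) holds at every integer point we try
for p, q in [(1, 0), (1, 1), (2, 1), (3, 0), (3, 2)]:
    for n in range(1, 7):
        assert B(p, q, -n) == -B(p, q, n)

# for example, B_{1,0}(x) = x
assert all(B(1, 0, n) == n for n in range(6))
```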
Define derivations $$\begin{aligned} \varphi_j &:=(x_j-x_{j+1}-z) \sum\limits_{i=1}^\ell \sum\limits_{\substack{K_{1}\cup K_{2} \subseteq J\\ K_{1} \cap K_{2} = \emptyset}} \left(\prod K_{1} \right) \left(\prod K_{2}\right)^{2} \\ & (-z)^{|K_{1}|} \sum\limits_{\substack{0\leq n_1 \leq |J_{1}|\\ 0\leq n_2\leq |J_{2} |}}(-1)^{n_1+n_2}\sigma_{n_{1} }^{J_{1} } \tau_{2 n_{2}}^{J_{2} } \overline{B}_{k, k_{0}}(x_{i}, z) \frac{\partial}{\partial x_{i}}\end{aligned}$$ for $j = 1,\dots, \ell-1$ and $$\begin{aligned} \varphi_{\ell} &:=\sum\limits_{i=1}^\ell \sum\limits_{\substack{K_{1}\cup K_{2} \subseteq J \\ K_{1} \cap K_{2} = \emptyset}} \left(\prod K_{1} \right) \left(\prod K_{2}\right)^{2} (-z)^{|K_{1}|}\\ & ~~~~~~~~~~~ (-x_{\ell}) \overline{B}_{-1, k_{0}}(x_{i}, z) \frac{\partial}{\partial x_{i}}\end{aligned}$$ for $j=\ell$, where $$\begin{aligned} J &:= \{x_{1}, \dots, x_{j-1}\},\, J_{1} := \{x_{j}, x_{j+1}\},\\ J_{2} &:= \{x_{j+2}, \dots, x_{\ell}\},\\ \prod K_{p} &:=\prod_{x_{i} \in K_{p}} x_{i}\,\,\,(p=1, 2),\\ k_{0} &:= |J \setminus (K_{1} \cup K_{2}) |\geq 0, \\ k &:= (|J_{1} |-n_{1}) +2(|J_{2} |-n_{2}) -1\geq -1.\end{aligned}$$ Note that $\varphi_{j} (z)=0\,\,(1\leq j\leq\ell)$. In the rest of the paper, we will give a proof of the following theorem: \[main\] The derivations $ \varphi_{1}, \dots, \varphi_{\ell}, $ together with the Euler derivation $$\theta_{E} := z \frac{\partial}{\partial z} + \sum_{i=1}^{\ell} x_{i} \frac{\partial} {\partial x_{i}},$$ form a basis for $D({\mathbf c}{\mathcal S}(D_{\ell}) )$. Note that $\theta_{E} (x_{i} )=x_{i} \,\,(1\leq i\leq\ell)$ and $\theta_{E} (z)=z$. \[Lemma2.5\] Let $1\leq i\leq \ell$ and $1\leq j\leq \ell$. Suppose $\varphi_{j}(x_{i})$ is nonzero. Then $\varphi_{j}(x_{i} )$ is a homogeneous polynomial of degree $2(\ell-1)$. 
Define $$\begin{aligned} F_{ij}&:= (x_{j}-x_{j+1}-z) \left(\prod K_{1} \right) \left(\prod K_{2}\right)^{2} z^{|K_{1}|} \\ &~~~~~~ \sigma_{n_{1} }^{J_{1} } \tau_{2 n_{2}}^{J_{2} } \overline{B}_{k, k_{0}}(x_{i}, z)~~~~~~~~~(1\leq j\leq \ell-1),\\ F_{i\ell} &:= \left(\prod K_{1} \right) \left(\prod K_{2}\right)^{2} z^{|K_{1}|} x_{\ell} \overline{B}_{-1, k_{0}}(x_{i}, z)\end{aligned}$$ when $K_{1} , K_{2} , n_{1} , n_{2} $ are fixed. Then $\varphi_{j} (x_{i} )$ is a linear combination of the $F_{ij} $’s over ${\mathbb R}$. Note that $\overline{B}_{k, k_{0}} (x_{i}, z)$ is a polynomial unless $(k, k_{0}) = (-1, 0)$. Assume that $1\leq j\leq \ell-1$ and $(k, k_{0} )=(-1, 0)$. Then $J = K_{1} \cup K_{2} $, $n_{1} = |J_{1}|$, $n_{2} = |J_{2}|$, and $\overline{B}_{-1, 0} (x_{i}, z) =-1/x_{i}$. Therefore each $F_{i j}$ is a polynomial. Thus $\varphi_{j}(x_{i})$ is a nonzero polynomial and there exists a nonzero polynomial $F_{ij} $. Compute $$\begin{aligned} &~~~\deg \varphi_{j}(x_{i} )=\deg F_{ij} \\ &= 1+ |K_{1}| + 2|K_{2}| +|K_{1}| +n_{1} +2n_{2} \\ & ~~~~~~~~~~+ \deg \overline{B}_{k, k_{0}}(x_{i}, z)\\ &= 1+ 2|K_{1}| + 2|K_{2}| + n_{1} +2n_{2} + (2 k_{0}+k)\\ &= 1+ 2|K_{1}| + 2|K_{2}| +n_{1} +2n_{2}\\ &~~~+ 2 (|J|-|K_{1}| -|K_{2}|) +|J_{1} | -n_{1}\\ &~~~+2(|J_{2}|- n_{2}) -1\\ &= 2(|J|+|J_{1} |+|J_{2} |)-|J_{1} |= 2 \ell-2.\end{aligned}$$ Next consider $\varphi_{\ell}(x_{i})$. If $k_{0} = 0$, then $J = K_{1} \cup K_{2}$. Therefore each $F_{i \ell}$ is a polynomial. Thus so is $\varphi_{\ell}(x_{i})$. 
Compute $$\begin{aligned} &~~~\deg \varphi_{\ell}(x_{i} )\\ &= |K_{1}| + 2|K_{2}| +|K_{1}|+ 1+ \deg \overline{B}_{-1, k_{0}}(x_{i}, z)\\ &= 2(|K_{1}| + |K_{2}|) +1+ (2 k_{0}-1)\\ &= 2(|K_{1}|+|K_{2} |+k_{0})= 2(\ell-1).\end{aligned}$$ Let $<$ denote the [*pure lexicographic order*]{} of monomials with respect to the total order $$x_{1} > x_{2} >\dots > x_{\ell} > z.$$ When $f\in S={\mathbb R}[x_{1} , x_{2} , \dots, x_{\ell}, z ]$ is a nonzero polynomial, let $\mathrm{in} (f)$ denote the [*initial monomial*]{} (e.g., see [@Her]) of $f$ with respect to the order $<$. \[in\] Suppose $\varphi_{j} (x_{i} )$ is nonzero. Then

\(1) $\mathrm{in}(\varphi_j(x_i)) \leq x_1^2\cdots x_{i-1}^2x_i^{2\ell-2i}$,

\(2) $ \mathrm{in}(\varphi_j(x_i))< x_1^2\cdots x_{i-1}^2x_i^{2\ell-2i}$ for $i < j,$

\(3) $ \mathrm{in}(\varphi_i(x_i))=x_1^2\cdots x_{i-1}^2x_i^{2\ell-2i}$ for $1\leq i\leq \ell. $

Recall $F_{ij} \,(1\leq j\leq \ell-1)$ and $F_{i\ell}$ from the proof of Lemma \[Lemma2.5\] when $K_1,K_2,n_1,n_2$ are fixed. Let $\deg^{(x_i)}f$ denote the degree of $f$ with respect to $x_{i}$ when $f\neq 0$.

\(1) For every nonzero $F_{ij}$, we have $$\begin{aligned} \deg^{(x_{p})}F_{ij}\leq 2 \,\,(1\leq p < i), \,\,\,\,\,\, \deg(F_{ij})=2\ell-2.\end{aligned}$$ Hence we may conclude $$\begin{aligned} \mathrm{in}(F_{ij})\leq x_1^2\cdots x_{i-1}^2x_i^{2\ell-2i}\end{aligned}$$ and thus $$\begin{aligned} &\mathrm{in}(\varphi_j(x_i)) \leq\max\{\mathrm{in}(F_{ij})\} \leq x_1^2\cdots x_{i-1}^2x_i^{2\ell-2i}.\end{aligned}$$

\(2) Suppose $i<j<\ell$. Since $x_{i} > x_{j} > z$, one has $$\begin{aligned} &~~~\mathrm{in} (\sigma_{n_{1} }^{J_{1}} \tau_{2 n_{2}}^{J_{2}}\overline{B}_{k,k_0}(x_i,z))\\ &\leq x_{i}^{n_{1} + 2 n_{2} +2 k_{0}+k } = x_{i}^{2\ell-2j+2k_0-1}\end{aligned}$$ when $\overline{B}_{k,k_0}(x_i,z)$ is nonzero. The equality holds if and only if $n_1=n_2=0$. Suppose that $F_{ij}$ is nonzero.
For $1\leq i<j\leq \ell-1$, we have $$\begin{aligned} &\mathrm{in}(F_{ij})\\ =&\mathrm{in}(x_j-x_{j+1}-z)\mathrm{in}\big((\prod K_1)(\prod K_2)^2(-z)^{|K_1|}\big)\\ &\mathrm{in}\big(\sigma_{n_{1} }^{J_{1}} \tau_{2 n_{2}}^{J_{2}} \overline{B}_{k,k_{0}}(x_{i}, z)\big)\\ \leq& x_j\, \mathrm{in}\big((\prod K_1)(\prod K_2)^2(-z)^{|K_1|}\big)x_i^{2\ell-2j+2k_0-1}\\ =& x_j\,\mathrm{in}\big((\prod K_1)(\prod K_2)^2(-z)^{|K_1|}x_i^{2k_0}\big)x_i^{2\ell-2j-1}\\ \leq&x_j(x_1^2\cdots x_{i-1}^2x_i^{2j-2i})x_i^{2\ell-2j-1}~~~(*)\\ =&x_1^2\cdots x_{i-1}^2x_i^{2\ell-2i-1}x_j <x_1^2\cdots x_{i-1}^2x_i^{2\ell-2i}.\end{aligned}$$ Thus $$\begin{aligned} \mathrm{in}(\varphi_j(x_i))<x_1^2\cdots x_{i-1}^2x_i^{2\ell-2i}.\end{aligned}$$ For $1\leq i<j=\ell$, $$\begin{aligned} &\mathrm{in}(F_{i\ell})\\ =&x_\ell\,\mathrm{in}\big((\prod K_1)(\prod K_2)^2(-z)^{|K_1|}\big)\mathrm{in}\big(\overline{B}_{-1,k_{0}}(x_{i}, z)\big)\\ =& x_\ell\,\mathrm{in}\big((\prod K_1)(\prod K_2)^2(-z)^{|K_1|}\big)x_i^{2k_0-1}\\ =&x_\ell\,\mathrm{in}\big((\prod K_1)(\prod K_2)^2(-z)^{|K_1|}x_i^{2k_0}\big)x_i^{-1}\\ \leq&x_\ell(x_1^2\cdots x_{i-1}^2x_i^{2\ell-2i})x_i^{-1}~~~~(**)\\ =&x_1^2\cdots x_{i-1}^2x_i^{2\ell-2i-1}x_\ell <x_1^2\cdots x_{i-1}^2x_i^{2\ell-2i}.\end{aligned}$$ This proves (2). Now we only need to prove (3). Let $i=j<\ell$ in $(*)$. Then the equality $$\begin{aligned} &\mathrm{in}(F_{ii}) =x_1^2\cdots x_{i-1}^2x_i^{2\ell-2i}\end{aligned}$$ holds if and only if $$\begin{aligned} K_1=\emptyset, \,K_2=J, \,n_1=n_2=k_{0} =0, \, k=2\ell-2i-1\end{aligned}$$ because the leading term of $\overline{B}_{2\ell-2i-1,0}(x_i,z)$ is equal to $$\frac{x_{i}^{2\ell-2i-1}}{2\ell-2i-1}.$$ Next let $i=\ell$ in $(**)$. 
Then the equality $$\begin{aligned} &\mathrm{in}(F_{\ell\ell}) =x_1^2\cdots x_{\ell-1}^2\end{aligned}$$ holds if and only if $$K_1=\emptyset, \,K_2=J=\{x_{1} ,\dots,x_{\ell-1}\}, \, k_{0} = 0.$$ Therefore, for $1\leq i\leq \ell$, $$\begin{aligned} \mathrm{in}(\varphi_i(x_i))=x_1^2\cdots x_{i-1}^2x_i^{2\ell-2i}.\end{aligned}$$ From Proposition \[in\], we immediately obtain the following Corollary:

\[Coin\] (1) $$\mathrm{in}(\det\big[\varphi_{j}(x_i)\big]) =\prod_{i=1}^\ell\mathrm{in}(\varphi_i(x_i)) = \prod_{i=1}^{\ell-1} x_i^{4(\ell-i)}.$$

\(2) Moreover, the leading term of $\det\big[\varphi_{j}(x_i)\big]$ is equal to $$\frac{1}{(2\ell-3)!!} \prod_{i=1}^{\ell-1} x_i^{4(\ell-i)}.$$

\(3) In particular, $\det\big[\varphi_{j}(x_i)\big]$ does not vanish.

Next, we will prove $\varphi_j\in D(\mathbf{c}\mathcal{S}(D_\ell))$ for $1\leq j\leq \ell$. We denote $\mathbf{c}\mathcal{S}(D_\ell)$ simply by $\mathcal S_{\ell}$ from now on. Before the proof, we need the following two lemmas:

\[Lem2.6\] Fix $1\leq j\leq \ell-1$ and $\epsilon\in\{-1, 1\}$.
Then

\(1) $$\begin{gathered} \prod_{x_{i}\in J }(x_i-x_s)(x_i-\epsilon x_t) =\sum\limits_{\substack{K_{1}\cup K_{2} \subseteq J\\ K_{1} \cap K_{2} = \emptyset}} \left(\prod K_{1} \right)\\ \times\left(\prod K_{2} \right)^{2} [-(x_s+\epsilon x_t)]^{|K_1|}(\epsilon x_sx_t)^{k_0}.\end{gathered}$$

\(2) $$\begin{gathered} \sum\limits_{\substack{0\leq n_1 \leq |J_{1}|\\0\leq n_2\leq |J_{2}|}} (-1)^{|J_{1}|+|J_{2}|-n_{1} -n_2} \sigma_{n_1}^{J_{1}} \tau_{2n_2}^{J_{2}} (\epsilon x_s)^{k+1}\\ = \prod_{x_{i}\in J_{1}} (x_i- \epsilon x_s) \prod_{x_{i}\in J_{2}}(x_i^2-x_s^2).\end{gathered}$$

\(1) is easy because the left-hand side is equal to $$\prod\limits_{x_{i}\in J}(x_i^{2} - (x_s+\epsilon x_t)x_{i} + \epsilon x_{s} x_t).$$

\(2) The left-hand side is equal to $$\begin{aligned} & \sum\limits_{0\leq n_1 \leq |J_{1}|} (-\epsilon x_s)^{|J_{1}|-n_1} \sigma_{n_1}^{J_{1} } \sum\limits_{0\leq n_2\leq |J_{2} |} (-x_s^2)^{|J_{2} |-n_2} \tau_{2n_2}^{J_{2} },\end{aligned}$$ which is equal to the right-hand side.

\[Lem2.7\] \(1) The polynomial $$x_{s} \overline{B}_{k,k_{0}}(x_s,z)-x_{t} \overline{B}_{k,k_{0} }(x_t,z)$$ is divisible by $x_{s}^{2} - x_{t}^{2}$.

\(2) For $\epsilon\in\{-1, 1\}$, the polynomial $$\begin{gathered} (x_{s}-\epsilon x_{t}) \epsilon x_{s} x_{t} \left[ \overline{B}_{k,k_{0}}(x_s,z)+ \epsilon \overline{B}_{k,k_{0}}(x_t,z)\right]\\ - (x_s+\epsilon x_t)(\epsilon x_{s}x_t)^{k_0} \left[\epsilon x_{t} x_s^{k+1}- x_{s}(\epsilon x_t)^{k+1}\right]\end{gathered}$$ is divisible by $x_s+ \epsilon x_t-z$.

\(1) follows from the fact that $ -\overline{B}_{k,k_0}(x,z) =\overline{B}_{k,k_0}(-x,z) $ in Proposition \[Prop2.1\].
\(2) follows from the following congruence relation of polynomials modulo the ideal $(x_s+ \epsilon x_t-z)$: $$\begin{aligned} & (x_{s}-\epsilon x_{t}) \epsilon x_{s} x_{t} \left[ \overline{B}_{k,k_{0}}(x_s,z)+\epsilon \overline{B}_{k,k_{0}}(x_t,z)\right]\\ &= (x_{s}-\epsilon x_{t}) \epsilon x_{s} x_{t} z^{k+2k_0}\big[{B}_{k,k_0}(\displaystyle\frac{x_s}{z})- {B}_{k,k_0}(\displaystyle\frac{ -\epsilon x_t}{z})\big]\\ &\equiv (x_{s}-\epsilon x_{t}) \epsilon x_{s} x_{t} (x_s+ \epsilon x_t)^{k+2k_0}\\ &~~~~\big[{B}_{k,k_0}(\displaystyle\frac{x_s}{x_s+ \epsilon x_t}) - {B}_{k,k_0} (\displaystyle\frac{ -\epsilon x_t}{x_s+ \epsilon x_t}) \big]\\ &=(x_{s}-\epsilon x_{t}) \epsilon x_{s} x_{t} (x_s+ \epsilon x_t)^{k+2k_0}\\ &~~~~\displaystyle\frac{\big(\displaystyle\frac{x_s}{x_s+ \epsilon x_t}\big)^k - \big(\displaystyle\frac{\epsilon x_t}{x_s+ \epsilon x_t}\big)^k} {\big({\displaystyle\frac{x_s}{x_s+ \epsilon x_t}}\big)- \big({\displaystyle\frac{ \epsilon x_t}{x_s+\epsilon x_t}}\big)} ({\displaystyle\frac{ \epsilon x_t}{x_s+ \epsilon x_t}})^{k_0} ({\displaystyle\frac{x_s}{x_s+ \epsilon x_t}})^{k_0}\\ &=(x_s+ \epsilon x_t) (\epsilon x_{s}x_t)^{k_0} \left[\epsilon x_{t} x_s^{k+1}- x_{s}(\epsilon x_t)^{k+1}\right].\end{aligned}$$ Every $\varphi_j$ lies in $D(\mathcal{S}_\ell)$. 
\[Proposition2.10\] For $1\leq j\leq \ell-1,1\leq s< t\leq \ell,$ and $\epsilon\in\{-1, 1\}$, by Lemma \[Lem2.7\] and Lemma \[Lem2.6\], we have the following congruence relation of polynomials modulo the ideal $(x_{s} +\epsilon x_{t} -z)$: $$\begin{aligned} & (x_{s}-\epsilon x_{t}) \epsilon x_{s} x_{t} \left[\varphi_j(x_s+\epsilon x_t-z)\right]\\ &=(x_j-x_{j+1}-z) \sum\limits_{\substack{K_{1}\cup K_{2} \subseteq J\\ K_{1} \cap K_{2} = \emptyset}} \left(\prod K_{1} \right) \left(\prod K_{2}\right)^{2}\\ &~~~~~~~\times (-z)^{|K_{1}|} \sum\limits_{\substack{0\leq n_1 \leq |J_{1}|\\ 0\leq n_2\leq |J_{2} |}}(-1)^{n_1+n_2}\sigma_{n_{1} }^{J_{1} } \tau_{2 n_{2}}^{J_{2} }\\ &~~~~~~~\times(x_{s}-\epsilon x_{t}) \epsilon x_{s} x_{t} [\overline{B}_{k,k_0}(x_s,z)+\epsilon \overline{B}_{k,k_0}(x_t,z)] \\ &\equiv(x_j-x_{j+1}-z) \left(x_{s}+\epsilon x_{t}\right)\\ &~\times \sum\limits_{K_{1}, K_{2}} \left(\prod K_{1} \right) \left(\prod K_{2}\right)^{2} [-(x_s+\epsilon x_t)]^{|K_{1}|}(\epsilon x_{s}x_t)^{k_0}\\ &~\times \sum\limits_{n_{1}, n_2} (-1)^{n_{1} + n_{2} } \sigma_{n_1}^{J_{1} } \tau_{2n_2}^{J_{2} } \left[\epsilon x_{t} x_s^{k+1}- x_{s}(\epsilon x_t)^{k+1}\right]\\ &=(x_j-x_{j+1}-z)\left( x_s+\epsilon x_t\right) \prod_{x_{i} \in J}(x_i-x_s)(x_i-\epsilon x_t)\\ &~~\times(-1)^{|J_{2}|} \bigg[ \epsilon x_{t}\prod_{x_{i}\in J_{1} } (x_{i}-x_s) \prod_{x_{i}\in J_{2}}(x_i^2-x_s^2) \\ &~~~~~~~~~~~~~~ -x_{s} \prod_{x_{i}\in J_{1} } (x_{i}- \epsilon x_{t} ) \prod_{x_{i}\in J_{2} }(x_i^2-x_t^2) \bigg]\,\,\,\,\,\, (\dagger).\end{aligned}$$ [*Case 1.*]{} When $x_{s} \in J$, $(\dagger)=0.$ [*Case 2.*]{} When $x_{s} \in J_{2} $ and $x_{t} \in J_{2} $, $(\dagger)=0.$ [*Case 3.*]{} When $x_{s} \in J_{1} $ and $x_{t} \in J_{2} $, $(\dagger)=0.$ [*Case 4.*]{} When $x_{s} \in J_{1} $, $x_{t} \in J_{1} $ and $\epsilon=1$, $(\dagger)=0$. [*Case 5.*]{} If $x_{s} \in J_{1} $, $x_{t} \in J_{1} $ and $\epsilon=-1$, then $s=j<t=j+1$. So $(\dagger)$ is divisible by $x_{s} +\epsilon x_{t} -z$. 
We also have the following congruence relation of polynomials modulo the ideal $(x_{s} +\epsilon x_{t} -z)$: $$\begin{aligned} & (x_{s}-\epsilon x_{t}) \epsilon x_{s} x_{t} \left[\varphi_\ell(x_s+\epsilon x_t-z)\right]\\ &= \sum\limits_{\substack{K_{1}\cup K_{2} \subseteq J \\ K_{1} \cap K_{2} = \emptyset}} \left(\prod K_{1} \right) \left(\prod K_{2}\right)^{2} (-z)^{|K_{1}|} (-x_{\ell} ) \\ & (x_{s}-\epsilon x_{t}) \epsilon x_{s} x_{t} [\overline{B}_{-1,k_0}(x_s,z)+\epsilon \overline{B}_{-1,k_0}(x_t,z)] \\ &\equiv \left(x_{s}+\epsilon x_{t}\right) (-x_{\ell}) \left(\epsilon x_{t} - x_{s}\right) \\ & \sum\limits_{K_{1}, K_{2}} \left(\prod K_{1} \right) \left(\prod K_{2}\right)^{2} [-(x_s+\epsilon x_t)]^{|K_{1}|}(\epsilon x_{s}x_t)^{k_0}\\ &= \left( x_s^{2} - x_t^{2} \right) x_{\ell} \prod_{x_{i} \in J}(x_i-x_s)(x_i-\epsilon x_t) \,\,\,\,\,\,\,\,\,\,\,\, (\dagger\dagger).\end{aligned}$$ Since $s<t\leq \ell$, we have $x_{s}\in J = \{x_{1}, \dots, x_{\ell-1} \}$. Thus $(\dagger\dagger)=0.$ Therefore $\varphi_j(x_s+\epsilon x_t-z)$ is divisible by $x_{s}+\epsilon x_{t}-z$ for $1\leq j\leq \ell$ and $1\leq s< t\leq \ell$. For $1\leq j\leq \ell$, since $$\begin{aligned} \varphi_j(x_s^{2} - x_t^{2} ) = 2 x_{s} \varphi_j(x_s) - 2 x_t \varphi_j(x_t)\end{aligned}$$ is a linear combination of polynomials of the form $x_{s} \overline{B}_{k,k_{0}}(x_s,z)-x_{t} \overline{B}_{k,k_{0} }(x_t,z)$ or $x_{s} \overline{B}_{-1,k_{0}}(x_s,z)-x_{t} \overline{B}_{-1,k_{0} }(x_t,z)$, we have $$\varphi_j( x_s^{2} - x_t^{2} ) \equiv 0 \,\, \mod (x_s^{2} - x_t^{2})$$ by Lemma \[Lem2.7\] (1). This implies $\varphi_{j} \in D({\mathcal S}_{\ell})$. Applying Saito's lemma [@Sai] [@OT Theorem 4.19], we complete our proof of Theorem \[main\] thanks to Lemma \[Lemma2.5\], Corollary \[Coin\] (3) and Proposition \[Proposition2.10\]. Theorem \[main\] implies that $\det[\varphi_{j} (x_{i})]$ is a nonzero multiple of $(Q/z)$.
By Corollary \[Coin\] (2) one obtains $$\begin{gathered} \det[\varphi_{j}(x_{i} ) ]\\ = \frac{1}{(2\ell-3)!!} \prod_{1\leq s<t\leq \ell} \prod_{\epsilon\in\{-1, 1\}} (x_s + \epsilon x_t - z) (x_s + \epsilon x_t).\end{gathered}$$ [17]{} T. Abe, H. Terao, The freeness of Shi-Catalan arrangements. [*European J. Combin.*]{}(to appear). arXiv:1012.5884v1. Ch. Athanasiadis, Characteristic polynomials of subspace arrangements and finite fields. [*Adv. in Math.*]{}, [**122**]{} (1996), 193-233. Ch. Athanasiadis, On free deformations of the braid arrangement. [*European J. Combin.*]{}, [**19**]{} (1998), 7-18. P. H. Edelman and V. Reiner, Free arrangements and rhombic tilings. [*Discrete Comp. Geom.*]{}, [**15**]{} (1996), 307-340. P. Headley, On a family of hyperplane arrangements related to the affine Weyl groups. [*J. Algebraic Combin.*]{}, [**6**]{} (1997), 331-338. J. Herzog and T. Hibi, [*Monomial ideals.*]{} Graduate Texts in Mathematics, Springer-Verlag, London, 2011. P. Orlik and H. Terao, [*Arrangements of hyperplanes.*]{} Grundlehren der Mathematischen Wissenschaften, [**300**]{}, Springer-Verlag, Berlin, 1992. A. Postnikov, R. P. Stanley, Deformations of Coxeter hyperplane arrangements. [*J. Comb. Theory, Ser. A*]{}, [**91**]{} (2000), 544-597. K. Saito, Theory of logarithmic differential forms and logarithmic vector fields. [*J. Fac. Sci. Univ. Tokyo Sect. IA Math.*]{}, [**27**]{} (1980), 265-291. J.-Y. Shi, The Kazhdan-Lusztig cells in certain affine Weyl groups. [*Lecture Notes in Math.*]{}, [**1179**]{}, Springer-Verlag, 1986. J.-Y. Shi, Sign types corresponding to an affine Weyl group. [*J. Lond. Math. Soc.*]{}, [**35**]{} (1987), 56-74. L. Solomon and H. Terao, The double Coxeter arrangements. [*Comment. Math. Helv.*]{}, [**73**]{} (1998), 237-258. D. Suyama, H. Terao, The Shi arrangements and the Bernoulli polynomials. arXiv:1103.3214v3. D. Suyama, On the Shi arrangements of types $B_{\ell} $, $C_{\ell} $, $F_{4} $ and $G_{2}. $ (in preparation). H. 
Terao, Arrangements of hyperplanes and their freeness I, II. [*J. Fac. Sci. Univ. Tokyo*]{}, [**27**]{} (1980), 293-320. H. Terao, Generalized exponents of a free arrangement of hyperplanes and Shephard-Todd-Brieskorn formula. [*Invent. Math.*]{}, [**63**]{} (1981), 159-179. H. Terao, Multiderivations of Coxeter arrangements. [*Invent. Math.*]{}, [**148**]{} (2002), 659-674. M. Yoshinaga, Characterization of a free arrangement and conjecture of Edelman and Reiner. [*Invent. Math.*]{}, [**157**]{} (2004), 449-454.
--- abstract: 'As a future upgrade of the Frascati $\phi$ factory DA$\Phi$NE an increase of the center-of-mass energy of the accelerator up to $W=2$ GeV has been proposed (DAFNE2). In this case the hadronic cross section in the energy range between $1-2$ GeV can be measured with the KLOE detector. The feasibility of these measurements and the impact on the hadronic contribution to the anomalous magnetic moment of the muon, $a_{\mu}^{\rm hadr}$, are discussed. The possibilities for an energy scan are compared with the radiative return technique, in which the accelerator is running at a fixed center-of-mass energy and ISR-events are taken to lower the invariant mass of the hadronic system.' author: - | The KLOE collaboration[^1] presented by Achim G. Denig[^2]\ [Universität Karlsruhe, IEKP, Postfach 3640, 76021 Karlsruhe, Germany]{} title: 'KLOE PERSPECTIVES FOR R-MEASUREMENTS AT DAFNE2' --- The DAFNE2 Proposal =================== The future perspectives of the $e^+e^-$ collider DA$\Phi$NE are discussed at the Frascati laboratories (see also [@dafne][@superdafne]). Two projects have been proposed recently: an increase of the peak luminosity to $\approx 10^{34}\,{\rm cm}^{-2}{\rm s}^{-1}$ (DA$\Phi$NE-II) and an increase of the center-of-mass energy up to $2 {\rm GeV}$ (DAFNE2, [@dafne2]), where the second option might be realized either before or within the high-luminosity solution. While at DA$\Phi$NE-II the main physics motivation is based on the investigation of the parameters of the kaon system (CP, CPT violation), DAFNE2 provides the possibility to measure the timelike nucleon form factors at threshold and to perform hadronic cross section measurements in the $1-2 {\rm GeV}$ energy range, which we will discuss in the following. Some of the components of the present machine are already designed for an energy increase. The main hardware modifications concern the dipole magnets, the splitter magnets and the low-$\beta$ quadrupoles. 
No crucial issues from the accelerator physics point of view can be seen at the moment. Peak luminosities of at least $10^{32}{\rm cm}^{-2}{\rm s}^{-1}$ are expected to be within reach for this machine, which allows an integrated luminosity of at least $1\,{\rm fb}^{-1}$ per year to be collected. Importance of the 1-2 GeV energy range ====================================== Hadronic cross section data are of importance for the determination of the hadronic contribution to the anomalous magnetic moment of the muon, $a_{\mu}$, and for the fine structure constant at the $Z$ pole, $\alpha(m_Z^2)$. In the following we will discuss the impact of DAFNE2 on the muon anomaly. The hadronic contribution to this fundamental quantity, $a_\mu^{\rm hadr}$, which is given by the hadronic vacuum polarization, cannot be calculated at low energies using perturbative QCD. A dispersion relation can however be derived, giving $a_\mu^{\rm hadr}$ as an integral over the hadronic cross section, multiplied by an appropriate kernel. The dominant contribution to $a_{\mu}^{\rm hadr}$ ($90\%$) is given by low energy cross section measurements $<2{\rm GeV}$ [@davhoe2]. This is the region where DAFNE2 will operate. An improved measurement of hadronic cross sections for the various channels of interest could therefore considerably improve the knowledge of the hadronic contribution to $a_{\mu}$. This is needed for an interpretation of the recent new measurements [@e821a][@e821b] of the muon anomaly (E821 collaboration, BNL), showing a difference between the experimental and theoretical value of $a_\mu$ of up to $3\sigma$ (see ref. [@davhoe2] for details concerning the theory evaluation).\ In table \[tab:tbl2\] the contributions to $a_{\mu}^{\rm hadr}$ and to the squared error $\delta^2 a_{\mu}^{\rm hadr}$ are listed for different energy ranges and for different hadronic channels [^3]. 
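The dispersion-relation argument can be illustrated numerically. The sketch below uses the standard leading-order form $a_\mu^{\rm hadr}=\frac{1}{4\pi^3}\int_{s_{\rm th}}^{\infty}K(s)\,\sigma_{\rm hadr}(s)\,{\rm d}s$ together with the asymptotic kernel $K(s)\approx m_\mu^2/(3s)$ and a toy Breit-Wigner cross section; both the kernel approximation and all numbers are illustrative assumptions, not the evaluation of [@davhoe2]. The point is structural: the roughly $1/s$ weighting of the kernel is why the region below $2\,{\rm GeV}$ dominates the integral.

```python
M_MU = 0.1057   # muon mass in GeV (rounded)
M_PI = 0.1396   # charged pion mass in GeV (rounded)

def kernel(s):
    # Asymptotic form of the QED kernel, K(s) ~ m_mu^2/(3s); the exact
    # kernel differs at low s but shares the 1/s falloff.
    return M_MU**2 / (3.0 * s)

def sigma_toy(s):
    # Toy two-pion cross section: a Breit-Wigner bump at the rho mass.
    # Shape only -- the overall normalization is arbitrary here.
    m_rho, gamma = 0.775, 0.149  # GeV
    return 1.0 / ((s - m_rho**2)**2 + m_rho**2 * gamma**2)

def contribution(s_lo, s_hi, n=4000):
    # Trapezoidal estimate of  int K(s) sigma(s) ds  over [s_lo, s_hi].
    h = (s_hi - s_lo) / n
    ys = [kernel(s_lo + i * h) * sigma_toy(s_lo + i * h) for i in range(n + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

low = contribution(4.0 * M_PI**2, 1.0)   # threshold .. 1 GeV^2
high = contribution(1.0, 4.0)            # 1 GeV^2 .. (2 GeV)^2
print(low / (low + high))                # the bulk of the integral sits at low s
```

With these toy inputs the region below $1\,{\rm GeV}^2$ carries almost all of the kernel-weighted integral, mirroring the $90\%$ figure quoted in the text.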
It is interesting to notice that the $2\pi$ channel contributes $54\%$ to $a_{\mu}^{\rm hadr}$ around the $\rho$ peak ($0.6 - 1.0 {\rm GeV}$), while the contribution to the error $\delta^2 a_{\mu}^{\rm hadr}$ in the same energy interval is only $34\%$. This difference reflects the fact that the $2\pi$ channel around the $\rho$ peak is very well measured now by CMD-2 [@cmd2] [@cmd2b] with a systematic error of $0.6\%$. KLOE will soon publish its results for this channel in the same energy interval (see these proceedings ref. [@smueller]). Precision measurements for this channel also exist from the analysis of hadronic $\tau$ decays, which are related via the CVC-theorem to electron-positron data and can be used for the evaluation of $a_{\mu}^{\rm hadr}$ after appropriate isospin corrections. At low energies ($<0.6 {\rm GeV}$) and, even more importantly, at high energies ($>1 {\rm GeV}$) a considerable improvement for the two-pion channel is required. The contribution to the error $>1 {\rm GeV}$ is very large ($\approx 30\%$) while the absolute contribution to the integral $a_{\mu}^{\rm hadr}$ is rather small (only $10\%$). DAFNE2 can play an important role here. The threshold region ($<0.6$ GeV) will already be measured at the present DA$\Phi$NE machine in a complementary analysis to the one published now.\ Another interesting hadronic channel is the $4\pi$ channel, for which only measurements with a precision of $10-20\%$ exist. The $4\pi$ channel becomes important only above $1 {\rm GeV}$ and is therefore a good candidate for DAFNE2. This will be discussed in more detail in the following. The relative contribution of the $4\pi$-channel to the error $\delta^2 a_{\mu}^{\rm hadr}$ is $7\%$. ![$\pi^+\pi^-\gamma$ event yield for an integrated luminosity of $1 {\rm fb}^{-1}$. Realistic acceptance cuts have been applied: $\Theta_{\pi}>30^o$, $\Theta_{\gamma}<20^o$ or $\Theta_{\gamma}>160^o$. The yield for radiative muon pair production is also shown. 
The statistics is sufficient to normalize the cross section measurement to muon pairs.[]{data-label="fig:fig2"}](2pich_eps.eps){height="50.00000%"} ![$\pi^+\pi^-\pi^+\pi^-\gamma$ event yield for an integrated luminosity of $1 {\rm fb}^{-1}$. The dependence of the event rate on the polar angle cut for the pion tracks is shown. The radiated photon is selected at small angles in order to decrease the relative contribution of FSR events: $\Theta_{\gamma}<20^o$ or $\Theta_{\gamma}>160^o$[]{data-label="fig:fig3"}](4pich_eps.eps){height="50.00000%"} Energy Scan versus Radiative Return =================================== Until recently an energy scan has been considered as the only way to measure hadronic cross sections $e^+e^-\to {\rm hadrons}$ at electron-positron colliders. The KLOE and BABAR [@md_babar] collaborations have meanwhile shown that the use of Initial State Radiation (ISR) events has to be considered as a complementary and competitive approach at particle factories, which are actually designed for fixed center-of-mass energies $W$. In this new method, also called the ’radiative return’, hadronic events are taken in which a photon (energy $E_\gamma$) is radiated before annihilation of the $e^+e^-$ pair. The squared invariant mass $M^2_{\rm hadr}$ of the hadronic system is given by: $M^2_{\rm hadr}=W^2-2WE_\gamma$. In general the cross sections $\sigma_{{\rm hadr}+\gamma}=\sigma(e^+e^-\to{\rm hadrons}+\gamma)$ and $\sigma_{\rm hadr}=\sigma(e^+e^-\to {\rm hadrons})$ are related through: $$M^2_{\rm hadr} \cdot \frac{d\sigma_{{\rm hadr}+\gamma}}{dM^2_{\rm hadr}}= \sigma_{\rm hadr} \cdot H(M^2_{\rm hadr}) \label{eq:H}$$ The radiator function $H$ is taken from theory. In the case of KLOE we use the PHOKHARA generator, specially designed for our purposes (see below).

  Channel   Energy Range               $a_\mu^{\rm hadr}$ (rel. contr.)   $\delta^2 a_{\mu}^{\rm hadr}$ (rel. contr.)
  --------- -------------------------- ---------------------------------- ---------------------------------------------
  $2\pi$    $2m_{\pi}-0.5{\rm GeV}$    $8\%$                              $8\%$
  $2\pi$    $0.6-1.0{\rm GeV}$         $54\%$                             $34\%$
  $2\pi$    Rest $<1.8{\rm GeV}$       $10\%$                             $31\%$
  $3\pi$    $<1.8{\rm GeV}$            $12\%$                             $5\%$
  $4\pi$    $<1.8{\rm GeV}$            $5\%$                              $7\%$
  $>4\pi$   $<1.8{\rm GeV}$            $3\%$                              $5\%$

  \[tab:tbl2\]

The design of the DAFNE2 project foresees the possibility of a systematic variation of the center-of-mass energy in the $1-2$ GeV range [^4]. An energy scan is thus possible. However, the radiative return is also an option for DAFNE2 when the center-of-mass energy of the accelerator is kept fixed at e.g. $W=2$ GeV or close to the $N\bar{N}$ threshold region where measurements of the timelike nucleon form factor are planned. In the following we will briefly point out the advantages and possible issues of the radiative return method compared to an energy scan.\ One big advantage of the method is the fact that data comes as a by-product of the standard program of the machine (e.g. CP violation measurements in the case of KLOE/BABAR) and no dedicated experimental modifications are needed. Moreover, the method allows the whole energy spectrum below the center-of-mass energy of the accelerator to be measured at once. Systematic errors from luminosity, the knowledge of the machine energy, efficiencies and acceptances have to be determined only for one single energy point (as a function of $M^2_{\rm hadr}$ though) and not for each energy bin as is needed in the case of an energy scan.\ On the other hand, there is a series of issues that need to be addressed, especially if the radiative return method is used for a high precision measurement on the level of $1\%$ or below. Clearly the method requires a precise theoretical knowledge of the ISR-process, i.e. of the radiator function $H$ in equation (\[eq:H\]). 
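The ISR kinematics above ($M^2_{\rm hadr}=W^2-2WE_\gamma$) is simple enough to check directly. The short sketch below (function names are ours, purely illustrative) maps photon energies to hadronic invariant masses and back, for the assumed fixed $W=2$ GeV:

```python
import math

W = 2.0  # fixed center-of-mass energy in GeV, as assumed in the text

def m_hadr(e_gamma, w=W):
    # Invariant mass of the hadronic system after ISR of a photon
    # of energy e_gamma: M^2 = W^2 - 2 W E_gamma.
    return math.sqrt(w * w - 2.0 * w * e_gamma)

def e_gamma_for(m, w=W):
    # Photon energy needed to reach invariant mass m (inverse relation).
    return (w * w - m * m) / (2.0 * w)

print(round(m_hadr(0.5), 3))       # a 0.5 GeV photon probes M ~ 1.414 GeV
print(round(e_gamma_for(1.0), 3))  # reaching M = 1 GeV needs E_gamma = 0.75 GeV
```

In particular, covering the whole $1-2$ GeV mass range at $W=2$ GeV only requires photon energies up to $0.75$ GeV.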
A lot of progress has been made in recent years and calculations exist now up to NLO by means of the Monte Carlo generator PHOKHARA [@binner] [@german] [@czyz] [@grzelinska] [@grzelinska2]. Another important issue is the suppression of FSR events, since FSR has to be considered as a background to the ISR-approach of the radiative return [@denig99]. Unfortunately, the radiation of photons from hadrons can only be calculated within a certain model dependence. Usually the model of scalar QED is chosen for the radiation of photons from e.g. pions. The current KLOE analysis uses events in which the radiative photon is selected at small angles, which effectively suppresses the relative amount of FSR well below $1\%$, such that the model dependence becomes negligible. Moreover, the validity of the model for FSR can be tested from data by measuring the charge asymmetry [@binner] [@slac] [@hoefer] and by comparing the model prediction with data. ![$\pi^+\pi^-\pi^0\pi^0\gamma$ event yield for an integrated luminosity of $1 {\rm fb}^{-1}$ and for realistic selection cuts (see figure caption 1). The number of $\mu^+\mu^-\gamma$ events is again shown for comparison. []{data-label="fig:fig4"}](2pich2pi0_eps.eps){height="50.00000%"} Radiative Return at DAFNE2 ========================== The PHOKHARA Monte Carlo code has been used to study the event rates for ISR-events at DAFNE2. The machine is assumed to operate at $\sqrt{s}=2 {\rm GeV}$. We have investigated the two-pion state $\pi^+\pi^-$ and the four-pion states $\pi^+\pi^-\pi^+\pi^-$ and $\pi^+\pi^-\pi^0\pi^0$ in the $1-2 {\rm GeV}$ range due to their special importance for the hadronic contribution to the muon anomaly (see above). In figures \[fig:fig2\], \[fig:fig3\] and \[fig:fig4\] the $M^2_{\rm hadr}$ differential event rates for the states $\pi^+\pi^-\gamma$, $\pi^+\pi^-\pi^+\pi^-\gamma$ and $\pi^+\pi^-\pi^0\pi^0\gamma$ are shown, where $M^2_{\rm hadr}$ is the squared invariant mass of the hadronic (muonic) system. 
A bin width of $0.04 {\rm GeV}$ has been chosen. The plots show the event yield for an integrated luminosity of $1\,{\rm fb}^{-1}$ and for realistic acceptance cuts (see figure caption for more details). There are no limitations from statistics, since the event yield is of the order of $10\,000$ events in almost the entire energy range of interest. In figs. \[fig:fig2\] and \[fig:fig4\], in addition to the hadronic channels, the yield of $\mu^+\mu^-\gamma$ events is shown, proving that a normalization to muon events is feasible from the statistical point of view. In the following we briefly present the main experimental issues to be studied: - [The KLOE drift chamber [@dc] allows a high resolution measurement of the invariant mass $M^2_{\rm hadr}$ for the fully charged hadronic channels. In the case of the $\pi^+\pi^-\pi^0\pi^0\gamma$ channel the experimental challenge is the correct $\pi^0$ reconstruction and a possible unfolding of the mass spectrum due to the limited resolution of the KLOE electromagnetic calorimeter [@emc].]{} - [The suppression of FSR is of great importance for a successful application of the radiative return (see discussion above). Fortunately at $W=2 {\rm GeV}$ the pion form factor is very small such that the relative amount of FSR in the two-pion channel will also be reduced.]{} - [In contrast to the present KLOE analysis there will be no background from $\phi$ decays (e.g. $\phi \to \pi^+\pi^-\pi^0$) and therefore a much reduced background contamination can be expected at DAFNE2. Moreover, the Bhabha cross section is also considerably reduced with respect to the present DA$\Phi$NE machine.]{} - [Above $M^2_{\rm hadr}=2$ GeV$^2$ the two-pion cross section is decreasing rapidly (see fig. \[fig:fig2\]) while the muonic cross section is high. An efficient separation of pions and muons might become critical in this region.]{} - [KLOE does not have experience in the measurement of channels where four tracks originate from the interaction point. 
Special reconstruction software has to be developed for the analysis of the $\pi^+\pi^-\pi^+\pi^-\gamma$ channel.]{} In order to understand the final precision for these radiative return measurements, a dedicated feasibility study, including the KLOE detector simulation environment, is needed. We want to stress that no a-priori limitations for a measurement on the level of a few percent can be seen at the moment. This is sufficient for a sizeable reduction of the contribution above $1$ GeV to the error on $a_\mu^{\rm hadr}$. The experimental issues discussed above are similar in the case of an energy scan and do not represent a drawback of the radiative return method. Conclusions =========== DAFNE2 provides the possibility to measure the hadronic cross section in the $1-2 {\rm GeV}$ energy range. The radiative return seems to be a feasible option for these cross section measurements. Special emphasis should be put on the two-pion and four-pion channels $>1{\rm GeV}$ due to their importance for an improved evaluation of the hadronic contribution to the muon anomaly. The long term goal is a reduction of the error of the hadronic contribution to the muon anomaly to a value $\delta a_{\mu}^{\rm hadr}=2\cdot10^{-10}$. DAFNE2 can make a considerable contribution to this goal. Competition comes from the radiative return activities at BABAR and possible future activities at VEPP-2000, BELLE and CLEO-c. [9]{} For further details see the DA$\Phi$NE homepage http://www.lnf.infn.it/acceleratori C. Biscari, [*Future Plans for $e^+e^-$ Factories*]{}, presented at the 2003 Particle Accelerator Conference (PAC2003), Portland/Oregon, May 12-16, 2003\ [**LNF-03/012(P)**]{} G. Benedetti, [*Feasibility Study of a $2$ GeV Lepton Collider at DAFNE*]{}, presented at the 2003 Particle Accelerator Conference (PAC2003), Portland/Oregon, May 12-16, 2003\ [**LNF-03/012(P)**]{} M. Davier, S. Eidelman, A. Höcker and Z. Zhang, hep-ph/0308213 G. W. Bennett [*et al.*]{} \[E821 collaboration\], Phys. 
Rev. Lett. [**89**]{} (2002) 101804 G. W. Bennett [*et al.*]{} \[E821 collaboration\]\ hep-ex/0401008 R. R. Akhmetshin [*et al.*]{}, Phys. Lett. [**B527**]{} (2002) 161 R. R. Akhmetshin [*et al.*]{}, Phys. Lett. [**B578**]{} (2004) 285 S. Müller for the \[ KLOE collaboration \], [*Measurement of $\sigma(e^+e^- \to \pi^+\pi^-)$ at DA$\Phi$NE with the radiative return*]{}, these proceedings, hep-ex/0312056 M. Davier for the \[ BABAR collaboration \], [*Progress on R-measurement through ISR with BaBar*]{} hep-ex/0312063 S. Binner, J. H. Kühn, K. Melnikov, Phys. Lett. [**B459**]{} (1999) 279 G. Rodrigo, A. Gehrmann-De Ridder, M. Guilleaume, J. H. Kühn, Eur. Phys. J. [**C22**]{} (2001) 81 G. Rodrigo, H. Czyż, J. H. Kühn and M. Szopa, Eur. Phys. J. [**C24**]{} (2002) 71 H. Czyż, A. Grzelińska, J. H. Kühn and G. Rodrigo, Eur. Phys. J. [**C27**]{} (2003) 563 H. Czyż, A. Grzelińska, J. H. Kühn and G. Rodrigo, hep-ph/0308312 A. Denig [*et al.*]{}, presented at the workshop DA$\Phi$NE99, Frascati/Italy, November 16-19, 1999\ [**Frascati Physics Series XVI**]{} (2002) 569 A. Denig [*et al.*]{} \[KLOE collaboration\], presented at the workshop $e^+e^-$ Physics at Intermediate Energies, Stanford/California, April 30 - May 2, 2001, eConfC010430 (2001) T07, hep-ex/0106100 J. Gluza, A. Höfer, S. Jadach, F. Jegerlehner, Eur. Phys. J. [**C28**]{} (2003) 261 M. Adinolfi [*et al.*]{} \[KLOE collaboration\], Nucl. Instrum. Meth. A [**488**]{} (2002) 51 M. Adinolfi [*et al.*]{} \[KLOE collaboration\], Nucl. Instrum. Meth. A [**482**]{} (2002) 364 [^1]: The KLOE Collaboration: A. Aloisio, F. Ambrosino, A. Antonelli, M. Antonelli, C. Bacci, G. Bencivenni, S. Bertolucci, C. Bini, C. Bloise, V. Bocci, F. Bossi, P. Branchini, S. A. Bulychjov, R. Caloi, P. Campana, G. Capon, T. Capussela, G. Carboni, G. Cataldi, F. Ceradini, F. Cervelli, F. Cevenini, G. Chiefari, P. Ciambrone, S. Conetti, E. De Lucia, P. De Simone, G. De Zorzi, S. Dell’Agnello, A. Denig, A. Di Domenico, C. Di Donato, S. Di Falco, B. Di Micco, A. 
Doria, M. Dreucci, O. Erriquez, A. Farilla, G. Felici, A. Ferrari, M. L. Ferrer, G. Finocchiaro, C. Forti, A. Franceschi, P. Franzini, C. Gatti, P. Gauzzi, S. Giovannella, E. Gorini, E. Graziani, M. Incagli, W. Kluge, V. Kulikov, F. Lacava, G. Lanfranchi, J. Lee-Franzini, D. Leone, F. Lu, M. Martemianov, M. Matsyuk, W. Mei, L. Merola, R. Messi, S. Miscetti, M. Moulson, S. Müller, F. Murtas, M. Napolitano, A. Nedosekin, F. Nguyen, M. Palutan, E. Pasqualucci, L. Passalacqua, A. Passeri, V. Patera, F. Perfetto, E. Petrolo, L. Pontecorvo, M. Primavera, F. Ruggieri, P. Santangelo, E. Santovetti, G. Saracino, R. D. Schamberger, B. Sciascia, A. Sciubba, F. Scuri, I. Sfiligoi, A. Sibidanov, T. Spadaro, E. Spiriti, M. Testa, L. Tortora, P. Valente, B. Valeriani, G. Venanzoni, S. Veneziano, A. Ventura, S. Ventura, R. Versaci, I. Villella, G. Xu. [^2]: Invited talk given at the workshop ’$e^+e^-$ in the 1-2 GeV range: Physics and Accelerator Prospects’, Alghero/Sardinia, Sept. 10-13, 2003, Internet: http://www.lnf.infn.it/conference/d2/gener.html [^3]: Ref. [@davhoe2] has been used for this calculation. [^4]: and the possibility to measure the center-of-mass energy of the machine very precisely using the resonant depolarization technique.
--- abstract: 'We describe the physics of an articulated toy with an internal source of energy provided by a spiral spring. The toy is a funny low cost kangaroo which jumps and rotates. The study consists of a mechanical and a thermodynamical analysis which makes use of the Newton and center of mass equations, the rotational equations and the first law of thermodynamics. This amazing toy provides a nice demonstrative example of how new physics insights can be brought about when links with thermodynamics are established in the study of mechanical systems.' --- [**The physics of articulated toys — a\ jumping and rotating kangaroo** ]{} [ J. Güémez$^{a,}$[^1], M. Fiolhais$^{b,}$[^2] ]{} [*$^a$ Departamento de Física Aplicada*]{}\ [*Universidad de Cantabria*]{}\ [*E-39005 Santander, Spain*]{}\ [*$^b$ Departamento de Física and Centro de Física Computacional*]{}\ [*Universidade de Coimbra*]{}\ [*P-3004-516 Coimbra, Portugal*]{} Introduction {#sec:intro} ============ Toys can be helpful in increasing students’ motivation in the classroom. In presentations for popularizing and communicating science to more general audiences, they can also help increase the appreciation of, and interest in, physical science, sometimes in such a way that everyone (especially non-scientists) will probably grasp some fundamental concepts. However, they should be used with care: the physical description of some toys is not so easy [@guemezantigos], even in the framework of simplified models, and their usefulness is sometimes limited. But, at least for motivation purposes, they are always valuable [@guemez09]. In this paper we describe the motion of a toy that, due to an internal source of energy, jumps while rotating. The toy is a kangaroo but, as far as the physics description is concerned, being an object with the form of a kangaroo is just a detail; it could be something else (even a living being). 
Among the numerous possible objects suitable for illustration and demonstration purposes, a toy, performing on top of the instructor’s table during class, is definitely more likely to attract the students’ attention. The accurate description of all steps of the toy’s motion is intricate but some simplified assumptions are possible and meaningful. This allows us to transform the real complicated problem into a feasible one, which is useful, in this particular case, for establishing a correspondence between the descriptions of translations and rotations, on the one hand, and, on the other hand, to bridge mechanics and thermodynamics. The jump of the kangaroo is funny, possibly even a bit mysterious, and our aim is to apply the pertinent physical laws to describe and understand the various phases of the motion. Though the mechanical description of rotations and translations is the result of the very same Newton’s second law, students have a clear preference for translations. Since our toy performs a movement that is a combination of a translation and a rotation, it can be useful for underlining the parallelism between the mechanical treatment of each type of motion [*per se*]{}. We shall assume constant forces and constant torques, therefore the real problem reduces to an almost trivial one. Nevertheless, there are some subtle points that are easy to emphasize with a simple example. In previous papers [@guemez13; @guemez14] we analyzed, from the mechanical and thermodynamical point of view, quite a few systems, essentially either in translation or in rotation. Here we combine both types of motion and, again, we stress the thermodynamical aspects in each phase of the motion, their similarities and asymmetries. The design and the manufacturing of the toy are such that the kangaroo performs a full rotation (360$^{\rm o}$) in the air while it jumps. 
This is because of its mass, of its shape (therefore of its moment of inertia), of the articulations between the legs and the body and also because of the power provided by the internal source of energy. The manufacturer should define and include an internal energy source suitable for the toy to perform a full turn around the centre of mass while its centre of mass rises sufficiently high and drops down in the air. If a rotation angle of $\sim 360^{\rm o}$ is not met, the toy doesn’t work. Our demonstration kangaroo is a plastic three-euro toy, bought from a street vendor, whose source of internal energy is a spiral spring (so, it is a low cost and very ecological item — no batteries inside). ![\[fig:cang1\] The real toy performing a back somersault. From a movie, we extracted the pictures that represent the different phases of the motion: the preparation of the jump (a)-(b); the jump when the toy has no contact with the ground (c)-(h); and the final phase (i)-(j) when the toy stops after an initial contact with the ground. ](canguru1.eps){width="17cm"} After providing the necessary energy to the spiral spring, we put the toy on top of a table and release it. A back somersault by the toy immediately starts, as shown in figure \[fig:cang1\], and it comprises three phases: (1) the kangaroo, initially with flexed legs, suddenly stretches them while raising its center of mass, increasing its speed and starting to rotate (a)-(b); (2) in this phase, (c)-(h), the toy has no contact with the ground, and rotates while its center of mass describes a parabola; (3) this is the “landing" phase that starts when the feet first come in contact with the ground, and it lasts until the toy completely stops (i)-(j). Mechanics and thermodynamics are two different branches of physics with many interrelations. However, interestingly enough, in most university physics curricula, as well as in the high school, thermodynamics and mechanics almost do not intersect. 
This is not the case in everyday life where both are strongly connected: a most common example is the automobile [@guemez13b], but there are many other examples [@guemez13c]. We shall see that our funny kangaroo also helps in illustrating this kind of bridging. There is no “physics surprise" in the interpretation of the motion: we just have to apply, in combination, basic laws of mechanics and thermodynamics. With reasonable simplifying assumptions, that do not spoil the essence of the physical description, we are able to reduce the real problem to a classroom example that certainly students enjoy while they learn how basic physics principles work. In section 2 we briefly introduce the general formalism that will be applied in the analysis of the motion of the toy. The discussion of the dynamics is presented in section 3 and it is essentially known. However, the subtle energetic issues related to the motion in phases 1 and 3, described in section 4, are probably less known or undervalued by instructors. In section 5 we present the conclusions. Mechanics and thermodynamics ============================ Let us briefly review the basic theoretical framework needed for the mechanical and thermodynamical analyses. A more detailed presentation can be found in [@guemez13; @guemez14]. For a system of constant mass $m$, Newton’s second law can be expressed by $$\Delta \vec p_{\rm cm}= \int_{t_0}^t {\vec F}_{\rm ext} \, {\rm d} t \, , \quad \[eq1\]$$ where ${\vec F}_{\rm ext}= \sum_i \vec f^{\rm \ ext}_i$ is the resultant external force acting upon the system and $\Delta \vec p_{\rm cm}=m \Delta \vec v_{\rm cm}$ is the variation of the system center of mass linear momentum in the time interval $\Delta t= t-t_0$. Equation (\[eq1\]) already incorporates the third law of mechanics which implies that the resultant of the internal forces vanishes, ${\vec F}_{\rm int}=\sum_j {\vec f}^{\rm \ int}_j = \vec 0$. 
The above equation, expressed in a vector form, can also be equivalently given in a scalar form by means of $$\Delta K_{\rm cm}= \int {\vec F}_{\rm ext} \cdot {\rm d} \vec r_{\rm cm} \, , \quad \[eq2\]$$ the so-called center of mass equation. On the left hand side, one has the variation of the center of mass kinetic energy ($K_{\rm cm}= {1 \over 2} \, m \, v^2_{\rm cm}$) and, on the right hand side, the pseudo work [@penchina78; @sherwood83; @mallin92; @mungan05; @jewett08v] performed by the resultant external force. For the pseudo-work, the resultant force and the center of mass displacement should be considered \[see the integral in (\[eq2\])\], whereas for the [*work*]{} it is each force and its own displacement that matters. Equation (\[eq1\]), which is an integral form of Newton’s equation, and the center of mass equation (\[eq2\]) are physically equivalent, though they use different physical magnitudes — they express the same fundamental law of mechanics, so they are general and do apply to all systems undergoing any process. When the mechanical systems perform rotations, other forms of Newton’s fundamental law are better suited such as [@varios; @tipler04] $$\Delta L= I \, \Delta \omega = \int_{t_0}^t \tau_{\rm ext} \, {\rm d} t \, , \qquad \Delta K_{\rm rot}= \Delta \left( {1 \over 2} \, I \, \omega^2 \right) = \int \tau_{\rm ext} \, {\rm d} \phi \quad \[eq3\]$$ for the rotation of a system of constant moment of inertia $I$ around a principal axis of inertia containing the center of mass. The system is acted upon by an external torque, of magnitude $\tau_{\rm ext}$, whose direction is along the rotation axis. These equations are simplified versions of the most general ones, and in (\[eq3\]) one does not need to consider the vector character of the angular momentum, $\vec L$, of the angular velocity, $\vec \omega$, or of the torque, $\vec \tau_{\rm ext}$. The two equations (\[eq3\]) for the rotation, together with (\[eq1\]) and (\[eq2\]), for the translation, are the pertinent ones for the mechanical description of the motion presented in the next section. 
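Under the constant-force, constant-torque assumption adopted below, equations (\[eq1\]) and (\[eq3\]) each reduce to a single multiplication, and the translation/rotation parallel becomes explicit. The sketch below shows this bookkeeping for the launch phase; all numerical values are invented for illustration (the paper quotes none for the toy):

```python
# Illustrative (made-up) launch-phase parameters for a light toy:
m = 0.050      # mass (kg)
I = 2.0e-5     # moment of inertia about the center of mass (kg m^2)
dt0 = 0.05     # duration of the contact phase (s)
F_net = 0.5    # magnitude of the constant net upward force (N)
tau = 2.0e-3   # magnitude of the constant torque about the c.m. (N m)

# Translation, eq. (eq1): Delta p_cm = F_net * dt0, starting from rest.
v0 = F_net * dt0 / m
# Rotation, first equation in (eq3): Delta L = I * omega0 = tau * dt0.
omega0 = tau * dt0 / I

print(v0, omega0)  # launch speed (m/s) and spin (rad/s) at lift-off
```

The two lines of physics are formally identical: force maps to torque, mass to moment of inertia, and linear momentum to angular momentum.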
A different physical law, also applicable to any system and to any process, is the first law of thermodynamics which, incidentally, also involves typical mechanical quantities. That principle is a statement on energy conservation and it can be expressed by the equation $$\Delta K_{\rm cm}+ \Delta U= W_{\rm ext} + Q \, . \quad \[totale\]$$ On the left hand side, in addition to the variation of the center of mass kinetic energy, one has the variation of the internal energy and, on the right hand side, the energy fluxes to/from the system are expressed. In other words, the left hand side of equation (\[totale\]) refers to the total energy variation of the system, whereas the right hand side expresses the energy that crosses the system’s boundary i.e. the energy that enters or leaves the system through its boundary. This energy transfer is the sum of two contributions: the [*external work*]{} — i.e. the sum of the works performed by each external force, $w^{\rm ext}_i=\int \vec f^{\rm \ ext}_i\cdot {\rm d} \vec r_i$ — which is given by $W_{\rm ext}=\sum_i w^{\rm ext}_i$; and $Q$, the heat flow to/from the surroundings. If either $W_{\rm ext}$ or $Q$ is positive, that means an energy transfer to the system, leading to an increase of the left hand side of (\[totale\]); if any of them is negative, that means an energy flow to the surroundings with a corresponding decrease of the total energy of the system. It is worth noticing that, whereas the pseudo-work of the resultant external force leads to the variation of the centre of mass kinetic energy, as stated by equation (\[eq2\]), the real work, together with the heat, may change both that kinetic energy and the internal energy of the system, as stated by equation (\[totale\]). By expressing the first law of thermodynamics in terms of equation (\[totale\]) one implicitly assumes that $\Delta U$ includes [*all*]{} energy variations that may contribute to the total internal energy variation of the system. 
Those include the variations of rotational \[such as $\Delta K_{\rm rot}$ as given by equation (\[eq3\])\] and translational kinetic energies with respect to the center of mass, in addition to the variations due to temperature changes, variations of internal chemical energy (associated with chemical reactions), work performed by internal forces, etc. Of course, any process should also comply with the second law, besides the first law of thermodynamics. We stress that equations (\[eq2\]) and (\[totale\]) provide, in general, complementary information, since they correspond to two distinct fundamental laws of nature. The study of the movement of the toy described in the next sections illustrates that complementarity. Back somersault by a “kangaroo" {#sec:cangaroo} =============================== ![\[fig:cang2\] Diagrammatic description of the three phases of the motion. In the first phase (1) there is contact with the ground while the centre of mass moves horizontally and vertically; in phase (2) the centre of mass describes a parabola while the toy performs a 360$^{\rm o}$ turn around; the final phase (3) is the deceleration phase.](canguru2.eps){width="16cm"} We have shown, in figure \[fig:cang1\], the real toy performing the back somersault that comprises three phases. In figure \[fig:cang2\] we illustrate pictorially these three phases of the kangaroo’s motion. The forces exerted by the ground in phases 1 and 3 are surely time dependent [@haugland13]. As a simplifying assumption, we replace these variable forces by constant forces that produce, in each phase, exactly the same impulse as the real force [@guemez13c]. In phase 1, for instance, the force $\vec F (t)$ exerted by the ground is replaced by the [*constant*]{} force $\vec F$ such that $\int_{\Delta t_0} \vec F (t) \, {\rm d} t= \vec F \Delta t_0$, where $\Delta t_0$ is the duration of that phase, and similarly for phase 3. 
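The impulse-equivalence assumption can be checked numerically for any assumed force profile. Below we take a half-sine contact force $F(t)=F_0\sin(\pi t/\Delta t_0)$ (our assumption, in the spirit of the $F_0\sin(\xi t)$ form discussed later in the paper) and verify that the equivalent constant force is the time average of $F(t)$, analytically $2F_0/\pi$ for this profile:

```python
import math

F0, dt0 = 1.0, 0.05  # illustrative peak force (N) and contact time (s)

def force(t):
    # Assumed half-sine contact-force profile during phase 1.
    return F0 * math.sin(math.pi * t / dt0)

def trapz(f, a, b, n=100000):
    # Plain trapezoidal rule for the impulse integral.
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

impulse = trapz(force, 0.0, dt0)   # int F(t) dt over the contact time
F_avg = impulse / dt0              # equivalent constant force
print(F_avg, 2.0 * F0 / math.pi)   # numerical value vs analytic 2 F0 / pi
```

Any measured profile could be substituted for `force` without changing the rest of the bookkeeping, which is the point of the approximation.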
At the end of the initial phase, the center of mass velocity is $\vec v_0$ and, at the beginning of phase 3, the center of mass velocity is $\vec v{\,}'_0$. Regarding the rotation, the torque with respect to the center of mass, in phase 1, is also a time dependent function. The torque acts during the time interval $\Delta t_0$, producing a certain variation of the angular momentum of the system (note that now the moment of inertia of the toy, $I$, also slightly varies because of the legs’ articulations; however, this variation is rather small — the legs are very light in comparison with the rest of the body — and we can safely adopt a constant $I$). Here, our approximation consists in assuming a constant torque such that $\int_{\Delta t_0} \vec \tau (t) \, \d t= \vec \tau \Delta t_0$, where $\vec \tau$ is a constant vector. The torque provides a clockwise angular acceleration, with $\omega_0$ being the angular velocity at the end of phase 1. In figure \[fig:cang3\] we represent the [*constant*]{} forces and [*torques*]{} in phases 1 and 3. The constant torque $\vec \tau{\,}'$ in phase 3 leads to a counterclockwise angular acceleration that reduces to zero the initial angular velocity ($\omega_0$). Of course, in all phases, the weight, $\vec G$, is always acting, but its torque with respect to the center of mass always vanishes. ![\[fig:cang3\] Constant forces acting on the kangaroo during the initial (1) and the final (3) phases. Note that the torques produced by the contact forces are opposite; in phase (1) the torque leads to the rotation of the toy, whereas the torque at phase (3) produces an angular deceleration and eventually the rotation ceases. ](canguru3.eps){width="11cm"} The fact that we are considering constant forces and torques considerably simplifies the integrals in the general expressions presented in section 2.
These forces and torques are time dependent, and they also depend on the position of the centre of mass (the forces) and on the rotation angle, $\phi$ (the torques). The real forces and torques produce certain impulses and angular impulses (right hand sides of equation (\[eq1\]) and of the first equation (\[eq3\])). The simplifying assumption consists in replacing the real forces and torques by constant vectors that produce exactly the same impulses. In this way we are simplifying, but not oversimplifying, the problem, making it tractable, even in the classroom context. The constant forces and torques can be regarded as average ones, producing the same momentum (linear and angular) variations as the real ones. Of course we could use a probably more realistic force such as $F(t)=F_0\sin(\xi t)$, where $\xi$ is a parameter. This would complicate the approach, because the integral in (\[eq1\]) would no longer be trivial. With a sophisticated force sensor (the toy is very light) it would be possible, in principle, to measure $\vec F(t)$ and $\vec F'(t)$ and fit them with analytic functions. With known time dependent analytic functions the integral in (\[eq1\]) would be straightforward, but the integral in (\[eq2\]) would still require the knowledge of $\vec F=\vec F(x_{\rm cm})$ (the same for $\vec F'$) and the integral in (\[eq3\]) would require the knowledge of $\vec \tau=\vec \tau(\phi)$ (the same for $\vec \tau '$). A more quantitative analysis of the motion is beyond the scope of the present study, and this is why we are using constant forces (and torques), to keep the problem within manageable limits. Phase 2 consists of a parabolic motion of the center of mass (neglecting air resistance, of course) combined with a uniform rotation. In figure \[fig:cang4\] we show the trajectory of the center of mass of the toy. Assuming constant forces, the trajectory in phase 1 is exactly a straight line.
In fact, for a constant force along $x$ and $y$, and for a body that starts from rest, the accelerations $a_x$ and $a_y$ are constants. Therefore, $x={1\over 2} a_x t^2$ and $y={1\over 2} a_y t^2$; hence, by eliminating $t$ in both equations, one obtains $y=cx$ ($c$ is a constant), whose graph is a straight line. In phase 3 the function $y=y(x)$ is more complicated because there are initial velocities along $x$ and $y$, and $y(x)$ is, in general, not a linear function. However, since the center of mass only moves very little and for a very short time, the trajectory can be approximated by a straight line (the difference between the straight line — phase 3 in figure \[fig:cang4\] — and the actual trajectory is tiny, even indistinguishable within the precision of the drawing). ![\[fig:cang4\] Kangaroo’s center of mass trajectory. In the first phase it is exactly a straight line, whereas in the second phase it is a parabola. In the third phase it is almost indistinguishable from a straight line.](canguru4.eps){width="9cm"} In summary, regarding the center of mass motion, it is uniformly accelerated along both $x$ and $y$ in phase 1, it is a projectile motion in phase 2 and, finally, it is a uniformly retarded motion in phase 3, along $x$ as well as along $y$. Regarding the rotational motion, the angular acceleration is constant in phase 1, in phase 2 the angular velocity is constant, and in phase 3 the angular acceleration is again constant, producing a uniformly retarded angular motion. ![\[fig:cang5\] Angular displacement around the axis that passes through the kangaroo’s center of mass versus time. The relative durations of the initial and final phases are exaggerated. ](canguru5.eps){width="7.5cm"} Concerning the angular displacement, the toy performs a complete turn around its centre of mass. Most of the time the toy is in the air, hence the 360$^{\rm o}$ turn is almost entirely executed in phase 2.
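The claim that constant accelerations acting on a body initially at rest produce a straight-line trajectory is easy to check numerically. A minimal sketch (the acceleration values are purely illustrative, not taken from the toy):

```python
# Sketch: with constant a_x, a_y and zero initial velocity,
# x(t) = a_x t^2 / 2 and y(t) = a_y t^2 / 2, so y/x = a_y/a_x at all times.
a_x, a_y = 2.0, 5.0          # illustrative accelerations (m/s^2)

ratios = []
for i in range(1, 11):
    t = 0.01 * i             # sample times during phase 1 (s)
    x = 0.5 * a_x * t**2
    y = 0.5 * a_y * t**2
    ratios.append(y / x)

# y = c x with c = a_y / a_x, independent of t: a straight line
assert all(abs(r - a_y / a_x) < 1e-12 for r in ratios)
```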
In phase 1, the supposedly constant torque produces a quadratic dependence of the angular displacement on time: $\phi(t) = {1\over 2} \alpha t^2$, where $\alpha$ is the angular acceleration; the function $\phi(t)$ varies linearly with time during phase 2: $\phi(t) = \Delta \phi_0 + \omega_0 t$, where $\omega_0$ is the angular velocity at the end of phase 1 and during phase 2; and it again varies quadratically in phase 3: $\phi(t) = 2\pi-\Delta \phi'_0 + \omega_0 t - {1\over 2} \alpha' t^2$, where $\alpha'$ is the magnitude of the angular acceleration in the third phase. The time, $t$, in the previous expressions starts at the beginning of each phase. In figure \[fig:cang5\] the function $\phi=\phi(t)$ is shown, assuming that the turn corresponds to exactly $2 \pi$ (in practice, this is only an approximate value) and exaggerating, to make the figure clearer, the durations of phases 1 and 3. In addition to our previous assumptions, we consider that the magnitudes of the velocities $v_0$ and $v'_0$ are the same, i.e. the kangaroo touches the ground with its center of mass exactly at the level it occupied when the parabolic motion started (such an assumption is convenient to simplify the analysis but it could be relaxed). For the sake of a general discussion we let the displacement $\Delta x_0$ be different from $\Delta x'_0$ (the same for the initial and final $y$ displacements). In such a case, the magnitudes of the forces $\vec F$ and $\vec F'$ are different, as are the time intervals during which they act. Indeed, we can remain more general, since this generality has no drastic consequence for the formalism or for its clarity. So, we assume that the constant contact forces, $\vec F$ and $\vec F'$, do not necessarily have the same magnitude. In figure \[fig:cang6\] we represent the vertical component of the resultant force, $R_y$, acting on the toy (part \[a\]) and the horizontal component of the same resultant force, $R_x$ (part \[b\]).
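The piecewise form of $\phi(t)$ described above can be assembled into a small numerical model. All parameter values below are illustrative assumptions, and the phase-2 duration is chosen so that the total turn is exactly $2\pi$:

```python
import math

# Sketch of the piecewise angular displacement phi(t).  Illustrative values:
# alpha and the phase durations are invented; alpha' is fixed by requiring
# that the angular velocity vanish at the end of phase 3.
alpha, dt0, dt0p = 200.0, 0.05, 0.05    # rad/s^2, s, s
omega0 = alpha * dt0                    # angular velocity after phase 1
dphi0 = 0.5 * alpha * dt0**2            # turn accumulated in phase 1
alpha_p = omega0 / dt0p                 # phase-3 angular deceleration
dphi0p = 0.5 * omega0 * dt0p            # turn accumulated in phase 3
dt = (2 * math.pi - dphi0 - dphi0p) / omega0   # phase-2 duration (full turn)

def phi(t):
    """Angular displacement; t measured from the start of the jump."""
    if t <= dt0:                         # phase 1: uniform angular acceleration
        return 0.5 * alpha * t**2
    if t <= dt0 + dt:                    # phase 2: uniform rotation
        return dphi0 + omega0 * (t - dt0)
    s = t - dt0 - dt                     # phase 3: uniform angular deceleration
    return 2 * math.pi - dphi0p + omega0 * s - 0.5 * alpha_p * s**2

total = phi(dt0 + dt + dt0p)
assert abs(total - 2 * math.pi) < 1e-9            # one full turn overall
assert abs(phi(dt0 + dt) - (2 * math.pi - dphi0p)) < 1e-9  # continuity at phase 3
```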
Again, the time intervals in the initial and final phases are exaggerated. The important point to notice is that, for each graph, the algebraic sum of the represented areas, i.e. of the impulses along $y$ and $x$, should add up to zero: the toy starts from rest and it comes to rest at the end of the jump, therefore the total variation of the linear momentum is zero in both the $x$ and $y$ directions. The same is true for the angular impulses: those in phases (1) and (3) cancel out and, in phase 2, the angular impulse is zero. ![\[fig:cang6\] Resultant force acting on the toy: vertical component \[a\], and horizontal component \[b\]. The represented areas are the impulses. The impulses, in each graph, add up to zero. The relative durations of the initial and final phases are exaggerated. There is a picture similar to \[b\] for the angular impulse (positive in phase 1, negative in phase 3 and null during phase 2). ](canguru6.eps){width="16cm"} Each of the contact forces has a vertical component (normal reaction) and a horizontal component (static friction force). We may decompose the contact forces according to $\vec F= f\, \vec{\rm e}_x + N \, \vec{\rm e}_y$ and $\vec F'= -f'\, \vec{\rm e}_x + N' \, \vec{\rm e}_y$. To fix the notation, we summarize in table \[tab1\] the displacements, velocity variations, accelerations and other magnitudes for each phase of the motion.
----------------------------  --------------------------------  --------------------  ------------------------------------
                              Phase 1                           Phase 2               Phase 3
Duration                      $\Delta t_0$                      $\Delta t$            $\Delta t'_0$
Force                         $(f, N - G)$                      $(0, -G)$             $(-f', N'-G)$
Acceleration                  $(a_x,a_y)$                       $(0, -g)$             $(-a'_x,a'_y)$
Velocity variation            $(v_{0x},v_{0y})$                 $(0,-2v_{0y})$        $(-v_{0x},v_{0y})$
Displacement                  $(\Delta x_{0}, \Delta y_{0})$    $(D, 0)$              $(\Delta x'_{0}, - \Delta y'_{0})$
Torque                        $\tau$                            $0$                   $-\tau'$
Angular acceleration          $\alpha$                          $0$                   $-\alpha'$
Angular velocity variation    $\omega_0$                        $0$                   $-\omega_0$
Angular displacement          $\Delta \phi_0 \sim 0^{\rm o}$    $\sim 360^{\rm o}$    $\Delta \phi'_0\sim 0^{\rm o}$
----------------------------  --------------------------------  --------------------  ------------------------------------

: \[tab1\] Magnitudes at each phase of the kangaroo’s motion. All quantities are positive, so the negative entries are explicitly written with a minus sign. The torque is along the $z$ axis, perpendicular to the $xy$ plane.

Phases (1) and (3) are the most interesting ones, since phase (2) corresponds to the well known projectile motion. Equation (\[eq1\]) and the first equation (\[eq3\]), on the one hand, and equation (\[eq2\]) and the second equation in (\[eq3\]), on the other hand, lead to $$\left\{ \begin{array}{rl} m v_{0x} & = f\, \Delta t_0\\ m v_{0y} & = (N -G)\, \Delta t_0\\ I \omega_0 & = \tau\, \Delta t_0 \end{array} \right. \quad\mbox{and}\quad \left\{ \begin{array}{rl} {1\over 2}\, m v_0^2 & = f\, \Delta x_0 + (N-G)\,\Delta y_0\\ {1\over 2}\, I \omega_0^2 & = \tau\, \Delta \phi_0\, , \end{array} \right.$$ \[phase1a\] where $m$ is the mass of the toy and $I$ its moment of inertia (that may slightly vary with time but we assume it is constant). Moreover, equation (\[totale\]) leads to $${1\over 2}\, m v_0^2 + {1\over 2}\, I \omega_0^2 + \Delta U_{\kappa} = - G\, \Delta y_0\, ,$$ \[phase1b\] where $\Delta U_{\kappa}$ is the part of the internal energy variation due to the spiral spring (this is the potential elastic energy delivered by the spring). The other part of the variation of the internal energy of the system is the rotational kinetic energy in (\[phase1b\]).
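The phase-1 impulse-momentum relations can be inverted for the average contact force and torque. A minimal sketch, with invented (purely illustrative) values for the toy's parameters:

```python
# Sketch: invert the phase-1 impulse-momentum relations
#   m v_0x = f dt0,  m v_0y = (N - G) dt0,  I omega0 = tau dt0
# for the average contact-force components and the average torque.
# All numbers are illustrative assumptions, not measured values for the toy.
m, g = 0.05, 9.8          # toy mass (kg) and gravitational acceleration (m/s^2)
I = 2.0e-5                # moment of inertia about the cm (kg m^2)
v0x, v0y = 0.4, 1.0       # cm velocity at the end of phase 1 (m/s)
omega0 = 10.0             # angular velocity at the end of phase 1 (rad/s)
dt0 = 0.05                # duration of phase 1 (s)

G = m * g                 # weight (N)
f = m * v0x / dt0         # average static-friction (horizontal) component
N = G + m * v0y / dt0     # average normal (vertical) component
tau = I * omega0 / dt0    # average torque about the cm

assert N > G              # while launching, the ground pushes harder than the weight
```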
The corresponding equations for phase 3 are $$\left\{ \begin{array}{rl} m v_{0x} & = f'\, \Delta t'_0\\ m v_{0y} & = (N' -G)\, \Delta t'_0\\ I \omega_0 & = \tau'\, \Delta t'_0 \end{array} \right. \quad\mbox{and}\quad \left\{ \begin{array}{rl} {1\over 2}\, m v_0^2 & = f'\, \Delta x'_0 + (N'-G)\,\Delta y'_0\\ {1\over 2}\, I \omega_0^2 & = \tau'\, \Delta \phi'_0 \end{array} \right.$$ \[phase3a\] and $$-{1\over 2}\, m v_0^2 - {1\over 2}\, I \omega_0^2 = G\, \Delta y'_0 + Q\, ,$$ \[phase3b\] where $Q$ is the heat transfer to the surroundings during the process (in this phase $\Delta U_\kappa =0$). We stress that neither $\vec F$ nor $\vec F'$ perform any work because (ideally) their application points do not move. As already mentioned, phase 2 corresponds to a projectile motion combined with a uniform rotation. In this part only a conservative force is acting, hence the sum of the translational kinetic energy, of the rotational kinetic energy and of the potential gravitational energy remains constant (the rotational kinetic energy, ${1\over 2} I \omega_0^2$, is, itself, constant). For this phase, the center of mass equation (\[eq2\]) and the first law of thermodynamics (\[totale\]) provide exactly the same information, namely $${1\over 2}\, m (v^2 - v^2_0) = - G\, (y-y_0)$$ (note that now, in equation (\[totale\]), $\Delta U =0$ and $Q=0$; the process is a purely mechanical one, because we have neglected the air friction). Regarding kinematical aspects, the maximum height reached by the center of mass, the horizontal distance traveled by the center of mass and the time of flight are given by $$h_{\rm max} = \Delta y_0+{v_{0y}^2 \over 2 g}\, , \qquad D= {2 v_{0x} v_{0y} \over g}\, , \qquad \Delta t = {2 v_{0y} \over g}\, .$$ \[rts\] For this phase, we may write $\Delta \phi = \omega_0 \, \Delta t$. If we take $\Delta \phi \sim 2 \pi$, for the time of flight given in (\[rts\]), one finds a relation between the vertical component of the velocity at the end of phase 1 and the angular velocity at that very same moment: $\omega_0={\pi g \over v_{0y}}$.
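The projectile-phase relations, together with the full-turn condition $\Delta \phi \sim 2\pi$, can be checked numerically. In the sketch below, the values of $v_{0x}$, $v_{0y}$ and the launch height are illustrative assumptions:

```python
import math

# Sketch: projectile-phase (phase 2) relations with illustrative values.
g = 9.8                          # m/s^2
v0x, v0y = 0.4, 1.0              # cm velocity components at launch (m/s)
dy0 = 0.03                       # height of the cm at launch (m), assumed

h_max = dy0 + v0y**2 / (2 * g)   # maximum height of the center of mass
D = 2 * v0x * v0y / g            # horizontal range of the parabola
t_flight = 2 * v0y / g           # time of flight

# Requiring a full turn (delta phi = 2 pi) during the flight fixes the
# angular velocity, reproducing omega0 = pi g / v0y from the text:
omega0 = 2 * math.pi / t_flight
assert abs(omega0 - math.pi * g / v0y) < 1e-9
```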
Energetic issues ================ In this section we explicitly show that there are asymmetric energetic issues related to the motion in phases 1 and 3, even though the mechanical description of those phases is symmetric, in the sense that in the first and in the third phases the impulses of the resultant forces are equal in magnitude with opposite directions. Let us go back to phase 1. The energy required for the kangaroo to perform the back somersault is obviously provided by the spiral spring and, ultimately, by the person who winds it. When the spring is released, we assume the elastic energy decrease to be given by $\Delta U_{\kappa}= {1\over 2} \kappa (\theta^2 - \theta^2_0)$, with ${1\over 2}\kappa \theta_0^2$ the initial stored energy and $\kappa$ the elastic constant characterizing the spring (the angle $\theta$, with initial value $\theta_0$, is a decreasing function of time). As this energy decreases, the articulated toy starts the jump. Combining equations (\[phase1a\]) and (\[phase1b\]), we may express the internal energy variation by $$\begin{aligned} \Delta U_{\kappa} & = & - \left( {1\over 2}\, m v_0^2 + {1\over 2}\, I \omega_0^2 + G\, \Delta y_0 \right)\nonumber\\ & = & - \left( f\, \Delta x_0 + N\, \Delta y_0 + \tau\, \Delta \phi_0 \right) <0\, .\end{aligned}$$ The first line explicitly shows that the internal (elastic) energy is converted into other forms of mechanical energy: translational and rotational kinetic energies plus potential gravitational energy. This phase is reversible — it evolves with no variation of the entropy of the universe [@leff12-iv]. In the second line, the kinetic energy terms are expressed by the pseudo-works related to the contact force and to its torque. Regarding phase 3, equations (\[phase3a\]) and (\[phase3b\]) now lead to $$\begin{aligned} Q & = & - \left( {1\over 2}\, m v_0^2 + {1\over 2}\, I \omega_0^2 + G\, \Delta y'_0 \right)\nonumber\\ & = & - \left( f'\, \Delta x'_0 + N'\, \Delta y'_0 + \tau'\, \Delta \phi'_0 \right) <0\, .\end{aligned}$$ This is the heat released in phase 3 of the motion. This energy is simply lost, in the sense that it flows from the system to the surroundings, where it is dissipated, and the entropy of the universe increases [@leff12-v].
The relation between the original internal energy variation and this heat can be found by combining the previous equations, or simply by applying equation (\[totale\]) directly to the whole process: on the left hand side of that equation the variation of the center of mass kinetic energy is zero and the variation of the internal energy is solely $\Delta U_{\kappa}$ (to simplify the discussion we are assuming no temperature variations in the toy); on the right hand side of equation (\[totale\]), the work is only due to the weight (again we stress that neither the normal forces, nor the static friction forces perform any work); finally, there is the heat flow to the surroundings. Altogether, equation (\[totale\]) yields, for the overall process, $$Q=\Delta U_{\kappa} + G\, (\Delta y_0-\Delta y'_0)\, .$$ \[caloor\] For $\Delta y_0=\Delta y'_0$, the energy initially stored in the spring totally dissipates in the surroundings, which behave as a heat reservoir at temperature $T$. If the final position of the center of mass lies at a higher level than the initial one, part of the initially stored energy is used to raise the center of mass of the kangaroo, as expressed in equation (\[caloor\]). In that case, the magnitude of the heat is smaller than the magnitude of the elastic potential energy initially stored in the toy. The overall process is clearly irreversible and, for $\Delta y_0=\Delta y'_0$, the entropy variation of the universe is $\Delta S_{\rm U}= -{Q \over T}= -{\Delta U_\kappa \over T}>0$ (the heat should now be considered from the point of view of the reservoir, and this is the reason for the minus sign).
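The overall energy bookkeeping of equation (\[caloor\]) can be sketched numerically for the symmetric case $\Delta y_0=\Delta y'_0$. The numbers below are illustrative assumptions, not measured values for the toy:

```python
# Sketch of the whole-process energy balance, equation "caloor":
#   Q = Delta U_kappa + G (dy0 - dy0'),   Delta S_U = -Q / T
# with the surroundings acting as a heat reservoir at temperature T.
# All numbers are illustrative; dU_kappa < 0 is the elastic energy released.
m, g, T = 0.05, 9.8, 300.0   # mass (kg), gravity (m/s^2), reservoir temperature (K)
dU_kappa = -0.03             # J, elastic energy delivered by the spiral spring
dy0, dy0p = 0.03, 0.03       # phase-1 rise and phase-3 drop of the cm (m)

G = m * g
Q = dU_kappa + G * (dy0 - dy0p)   # heat released to the surroundings (< 0)
dS_universe = -Q / T              # entropy generated over the whole jump

assert Q < 0 and dS_universe > 0
# With dy0 == dy0p the whole stored elastic energy dissipates as heat:
assert abs(Q - dU_kappa) < 1e-12
```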
When $\Delta x_0=\Delta x'_0$ and $\Delta y_0=\Delta y'_0$, even though the mechanical analysis becomes quite symmetric for the initial and final phases, there is a clear thermodynamical asymmetry: in phase 1, there is mechanical energy production at the expense of the internal energy of the system; in phase 3, the mechanical energy is “lost" in the sense that it dissipates as heat in the surroundings and it cannot be recovered in a useful way. We may say that, at the end of phase 1, the system still keeps the energy in an “organized form" as mechanical energy (hence, no entropy increase), whereas in the final phase there is no mechanism to recover the decrease of mechanical energy: in particular, that energy cannot be stored back in the spiral spring. If that were possible, the kangaroo would perform back somersaults continuously... We all know that this is not the case: after one jump, one has to rewind the spiral spring for the next jump. In a recent publication we discuss the same type of asymmetries in the context of human movements, like jumping and walking [@guemez14b]. As in the example discussed in this paper, in one part of the process there is a direct conversion of internal energy into mechanical energy but, in the other part, the mechanical energy dissipates as heat in the surroundings. Conclusions =========== The motion of the kangaroo studied in this paper can be used in the classroom to explicitly show the correspondence between translations and rotations. The example also serves to demonstrate that, for certain mechanical systems performing movements in which one part is symmetric with respect to the other, the thermodynamical behavior of the two parts may be different.
More precisely, in one part there may be production of mechanical energy, due to an internal energy source, a process that does not increase the entropy, whereas in the other part of the motion there occurs dissipation of mechanical energy, which is transferred to the surroundings as heat, and there is an inescapable entropy increase. We should emphasize that the first phase of the problem studied in this paper is somewhat similar to the springboard diver discussed in ref. [@walker11] — the main difference is that the internal (elastic) energy variation should then be replaced by the variation of the Gibbs free energy in the person’s muscles [@guemez14b; @atkins10]. There are of course many other examples, but the point we would like to emphasize here is the fact that energetic aspects are usually somewhat underrated in textbooks [@walker11 p. 70]. The discussion presented in this paper continues a series of papers [@guemez13; @guemez14; @guemez13b; @guemez13c; @guemez14b] devoted to the link between mechanics and thermodynamics, a tendency that, fortunately, is already present in modern textbooks [@chabay11x]. Our aim is to contribute to filling the gap one encounters in most treatments of classical mechanics, and we do hope that our discussion, motivated by a “kangaroo", is relevant for physics education. [99]{} J. Güémez, C. Fiolhais, M. Fiolhais, [*The Cartesian Diver and the fold catastrophe*]{}, Am. J. Phys. [**70**]{} 710-714 (2002); J. Güémez, R. Valiente, C. Fiolhais, M. Fiolhais, [*Experiments with the drinking bird*]{}, Am. J. Phys. [**71**]{} 1257-1263 (2003) J. Güémez, C. Fiolhais, M. Fiolhais, [*Toys in physics lectures and demonstrations — a brief review*]{}, Phys. Educ. [**44**]{} 53-64 (2009); J. Güémez, C. Fiolhais, M. Fiolhais, [*Juguetes en clases y demostraciones de Física*]{}, Revista Iberoamericana de Física [**6**]{} 45-56 (2010) J. Güémez, M. Fiolhais, [*From mechanics to thermodynamics: analysis of selected examples*]{}, Eur. J. Phys.
[**34**]{} 345-357 (2013) J. Güémez, M. Fiolhais, [*Thermodynamics in rotating systems – analysis of selected examples*]{}, Eur. J. Phys. [**35**]{} (2014) 015013 (14pp) J. Güémez, M. Fiolhais, [*Forces on wheels and fuel consumption in cars*]{}, Eur. J. Phys. [**34**]{} 1005-1013 (2013) J. Güémez, M. Fiolhais, [*The physics of a walking robot*]{}, Phys. Educ. [**48**]{} 455-458 (2013) C. M. Penchina, [*Pseudowork-energy principle*]{}, Am. J. Phys. [**46**]{} 295-296 (1978) B. A. Sherwood, [*Pseudowork and real work*]{}, Am. J. Phys. [**51**]{} 597-602 (1983) A. J. Mallinckrodt, H. S. Leff, [*All about work*]{}, Am. J. Phys. [**60**]{} 356-365 (1992) C. E. Mungan, [*A primer on work-energy relationships for introductory physics*]{}, Phys. Teach. [**43**]{} 10-16 (2005) J. W. Jewett, Jr., [*Energy and the Confused Student V: The energy/momentum approach to problems involving rotating and deformable systems*]{}, Phys. Teach. [**46**]{} 269-274 (2008) M. Alonso, E. J. Finn, [*University Physics Vol. I, Mechanics*]{}, Addison-Wesley Publishing Company (1967); D. Kleppner, R. Kolenkow, [*An Introduction to Mechanics*]{}, McGraw-Hill Publishing Company (1973) P. A. Tipler, G. Mosca, [*Physics for Scientists and Engineers*]{}, 5th ed., W. H. Freeman and Co., New York 2004, Sec. 9-4 O. A. Haugland, [*Walking through the impulse-momentum theorem*]{}, Phys. Teach. [**51**]{} 78-79 (2013) H. S. Leff, [*Removing the mystery of entropy and thermodynamics – Part IV*]{}, Phys. Teach. [**50**]{} 215-217 (2012) H. S. Leff, [*Removing the mystery of entropy and thermodynamics – Part III*]{}, Phys. Teach. [**50**]{} 170-172 (2012) J. Güémez, M. Fiolhais, [*Thermodynamics asymmetries in whirling, jumping and walking*]{}, Eur. J. Phys. [**35**]{} (2014) in print J. Walker, [*Halliday and Resnick Principles of Physics, International Student Version*]{}, John Wiley and Sons, Hoboken 2011, p. 291 P. Atkins, [*The Laws of Thermodynamics: A Very Short Introduction*]{}, Oxford U. Press, Oxford UK 2010, pp.
65-72 R. W. Chabay and B. A. Sherwood, [*Matter and Interactions*]{}, 3rd ed., Wiley and Sons, Hoboken 2011, Sec. 11.6 [^1]: [email protected] [^2]: [email protected]
--- abstract: 'We study numerically the late-time tails of linearized fields with any spin $s$ in the background of a spinning black hole. Our code is based on the ingoing Kerr coordinates, which allow us to penetrate through the event horizon. The late time tails are dominated by the mode with the least multipole moment $\ell$ which is consistent with the equatorial symmetry of the initial data and is equal to or greater than the least radiative mode with $s$ and the azimuthal number $m$.' author: - 'Lior M. Burko' - 'Gaurav Khanna' date: 'February 20, 2003' title: Radiative falloff in the background of rotating black holes --- The late time dynamics of black hole perturbations has been studied for over three decades. Complete understanding of the late-time dynamics is available for a Schwarzschild background: generic perturbations of scalar, electromagnetic, or gravitational fields decay at late times along an $r={\rm const}$ curve as an inverse power of time. Specifically, linearized fields (the scalar field itself, or the Teukolsky function $\psi$ in the gravitational case) decay as $t^{-(2\ell+3)}$ (assuming that the initial data have compact support and are not time-symmetric), where $\ell$ is the multipole moment of the perturbation field [@price; @barack; @gundlach1]. This behavior was confirmed also for the fully nonlinear collapse of spherical scalar fields [@gundlach2; @burko-ori]. The mechanism responsible for this behavior is the scattering of the field off the curvature of spacetime asymptotically far from the black hole. Because it is only the asymptotically far geometry which determines the behavior of the late-time tails, it is natural to expect similar behavior also when the black hole is rotating [@poisson]. Because spacetime is not spherically symmetric, however, spherical-harmonic modes do not evolve independently.
Specifically, taking the initial data of the perturbation field to be a pure $Y^{\ell m}$ mode, other modes are excited. Intuitively, all the modes which are not disallowed \[by symmetry requirements (such as the equatorial symmetry of the initial data) or dynamical considerations (such as that only modes with $-\ell\le m\le\ell$ are allowed)\] will be excited. In particular, modes with $\ell$ values [*smaller*]{} than the original $\ell$ will be excited, and will dominate at late times. (Notice that because the background is axially symmetric, modes with different values of $m$ are not excited when [*linearized*]{} perturbation theory is applied.) Accordingly, the late-time dynamics is dominated by the mode with the least $\ell$ which is excited, namely the smallest $\ell$ which is not disallowed. That is, all modes $\ell$ which are not smaller than $|m|$ and $|s|$, where $s$ is the spin weight of the field, and which respect the equatorial symmetry of the initial data, will be excited. The falloff rate is then $t^{-(2\ell_{\rm min}+3)}$, where $\ell_{\rm min}$ is the smallest mode which can be excited. Despite the simplicity of this intuitive picture, recent papers report conflicting results. An analytical analysis by Hod — in which the author attempted to find the [*asymptotic*]{} behavior of the fields in the spacetime of a Kerr black hole — yielded results which are more complicated: The decay rate for a scalar field is predicted by Hod to be [@hod-scalar] $t^{-(2\ell^*+3)}$ if $\ell^*=m$ or $\ell^*=m+1$, $t^{-(\ell^*+m+1)}$ if $\ell^*-m\ge 2$ is even, and $t^{-(\ell^*+m+2)}$ if $\ell^*-m\ge 2$ is odd, where $\ell^*$ is the [*initial*]{} value of $\ell$. For gravitational perturbations Hod’s formula is [@hod-prl] $t^{-(\ell^*+\ell_0+3-q)}$ (for axisymmetric perturbations), where $\ell_0$ is the radiative mode with the least value of $\ell$, and $q={\rm min}(\ell^*-\ell_0,2)$. 
\[Different, apparently conflicting results were reported by Barack and Ori [@barack-ori]. Those authors assumed that the mode $\ell,m=0$ is present in the initial data (for $s=0$), as a result of which it is not straightforward to confront their predictions with Hod’s.\] Although Hod’s results could be relevant for an [*intermediate*]{} regime for carefully chosen parameters, they make only little sense for describing the intended asymptotic late-time behavior. These eerie conclusions imply that some sort of a “memory effect” takes place: the field somehow “remembers” its initial configuration, despite being a linearized field. We do not believe that such a “memory effect” is reasonable: Take the initial data at the time $t_0$ to be those of the pure mode $\ell^*$, such that $\ell^*$ is significantly larger than $\ell_{\rm min}$. At the time $t_1>t_0$ the field includes, in addition to the mode $\ell^*$, also contributions from modes $\ell<\ell^*$ because of the excitation of other $\ell$ modes. Now the fields at $t=t_1$ can be construed as the initial data of a new evolutionary problem. In the new problem the initial data are a mixture of modes, such that modes $\ell$ smaller than $\ell^*$ are present [@poisson]. Because the mode with the smallest existing $\ell$ value dominates at late times and determines the decay rate of the tail, we can see no way in which the $\ell^*$ mode can determine the asymptotic late-time tail, unless $\ell^*$ determines which modes can and which modes cannot be excited. Since, in the spacetime of a Kerr black hole, it is hard to see how modes which are not disallowed could still be excluded, we conclude that “memory effects” are not to be expected. Hod’s results, if correct, suggest to us that a hitherto unsuspected mechanism of selection rules inhibits the excitation of otherwise allowed modes. Such counter-intuitive theoretical reasoning must have strong numerical support in order not to be discarded.
Conclusions which apparently are similar to Hod’s were obtained more recently by Poisson [@poisson], who analyzed the scalar-field tails in a general weakly-curved, stationary, asymptotically flat spacetime. We emphasize that unlike Hod’s analysis — which is an attempt to find the asymptotic late-time behavior in the spacetime of a spinning black hole — Poisson’s analysis aims at finding the behavior in a spacetime in which curvature is weak everywhere. While Poisson’s analysis and results are correct for the spacetime he studies, one should use caution when inferring from Poisson’s results about the asymptotic late-time behavior in a Kerr geometry: Although the asymptotically-far geometries are similar, the near-field geometries are very different. As we discuss below, that is a crucial element in understanding the late-time behavior. Hod’s surprising predictions agree with some reported numerical simulations. In particular, for the case $s=0$, $\ell^*=0,m=0$ Hod’s formula predicts a decay rate of $t^{-3}$, which is indeed found [@KLP96]. For the case $s=0$, $\ell^*=4,m=0$, however, Hod’s formula predicts a decay rate of $t^{-5}$, whereas the intuitive picture predicts a decay rate of $t^{-3}$. This case was simulated numerically by Krivan [@krivan], who found a decay rate with a non-integral index close to $-5.5$. Like Hod, Krivan too tried to find the asymptotic late-time behavior in the Kerr spacetime. Some view this as a loose confirmation of Hod’s prediction [@poisson], with numerical accuracy of $10\%$, and as an invalidation of the intuitive picture. In this Rapid Communication we present results from independent numerical simulations for linearized perturbation fields over a Kerr background. Our simulations show a clear falloff rate of $t^{-3}$ for the initial data of $s=0$, $\ell^*=4$, $m=0$. The quality of our results invalidates Hod’s prediction for the asymptotic decay rate, and points at difficulties with Krivan’s simulations or their interpretation.
In all the cases we have checked, for either a scalar or a gravitational field, we find that the intuitive picture is correct: the late time behavior is dominated by the mode with the lowest value of $\ell$ which can be excited. In particular, no spooky memory effects occur. We used the penetrating Teukolsky code (PTC) [@ptc], which solves the Teukolsky equation for linearized perturbations over a Kerr background in the ingoing Kerr coordinates $({\tilde t},r,\theta, {\tilde \varphi})$. The Kerr metric is given by $$\begin{aligned} \label{metric} \,ds^2=\left(1-\frac{2Mr}{\Sigma}\right)\,d{\tilde t}^2- \left(1+\frac{2Mr}{\Sigma}\right)\,dr^2-\Sigma\,d\theta^2\nonumber \\ - \sin^2\theta\left(r^2+a^2+\frac{2Ma^2r}{\Sigma}\sin^2\theta\right) \,d{\tilde \varphi}^2-\frac{4Mr}{\Sigma}\,d{\tilde t}\,dr\nonumber \\ + \frac{4Mra}{\Sigma}\sin^2\theta\,d{\tilde t}\,d{\tilde \varphi}+ 2a\sin^2\theta\left(1+\frac{2Mr}{\Sigma}\right)\,dr\,d{\tilde \varphi}\, ,\end{aligned}$$ where $\Sigma=r^2+a^2\cos^2\theta$, and $M,a$ are the mass and the specific angular momentum, respectively. These coordinates are related to the Boyer-Lindquist coordinates $(t,r,\theta,\varphi)$ through ${\tilde \varphi}=\varphi+\int a\Delta^{-1}\,dr$ and ${\tilde t}=t-r+r_*$, where $\Delta=r^2+a^2-2Mr$ and $r_*=\int(r^2+a^2)\Delta^{-1}\,dr$. Notice that ${\tilde t}$ is linear in $t$, so that along $r={\rm const}$, $\partial /\,\partial{\tilde t}= \partial /\,\partial t$. The Teukolsky equation for the function $\psi$ in the ingoing Kerr coordinates can be obtained by implementing black hole perturbation theory (with a minor rescaling of the Kinnersley tetrad [@ptc]). 
It is given by $$\begin{aligned} \label{teukolsky} && (\Sigma + 2Mr){{\partial^2 \psi}\over {\partial \tilde t^2}} - \Delta {{\partial^2 \psi}\over {\partial r^2}} + 2(s-1)(r - M){{\partial \psi}\over {\partial r}} \nonumber \\ && -{{1}\over {\sin \theta}}{{\partial}\over {\partial \theta}} \left ( \sin \theta {{\partial \psi}\over {\partial \theta}}\right ) -{{1}\over {\sin^2 \theta}}{{\partial^2 \psi}\over {\partial \tilde \varphi^2}} -4Mr{{\partial^2 \psi}\over {\partial \tilde t \partial r}}\nonumber\\ && -2a {{\partial^2 \psi}\over {\partial r\partial \tilde \varphi}} - i {2s\cot\theta \over \sin\theta} {{\partial \psi}\over {\partial \tilde \varphi}} + (s^{2}\cot^{2}\theta+s)\psi \nonumber\\ && + 2\left[{sr+ias\cos\theta+(s-1)M}\right] {{\partial \psi}\over {\partial \tilde t}} = 0\, .\end{aligned}$$ Equation (\[teukolsky\]) has no singularities at the event horizon, and therefore is capable of evolving data across it. The PTC implements the numerical integration of Eq. (\[teukolsky\]) by decomposing it into azimuthal angular modes and evolving each such mode using a reduced 2+1 dimensional linear partial differential equation. The results obtained from this code are independent of the choice of boundary conditions, because the inner boundary is typically placed inside the horizon, whereas the outer boundary is placed far enough away that it has no effect on the evolution. The PTC has been tested in various different situations. First, it yields the correct complex frequencies for the quasi-normal modes of a Kerr black hole for a wide range of values of $a/M$. Second, it has also been shown to yield results equivalent to those obtained from the Zerilli formalism in the context of the close-limit collision of two equal-mass, non-spinning, non-boosted black holes [@gaurav]. It is stable, and exhibits second-order convergence. We next set $a/M=0.9$, $s=0$, and $\ell^*=4$, $m=0$.
The initial Gaussian perturbation is taken to be a mixture of ingoing and outgoing waves, and centered about $r=20M$ with a width of $4M$. As discussed above, our expectation is that all the even $\ell$ modes are excited (respecting the equatorial symmetry of the initial data). The least $\ell$ mode which is excited is the $\ell=0$ mode, so that the decay rate we expect is $t^{-3}$. In contrast, the prediction of Hod is for a decay rate of $t^{-5}$. Figure \[fig1\] shows the Teukolsky function $\psi$ for these initial data for $\theta=\pi/2$ (the equatorial plane) for three different resolutions. The data clearly indicate stability and second-order convergence. A decay rate of about $t^{-3}$ is already clear from Fig. \[fig1\]. Evaluating the decay rate from the slope of the field is very inaccurate: the slope depends on the interval one chooses, and also on the presence of subdominant modes. The first difficulty can be handled by considering the [*local power index*]{} $n$ [@burko-ori], which we define as $n\equiv -({\tilde t}/\psi) \,\partial_{\tilde t}\psi$. The second difficulty can be handled by extrapolating $n$ to timelike infinity. Figure \[fig2\]A shows $n$ as a function of $M/{\tilde t}$. Timelike infinity is at zero, and both the regime where the field is dominated by the quasi-normal ringing and the regime where the field is dominated by the power-law tails are shown. The local power index is $n=2.9846$ at ${\tilde t}=1500M$. Figure \[fig2\]B shows the behavior of $3-n$ as a function of $M/{\tilde t}$. Clearly, $n$ gets closer with time to the expected value of $3$. In fact, extrapolating $n$ to ${\tilde t}\to\infty$ using Richardson’s deferred approach to the limit, we find the asymptotic value of $n$ to be $n_{\infty}=3.0003\pm 0.0011$. Our results suggest that the late-time field is dominated by the $\ell=0$ mode. We checked this by plotting $\psi$ as a function of $\theta$ in Fig. \[fig3\] for different values of ${\tilde t}$. 
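As an aside, the local power index and its Richardson extrapolation can be illustrated on a synthetic tail. The sketch below is a toy model of ours, independent of the PTC: a field $\psi\sim t^{-3}(1+b/t)$ with an artificial subdominant correction plays the role of the numerical data, and one Richardson step in the small parameter $1/t$ removes the $O(1/t)$ bias of $n(t)$.

```python
def local_power_index(psi, t, h=None):
    """n(t) = -(t/psi) d(psi)/dt, evaluated by central differences."""
    if h is None:
        h = 1e-4 * t
    dpsi = (psi(t + h) - psi(t - h)) / (2*h)
    return -t * dpsi / psi(t)

# Synthetic late-time tail with a subdominant correction (toy model):
# psi ~ t^-3 (1 + b/t)  =>  n(t) ~ 3 + b/t + O(1/t^2)
b = 5.0
psi = lambda t: t**-3 * (1.0 + b/t)

t = 1000.0
n_t = local_power_index(psi, t)    # still biased by the 1/t term
# One Richardson extrapolation step in 1/t: n_inf ~ 2 n(2t) - n(t)
n_inf = 2*local_power_index(psi, 2*t) - n_t
print(n_t, n_inf)   # n_t is near 3 + b/t; n_inf is much closer to 3
```
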
We indeed find that $\psi$ quickly loses any dependence on $\theta$, such that at late times it is indeed described by the $\ell=0$ mode. Any dependence of $\psi$ on $\theta$ is smaller than 3 parts in $10^6$ at ${\tilde t}=1000M$. Next, we present results for the behavior of fields with higher spins. We set the parameters to $s=2$, $a/M=0.3$, and initial $\ell^*=6$, $m=0$. The pulse is again centered about $r=20M$ with a width of $4M$. The prediction of Hod’s formula for this case is a decay rate of $t^{-9}$. In this case our expectation is that the least $\ell$ mode to be excited is the $\ell=2$ mode. Consequently, we expect the decay rate to be $t^{-7}$. This is indeed confirmed in Fig. \[fig4\]A, which shows the local power index $n$ as a function of ${\tilde t}/M$, and in Fig. \[fig4\]B, which displays $7-n$ as a function of $M/{\tilde t}$. At ${\tilde t}=1500M$, we find that $n=6.8646$. Extrapolating $n$ to timelike infinity, we find that $n_{\infty}=7.01\pm 0.03$, in agreement with our expectations. Our results clearly show that starting with a pure mode $\ell^*,m$, the late-time decay rate is dominated by the least mode $\ell_{\rm min}$ which is consistent with the equatorial symmetry of the initial data and is equal to or greater than the least radiative mode $\ell_0={\rm max}(|s|,|m|)$. The late-time decay rate is given by $t^{-(2\ell_{\rm min}+3)}$. Our conclusions are in sharp disagreement with the recent predictions by Hod [@hod-prl; @hod-scalar]. Hod’s analysis is in the frequency domain, and is carried out to leading order in $\omega$, the angular frequency. That approach is very successful in the background of a Schwarzschild black hole, where it reproduces the known results [@andersson]. The understanding that the power-law tails result from scattering of the field at asymptotically large distances implies that only the small-$\omega$ contributions are responsible for the tails. 
That is indeed the case with a Schwarzschild black hole. We conjecture that it would also be the case for a Kerr black hole, if there were no excitations of dominating modes which are not present in the initial data. For example, in the case of a scalar field with $\ell^*=m=0$, the dominating mode is already present in the initial data. Considering only the small-$\omega$ contributions indeed produces a result in agreement with numerical simulations. When the dominating mode is not present in the initial data, however, it needs first to be excited. If it is excited (with any nonzero amplitude), the small-$\omega$ approximation may produce the correct result for the decay rate. However, mode excitation is an effect which is nonlinear in the gravitational potentials, and is strongest in the near zone. This suggests to us that a leading-order (in $\omega$) analysis will not, in general, get all the excited modes right. It might be the case that higher orders in $\omega$ are necessary in order to get all the modes which are excited. Our numerical results indeed show that when the least mode which can be excited, $\ell_{\rm min}$, is “far” from the initial $\ell^*$, that technique does not produce the former: for example, for initial $\ell^*=4$ and $m=0$, the leading-order (in $\omega$) analysis was able to get the $\ell=1$ mode excited (as is manifested by Hod’s decay rate of $t^{-5}$), but not the $\ell=\ell_{\rm min}=0$ mode (which implies a decay rate of $t^{-3}$). We suggest that, although a frequency-domain analysis is capable of getting the decay rate right, it should include an expansion to higher orders in $\omega$. Such an expansion would be a formidable endeavor. In a similar way, by taking spacetime to be weakly curved everywhere, Poisson tacitly assumed that it is just the far-zone part of the field which is important. 
(In Poisson’s case, we emphasize, this assumption is well justified, because the spacetime studied by Poisson is nowhere strongly curved. Incidentally, Poisson suggests a selection-rule mechanism in the spacetime he studied, which is related to the remarkable vanishing of terms in the initial data in the transformation from spheroidal to spherical coordinates. The mechanism suggested by Poisson demonstrates how indeed Hod’s results could be correct in that context. However, no such mechanism is offered for a Kerr spacetime.) That assumption is equivalent to taking the large-$r$ approximation, or the small-$\omega$ approximation. Consequently, Poisson and Hod make, in fact, the same kind of approximation, such that it is not surprising that they obtain the same results. We emphasize that Poisson acknowledges that effects which are nonlinear in the gravitational potentials may produce modes with $\ell$ values which are smaller than those obtained by him. Poisson then remarks that no such effects have been reported on in the literature. Evidence for such an effect is precisely what we find here. Although the late-time expansion method [@barack-ori] does not seem to suffer from similar weaknesses, it is hard to apply to the problem of interest. Starting with an initial $\ell^*$ which is “far” from the least mode $\ell_{\rm min}$ to be excited, the method of Ref. [@barack-ori] requires many iterations in order to find the excited mode $\ell_{\rm min}$. Specifically, three iterations are required in order to find the $\ell=0$ mode starting with $\ell^*=4,m=0$. Carrying out this iterative scheme in practice seems like a daunting task. We would like to repeat that, while Hod’s method fails to obtain the correct [*asymptotic*]{} decay rate, it may still be useful in determining an [*intermediate*]{} behavior for carefully chosen parameters. 
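The mode-selection rule our results support can be packaged as a small helper. This is our reading of the rule stated above; the parity condition is how we encode the equatorial symmetry of an initial pure mode with $m=0$:

```python
def predicted_decay_exponent(s, l_star, m):
    """Late-time tail exponent n in t^{-n} for an initial pure mode
    (l*, m) of a spin-s field.

    l0 = max(|s|, |m|) is the least radiative mode; l_min is the least
    l >= l0 sharing the equatorial-symmetry parity of the initial data,
    and the tail decays as t^{-(2 l_min + 3)}.
    """
    l0 = max(abs(s), abs(m))
    l_min = l0 if (l_star - l0) % 2 == 0 else l0 + 1
    return 2 * l_min + 3

# The two cases evolved with the PTC above:
print(predicted_decay_exponent(0, 4, 0))   # scalar, l*=4, m=0  ->  3
print(predicted_decay_exponent(2, 6, 0))   # s=2,    l*=6, m=0  ->  7
```

Both calls reproduce the decay rates ($t^{-3}$ and $t^{-7}$) found numerically above, rather than Hod’s $t^{-5}$ and $t^{-9}$.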
Lastly, our results are also in disagreement with the numerical results of Krivan [@krivan], who reported a fractional power-law index of about $-5.5$ for the case of initial $s=0$, $\ell^*=4$ and $m=0$. While we cannot point with certainty to the reason why Krivan’s simulations produce a result for the asymptotic late-time behavior which is at odds with ours, we would like to mention some of the factors which may be responsible. Krivan takes the black hole to spin exceedingly fast: in fact, Krivan takes $a/M=0.9999$. The high spin of the black hole may act in two ways: first, it slows down the decay rate of the quasi-normal ringing, such that longer integration times are required in order to obtain the tails; second, the numerical solution of the Teukolsky equation is more delicate and harder when the spin is very high. Another factor is related to the location and the direction of Krivan’s initial perturbation. Krivan takes the perturbation to be centered around $r_*/M=100$, and to have a very large width (of $100M$). Also, the perturbation is purely outgoing on the initial slice. We thus conjecture that the dominating $\ell=0$ mode is excited only with a very low amplitude, because most of the perturbation field does not probe the strong-field region. This, in addition to the great distance and width of the initial perturbation, may combine into late-time tails whose asymptotic behavior becomes evident only at very late times, which Krivan’s simulations have not reached. The picture which arises for linearized perturbations in the background of a spinning black hole is simpler than that which is implied by. However, we expect the picture to be even simpler than that for fully nonlinear perturbations: when the initial perturbation is not axially symmetric, the evolving spacetime will not be axially symmetric either. 
Consequently, the $m$ value of the field will not be conserved, and different values of $m$ will also be excited, preserving only the equatorial symmetry of the initial data. We therefore expect a fully nonlinear evolution to yield results which are simpler than those obtained from a linearized analysis: because $m$ is no longer fixed, the restriction on $\ell_0$ is no longer so strict: $\ell_0=|s|$, and the dominating mode is simply the least $\ell$ mode which is consistent with the equatorial symmetry and is equal to or greater than $\ell_0$. We thus expect generic tails to always have a decay rate of $t^{-(2|s|+3)}$. The more complicated results of this Rapid Communication then are an artifact of the linearization: the full theory is simpler.

We thank Eric Poisson and Richard Price for discussions. This research was supported by NSF grants PHY-9734871 and PHY-0140236. Initial work on this research was done while LMB was at the California Institute of Technology, where it was supported by NSF grant PHY-0099568. We thank the Center for Gravitational Physics and Geometry at Penn State for computational facilities.

R.H. Price, Phys. Rev. D [**5**]{}, 2419 (1972).

C. Gundlach, R.H. Price, and J. Pullin, Phys. Rev. D [**49**]{}, 883 (1994).

L. Barack, Phys. Rev. D [**59**]{}, 044017 (1999).

C. Gundlach, R.H. Price, and J. Pullin, Phys. Rev. D [**49**]{}, 890 (1994).

L.M. Burko and A. Ori, Phys. Rev. D [**56**]{}, 7820 (1997).

E. Poisson, Phys. Rev. D [**66**]{}, 044008 (2002).

S. Hod, Phys. Rev. D [**61**]{}, 024033 (2000); [**61**]{}, 064018 (2000).

S. Hod, Phys. Rev. Lett. [**84**]{}, 10 (2000).

L. Barack and A. Ori, Phys. Rev. Lett. [**82**]{}, 4388 (1999).

W. Krivan [*et al.*]{}, Phys. Rev. D [**54**]{}, 4728 (1996).

W. Krivan, Phys. Rev. D [**60**]{}, 101501 (1999).

M. Campanelli [*et al.*]{}, Class. Quantum Grav. [**18**]{}, 1543 (2001).

G. Khanna, Phys. Rev. D [**65**]{}, 124018 (2002).

N. Andersson, Phys. Rev. D [**55**]{}, 468 (1997).
Introduction {#sec:INTRO}
============

Zinc is the second (after iron) most abundant transition metal in living organisms. It is known to carry out a number of important tasks in complex with proteins. A detailed knowledge of zinc ion chemistry is essential for understanding its role in biology and for designing either complexes that deliver zinc to proteins or chelating agents that instead remove zinc from proteins [@krkezel2016biological]. A necessary premise to all these investigations is of course the identification of the Zn(II) coordination mode in water. The structure of this system is in principle quite simple, and yet it is still under debate in the literature. The most commonly observed Zn(II)-water coordination mode is the hexa-coordinated octahedral one [@mhin1992zn; @mink2003infrared; @rudolph1999zinc]. However, penta- and tetra-coordinated structures have also been proposed [@arumuganathan2008two; @bock1995hydration; @krkezel2016biological; @Dudev00] and are observed when Zn is in complex with proteins or in the presence of solvents different from pure water [@auld2001zinc; @jacquamet1998x; @d2002total; @giannozzi2012zn]. The relevance of non-octahedral coordination geometries is still a matter of debate (see Ref.  for a survey). Moreover, the presence of a small fraction of aqua-ions is possible, even at physiological conditions, because at pH=7.4 one expects that in about 1% of the cases the Zn(II) ion is coordinated to an OH$^-$ ion. Depending on the nature of the anion X in the salt that is dissolved in water and on its concentration, the presence of , $n=1,2,3$, species has been proposed for a long time [@Maeda96]. We see from this discussion that even the coordination mode of a simple system like Zn(II) in water solution can be affected by the existence of several other species, all probed by spectroscopy. 
Thus a self-consistent computational tool, able to directly model the spectrum and to include all these effects, is necessary in order to settle the question reliably. Various experimental [@mink2003infrared; @migliorati2012influence] and computational [@sanchez1996examining; @mohammed2005quantum] techniques have been used to study the coordination of Zn(II) ions in water. Among the experimental techniques, X-ray absorption spectroscopy (XAS) is the ideal tool for probing the local environment around a selected atom in a disordered system. XAS works, in fact, for systems in any state of aggregation and can therefore be used for structural investigations of metal ions in solution. Another remarkable feature of this technique is that the choice of the XAS absorption edge allows for chemical and orbital selectivity. This “tunability” has led to the development of X-ray microscopy as a powerful analytical technique, with the possibility of detailed space-resolved measurements (imaging) even for trace concentrations, as is often the case in biology [@Collingwood2017]. A XAS spectrum is commonly divided into two regions according to the kinetic energy of the extracted electron: the so-called XANES (X-ray Absorption Near Edge Structure) region, which conventionally extends from a few eV before the edge energy (in the case of the K-edge the latter is the ionization energy of the 1s electron of the absorbing atom) to a few tens of eV above it, and the EXAFS (Extended X-ray Absorption Fine Structure) region, which starts after the XANES portion of the spectrum and extends 500-1000 eV beyond the edge. The XANES region, in which the extracted electron undergoes mainly multiple scattering events with the surrounding atoms, in principle contains detailed information about the local atomic structure around the absorbing atom [@filipponi1995x; @benfatto1986multiple; @rehr2005progress; @morante2014metals]. 
However, in order to extract geometrical information from the XANES, a quite accurate guess of the actual geometrical arrangement of the atoms surrounding the absorber must be available. This is necessary because, given the low kinetic energy of the extracted electron, its behaviour strongly depends on the details of lattice and core-hole potentials in the vicinity of the absorbing atom. Various approaches aimed at extracting valuable structural information from the XANES spectrum have been developed and are routinely used [@benfatto2003mxan; @bunuau2009self; @rehr2010parameter]. Some of them (e.g. the one implemented in the popular MXAN code [@benfatto2003mxan]) make use of the so-called “muffin-tin” approximation, which consists in assuming a spherical scattering potential centered on each atom and a constant value in the interstitial region among atoms. A different class of approaches is based on first-principles electronic structure computations, either via the calculation of the real-space Green function [@rehr2010parameter] or exploiting plane-wave basis sets and pseudopotentials [@taillefumier2002x; @gougoussis2009intrinsic; @gougoussis2009first; @Bunau2013]. The approach we shall present and discuss in this work belongs to this last class of methods. On the computational side, classical and quantum-mechanical methods have been employed either alone [@mohammed2005quantum; @riahi2013qm; @sanchez1996examining; @merkling2001combination; @merkling2002exploring; @minicozzi2008role] or in combination with experimental information to determine the structures of ions in solution. Synergistic methods of this kind rely on the idea of producing, via molecular dynamics (MD) simulations, a pool of individual configurations of the system representative of static disorder effects. The average over the XANES spectra computed for each one of the collected configurations is in the end compared to experimental data. This kind of strategy was also employed in Ref. 
, where Zn(II) structures in water are constructed from classical MD simulations guided by a force field in turn fitted on an underlying density-functional theory (DFT) calculation. The MXAN software, employing the muffin-tin potential, is finally used to compute the theoretical spectrum. In this paper we present a parameter-free first-principles calculation of the XANES spectrum of Zn(II) in water, similar to the one that has already been successfully developed in Ref.  for Cu(II) in water. In the present case we improved our XANES calculation strategy by taking care of very relevant aspects, such as the core-level shift calculation (see section \[sec:CXS\] for the details), that were not dealt with in the previous work [@la2015first]. We proceed by first generating, by classical MD, equilibrated configurations of Zn(II) in a large box of water molecules. From the large-system snapshots, configurations of a smaller subsystem including the metal site and two solvation layers are cut out. The latter are then relaxed within the DFT formalism, which is suitable for large (100-1000 atoms) assemblies. The relaxation step is intended to drive to acceptable values the forces resulting from the extraction of a subset of atoms from the larger system and from the quantum-mechanical inclusion of valence electrons (that we represent in terms of plane waves) and core pseudopotentials. Our calculations were performed in a plane-wave formalism. The main advantages of this method with respect to local basis sets are twofold. First, structural optimization is more robust in a plane-wave formalism, as there are no Pulay forces and it is not necessary to displace the localized sets. Thus, even for very large atomic displacements and optimized structures very different from the initial guess, the optimization step remains very reliable. This is crucial in liquids, where the atoms are substantially free to move and large statistics are required to compare calculations with experiments [@liang2011ab]. 
Second, atomic orbitals become more inaccurate as the energy of the XAS spectrum increases (far-edge region) and the final state gets closer to a scattering state (i.e. a plane wave). Finally, from the computational point of view, our approach relies on the Lanczos method and continued fractions, which only require the knowledge of the electronic charge density and avoid the cumbersome explicit calculation of excited states. This represents a substantial computational gain, as neither the band energies nor the wavefunctions of the empty electronic states are needed. This aspect is important when dealing with relatively large cells (about 100 atoms), which, as observed in the literature [@mo2000ab], are required to properly calculate XANES spectra. The XANES spectrum is therefore here computed by performing a fully self-consistent DFT calculation based on plane waves and the use of pseudopotentials [@taillefumier2002x; @gougoussis2009intrinsic; @gougoussis2009first; @Bunau2013]. The comparison of the computed spectrum with the experimental Zn K-edge XANES data unambiguously shows that, among the different [*a priori*]{} possible geometries, Zn(II) in water adopts an octahedral coordination mode. Methodologically, the computational strategy we present in this work is an important step in the direction of an unbiased, quantum-mechanical framework aiming at a parameter-free calculation of the XANES spectra of disordered systems. The simplicity and the success of the strategy we have illustrated and tested in this work lead us to believe that it can be exported to more complicated situations, like the ones we encounter in disordered systems and in systems of biological relevance.

Materials and Methods {#sec:MandM}
=====================

Experimental data
-----------------

The experimental spectrum with which we will compare our theoretical calculation was measured at the GILDA beamline at ESRF [@d1998gilda]. 
The monochromator was equipped with two Si(311) crystals and was run in the sagittally focusing configuration. Collimation and harmonic rejection were achieved via two Pd-coated mirrors working at 3 mrad (cutoff energy = 22 keV). XAS data were collected in fluorescence mode using a seven-element high-purity germanium detector. The Zn solution was obtained by dissolving 2 mM of ZnCl$_2$ salt in deionized water. Data were collected at room temperature.

Building empirical Zn(II) coordination geometries
-------------------------------------------------

The first step of our computational strategy consists in the construction of representatives of the [*a priori*]{} possible coordination geometries of the metal ion in water. In the case of Zn(II) the relevant metal coordination modes we need to focus on are the octahedral, tetrahedral and square-planar configurations. It should be noticed that the somewhat unusual square-planar coordination was considered in order to model the approximate Zn penta-coordination arising when one water molecule approaches the plane formed by the 4 nearest Zn ligand water molecules from an axial direction (see the discussion in Sect. \[sec:Res\]). A practical way to build the desired geometrical water arrangements around the metal site is to have recourse to the “dummy atoms” method [@pang2000successful]. The method consists in placing positive charges (referred to as dummy atoms) at fixed positions around the metal ion center, the latter taken as neutral. The locations and the charges of the dummy atoms are chosen so as to match the selected geometry, octahedral, tetrahedral and square-planar, respectively, with the Zn charge distributed over them [^1]. By running a sufficiently long MD simulation (1 ns turns out to be long enough) the dummy charges have the effect of “gently” pushing the water molecules surrounding the metal site to take their positions in the desired geometry. 
In the case of the octahedral coordination, 6 dummy atoms, each of charge $e/3$ ($e$ is the electron charge), are located at the vertices of an octahedron with its center on the Zn site. In the case of the square-planar coordination, we have 4 dummy atoms of charge $e/3$ each, sitting at the vertices of a square with its center on the Zn site. The tetrahedral coordination is constructed in a way similar to the square-planar one, but with the dummy atoms lying at the vertices of a tetrahedron with Zn at the center [^2]. The total charge of the cell was kept equal to zero by adding an appropriate number of Cl$^-$ counterions [@Aqvist90]. In all the geometries we have considered, the dummy atoms are located at a distance of 0.9 Å from the Zn site. The interactions between Zn and TIP3P water molecules are described by a Lennard–Jones potential. The Lennard–Jones parameters of the Zn-O interaction are taken to be $\sigma=2.49$ Å and $\epsilon=2.77$ cal/mol. The simulation box was obtained starting from a cubic box (with side $L=18.774$ Å) of TIP3P water molecules [@jorgensen1983comparison], endowed with periodic boundary conditions and equilibrated by Monte Carlo methods at normal conditions ($T=300$ K and $p=1$ bar), where 3 water molecules were replaced by a Zn atom and two chloride anions (for charge neutralization). The resulting system is composed of one Zn atom, two chloride anions and 213 water molecules. Owing to the periodic boundary conditions, electrostatic interactions were computed employing the smooth particle-mesh Ewald (SPME) method [@essmann1995smooth]. We took a cut-off of 9 Å, a mesh grid with 0.1 Å spacing and a PME direct-space energy tolerance of $10^{-4}$ kcal/mol. Classical MD simulations in the $N,V,T$ [*ensemble*]{} were run with the help of the NAMD code [@phillips2005scalable] with a time step of 2 fs. The temperature was kept fixed at $T= 300$ K by using the stochastic Berendsen thermostat [@berendsen1984molecular]. 
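The dummy-atom placements described above can be sketched in a few lines. This is a geometric illustration of ours, not the actual NAMD input; the charges follow the description in the text (any residual charge is understood to sit on the Zn center):

```python
import math

D = 0.9  # dummy-atom distance from the Zn site, in angstrom (from the text)

def dummy_positions(geometry, d=D):
    """Positions (angstrom) of the dummy charges around Zn at the origin,
    for the three coordination geometries considered in the text."""
    if geometry == "octahedral":      # 6 charges at the octahedron vertices
        return [(s*d, 0, 0) for s in (1, -1)] + \
               [(0, s*d, 0) for s in (1, -1)] + \
               [(0, 0, s*d) for s in (1, -1)]
    if geometry == "square-planar":   # 4 charges at the square vertices
        return [(d, 0, 0), (-d, 0, 0), (0, d, 0), (0, -d, 0)]
    if geometry == "tetrahedral":     # 4 charges at alternating cube corners
        k = d / math.sqrt(3)
        return [(k, k, k), (k, -k, -k), (-k, k, -k), (-k, -k, k)]
    raise ValueError(geometry)

for geom in ("octahedral", "square-planar", "tetrahedral"):
    pos = dummy_positions(geom)
    dists = [math.dist(p, (0, 0, 0)) for p in pos]
    print(geom, len(pos), [round(x, 3) for x in dists])  # all at 0.9 A
```
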
Bond-length constraints for bonds involving H atoms were enforced with the help of the SHAKE algorithm [@Ryckaert77].

Density functional theory calculations
--------------------------------------

As we discussed in the previous section, in order to sample the system configuration space, we have proceeded by selecting, for each Zn coordination geometry (octahedral, tetrahedral and square-planar), an adequately large number of configurations along the collected MD trajectories. These will be used for the successive calculation of the XANES spectrum. We have found that 20 configurations for each of the three Zn coordination modes we are considering are enough. As we shall see, in fact, with this number we are able to capture the system thermal and structural fluctuations as they are probed by the XANES spectrum. Adding further configurations, on the other hand, improves neither the quality of the theoretical spectrum nor its statistical significance. In practice we have selected one configuration every 50 ps along the 1 ns long MD trajectories. For the actual first-principles XANES calculation we are aiming at, the systems we have constructed are too big. Taking advantage of the fact that for the determination of the XANES spectrum only features within the second Zn solvation shell are relevant, we have cut out from the whole system a sphere around the metal ion including up to the second solvation shell. An analysis of the Zn-O radial distribution function on the configurations collected along the simulated MD trajectories shows that on average 29 water molecules are contained in this sphere. Noting that none of the selected configurations displays a chloride anion within 6 Å from Zn, the final systems we will be considering for the successive analysis are just made of the Zn ion plus the 29 water molecules closest to Zn. 
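The cut-out step can be sketched as follows. This is a simplified illustration of ours: molecules are ranked by the Zn-O distance, and periodic images are ignored (the real cut-out would use minimum-image distances); the box side and molecule count match the simulation described above.

```python
import math, random

def cut_out_cluster(zn, waters, n_keep=29):
    """Keep the n_keep water molecules whose O atom is closest to Zn.

    `waters` is a list of dicts holding atomic positions, e.g.
    {'O': (x, y, z), ...}; only the O position is used for ranking.
    """
    return sorted(waters, key=lambda w: math.dist(zn, w['O']))[:n_keep]

# Toy usage: random O positions in an 18.774 A box around a central Zn
random.seed(0)
L = 18.774
zn = (L/2, L/2, L/2)
waters = [{'O': (random.uniform(0, L), random.uniform(0, L),
                 random.uniform(0, L))} for _ in range(213)]
cluster = cut_out_cluster(zn, waters)
print(len(cluster), round(math.dist(zn, cluster[-1]['O']), 2))
```
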
These reduced systems are inserted in a cubic super-cell box with a side of 2.2 nm, with the “empty” space filled with a uniform dielectric mimicking bulk liquid water at standard conditions [@andreussi2012revised]. At this point all the configurations need to be relaxed to eliminate unphysical strains and contacts. This is done with the help of the Broyden–Fletcher–Goldfarb–Shanno quasi-Newton algorithm [@Fletcher00] by minimizing the system potential energy. The latter is quantum-mechanically evaluated in the DFT formalism using the [QuantumESPRESSO]{} (v. 5.0.2) suite of programs [@giannozzi2009quantum; @Giannozzi2017]. We use Vanderbilt ultra-soft pseudopotentials [@Vanderbilt90] with inclusion of semicore states for Zn and the PBE exchange-correlation functional [@Perdew96]. Electronic wave functions were expanded in plane waves up to an energy cutoff of 50 Ry, while a 500 Ry energy cutoff was used for the expansion of the charge density. The $\Gamma$-point sampling in reciprocal space was adopted in all the electronic structure calculations. It should be noted that the relaxation step to which all the selected configurations are subjected is not intended to bring the system to its absolute (global) minimum, which strictly speaking would represent the uninteresting $T=0$ system configuration [^3]. The decrease in potential energy is mostly due to relaxation of bond distances towards their equilibrium values, with the non-bonding interactions assisting these local changes. Within the number of relaxation steps that we have employed, we witness a change in the Zn coordination mode only in the case of the metastable square-planar coordination, which in a number of instances turns into a square-base pyramidal or an octahedral coordination. The octahedral and tetrahedral Zn coordination modes instead look like quite stable local minima of the system configuration space. In Table \[tab:Tab1\] we compare the structural parameters before (i.e. 
in the configurations extracted from the MD simulation) and after the DFT relaxation step for the three sets of configurations of the reduced system we have considered. The Table is divided into two parts. The left (columns 1 to 5) and right (columns 6 to 10) parts refer to the structures before and after the DFT relaxation, respectively. In each part of the Table, the first column indicates the Zn coordination mode at the beginning and at the end of the relaxation procedure, with the number of configurations having the same geometry given in parentheses. As we remarked above, only in the case of the initial square-planar geometry does the Zn coordination mode evolve into different structures. In 12 cases Zn ends up in a square-base pyramidal coordination, in 5 cases in an octahedral one, and in the remaining 3 cases the Zn coordination remains unmodified. In the second and third columns of each half of the Table we report the radius of the first and second Zn solvation shell averaged over the set of configurations with the same geometry. The error is taken as half the dispersion of the values computed over each set of configurations. It should be noted that the dispersion of the values of the mean radius of the first solvation shell ($\Delta r^{(1)}$) is definitely smaller than the dispersion of the mean radius of the second solvation shell ($\Delta r^{(2)}$), both before and after the relaxation step. This should be interpreted as a sign of the variability of the second solvation shell even among configurations having the same geometry. One also notices that the errors obtained after the DFT relaxation are always larger than those coming directly out of the MD simulations, indicating a larger geometrical variability of the configurations after DFT relaxation. In the fourth column of each part we show the value of the Bond Valence Sum (BVS) [@brown1981bond] averaged over the configurations of each type of Zn coordination. 
BVS is an empirical parameter that, when charges and structural parameters of nearby atoms are correctly assigned, should come out to be of the order of magnitude of the nominal ion oxidation state, which is 2 in the case at hand. It is worth pointing out that, while the BVS values obtained for the MD structures are always higher than the expected value of 2, those calculated for the DFT relaxed structures are in agreement with the expected one. This confirms that the DFT relaxation is a relevant step in obtaining reliable structural parameters. Finally, in the last column we give the values of the Zn charge computed according to the Natural Orbital Population (NOP) analysis [@Reed85]. The calculation was performed with the help of the Gaussian code [@Gaussian16; @G16manual] using the hybrid M06-2X exchange functional [@Zhao08] for Zn. The DFT set-up employs a localized basis set of type 6-31+G(d) for N, O, C and H atoms, and LANL2DZ for Zn, the latter including a pseudopotential approximation for Zn core electrons. The NOP analysis is only weakly sensitive to the choice of the basis set. Solvent effects are modelled by means of the implicit polarizable continuum model of Ref. . Fixed atomic coordinates corresponding to the MD and DFT relaxed structures which Table \[tab:Tab1\] refers to are used in the NOP electronic calculations. The Zn charges obtained from the present analysis are consistent with previous calculations of Zn(II) in water in octahedral coordination performed in [@Pokherel2016]. As observed, the amount of positive charge transferred from the metal ion to the ligands decreases when water molecules are added in the second shell at fixed coordination geometry. Indeed, in Ref.  it was found that the charge on Zn increases from 1.4 to 1.7 upon adding up to three water molecules to the second shell. In our calculation, where the whole second shell is included, the positive charge is, instead, more delocalized, as a consequence of using the PBE exchange functional. 
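As an aside, the BVS for Zn-O bonds can be computed explicitly from the usual bond-valence expression $\mathrm{BVS}=\sum_i \exp[(R_0-R_i)/b]$. The parameters $R_0 = 1.704$ Å and $b = 0.37$ Å are the standard tabulated Zn-O values, which are our assumption here (they are not quoted in the text); with them, six equal Zn-O bonds at the mean first-shell radii of the octahedral configurations reproduce the trend of the Table:

```python
import math

def bond_valence_sum(distances, r0=1.704, b=0.37):
    """BVS = sum_i exp((r0 - R_i)/b) over the ligand bond lengths R_i
    (angstrom); r0, b are tabulated Zn-O bond-valence parameters
    (an assumption of this sketch)."""
    return sum(math.exp((r0 - r)/b) for r in distances)

# Octahedral Zn(II): six equal Zn-O bonds at the mean first-shell radius
print(round(bond_valence_sum([2.12]*6), 2))  # relaxed structure: close to 2
print(round(bond_valence_sum([1.93]*6), 2))  # raw MD structure: well above 2
```

The shorter (too short) MD bond lengths overestimate the valence, while the relaxed bond lengths give a value near the nominal +2 oxidation state, mirroring the before/after BVS columns of the Table.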
Since the decrease of charge transfer occurs mainly because the number of ligands decreases, as expected, the increase of the Zn charge in our case is smaller than in the case considered in Ref. . This is in accordance with the fact that in the latter case only a few water molecules (up to three) were added to the second shell, while in our case a fully occupied second shell of water molecules is always part of the system.

---------------- ------------------------------ ------------------------------ --------------- --------------- ---------------- ------------------------------ ------------------------------ --------------- ---------------
Geometry         $r^{(1)} \pm \Delta r^{(1)}$   $r^{(2)} \pm \Delta r^{(2)}$   BVS             Zn charge       Geometry         $r^{(1)} \pm \Delta r^{(1)}$   $r^{(2)} \pm \Delta r^{(2)}$   BVS             Zn charge
[*oct*]{} (20)   1.93 $\pm$ 0.05                4.2 $\pm$ 0.2                  3.2 $\pm$ 0.2   1.51$\pm$0.01   [*oct*]{} (20)   2.12 $\pm$ 0.08                4.2 $\pm$ 0.3                  2.0 $\pm$ 0.1   1.51$\pm$0.01
[*tth*]{} (20)   1.86 $\pm$ 0.06                4.1 $\pm$ 0.2                  2.6 $\pm$ 0.2   1.62$\pm$0.01   [*tth*]{} (20)   1.96 $\pm$ 0.06                4.1 $\pm$ 0.3                  2.0 $\pm$ 0.1   1.62$\pm$0.01
[*sqp*]{} (20)   1.95 $\pm$ 0.07                4.2 $\pm$ 0.2                  2.4 $\pm$ 0.1   1.58$\pm$0.05   [*sqp*]{} (3)    2.01 $\pm$ 0.04                4.2 $\pm$ 0.3                  1.8 $\pm$ 0.1   1.68$\pm$0.02
                                                                                                              [*sqp*]{} (12)   2.06 $\pm$ 0.09                4.2 $\pm$ 0.3                  1.9 $\pm$ 0.1   1.58$\pm$0.01
                                                                                                              [*oct*]{} (5)    2.13 $\pm$ 0.09                4.2 $\pm$ 0.3                  1.9 $\pm$ 0.1   1.52$\pm$0.01
---------------- ------------------------------ ------------------------------ --------------- --------------- ---------------- ------------------------------ ------------------------------ --------------- ---------------

In Fig. \[fig:fig1\] we show examples of the geometry of the Zn(II)–water environment in the case of the octahedral (panel a)), tetrahedral (panel b)) and penta-coordinated mode (panel c)).
![A [*Ball&Stick*]{} representation of the Zn environment in water as it results after the DFT relaxation step for the octahedral (panel a)), tetrahedral (panel b)) and penta-coordinated (panel c)) mode.[]{data-label="fig:fig1"}](Figure1.pdf){width="30.00000%"}

Computation of the XANES spectrum {#sec:CXS}
---------------------------------

We have computed the XANES spectrum of the 20 relaxed octahedral and tetrahedral configurations listed in Table \[tab:Tab1\] using the [XSPECTRA]{} [@taillefumier2002x; @gougoussis2009intrinsic; @gougoussis2009first; @Bunau2013] package of the [QuantumESPRESSO]{} suite. As for the square-planar coordination, the calculation of the XANES spectrum was carried out only for the (more numerous) 12 final square base pyramidal (penta-coordinated) configurations resulting from the relaxation of the square-planar geometry. The theoretical XANES spectra that are finally compared with the experimental data are the averages taken over the set of configurations belonging to each coordination geometry (octahedral, tetrahedral and square base pyramidal). The XANES calculation was performed in the dipole approximation. Core-hole effects were taken into account by generating a Zn pseudopotential with a hole in the 1s state. The spectra are convoluted [@krause1979natural; @bunuau2013projector] with a Lorentzian having an arctangent-like, energy-dependent width $\Gamma$. The minimum of $\Gamma$, attained at low energy and up to the Fermi energy, is taken to be 1.7 eV, while its maximum, reached at high (infinite) energy, is 4 eV. The inflection point is located at 30 eV. In the presence, as is the case here, of inequivalent absorbing sites in the unit cell, the value of the energy, $E_i$, of the initial state depends on the geometry around the absorbing site due to the core-level shift.
Thus, in order to sensibly compare the theoretically computed XANES spectra within each geometry, and to correctly match the energies when performing the average, the spectra of each set need to be “re-aligned” by explicitly calculating the core-level shift, as explained in Refs. . More precisely, we first align the energies of the lowest unoccupied level of each configuration, $\epsilon_{LUB}$, and then we displace the whole spectrum by the difference $E_{TOT}^{*}-E_{TOT}$, where $E_{TOT}^{*}$ and $E_{TOT}$ are the total energies of the given configuration in the presence and in the absence of a core-hole, respectively [@nemausat2015phonon] [^4]. This correction is often claimed to be negligible and is therefore ignored. An explicit calculation for the three sets of configurations we considered shows that this is not the case here. For example, in the case of the octahedral configurations, the core-level shift can be as large as about 1 eV, as shown in the histogram of Fig. \[fig:fig2\]. ![Histogram of the magnitude of the core-level shifts for the 20 octahedral configurations of Table \[tab:Tab1\] after the relaxation step.[]{data-label="fig:fig2"}](Figure2.pdf){height="6cm"}

Results {#sec:Res}
=======

As we said, in Table \[tab:Tab1\] we summarize the geometrical modifications occurring at the end of the DFT relaxation step to the three (octahedral, tetrahedral and square-planar) sets of 20 configurations extracted from the 1 ns classical MD simulations described in Sect. \[sec:MandM\]. The first very clear (and not unexpected) result is that the square-planar configuration appears to be quite unstable. In fact, only 3 (out of 20) configurations maintain the square-planar geometry after the DFT relaxation step. Note also that in these three cases the BVS value is significantly lower than its nominal value.
Among the other square-planar initial configurations, in 12 cases a fifth water molecule is attracted within the Zn coordination sphere, and in 5 cases a sixth water molecule also approaches the ion. Our simulation data show that the instability of the square-planar geometry is not to be ascribed to a too small number of ligands, but rather to the requirement of a spherically symmetric ligand arrangement, as forced by the spherical symmetry of the d$^{10}$ electronic structure of Zn(II). We see, in fact, that starting from a tetrahedral configuration, which features the same number of ligands as the square-planar configuration, not only is this coordination mode maintained, but the corresponding BVS value is close to what we expect it to be. However, as already mentioned in the previous section, we stress that none of the initially tetrahedral coordinations changes its coordination number. This is due to the interactions between the first and the second solvation layers (see also below) that, despite the low energy involved in hydrogen bonds, do not allow the displacement of water molecules from the second to the first Zn coordination sphere. This lack of mobility is a common effect in the energy relaxation of systems containing many water molecules extracted from a larger liquid sample. The situation for the octahedral geometry is similar to what we see in the tetrahedral case. The octahedral geometry looks very stable and the only visible effect of the DFT relaxation is a slight increase of the ligand–Zn mean distance. It is worth noticing that including the second solvation shell of Zn in the DFT model allows one to properly take into account the interactions between the Zn-bound water molecules and the environment of the complex in solution.
For the average distance between O atoms belonging to water molecules in the Zn first solvation shell and O atoms belonging to water molecules in the Zn second solvation shell, $\langle r^{(1,2)} \rangle$, we find in the octahedral geometry $\langle r^{(1,2)} \rangle|_{Zn\,in\,water} = 2.7 \pm 0.3$ Å. This value is smaller than the value obtained by averaging the O–O distances contributing to the first peak ($r \le 3.5$ Å) in the radial distribution function of the TIP3P water molecules, for which one finds $\langle r^{(1,2)} \rangle|_{pure\,water} = 3.0\pm0.3$ Å. As already observed in other calculations [@migliorati2012influence; @sanchez1996examining; @Smirnov2013], the hydrogen bond between Zn-bound water molecules and water molecules in the second solvation shell is stronger than the hydrogen bonds of liquid water. This difference confirms that, as expected, the Zn-bound water molecules interact via activated hydrogen bonds with nearby water molecules. These interactions affect the electron ground state around the Zn center and, thanks to the quantum mechanical DFT treatment used in our approach, they are fully included in the simulation of the XANES spectra. The [XSPECTRA]{} code is employed to compute the XANES spectrum of each one of the configurations belonging to the three geometries listed in Table \[tab:Tab1\] (20 configurations each for the octahedral and tetrahedral geometries, and 12 for the square base pyramidal, i.e. penta-coordinated, one). As described in Sect. \[sec:CXS\], each spectrum computed in this way is appropriately shifted and convoluted with a Lorentzian. Then, an average of the shifted and convoluted spectra is performed within each class of coordination geometry. The three average spectra are finally compared to experimental data taken at the GILDA beamline at ESRF [@d1998gilda]. The comparison is shown in Fig. \[fig:fig3\], where in red we have drawn the experimental curve and in blue the theoretical ones.
![image](Figure3.pdf){height="5.0cm"} We clearly see that the octahedral geometry is in very good agreement with the experimental XANES spectrum. On the contrary, the simulated spectra from the tetrahedral and the penta-coordinated geometries strongly deviate from the experimental one. This conclusion can be made more quantitative by computing and comparing the $R$-factors of the three sets of data [^5]. They are collected in Table \[tab:tab3\].

  Coordination mode   $R$-factor
  ------------------- ------------
  octahedral          0.06
  tetrahedral         0.19
  penta-coordinated   0.09

  : []{data-label="tab:tab3"}

One gets $R= 0.06$ for the octahedral coordination mode, $R = 0.19$ for the tetrahedral one and $R = 0.09$ for the penta-coordinated mode. These numbers confirm the qualitative observation, already drawn by looking at Fig. \[fig:fig3\], that the coordination mode of Zn in water is octahedral. This conclusion is certainly not unexpected, but the key point of the strategy we have presented is that this result has been obtained in a fully first-principles approach with no free parameters and no fitting.

Conclusions
===========

Extending the general strategy we have developed in the case of Cu in water [@la2015first], we have been able to accurately reproduce the XANES spectrum of Zn(II) in water, proving that among different [*a priori*]{} plausible geometries, Zn in water lives in an octahedral coordination. The main virtue of the approach we present in this paper is that the calculation of the XANES spectrum is performed from first principles in a completely parameter-free way. Rather good fits of the XANES spectrum of Zn in water are already available in the literature [@migliorati2012influence; @d2002combined]. However, in those works, some kind of fit against a variable number of nonstructural parameters was performed.
A parameter-free approach, like the one advocated in this paper, is instead aimed at calculating XANES spectra of complex systems of interest, not only in biology, but also in materials science, without [*ad hoc*]{} assumptions or fitting ans[ä]{}tze.

Conflicts of interest {#conflicts-of-interest .unnumbered}
=====================

There are no conflicts to declare.

Acknowledgements {#acknowledgements .unnumbered}
================

The calculations have been performed under the agreement between INFN and the National Supercomputing Consortium of Italy CINECA. The authors thank Y. Joly for useful discussions. This work was partly supported by INAIL grant BRiC 2016 ID17/2016.

[^1]: We note incidentally that the charge smearing provided by the dummy atoms helps avoid possible spurious effects due to a too high concentration of charge density at the metal site.

[^2]: It should be observed that in the cases of the square-planar and tetrahedral geometries the total charge of the dummy atoms is smaller than the nominal Zn ion charge, $Q_{Zn(II)}=2|e|$. This is not a problem, as we are here only preparing initial configurations of Zn in water with the metal ion in the desired coordination mode; these will then be relaxed by DFT methods (see next section) with each atom having its correct electron number.

[^3]: If anything, one would like to minimize the free energy of the system and not its potential energy.

[^4]: The core-level shift is calculated according to the formula (see eq. (22) of Ref. ) $$\begin{aligned} E \rightarrow E - \epsilon_{LUB} + (E_{TOT}^{*} - E_{TOT})\, .\nonumber\end{aligned}$$ In this shift the energy of the lowest unoccupied electronic band, $\epsilon_{LUB}$, is subtracted, and the total energy difference between the system with one 1s core hole plus one electron in the first available electronic state ($E_{TOT}^{*}$) and that of the ground state ($E_{TOT}$) is added.

[^5]: The $R$-factor is an estimate of the similarity between two sets of data.
It is defined by the formula $$\begin{aligned} R= \frac{\sum_{i=1}^N \vert \mu ^{ex} (E_i) - \mu ^{th} (E_i) \vert}{\sum_{i=1}^N \vert \mu ^{ex} (E_i)\vert} \nonumber\end{aligned}$$ where $\mu^{ex} (E_i)$ and $\mu^{th} (E_i)$ are the experimental and the simulated/theoretical data, respectively, sampled on a common grid of energies $E_i$.
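The $R$-factor formula above translates directly into code. A minimal sketch, assuming both spectra are sampled on the same energy grid; the grid and the toy absorption curve below are purely illustrative.

```python
import numpy as np

def r_factor(mu_ex, mu_th):
    """R-factor between two spectra on a common energy grid:
    sum_i |mu_ex(E_i) - mu_th(E_i)| / sum_i |mu_ex(E_i)|."""
    mu_ex = np.asarray(mu_ex, dtype=float)
    mu_th = np.asarray(mu_th, dtype=float)
    return np.sum(np.abs(mu_ex - mu_th)) / np.sum(np.abs(mu_ex))

# Identical spectra give R = 0; a uniform 10% deviation gives R = 0.1
e = np.linspace(9650.0, 9750.0, 201)        # illustrative energy grid (eV)
mu = 1.0 - np.exp(-(e - 9660.0) / 40.0)     # toy absorption-edge shape
print(r_factor(mu, mu), round(r_factor(mu, 1.1 * mu), 2))  # → 0.0 0.1
```

The smaller the $R$-factor, the closer the simulated spectrum is to the experimental one, which is how the values in Table \[tab:tab3\] single out the octahedral coordination.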
--- abstract: 'Preconditioned Krylov subspace (KSP) methods are widely used for solving very large and sparse linear systems arising from PDE discretizations. For modern applications, these linear systems are often nonsymmetric due to the nature of the PDEs, the boundary or jump conditions, or the discretization methods. While a number of preconditioned KSP methods and their implementations are readily available, it is often unclear to users which ones are the best choices for different classes of problems. In this work, we present a systematic comparison of some representative KSP methods and preconditioners. We consider four KSP methods, namely restarted GMRES, TFQMR, BiCGSTAB, and QMRCGSTAB, coupled with three preconditioners, namely Gauss-Seidel, incomplete LU factorization (ILU), and algebraic multigrid (AMG). We assess these preconditioned KSP methods using large, sparse, nonsymmetric linear systems arising from PDE discretizations in 2D and 3D. We compare the convergence and timing results of these different combinations, and assess the scalability of these methods with respect to the number of unknowns. Our results show that GMRES tends to deliver the best performance when coupled with AMG preconditioners, but it is far less competitive than the other three methods due to restarts when using Gauss-Seidel or ILU preconditioners. Our results also show that the smoothed-aggregation AMG delivers better performance and exhibits better scalability than classical AMG, but it is less robust, especially for ill-conditioned systems. The study helps establish some practical guidelines for choosing preconditioned KSP methods. It also motivates the further development of more effective and robust multigrid preconditioners for large, sparse, nonsymmetric, and potentially ill-conditioned linear systems.' 
author: - Aditi Ghai - Cao Lu - Xiangmin Jiao  bibliography: - 'multigrid.bib' - 'refs.bib' title: | A Comparison of Preconditioned Krylov Subspace\ Methods for Nonsymmetric Linear Systems --- Introduction ============ Preconditioned Krylov subspace (KSP) methods are widely used for solving large sparse linear systems, especially those arising from discretizations of partial differential equations. For most modern applications, these linear systems are nonsymmetric due to various reasons, such as the multiphysics nature of the PDEs, some sophisticated boundary or jump conditions, or the discretization methods themselves. Although for symmetric systems, conjugate gradient (CG) [@Hestenes52CG] and MINRES [@Paige75MINRES] are well recognized as the best KSP methods [@FS12CG], the situation is far less clear for nonsymmetric systems. Various KSP methods have been developed, such as GMRES [@Saad86GMRES], CGS [@Sonneveld89CGS], QMR [@FN91QMR], TFQMR [@Freund93TFQMR], BiCGSTAB [@vanderVorst92BiCGSTAB], QMRCGSTAB [@CGS94QMRCGS], etc. Most of these methods are described in detail in textbooks such as [@BBC94Templates; @Saad03IMS; @Van-der-Vorst:2003aa], and their implementations are readily available in software packages such as PETSc [@petsc-user-ref] and MATLAB [@MATLAB]. However, each of these methods has its own advantages and disadvantages. Therefore, it is difficult for practitioners to choose the proper methods for their specific applications. To make the matter worse, a KSP method may perform well with one preconditioner but poorly with another preconditioner. As a result, users often spend a significant amount of time on trial and error to find a reasonable combination of the KSP solvers and preconditioners, and yet the final choice may still be far from optimal. Therefore, a systematic comparison of the preconditioned KSP methods is an important subject. In the literature, various comparisons of KSP methods have been reported previously. 
In [@comparison_trefethen], Nachtigal, Reddy and Trefethen presented some theoretical analysis and comparison of the convergence properties of CGN, GMRES, and CGS, which were the leading methods for nonsymmetric systems in the early 1990s. They showed that the convergence of CGN is governed by singular values, whereas that of GMRES and CGS is governed by eigenvalues and pseudo-eigenvalues, and each of these methods may significantly outperform the others for different matrices. Their work did not consider preconditioners. The work is also outdated, because newer methods introduced since then are superior to CGN and CGS. In Saad’s textbook [@Saad03IMS], some comparisons of various KSP methods, including GMRES, BiCGSTAB, QMR, and TFQMR, are given in terms of computational cost and storage requirements. The importance of preconditioners is emphasized, but no detailed comparison of the different combinations of the KSP methods and preconditioners is given. The same is also true for other textbooks, such as [@Van-der-Vorst:2003aa]. In terms of empirical comparison, Meister reported a comparison of a few preconditioned KSP methods for several inviscid and viscous flow problems [@MEISTER1998311]. His study focused on incomplete LU factorization as the preconditioner. Benzi and coworkers [@Benzi99CSS; @BENZI02PTL] also compared a number of preconditioners, again with a focus on incomplete factorizations and their block variants. What is notably missing in these previous studies are multigrid preconditioners, which have advanced significantly in recent years. The goal of this paper is to perform a systematic comparison and in turn establish some practical guidelines for choosing the best combinations of preconditioned KSP solvers. Our study is similar in spirit to the recent work of Feng and Saunders in [@FS12CG], which compared CG and MINRES for symmetric systems. However, we focus on nonsymmetric systems, with a heavy emphasis on preconditioners.
We consider four KSP solvers, namely GMRES, TFQMR, BiCGSTAB and QMRCGSTAB. Among these, the latter three enjoy three-term recurrences. We also consider three preconditioners, namely Gauss-Seidel, incomplete LU factorization (ILU), and algebraic multigrid (AMG). Each of these KSP methods and preconditioners has its advantages and disadvantages, so theoretical analysis alone is insufficient for establishing their suitability for different types of problems. We compare the methods empirically in terms of convergence and timing results for linear systems constructed from four different numerical discretization methods for PDEs in both 2D and 3D. The sizes of these linear systems range from $10^{5}$ to $10^{7}$ unknowns, which are representative of modern industrial applications. We also assess the scalability of the different preconditioned KSP solvers as the number of unknowns increases. To the best of our knowledge, this is the most comprehensive comparison of preconditioned KSP solvers to date for large, sparse, nonsymmetric linear systems. Our results show that the smoothed-aggregation AMG typically delivers better performance and exhibits better scalability than classical AMG, but it is less robust, especially for ill-conditioned systems. These results help establish some practical guidelines for choosing preconditioned KSP methods. They also motivate the further development of more effective, scalable, and robust multigrid preconditioners for large, sparse, nonsymmetric, and potentially ill-conditioned linear systems. The remainder of the paper is organized as follows. In Section \[sec:background\], we review some background knowledge of KSP methods and preconditioners, and compare these KSP methods in terms of their Krylov subspaces and the iteration procedures for computing their basis vectors.
In Section \[sec:Analysis-KSP\], we outline a few KSP methods and compare their main properties in terms of asymptotic convergence, number of operations per iteration, and storage requirements. This theoretical background will help us predict the relative performance of the various methods and interpret the numerical results. In Section \[sec:PDE-Discretization-Methods\], we summarize the PDE discretization methods, with an emphasis on the various sources of nonsymmetry of the linear systems. In Section \[sec:Results\], we present the empirical comparisons of the preconditioned KSP methods for a number of test problems. Finally, Section \[sec:Conclusions-and-Future\] concludes the paper with a discussion of future work.

Background\[sec:background\]
============================

In this section, we give a general overview of Krylov subspace methods and preconditioners for solving a linear system $$\vec{A}\vec{x}=\vec{b},\label{eq:linear_system}$$ where $\vec{A}\in\mathbb{R}^{n\times n}$ is large, sparse and nonsymmetric, and $\vec{b}\in\mathbb{R}^{n}$. We consider only real matrices, because they are the most common in applications. However, all the methods are applicable to complex matrices by replacing the matrix transposes with conjugate transposes. We focus on the Krylov subspaces and the procedures for constructing the basis vectors of the subspaces, which are often the determining factors in the overall performance of different types of KSP methods. We defer more detailed discussions and analysis of the individual methods to Section \[sec:Analysis-KSP\].
Krylov Subspaces
----------------

Given a matrix $\vec{A}\in\mathbb{R}^{n\times n}$ and a vector $\vec{v}\in\mathbb{R}^{n}$, the $k$th *Krylov subspace* generated by them, denoted by $\mathcal{K}_{k}(\vec{A},\vec{v})$, is given by $$\mathcal{K}_{k}(\vec{A},\vec{v})=\mbox{span}\{\vec{v},\vec{A}\vec{v},\vec{A}^{2}\vec{v},\dots,\vec{A}^{k-1}\vec{v}\}.\label{eq:Krylov}$$ To solve the linear system (\[eq:linear\_system\]), let $\vec{x}_{0}$ be some initial guess to the solution, and let $\vec{r}_{0}=\vec{b}-\vec{A}\vec{x}_{0}$ be the initial residual vector. A Krylov subspace method incrementally finds approximate solutions within $\mathcal{K}_{k}(\vec{A},\vec{v})$, sometimes with the aid of another Krylov subspace $\mathcal{K}_{k}(\vec{A}^{T},\vec{w})$, where $\vec{v}$ and $\vec{w}$ typically depend on $\vec{r}_{0}$. To construct a basis of the subspace $\mathcal{K}(\vec{A},\vec{v})$, two procedures are commonly used: the (restarted) *Arnoldi iteration* [@Arnoldi51PMI] and the *bi-Lanczos iteration* [@Lan50; @Van-der-Vorst:2003aa] (a.k.a. Lanczos biorthogonalization [@Saad03IMS] or tridiagonal biorthogonalization [@TB97NLA]).

### The Arnoldi Iteration

The Arnoldi iteration is a procedure for constructing an orthonormal basis of the Krylov subspace $\mathcal{K}(\vec{A},\vec{v})$. Starting from a unit vector $\vec{q}_{1}=\vec{v}/\Vert\vec{v}\Vert$, it iteratively constructs $$\vec{Q}_{k+1}=[\vec{q}_{1}\mid\vec{q}_{2}\mid\dots\mid\vec{q}_{k}\mid\vec{q}_{k+1}]\label{eq:Arnoldi_basis}$$ with orthonormal columns by solving $$h_{k+1,k}\vec{q}_{k+1}=\vec{A}\vec{q}_{k}-h_{1k}\vec{q}_{1}-\cdots-h_{kk}\vec{q}_{k},\label{eq:Arnoldi_core}$$ where $h_{ij}=\vec{q}_{i}^{T}\vec{A}\vec{q}_{j}$ for $i\leq j$, and $h_{k+1,k}=\Vert\vec{A}\vec{q}_{k}-h_{1k}\vec{q}_{1}-\cdots-h_{kk}\vec{q}_{k}\Vert$, i.e., the norm of the right-hand side of (\[eq:Arnoldi\_core\]). This is analogous to Gram-Schmidt orthogonalization.
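The Arnoldi recurrence (\[eq:Arnoldi\_core\]) can be sketched in a few lines of NumPy. This is a minimal illustration with an arbitrary random matrix and starting vector; no restarting or breakdown handling is included.

```python
import numpy as np

def arnoldi(A, v, k):
    """Arnoldi iteration: build Q_{k+1} with orthonormal columns and the
    (k+1) x k upper Hessenberg H such that A @ Q[:, :k] == Q @ H."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):            # Gram-Schmidt against q_1, ..., q_{j+1}
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)   # assumes no breakdown (K_k != K_{k-1})
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
Q, H = arnoldi(A, rng.standard_normal(50), 10)
print(np.allclose(A @ Q[:, :10], Q @ H),   # A Q_k = Q_{k+1} H~_k
      np.allclose(Q.T @ Q, np.eye(11)))    # → True True
```

Note that the inner loop over previous basis vectors is exactly the $k$-term recurrence discussed below: the work per step grows with the iteration index, which is what makes restarting necessary in practice.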
If $\mathcal{K}_{k}\neq\mathcal{K}_{k-1}$, then the columns of $\vec{Q}_{k}$ form an orthonormal basis of $\mathcal{K}_{k}(\vec{A},\vec{v})$, and $$\vec{A}\vec{Q}_{k}=\vec{Q}_{k+1}\tilde{\vec{H}}_{_{k}},$$ where $\tilde{\vec{H}}_{_{k}}$ is a $(k+1)\times k$ upper Hessenberg matrix, whose entries $h_{ij}$ are those in (\[eq:Arnoldi\_core\]) for $i\leq j+1$, and $h_{ij}=0$ for $i>j+1$. The KSP method GMRES [@Saad86GMRES] is based on the Arnoldi iteration, with $\vec{v}=\vec{r}_{0}$. If $\vec{A}$ is symmetric, the Hessenberg matrix $\tilde{\vec{H}}_{_{k}}$ reduces to a tridiagonal matrix, and the Arnoldi iteration reduces to the Lanczos iteration. The Lanczos iteration enjoys a three-term recurrence. In contrast, the Arnoldi iteration has a $k$-term recurrence, so its computational cost increases as $k$ increases. For this reason, one almost always needs to restart the Arnoldi iteration in practice, for example after every 30 iterations, building a new Krylov subspace from $\vec{v}=\vec{r}_{k}$ at restart. Unfortunately, the restart may undermine the convergence of the KSP methods.

### The Bi-Lanczos Iteration

The bi-Lanczos iteration, also known as Lanczos biorthogonalization or tridiagonal biorthogonalization, offers an alternative for constructing a basis of the Krylov subspace $\mathcal{K}(\vec{A},\vec{v})$. Unlike the Arnoldi iteration, the bi-Lanczos iteration enjoys a three-term recurrence. However, the basis is no longer orthogonal, and two matrix-vector multiplications are needed per iteration, instead of just one. The bi-Lanczos iteration can be described as follows. Starting from the vector $\vec{v}_{1}=\vec{v}/\Vert\vec{v}\Vert$, we iteratively construct $$\vec{V}_{k+1}=[\vec{v}_{1}\mid\vec{v}_{2}\mid\dots\mid\vec{v}_{k}\mid\vec{v}_{k+1}],\label{eq:nonorth_basis}$$ by solving $$\beta_{k}\vec{v}_{k+1}=\vec{A}\vec{v}_{k}-\gamma_{k-1}\vec{v}_{k-1}-\alpha_{k}\vec{v}_{k},\label{eq:beta_k}$$ analogous to (\[eq:Arnoldi\_core\]).
If $\mathcal{K}_{k}\neq\mathcal{K}_{k-1}$, then the columns of $\vec{V}_{k}$ form a basis of $\mathcal{K}_{k}(\vec{A},\vec{v})$, and $$\vec{A}\vec{V}_{k}=\vec{V}_{k+1}\tilde{\vec{T}}_{_{k}},\label{eq:biorth_A}$$ where $$\tilde{\vec{T}}_{_{k}}=\begin{bmatrix}\alpha_{1} & \gamma_{1}\\ \beta_{1} & \alpha_{2} & \gamma_{2}\\ & \beta_{2} & \alpha_{3} & \ddots\\ & & \ddots & \ddots & \gamma_{k-1}\\ & & & \beta_{k-1} & \alpha_{k}\\ & & & & \beta_{k} \end{bmatrix}$$ is a $(k+1)\times k$ tridiagonal matrix. To determine the $\alpha_{i}$ and $\gamma_{i}$, we construct another Krylov subspace $\mathcal{K}(\vec{A}^{T},\vec{w})$, whose basis is given by the column vectors of $$\vec{W}_{k+1}=[\vec{w}_{1}\mid\vec{w}_{2}\mid\dots\mid\vec{w}_{k}\mid\vec{w}_{k+1}],\label{eq:biorth_basis_W}$$ subject to the biorthogonality condition $$\vec{W}_{k+1}^{T}\vec{V}_{k+1}=\vec{V}_{k+1}^{T}\vec{W}_{k+1}=\vec{I}_{k+1}.\label{eq:biorthogonal}$$ Since $$\vec{W}_{k+1}^{T}\vec{A}\vec{V}_{k}=\vec{W}_{k+1}^{T}\vec{V}_{k+1}\tilde{\vec{T}}_{_{k}}=\tilde{\vec{T}}_{_{k}},$$ it follows that $$\alpha_{k}=\vec{w}_{k}^{T}\vec{A}\vec{v}_{k}.\label{eq:alpha_k}$$ Suppose $\vec{V}=\vec{V}_{n}$ and $\vec{W}=\vec{W}_{n}=\vec{V}^{-T}$ form complete bases of $\mathcal{K}_{n}(\vec{A},\vec{v})$ and $\mathcal{K}_{n}(\vec{A}^{T},\vec{w})$, respectively. Let $\vec{T}=\vec{V}^{-1}\vec{A}\vec{V}$ and $\vec{S}=\vec{T}^{T}$. Then, $$\vec{W}^{-1}\vec{A}^{T}\vec{W}=\vec{V}^{T}\vec{A}^{T}\vec{V}^{-T}=\vec{T}^{T}=\vec{S},$$ and $$\vec{A}^{T}\vec{W}_{k}=\vec{W}_{k+1}\tilde{\vec{S}}_{_{k}},\label{eq:biorth_At}$$ where $\tilde{\vec{S}}_{k}$ is the leading $(k+1)\times k$ submatrix of $\vec{S}$. Therefore, $$\gamma_{k}\vec{w}_{k+1}=\vec{A}^{T}\vec{w}_{k}-\beta_{k-1}\vec{w}_{k-1}-\alpha_{k}\vec{w}_{k}.\label{eq:gamma_k}$$ We start from $\vec{v}_{1}$ and $\vec{w}_{1}$ with $\vec{v}_{1}^{T}\vec{w}_{1}=1$, and set $\beta_{0}=\gamma_{0}=1$ and $\vec{v}_{0}=\vec{w}_{0}=\vec{0}$.
Then, $\alpha_{k}$ is uniquely determined by (\[eq:alpha\_k\]), and $\beta_{k}$ and $\gamma_{k}$ are determined by (\[eq:beta\_k\]) and (\[eq:gamma\_k\]) up to scalar factors, subject to $\vec{v}_{k+1}^{T}\vec{w}_{k+1}=1$. A typical choice is to scale the right-hand sides of (\[eq:beta\_k\]) and (\[eq:gamma\_k\]) by scalars of the same modulus [@Saad03IMS p. 230]. If $\vec{A}$ is symmetric and $\vec{v}_{1}=\vec{w}_{1}=\vec{v}/\Vert\vec{v}\Vert$, then the bi-Lanczos iteration reduces to the classical Lanczos iteration for symmetric matrices. Therefore, it can be viewed as a different generalization of the Lanczos iteration to nonsymmetric matrices. Unlike the Arnoldi iteration, the cost of the bi-Lanczos iteration is fixed per iteration, which may be advantageous in some cases. Some KSP methods, in particular BiCG [@fletcher1976conjugate] and QMR [@FN91QMR], are based on bi-Lanczos iterations. A potential issue of the bi-Lanczos iteration is that it suffers from *breakdown* if $\vec{v}_{k+1}^{T}\vec{w}_{k+1}=0$, or *near breakdown* if $\vec{v}_{k+1}^{T}\vec{w}_{k+1}\approx0$. These can be resolved by a *look-ahead* strategy, which builds a block-tridiagonal matrix $\vec{T}$. Fortunately, breakdowns are rare in practice, so look-ahead is rarely implemented. A disadvantage of the bi-Lanczos iteration is that it requires multiplication with $\vec{A}^{T}$. Although $\vec{A}^{T}$ is in principle available in most applications, multiplication with $\vec{A}^{T}$ leads to additional difficulties in performance optimization and preconditioning. Fortunately, in the bi-Lanczos iteration, $\vec{V}_{k}$ can be computed without forming $\vec{W}_{k}$ and vice versa. This observation leads to the transpose-free variants of the KSP methods, such as TFQMR [@Freund93TFQMR], which is a transpose-free variant of QMR, and CGS [@Sonneveld89CGS], which is a transpose-free variant of BiCG.
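Under the same conventions, the recurrences (\[eq:beta\_k\]) and (\[eq:gamma\_k\]) can be sketched as follows. This is a NumPy illustration with an arbitrary random matrix; the scaling follows the equal-modulus convention mentioned above, and no look-ahead is implemented, so a (near-)breakdown would simply fail.

```python
import numpy as np

def bi_lanczos(A, v, w, k):
    """Bi-Lanczos iteration: build V_{k+1}, W_{k+1} with W^T V = I and a
    (k+1) x k tridiagonal T such that A @ V[:, :k] == V @ T."""
    n = A.shape[0]
    V = np.zeros((n, k + 1))
    W = np.zeros((n, k + 1))
    T = np.zeros((k + 1, k))
    V[:, 0] = v / np.linalg.norm(v)
    W[:, 0] = w / (w @ V[:, 0])            # enforce w_1^T v_1 = 1
    beta_prev = gamma_prev = 0.0
    v_prev = w_prev = np.zeros(n)
    for j in range(k):
        alpha = W[:, j] @ (A @ V[:, j])
        vh = A @ V[:, j] - alpha * V[:, j] - gamma_prev * v_prev
        wh = A.T @ W[:, j] - alpha * W[:, j] - beta_prev * w_prev
        delta = vh @ wh                    # breakdown if delta == 0
        beta = np.sqrt(abs(delta))         # scale both sides by equal moduli
        gamma = delta / beta               # so that v_{j+1}^T w_{j+1} = 1
        V[:, j + 1] = vh / beta
        W[:, j + 1] = wh / gamma
        T[j, j], T[j + 1, j] = alpha, beta
        if j + 1 < k:
            T[j, j + 1] = gamma
        v_prev, w_prev = V[:, j], W[:, j]
        beta_prev, gamma_prev = beta, gamma
    return V, W, T

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 40))
V, W, T = bi_lanczos(A, rng.standard_normal(40), rng.standard_normal(40), 8)
print(np.allclose(W.T @ V, np.eye(9), atol=1e-6),   # biorthogonality
      np.allclose(A @ V[:, :8], V @ T, atol=1e-6))  # three-term recurrence
```

Note the fixed amount of work per step, in contrast to the growing inner loop of the Arnoldi iteration, at the price of one extra multiplication with $\vec{A}^{T}$.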
Two other examples include BiCGSTAB [@vanderVorst92BiCGSTAB], which is more stable than CGS, and QMRCGSTAB [@CGS94QMRCGS], which is a hybrid of QMR and BiCGSTAB with smoother convergence than BiCGSTAB. These transpose-free methods enjoy three-term recurrences and require two multiplications with $\vec{A}$ per iteration. Note that there is not a unique transpose-free bi-Lanczos iteration. There are primarily two types, used by CGS and TFQMR, and by BiCGSTAB and QMRCGSTAB, respectively. We will address them in more detail in Section \[sec:Analysis-KSP\].

### Comparison of the Iteration Procedures

  Method                              Iteration                   Mult. with $\vec{A}^{T}$   Mult. with $\vec{A}$   Recurrence
  ----------------------------------- --------------------------- -------------------------- ---------------------- ------------
  GMRES [@Saad86GMRES]                Arnoldi                     0                          1                      $k$-term
  BiCG [@fletcher1976conjugate]       bi-Lanczos                  1                          1                      three-term
  QMR [@FN91QMR]                      bi-Lanczos                  1                          1                      three-term
  CGS [@Sonneveld89CGS]               transpose-free bi-Lanczos   0                          2                      three-term
  TFQMR [@Freund93TFQMR]              transpose-free bi-Lanczos   0                          2                      three-term
  BiCGSTAB [@vanderVorst92BiCGSTAB]   transpose-free bi-Lanczos   0                          2                      three-term
  QMRCGSTAB [@CGS94QMRCGS]            transpose-free bi-Lanczos   0                          2                      three-term

  : Comparison of the iteration procedures underlying the KSP methods.[]{data-label="tab:KrylovSubspaces"}

Both the Arnoldi iteration and the bi-Lanczos iteration are based on the Krylov subspace $\mathcal{K}(\vec{A},\vec{r}_{0})$. However, these iteration procedures have very different properties, which are inherited by their corresponding KSP methods, as summarized in Table \[tab:KrylovSubspaces\]. These properties, for the most part, determine the cost per iteration of the KSP methods. For KSP methods based on the Arnoldi iteration, at the $k$th iteration the residual is $\vec{r}_{k}=\mathcal{P}_{k}(\vec{A})\vec{r}_{0}$ for some degree-$k$ polynomial $\mathcal{P}_{k}$, so the asymptotic convergence rates depend on the eigenvalues and the generalized eigenvectors in the Jordan form of $\vec{A}$ [@comparison_trefethen; @Saad03IMS]. For methods based on transpose-free bi-Lanczos, in general $\vec{r}_{k}=\hat{\mathcal{P}}_{k}(\vec{A})\vec{r}_{0}$, where $\hat{\mathcal{P}}_{k}$ is a polynomial of degree $2k$.
Therefore, the convergence of these methods also depends on the eigenvalues and generalized eigenvectors of $\vec{A}$, but at different asymptotic rates. Typically, the reduction of error in one iteration of a bi-Lanczos-based KSP method is approximately equal to that of two iterations of an Arnoldi-based KSP method. Since the Arnoldi iteration requires only one matrix-vector multiplication per iteration, compared to two per iteration for the bi-Lanczos iteration, the costs of the different KSP methods are comparable in terms of the number of matrix-vector multiplications. Theoretically, the Arnoldi iteration is more robust because of its use of an orthogonal basis, whereas the bi-Lanczos iteration may break down if $\vec{v}_{k+1}^{T}\vec{w}_{k+1}=0$. However, the Arnoldi iteration typically requires restarts, which can undermine convergence. Therefore, the methods based on bi-Lanczos are often more robust than GMRES with restarts. In general, if the iteration count is small compared to the average number of nonzeros per row, the methods based on the Arnoldi iteration may be more efficient; if the iteration count is large, the cost of orthogonalization in the Arnoldi iteration may become higher than that of the bi-Lanczos iteration. For these reasons, conflicting results are often reported in the literature. However, the apparent disadvantages of each KSP method may be overcome by effective preconditioners: for Arnoldi iterations, if the KSP method converges before a restart is needed, then it may be the most effective method; for bi-Lanczos iterations, if the KSP method converges before any breakdown, it is typically more robust than the methods based on restarted Arnoldi iterations. We will review the preconditioners in the next subsection. Note that some KSP methods use a Krylov subspace other than $\mathcal{K}(\vec{A},\vec{r}_{0})$.
The most notable examples are LSQR [@Paige92LSQR] and LSMR [@Fong11LSMR], which use the Krylov subspace $\mathcal{K}(\vec{A}^{T}\vec{A},\vec{A}^{T}\vec{r}_{0})$. These methods are mathematically equivalent to applying CG or MINRES to the normal equation, respectively, but with better numerical properties. An advantage of these methods is that they are applicable to least squares systems without modification. However, they are not transpose free, they tend to converge slowly for square linear systems, and they require special preconditioners. For these reasons, we do not include them in this study. Preconditioners --------------- The convergence of KSP methods can be improved significantly by the use of preconditioners. Various preconditioners have been proposed for Krylov subspace methods over the past few decades. It is virtually impossible to consider all of them. For this comparative study, we focus on three preconditioners, which are representative of the state of the art: Gauss-Seidel, incomplete LU factorization, and algebraic multigrid. ### Left and Right Preconditioners Roughly speaking, a preconditioner is a matrix or transformation $\vec{M}$, whose inverse $\vec{M}^{-1}$ approximates $\vec{A}^{-1}$, and $\vec{M}^{-1}\vec{v}$ can be computed efficiently. For nonsymmetric linear systems, a preconditioner may be applied either to the left or the right of $\vec{A}$. With a left preconditioner, instead of solving (\[eq:linear\_system\]), one solves the linear system $$\vec{M}^{-1}\vec{A}\vec{x}=\vec{M}^{-1}\vec{b}$$ by utilizing the Krylov subspace $\mathcal{K}(\vec{M}^{-1}\vec{A},\vec{M}^{-1}\vec{b})$ instead of $\mathcal{K}(\vec{A},\vec{b})$. For a right preconditioner, one solves the linear system $$\vec{A}\vec{M}^{-1}\vec{y}=\vec{b}$$ by utilizing the Krylov subspace $\mathcal{K}(\vec{A}\vec{M}^{-1},\vec{b})$, and then $\vec{x}=\vec{M}^{-1}\vec{y}$. 
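The right-preconditioned formulation above can be sketched in a few lines. The following is a minimal illustration using SciPy, with an assumed tridiagonal test matrix and SciPy's incomplete-LU factorization standing in for $\vec{M}$; it is not the PETSc-based setup used in the experiments of this study:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Assumed nonsymmetric, diagonally dominant test matrix (illustration only).
A = sp.diags([-1.0, 2.5, -0.8], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A)  # incomplete LU; ilu.solve(v) applies M^{-1} v

# Right preconditioning: solve (A M^{-1}) y = b, then recover x = M^{-1} y.
AMinv = spla.LinearOperator((n, n), matvec=lambda v: A @ ilu.solve(v))
y, info = spla.gmres(AMinv, b)
x = ilu.solve(y)

# The residual of the preconditioned system equals the true residual b - A x,
# so the stopping test monitored the quantity we actually care about.
print(info, np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

Note that the solver never forms $\vec{M}^{-1}$ explicitly; only its action on a vector is needed, which is exactly the interface a preconditioner must provide.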
The convergence of a preconditioned KSP method is then determined by the eigenvalues of $\vec{M}^{-1}\vec{A}$, which are the same as those of $\vec{A}\vec{M}^{-1}$. Qualitatively, $\vec{M}$ is a good preconditioner if $\vec{M}^{-1}\vec{A}$ is not too far from normal and its eigenvalues are more clustered than those of $\vec{A}$ [@TB97NLA]. However, this is more useful as a guideline for developers of preconditioners than for practitioners. Although left and right preconditioners have similar asymptotic behavior, they can behave drastically differently in practice. This is because the termination criterion of a Krylov subspace method is typically based on the norm of the residual of the preconditioned system. With a left preconditioner, the preconditioned residual $\Vert\vec{M}^{-1}\vec{r}_{k}\Vert$ may differ significantly from the true residual $\Vert\vec{r}_{k}\Vert$ if $\Vert\vec{M}^{-1}\Vert$ is far from 1, which unfortunately is often the case. This in turn leads to erratic behavior, such as premature termination or false stagnation of the preconditioned KSP method, unless the true residual is calculated explicitly at the cost of additional matrix-vector multiplications. In contrast, a right preconditioner does not alter the residual, so the stopping criterion can use the true residual at little or no extra cost. Unless $\vec{M}$ is very ill-conditioned, computing $\vec{x}=\vec{M}^{-1}\vec{y}$ does not introduce large errors in $\vec{x}$. For these reasons, we consider only right preconditioners in this comparative study. Note that a preconditioner may also be applied to both the left and the right of $\vec{A}$, leading to so-called *symmetric preconditioners*. Such preconditioners are more commonly used for preserving the symmetry of symmetric matrices. Like left preconditioners, they also alter the norm of the residual; therefore, we do not consider symmetric preconditioners either.

### Gauss-Seidel

Gauss-Seidel is one of the simplest preconditioners.
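Before deriving the update formally, it may help to see the method concretely. The forward Gauss-Seidel sweep, $\vec{x}_{k+1}=(\vec{D}+\vec{L})^{-1}(\vec{b}-\vec{U}\vec{x}_{k})$, where $\vec{D}+\vec{L}$ is the lower triangle of $\vec{A}$, can be sketched as follows; this is a toy illustration on an assumed 1-D Poisson matrix, and when used as a preconditioner only a single sweep per application of $\vec{M}^{-1}$ is typical:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

def gauss_seidel_sweep(A, b, x):
    """One forward sweep: x_new = (D + L)^{-1} (b - U x)."""
    DL = sp.tril(A, k=0, format="csr")   # D + L: lower triangle with diagonal
    U = sp.triu(A, k=1, format="csr")    # strict upper triangle
    return spsolve_triangular(DL, b - U @ x, lower=True)

# Assumed test problem: 1-D Poisson matrix (diagonal entries are nonzero,
# as Gauss-Seidel requires).
n = 20
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x = np.zeros(n)
for _ in range(2000):                  # used here as a stationary iteration
    x = gauss_seidel_sweep(A, b, x)
```

The triangular solve is the entire cost of a sweep, which is why Gauss-Seidel is roughly as expensive as one matrix-vector multiplication.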
Based on stationary iterative methods, Gauss-Seidel is relatively easy to implement, so it is often the choice if one must implement a preconditioner from scratch. Consider the partitioning $\vec{A}=\vec{D}+\vec{L}+\vec{U}$, where $\vec{D}$ is the diagonal of $\vec{A}$, $\vec{L}$ is the strict lower triangular part, and $\vec{U}$ is the strict upper triangular part. Given $\vec{x}_{k}$ and $\vec{b}$, the Gauss-Seidel method computes the new approximation $\vec{x}_{k+1}$ as $$\vec{x}_{k+1}=(\vec{D}+\vec{L})^{-1}(\vec{b}-\vec{U}\vec{x}_{k}).$$ Gauss-Seidel is a special case of SOR, which computes $\vec{x}_{k+1}$ as $$\vec{x}_{k+1}=(\vec{D}+\omega\vec{L})^{-1}\left(\omega\left(\vec{b}-\vec{U}\vec{x}_{k}\right)+(1-\omega)\vec{D}\vec{x}_{k}\right).$$ When $\omega=1$, SOR reduces to Gauss-Seidel; when $\omega>1$ or $\omega<1$, it corresponds to over-relaxation or under-relaxation, respectively. We choose to include Gauss-Seidel instead of SOR in our comparison because it is parameter free, whereas the optimal choice of $\omega$ in SOR is problem dependent. Another related preconditioner is the Jacobi or diagonal preconditioner, which is typically less effective than Gauss-Seidel. A limitation of Gauss-Seidel, also shared by Jacobi and SOR, is that the diagonal entries of $\vec{A}$ must be nonzero. Fortunately, this condition is typically satisfied for linear systems arising from PDE discretizations.

### Incomplete LU Factorization

Incomplete LU factorization (ILU) is one of the most widely used “black-box” preconditioners. It performs an approximate factorization $$\vec{A}\approx\vec{L}\vec{U},$$ where $\vec{L}$ and $\vec{U}$ are far sparser than those in the true LU factorization of $\vec{A}$. In its simplest form, ILU does not introduce any fill, so that $\vec{L}$ and $\vec{U}$ preserve the sparsity patterns of the lower and upper triangular parts of $\vec{A}$, respectively. In this case, the diagonal entries of $\vec{A}$ must be nonzero.
ILU may be extended to allow a controlled amount of fill and to use partial pivoting. These options improve the stability of the factorization, but also increase the computational cost and storage. The ILU factorization is available in software packages such as PETSc and MATLAB. Their default options are typically no-fill, which is what we will use in this study.

### Algebraic Multigrid

Multigrid methods, including geometric multigrid (GMG) and algebraic multigrid (AMG), are the most sophisticated preconditioners. These methods are typically built on stationary iterative methods, and they accelerate convergence by constructing a series of coarser representations of the problem. Compared to Gauss-Seidel and ILU, multigrid preconditioners, especially GMG, are far more difficult to implement. Fortunately, AMG preconditioners are more easily accessible through software libraries. There are primarily two types of AMG methods, the classical AMG and smoothed aggregation, which are based on different coarsening strategies and different prolongation and restriction operators. An efficient implementation of the former is available in Hypre [@falgout2002hypre], and one of the latter is available in ML [@GeeSie06ML]; both are accessible through PETSc. Computationally, AMG is more expensive than Gauss-Seidel and ILU in terms of both setup time and cost per iteration, but it is also more scalable in problem size. Therefore, AMG is beneficial only if it accelerates the convergence of the KSP method significantly and the problem size is sufficiently large. In general, the classical AMG is more expensive than smoothed aggregation in terms of cost per iteration, but it tends to converge much faster. Depending on the type of the problem, the classical AMG may outperform smoothed aggregation and vice versa. The classical AMG may also require more tuning to achieve good performance.
Therefore, in this work we primarily use smoothed aggregation, but we will also present a comparison between ML and Hypre in Section \[sub:ML-VS-HYPRE\].

Analysis of Preconditioned KSP Methods\[sec:Analysis-KSP\]
==========================================================

In this section, we discuss a few Krylov subspace methods in more detail, especially the preconditioned GMRES, TFQMR, BiCGSTAB, and QMRCGSTAB with right preconditioners. In the literature, these methods are typically given either without preconditioners or with left preconditioners. We present their high-level descriptions with right preconditioners. We also present some theoretical results in terms of operation counts and storage, which are helpful in interpreting the numerical results. The costs of the preconditioners are independent of the KSP methods, so we do not include them in the comparison.

GMRES
-----

Developed by Saad and Schultz [@Saad86GMRES], GMRES, or the generalized minimal residual method, is one of the most well-known iterative methods for solving large, sparse, nonsymmetric systems. GMRES is based on the Arnoldi iteration. At the $k$th iteration, it minimizes $\Vert\vec{r}_{k}\Vert$ in $\mathcal{K}_{k}(\vec{A},\vec{b})$. Equivalently, it finds an optimal degree-$k$ polynomial $\mathcal{P}_{k}(\vec{A})$ such that $\vec{r}_{k}=\mathcal{P}_{k}(\vec{A})\vec{r}_{0}$ and $\Vert\vec{r}_{k}\Vert$ is minimized. Suppose the approximate solution has the form $$\vec{x}_{k}=\vec{x}_{0}+\vec{Q}_{k}\vec{z},\label{eq:GMRES_sol}$$ where $\vec{Q}_{k}$ was given in (\[eq:Arnoldi\_basis\]). Let $\beta=\Vert\vec{r}_{0}\Vert$ and $\vec{q}_{1}=\vec{r}_{0}/\Vert\vec{r}_{0}\Vert$. It then follows that $$\vec{r}_{k}=\vec{b}-\vec{A}\vec{x}_{k}=\vec{b}-\vec{A}(\vec{x}_{0}+\vec{Q}_{k}\vec{z})=\vec{r}_{0}-\vec{A}\vec{Q}_{k}\vec{z}=\vec{Q}_{k+1}(\beta\vec{e}_{1}-\tilde{\vec{H}}_{k}\vec{z}),\label{eq:GMRES_residual}$$ and $\Vert\vec{r}_{k}\Vert=\Vert\beta\vec{e}_{1}-\tilde{\vec{H}}_{k}\vec{z}\Vert$.
Therefore, $\vec{r}_{k}$ is minimized by solving the least squares system $\tilde{\vec{H}}_{k}\vec{z}\approx\beta\vec{e}_{1}$ using QR factorization. In this sense, GMRES is closely related to MINRES for solving symmetric systems [@Paige75MINRES]. Algorithm 1 gives a high-level pseudocode of the preconditioned GMRES with a right preconditioner; for a more detailed pseudocode, see e.g. [@Saad03IMS p. 294]. For nonsingular matrices, the convergence of GMRES depends on whether $\vec{A}$ is close to normal, and also on the distribution of its eigenvalues [@comparison_trefethen; @TB97NLA]. At the $k$th iteration, GMRES requires one matrix-vector multiplication, $k+1$ axpy operations (i.e., $\alpha\vec{x}+\vec{y}$), and $k+1$ inner products. Let $\ell$ denote the average number of nonzeros per row. In total, GMRES requires $2n(\ell+2k+2)$ floating-point operations per iteration, and requires storing $k+5$ vectors in addition to the matrix itself. Due to the high cost of orthogonalization in the Arnoldi iteration, GMRES in practice needs to be restarted periodically. This leads to GMRES with restart, denoted by GMRES($r$), where $r$ is the iteration count before restart. A typical value of $r$ is 30. **[<span style="font-variant:small-caps;">Algorithm</span>]{}** **1:** **Preconditioned GMRES** **input**: $\vec{x}_{0}$: initial guess $\vec{r}_{0}$: initial residual **output**: $\vec{x}_{*}$: final solution $\vec{q}_{1}\leftarrow\vec{r}_{0}/\Vert\vec{r}_{0}\Vert$; $\beta\leftarrow\Vert\vec{r}_{0}\Vert$ **for** $k=1,2,\dots$     obtain $\tilde{\vec{H}}_{k}$ and $\vec{Q}_{k}$ from Arnoldi iteration s.t. 
$\vec{r}_{k}=\mathcal{P}_{k}(\vec{A}\vec{M}^{-1})\vec{r}_{0}$     solve $\tilde{\vec{H}}_{k}\vec{z}\approx\beta\vec{e}_{1}$     $\vec{y}_{k}\leftarrow\vec{Q}_{k}\vec{z}$     check convergence of $\Vert\vec{r}_{k}\Vert$ **end for** $\vec{x}_{*}\leftarrow\vec{M}^{-1}\vec{y}_{k}$ **[<span style="font-variant:small-caps;">Algorithm</span>]{}** **2:** **Preconditioned TFQMR** **input**: $\vec{x}_{0}$: initial guess $\vec{r}_{0}$: initial residual **output**: $\vec{x}_{*}$: final solution $\vec{v}_{1}\leftarrow\vec{r}_{0}/\Vert\vec{r}_{0}\Vert$; $\beta\leftarrow\Vert\vec{r}_{0}\Vert$ **for** $k=1,2,\dots$     obtain $\tilde{\vec{T}}_{k}$ and $\vec{V}_{k}$ from bi-Lanczos s.t. $\vec{r}_{k}=\mathcal{\tilde{P}}_{k}^{2}(\vec{A}\vec{M}^{-1})\vec{r}_{0}$     solve $\tilde{\vec{T}}_{k}\vec{z}\approx\beta\vec{e}_{1}$     $\vec{y}_{k}\leftarrow\vec{V}_{k}\vec{z}$     check convergence of $\Vert\vec{r}_{k}\Vert$ **end for** $\vec{x}_{*}\leftarrow\vec{M}^{-1}\vec{y}_{k}$ QMR and TFQMR ------------- Proposed by Freund and Nachtigal [@FN91QMR], QMR, or quasi-minimal residual method, minimizes $\vec{r}_{k}$ in a pseudonorm within the Krylov subspace $\mathcal{K}(\vec{A},\vec{r}_{0})$. At the $k$th step, suppose the approximate solution has the form $$\vec{x}_{k}=\vec{x}_{0}+\vec{V}_{k}\vec{z},\label{eq:22}$$ where $\vec{V}_{k}$ was the same as that in (\[eq:nonorth\_basis\]). Let $\beta=\Vert\vec{r}_{0}\Vert$ and $\vec{v}_{1}=\vec{r}_{0}/\Vert\vec{r}_{0}\Vert$. 
It then follows that $$\vec{r}_{k}=\vec{b}-\vec{A}\vec{x}_{k}=\vec{b}-\vec{A}(\vec{x}_{0}+\vec{V}_{k}\vec{z})=\vec{r}_{0}-\vec{A}\vec{V}_{k}\vec{z}=\vec{V}_{k+1}(\beta\vec{e}_{1}-\tilde{\vec{T}}_{k}\vec{z}).$$ QMR minimizes $\Vert\beta\vec{e}_{1}-\tilde{\vec{T}}_{k}\vec{z}\Vert$ by solving the least-squares problem $\tilde{\vec{T}}_{k}\vec{z}\approx\beta\vec{e}_{1}$, which is equivalent to minimizing the pseudonorm $$\Vert\vec{r}_{k}\Vert_{\vec{W}_{k+1}^{T}}=\Vert\vec{W}_{k+1}^{T}\vec{r}_{k}\Vert_{2},$$ where $\vec{W}_{k+1}$ was defined in (\[eq:biorth\_basis\_W\]). QMR requires explicit construction of $\vec{W}_{k}$. TFQMR [@Freund93TFQMR] is a transpose-free variant, which constructs $\vec{V}_{k}$ without forming $\vec{W}_{k}$. Motivated by CGS [@Sonneveld89CGS], at the $k$th iteration TFQMR finds a degree-$k$ polynomial $\tilde{\mathcal{P}}_{k}(\vec{A})$ such that $\vec{r}_{k}=\mathcal{\tilde{P}}_{k}^{2}(\vec{A})\vec{r}_{0}$. This is what we refer to as “transpose-free bi-Lanczos 1” in Table \[tab:KrylovSubspaces\]. Algorithm 2 outlines TFQMR with a right preconditioner; its only difference from GMRES is in lines 3–5. Detailed pseudocode without preconditioners can be found in [@Freund93TFQMR] and [@Saad03IMS p. 252]. At the $k$th iteration, TFQMR requires two matrix-vector multiplications, ten axpy operations (i.e., $\alpha\vec{x}+\vec{y}$), and four inner products. In total, TFQMR requires $4n(\ell+7)$ floating-point operations per iteration, and requires storing eight vectors in addition to the matrix itself. This is comparable to QMR, which requires 12 axpy operations and two inner products, so QMR requires the same number of floating-point operations. However, QMR requires storing twice as many vectors as TFQMR. In practice, TFQMR often outperforms QMR, because the multiplication with $\vec{A}^{T}$ is often less optimized. In addition, the preconditioning of QMR is problematic, especially with multigrid preconditioners.
Therefore, TFQMR is in general preferred over QMR. Both QMR and TFQMR may suffer from breakdowns, but these rarely happen in practice, especially with a good preconditioner.

BiCGSTAB
--------

Proposed by van der Vorst [@vanderVorst92BiCGSTAB], BiCGSTAB is a transpose-free version of BiCG, which has smoother convergence than BiCG and CGS. Unlike CGS and TFQMR, at the $k$th iteration BiCGSTAB constructs another degree-$k$ polynomial $$\mathcal{Q}_{k}(\vec{A})=(1-\omega_{1}\vec{A})(1-\omega_{2}\vec{A})\cdots(1-\omega_{k}\vec{A})\label{eq:bicgstab_poly}$$ in addition to $\mathcal{\tilde{P}}_{k}(\vec{A})$ in CGS, such that $\vec{r}_{k}=\mathcal{Q}_{k}(\vec{A})\mathcal{\tilde{P}}_{k}(\vec{A})\vec{r}_{0}$. BiCGSTAB determines $\omega_{k}$ by minimizing $\Vert\vec{r}_{k}\Vert$ with respect to $\omega_{k}$. This is what we refer to as “transpose-free bi-Lanczos 2” in Table \[tab:KrylovSubspaces\]. Like BiCG and CGS, BiCGSTAB solves the linear system $\vec{T}_{k}\vec{z}=\beta\vec{e}_{1}$ using LU factorization without pivoting, which is analogous to solving the tridiagonal system using Cholesky factorization in CG [@Hestenes52CG]. Algorithm 3 outlines BiCGSTAB with a right preconditioner, whose only difference from GMRES is in lines 3–5. Detailed pseudocode without preconditioners can be found in [@Van-der-Vorst:2003aa p. 136]. At the $k$th iteration, BiCGSTAB requires two matrix-vector multiplications, six axpy operations, and four inner products. In total, it requires $4n(\ell+5)$ floating-point operations per iteration, and requires storing $10$ vectors in addition to the matrix itself. Like GMRES, the convergence rate of BiCGSTAB also depends on the distribution of the eigenvalues of $\vec{A}$. Unlike GMRES, however, BiCGSTAB is “parameter free.” Its underlying bi-Lanczos iteration may break down, but this is very rare in practice with a good preconditioner. Therefore, BiCGSTAB is often more efficient and robust than restarted GMRES.
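As a usage sketch, BiCGSTAB can be called as a black box on an assumed nonsymmetric tridiagonal matrix; SciPy is shown here for brevity, while the experiments in this study use the PETSc implementation:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
# Assumed convection-diffusion-like tridiagonal matrix: nonsymmetric because
# the two off-diagonal coefficients differ.
A = sp.diags([-1.2, 3.0, -0.8], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Each BiCGSTAB iteration applies A twice and never needs A^T.
x, info = spla.bicgstab(A, b)
print(info, np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

The absence of any restart or orthogonalization parameter reflects the “parameter free” nature noted above; only the stopping tolerance needs to be chosen.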
**[<span style="font-variant:small-caps;">Algorithm</span>]{}** **3:** **Preconditioned BiCGSTAB** **input**: $\vec{x}_{0}$: initial guess $\vec{r}_{0}$: initial residual **output**: $\vec{x}_{*}$: final solution $\vec{v}_{1}\leftarrow\vec{r}_{0}/\Vert\vec{r}_{0}\Vert$; $\beta\leftarrow\Vert\vec{r}_{0}\Vert$ **for** $k=1,2,\dots$     obtain $\vec{T}_{k}$ & $\vec{V}_{k}$ from bi-Lanczos s.t. $\vec{r}_{k}=\mathcal{Q}_{k}\mathcal{\tilde{P}}_{k}(\vec{A}\vec{M}^{-1})\vec{r}_{0}$     solve $\vec{T}_{k}\vec{z}=\beta\vec{e}_{1}$     $\vec{y}_{k}\leftarrow\vec{V}_{k}\vec{z}$     check convergence of $\Vert\vec{r}_{k}\Vert$ **end for** $\vec{x}_{*}\leftarrow\vec{M}^{-1}\vec{y}_{k}$ **[<span style="font-variant:small-caps;">Algorithm 4</span>]{}:** **Preconditioned QMRCGSTAB** **input**: $\vec{x}_{0}$: initial guess $\vec{r}_{0}$: initial residual **output**: $\vec{x}_{*}$: final solution $\vec{v}_{1}\leftarrow\vec{r}_{0}/\Vert\vec{r}_{0}\Vert$; $\beta\leftarrow\Vert\vec{r}_{0}\Vert$ **for** $k=1,2,\dots$     obtain $\tilde{\vec{T}}_{k}$ & $\vec{V}_{k}$ from bi-Lanczos s.t. $\vec{r}_{k}=\mathcal{Q}_{k}\mathcal{\tilde{P}}_{k}(\vec{A}\vec{M}^{-1})\vec{r}_{0}$     solve $\tilde{\vec{T}}_{k}\vec{z}\approx\beta\vec{e}_{1}$     $\vec{y}_{k}\leftarrow\vec{V}_{k}\vec{z}$     check convergence of $\Vert\vec{r}_{k}\Vert$ **end for** $\vec{x}_{*}\leftarrow\vec{M}^{-1}\vec{y}_{k}$ QMRCGSTAB --------- One disadvantage of BiCGSTAB is that the residual does not decrease monotonically, and is often quite oscillatory. Chan ${\it et\thinspace al.}$ [@CGS94QMRCGS] proposed QMRCGSTAB, which is a hybrid of QMR and BiCGSTAB, to improve the smoothness of BiCGSTAB.
Like BiCGSTAB, QMRCGSTAB constructs a polynomial $\mathcal{Q}_{k}(\vec{A})$ as defined in (\[eq:bicgstab\_poly\]) by minimizing $\Vert\vec{r}_{k}\Vert$ with respect to $\omega_{k}$, which they refer to as “local quasi-minimization.” Like QMR, it then minimizes $\Vert\vec{W}_{k+1}^{T}\vec{r}_{k}\Vert_{2}$ by solving the least-squares problem $\tilde{\vec{T}}_{k}\vec{z}\approx\beta\vec{e}_{1}$, which they refer to as “global quasi-minimization.” Algorithm 4 outlines the high-level algorithm, whose only difference from BiCGSTAB is in line 4. Detailed pseudocode without preconditioners can be found in [@CGS94QMRCGS]. At the $k$th iteration, QMRCGSTAB requires two matrix-vector multiplications, eight axpy operations, and six inner products. In total, it requires $4n(\ell+7)$ floating-point operations per iteration, and it requires storing 13 vectors in addition to the matrix itself. Like QMR and BiCGSTAB, the underlying bi-Lanczos iteration may break down, but this is very rare in practice with a good preconditioner.

Comparison of Operation Counts and Storage
------------------------------------------

We summarize the cost and storage comparison of the four KSP methods in Table \[tab:Comparison-of-operations\]. Except for GMRES, the methods require two matrix-vector multiplications per iteration. However, we should not expect GMRES to be twice as fast as the other methods, because the reduction of error in one iteration of the other methods is approximately equal to that of two iterations of GMRES. Therefore, the costs of these methods are comparable in terms of matrix-vector multiplications. However, since GMRES minimizes the true 2-norm of the residual if no restart is needed, its cost per iteration is smaller for small iteration counts. Therefore, GMRES may indeed be the most efficient, especially with an effective preconditioner. However, without an effective preconditioner, restarted GMRES may converge slowly and even stagnate for large systems.
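The per-iteration operation counts derived above can be evaluated directly. The following sketch tabulates them for an assumed problem size $n$ and average row density $\ell$ (the sample values are illustrative only):

```python
def gmres_flops(n, ell, k):
    """GMRES at iteration k: 2n(ell + 2k + 2) flops (grows with k)."""
    return 2 * n * (ell + 2 * k + 2)

def bicgstab_flops(n, ell):
    """BiCGSTAB per iteration: 4n(ell + 5) flops."""
    return 4 * n * (ell + 5)

def tfqmr_flops(n, ell):
    """TFQMR (and QMRCGSTAB) per iteration: 4n(ell + 7) flops."""
    return 4 * n * (ell + 7)

n, ell = 1_000_000, 7  # assumed: 1M unknowns, ~7 nonzeros per row
print(gmres_flops(n, ell, 10))  # orthogonalization term 2k dominates for large k
print(bicgstab_flops(n, ell))
print(tfqmr_flops(n, ell))
```

Because one bi-Lanczos iteration reduces the error roughly as much as two GMRES iterations, a fair comparison weighs one call of `bicgstab_flops` against two calls of `gmres_flops`.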
For the three methods based on bi-Lanczos, computing the $2$-norm of the residual for convergence checking requires an extra inner product, as included in Table \[tab:Comparison-of-operations\]. Among the three methods, BiCGSTAB is the most efficient, requiring $8n$ fewer floating-point operations per iteration than TFQMR and QMRCGSTAB. In Section \[sec:Results\], we will present numerical comparisons of the different methods, which mostly agree with the above analysis.

  Method      Minimized norm                                                                       Mat-vec Prod.   axpy    Inner Prod.   FLOPs             Stored vectors
  ----------- ------------------------------------------------------------------------------------ --------------- ------- ------------- ----------------- ----------------
  GMRES       $\Vert\vec{r}_{k}\Vert$                                                              1               $k+1$   $k+1$         $2n(\ell+2k+2)$   $k+5$
  BiCGSTAB    $\Vert\vec{r}_{k}(\omega_{k})\Vert$                                                  2               6       4             $4n(\ell+5)$      10
  TFQMR       $\Vert\vec{r}_{k}\Vert_{\vec{W}_{k+1}^{T}}$                                          2               10      4             $4n(\ell+7)$      8
  QMRCGSTAB   $\Vert\vec{r}_{k}(\omega_{k})\Vert$ and $\Vert\vec{r}_{k}\Vert_{\vec{W}_{k+1}^{T}}$   2               8       6             $4n(\ell+7)$      13

  : \[tab:Comparison-of-operations\]Comparison of operations per iteration and memory requirements of various KSP methods. $n$ denotes the number of rows, $\ell$ the average number of nonzeros per row, and $k$ the iteration count.

In terms of storage, TFQMR requires the least memory. BiCGSTAB requires two more vectors than TFQMR, and QMRCGSTAB requires three more vectors than BiCGSTAB. GMRES requires the most memory when $k\gtrsim8$. These storage requirements are typically not large enough to be a concern in practice. The analysis above did not include the preconditioners. The computational cost of Gauss-Seidel and ILU is approximately equal to one matrix-vector multiplication per iteration.
The cost of the multigrid preconditioner is dominated by that of the setup and of the smoothing steps at the finest level, which is typically a few times that of Gauss-Seidel and ILU, depending on how many times the smoother is called. Both ILU and multigrid preconditioners require extra storage proportional to the number of nonzeros in the coefficient matrix, which is rarely a concern in practice.

\[sec:PDE-Discretization-Methods\]PDE Discretization Methods
============================================================

For our comparative study, we construct test matrices from PDE discretizations in $2$-D and $3$-D. In this section, we give a brief overview of the discretization methods used in our tests, with a focus on the origins of nonsymmetry of the linear systems.

Weighted Residual Formulation of a Model Problem
------------------------------------------------

Consider an abstract but general linear, time-independent PDE over $\Omega$, $$\mathcal{P}\,u(\vec{x})=f(\vec{x}),\label{eq:linearPDE}$$ with Dirichlet or Neumann boundary conditions over $\partial\Omega$, where $\mathcal{P}$ is a linear differential operator and $f$ is a known function. A specific example is the model problem $$\begin{aligned} -\nabla^{2}u+c\nabla u+du & =f\quad\text{in }\Omega,\label{eq:model_problem}\\ u & =g\quad\text{on }\partial\Omega,\end{aligned}$$ for which $\mathcal{P}=-\nabla^{2}+c\nabla+d$, where $c$ and $d$ are scalar constants or functions. When $d=0$, it is a convection-diffusion equation; when $c=0$, it is a Helmholtz equation. Most PDE discretization methods can be expressed in a weighted residual formulation. In particular, consider a set of test (a.k.a. weight) functions $\Psi(\vec{x})=\{\psi_{j}(\vec{x})\}$.
The PDE (\[eq:linearPDE\]) is then converted into a set of integral equations $$\int_{\Omega}\mathcal{P}\,u(\vec{x})\,\psi_{j}\,d\vec{x}=\int_{\Omega}f(\vec{x})\,\psi_{j}\,d\vec{x}.\label{eq:weak_form}$$ To discretize the system fully, we approximate $u$ by a set of basis functions $\Phi(\vec{x})=\{\phi_{i}(\vec{x})\}$, i.e., $u\approx\vec{u}^{T}\vec{\Phi}=\sum_{i}u_{i}\phi_{i}$. We then obtain a linear system $$\vec{A}\vec{u}=\vec{b},\label{eq:linearsys}$$ where $$a_{ij}=\int_{\Omega}\left(\mathcal{P}\,\phi_{j}(\vec{x})\right)\,\psi_{i}(\vec{x})\,d\vec{x}\mbox{ \,\,\ and\,\,\ }b_{i}=\int_{\Omega}f(\vec{x})\psi_{i}(\vec{x})\,d\vec{x}.$$ The system needs to be further modified to apply the boundary conditions. In general, the test and/or basis functions have local support, and therefore $\vec{A}$ is in general sparse.

Galerkin Finite Element Methods
-------------------------------

The finite element methods (FEM) are widely used for discretizing PDEs over complex geometries. For an introduction to finite element methods, see e.g. [@ZTZ05FEM]. In the classical Galerkin FEM, the basis functions $\vec{\Phi}$ and the test functions $\vec{\Psi}$ are equal. If $\mathcal{P}$ is the Laplacian operator $\nabla^{2}$ and $\phi_{i}$ vanishes along $\partial\Omega$, then after integration by parts, $$a_{ij}=\int_{\Omega}\left(\nabla^{2}\,\phi_{j}(\vec{x})\right)\,\phi_{i}(\vec{x})\,d\vec{x}=-\int_{\Omega}\nabla\,\phi_{j}(\vec{x})\cdot\nabla\,\phi_{i}(\vec{x})\,d\vec{x},$$ so we obtain a symmetric linear system for Helmholtz equations. However, for the convection-diffusion equation or more complicated PDEs, the linear system is in general nonsymmetric. In this study, we will use the convection-diffusion equation as the test problem for FEM in both 2D and 3D.

Petrov-Galerkin Methods
-----------------------

Another source of nonsymmetric linear systems is the Petrov-Galerkin methods, in which the test functions are different from the basis functions.
The Petrov-Galerkin methods are desirable because the basis and test functions can be chosen separately to meet different requirements for accuracy and stability. An example of Petrov-Galerkin methods is AES-FEM [@CDJ16OEQ; @Conley16HAES], which uses generalized Lagrange polynomials as basis functions and the standard linear finite-element basis functions as test functions. Unlike Galerkin methods, the accuracy and stability of AES-FEM are independent of the element quality of the meshes. The linear systems from AES-FEM are always nonsymmetric, even for Helmholtz equations. We will consider some matrices arising from AES-FEM for the convection-diffusion equation in both 2D and 3D.

Finite Difference and Generalized Finite Difference
---------------------------------------------------

The finite difference methods are often used to discretize PDEs on structured or curvilinear meshes. For Helmholtz equations with Dirichlet boundary conditions, we may obtain a symmetric linear system by using centered difference approximations on a uniform structured mesh. However, the finite difference methods in general lead to nonsymmetric matrices for more complicated PDEs, more sophisticated boundary or jump conditions, higher-order discretizations, nonuniform meshes, or curvilinear meshes. We will consider a nonsymmetric matrix from a finite difference discretization of a Helmholtz equation on a nonuniform structured mesh, arising from climate modeling. The finite difference methods were traditionally limited to structured meshes. However, they can be generalized to unstructured meshes by the generalized finite difference methods, or GFD [@Benito08GFDM]. These methods are weighted residual methods with Dirac delta functions as the test functions and generalized Lagrange polynomials as basis functions. Similar to the Petrov-Galerkin methods, the generalized finite difference methods in general result in nonsymmetric linear systems.
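The origin of this nonsymmetry can be seen already in 1-D: a centered-difference discretization of the first-order convection term contributes an antisymmetric part to the matrix. The following minimal sketch (assumed uniform mesh and coefficients, unrelated to the test matrices used in this study) verifies this:

```python
import numpy as np
import scipy.sparse as sp

# Centered differences for -u'' + c u' on an assumed uniform 1-D mesh.
n, h, c = 100, 1.0 / 101, 10.0
diffusion = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
convection = c * sp.diags([-1.0, 0.0, 1.0], [-1, 0, 1], shape=(n, n)) / (2 * h)

A = (diffusion + convection).tocsr()
# The diffusion part is symmetric; the convection part is antisymmetric,
# so A - A^T is nonzero exactly on the convection stencil.
print((A - A.T).nnz)
```

Setting `c = 0` recovers a symmetric matrix, mirroring the Helmholtz versus convection-diffusion distinction drawn above.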
We will consider some test matrices from GFD for the convection-diffusion equation.

Numerical Results\[sec:Results\]
================================

In this section, we present empirical comparisons of the preconditioned KSP methods described in Section \[sec:Analysis-KSP\]. For GMRES, TFQMR and BiCGSTAB, we use the built-in implementations in PETSc v3.7.1 [@petsc-user-ref]. For GMRES, we use 30 as the restart parameter, the default in PETSc, so we denote the method by GMRES(30). QMRCGSTAB is not available in PETSc; we implemented it ourselves using the lower-level matrix-vector libraries in PETSc. We use Gauss-Seidel, ILU, and AMG as right preconditioners for these KSP methods. The Gauss-Seidel preconditioner is available in PETSc as SOR with the relaxation parameter set to $1$. For ILU, we use the default options in PETSc, which introduce no fill. For AMG, we primarily use the smoothed aggregation in ML v5.0 [@GeeSie06ML] with default parameters. We will also compare ML against the classical AMG in Hypre v2.10 [@falgout2002hypre]. We compare the convergence history and runtimes of these methods. For the convergence criterion, we use the relative $2$-norm of the residual, i.e., the $2$-norm of the residual divided by the $2$-norm of the right-hand side. For all the cases, the tolerance is set to $10^{-10}$. We conducted our tests on a single node of a cluster with two 2.6 GHz Intel Xeon E5-2690v3 processors and 128 GB of memory. Because ILU is only available in serial in PETSc, we performed all the tests using a single core, and defer comparisons of parallel algorithms and implementations to future work.

Test Matrices
-------------

  **Matrix**   **Discretization**   **PDE**       **Size**     **\#Nonzeros**   **Cond. No.**
  ------------ -------------------- ------------- ------------ ---------------- ---------------
  1            FEM 2D               conv. diff.   1,044,226    7,301,314        8.31e5
  2            FEM 3D               conv. diff.   237,737      1,819,743        8.90e3
  3            FEM 3D               conv. diff.   1,529,235    23,946,925       3.45e4
  4            FEM 3D               conv. diff.   13,110,809   197,881,373      $-$
  5            AES-FEM 2D           conv. diff.   1,044,226    13,487,418       9.77e5
  6            AES-FEM 3D           conv. diff.   13,110,809   197,882,439      $-$
  7            GFD 2D               conv. diff.   1,044,226    7,476,484        2.38e6
  8            GFD 3D               conv. diff.   1,529,235    23,948,687       6.56e4
  9            FDM 2D               Helmholtz     1,340,640    6,694,058        7.23e8

  : \[tab:test\_matrices\]The test matrices: discretization method, PDE, number of unknowns, number of nonzeros, and estimated condition number.

We constructed the test matrices from PDE discretizations as described in Section \[sec:PDE-Discretization-Methods\]. We selected nine representative matrices from a much larger number of cases that we have tested. The sizes of these matrices range from about $10^{5}$ to $10^{7}$ unknowns. Table \[tab:test\_matrices\] summarizes the PDE discretization, the size, and the condition number of each test matrix. The condition numbers of the largest matrices were unavailable because their estimation ran out of memory.

![\[fig:nonuniform-mesh\]Example nonuniform structured mesh for Helmholtz equation.](figures/Mesh2_2D_16){width="1\columnwidth"} ![](figures/climate_plot){width="0.98\columnwidth"}

For the 2D FEM, AES-FEM, and GFD, our test matrices were obtained with an unstructured mesh generated using Triangle [@ShewchukTRIANGLE96]. Figure \[fig:Representative-example-2-D\] shows the qualitative pattern of the mesh at a much coarser resolution than what was used in our tests. For the 3D tests, we generated three unstructured meshes of a cube at different resolutions using TetGen [@Si2006], to facilitate the scalability study of the preconditioned KSP methods with respect to the number of unknowns.
For the finite difference method, we consider a matrix obtained from an unequally spaced structured mesh for the Helmholtz equation with Neumann boundary conditions and a very small constant $d$ in (\[eq:model\_problem\]), so the matrix has a very large condition number. Figure \[fig:nonuniform-mesh\] shows the qualitative pattern of the mesh at a much coarser resolution. Convergence Comparison ---------------------- ![\[fig:FEM-residual\]Relative residuals versus numbers of matrix-vector multiplications for Gauss-Seidel, ILU and ML preconditioner for matrix 1 (left column) and matrix 4 (right column).](figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix1_SOR){width="100.00000%"} ![\[fig:FEM-residual\]Relative residuals versus numbers of matrix-vector multiplications for Gauss-Seidel, ILU and ML preconditioner for matrix 1 (left column) and matrix 4 (right column).](figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix4_SOR){width="100.00000%"} ![\[fig:FEM-residual\]Relative residuals versus numbers of matrix-vector multiplications for Gauss-Seidel, ILU and ML preconditioner for matrix 1 (left column) and matrix 4 (right column).](figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix1_ILU){width="100.00000%"} ![\[fig:FEM-residual\]Relative residuals versus numbers of matrix-vector multiplications for Gauss-Seidel, ILU and ML preconditioner for matrix 1 (left column) and matrix 4 (right column).](figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix4_ilu){width="100.00000%"} ![\[fig:FEM-residual\]Relative residuals versus numbers of matrix-vector multiplications for Gauss-Seidel, ILU and ML preconditioner for matrix 1 (left column) and matrix 4 (right column).](figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix1_ml){width="100.00000%"} ![\[fig:FEM-residual\]Relative residuals versus numbers of matrix-vector multiplications for Gauss-Seidel, ILU and ML preconditioner for matrix 1 (left column) and matrix 4 (right 
column).](figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix4_ml){width="100.00000%"} ![\[fig:FD-residual\]Relative residual versus iteration count for Gauss-Seidel, ILU and ML preconditioners for matrix 7 (left column) and matrix 9 (right column).](figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix7_sor){width="100.00000%"} ![\[fig:FD-residual\]Relative residual versus iteration count for Gauss-Seidel, ILU and ML preconditioners for matrix 7 (left column) and matrix 9 (right column).](figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix9_sor){width="100.00000%"} ![\[fig:FD-residual\]Relative residual versus iteration count for Gauss-Seidel, ILU and ML preconditioners for matrix 7 (left column) and matrix 9 (right column).](figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix7_ilu){width="100.00000%"} ![\[fig:FD-residual\]Relative residual versus iteration count for Gauss-Seidel, ILU and ML preconditioners for matrix 7 (left column) and matrix 9 (right column).](figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix9_ilu){width="100.00000%"} ![\[fig:FD-residual\]Relative residual versus iteration count for Gauss-Seidel, ILU and ML preconditioners for matrix 7 (left column) and matrix 9 (right column).](figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix7_ml){width="100.00000%"} ![\[fig:FD-residual\]Relative residual versus iteration count for Gauss-Seidel, ILU and ML preconditioners for matrix 7 (left column) and matrix 9 (right column).](figures/Iteration_plots/line_fewer_markers_v4/Figures/matrix9_ml){width="100.00000%"} The Krylov subspace methods we consider are all based on the same Krylov subspace in theory, but their practical convergence is complicated by the restarts in the Arnoldi iteration and by the nonorthogonal basis in the bi-Lanczos iteration.
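When reading the convergence plots, note that iteration counts and matrix-vector product counts differ by a method-dependent factor: GMRES performs one product per iteration, while the transpose-free bi-Lanczos variants (BiCGSTAB, QMRCGSTAB, TFQMR) perform two, advancing the residual polynomial degree by two per iteration. A minimal bookkeeping sketch (the counts are the standard per-iteration costs, not measured values):

```python
# Matrix-vector products per iteration for the KSP methods compared here:
# one per GMRES iteration, two per iteration for the transpose-free
# bi-Lanczos variants.
MATVECS_PER_ITER = {
    "gmres": 1,
    "bicgstab": 2,
    "qmrcgstab": 2,
    "tfqmr": 2,
}

def matvec_count(method, iterations):
    """Convert an iteration count into the mat-vec count used on the x axis."""
    return MATVECS_PER_ITER[method] * iterations

print(matvec_count("gmres", 30))     # 30
print(matvec_count("bicgstab", 30))  # 60
```

This is why plotting against mat-vec counts, as done below, puts the methods on a comparable footing.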
To supplement the theoretical results in Section \[sec:Analysis-KSP\], we present the convergence history of four test matrices, including FEM 2D (matrix 1), FEM 3D (matrix 4), GFD 2D (matrix 7), and Helmholtz equation 2D (matrix 9) in Figures \[fig:FEM-residual\] and \[fig:FD-residual\]. These plots are representative of the other test cases. Because the asymptotic convergence of the methods depends on the degrees of the polynomials, or equivalently the number of matrix-vector multiplications, we plot the relative residual with respect to the number of matrix-vector multiplications, instead of iteration counts. For ease of cross-comparison of different preconditioners, we truncated the $x$ axis to be the same for Gauss-Seidel and ILU for each matrix. Figure \[fig:FEM-residual\] shows the convergence history of the FEM in 2D and 3D. We observe that with ML, the four methods had about the same convergence trajectories, while GMRES(30) converged slightly faster, and the convergence of all the methods was quite smooth, without apparent oscillation. In contrast, with Gauss-Seidel or ILU preconditioners, GMRES(30) converged fast initially, but then slowed down drastically due to restarts, whereas BiCGSTAB had highly oscillatory residuals. QMRCGSTAB was smoother than BiCGSTAB, and it sometimes converged faster than BiCGSTAB. The convergence of TFQMR exhibited a staircase pattern, indicating frequent near-stagnation. These results indicate that an effective multigrid preconditioner can effectively overcome the disadvantages of each of these KSP methods, including oscillations in BiCGSTAB and slow convergence of GMRES due to restarts. Figure \[fig:FD-residual\] shows the convergence results for GFD 2D (matrix 7) and the finite-difference solution of the Helmholtz equation in 2D (matrix 9). The results for matrix 7 are qualitatively similar to those of 2D FEM, except that the stagnation of TFQMR is even more apparent. Matrix 9 is much more problematic for all the cases.
BiCGSTAB oscillated wildly. GMRES and TFQMR both stagnated with Gauss-Seidel and ILU. Even with ML, it took more than 300 matrix-vector products for all the methods. We will address the efficiency issue of AMG preconditioners further in Section \[sub:ML-VS-HYPRE\] when we compare smoothed aggregation with classical AMG.

Timing Comparison
-----------------

![\[fig:Timing\]Timing results with convection-diffusion equation and Helmholtz equation. The encircled bars indicate the fastest solver-preconditioner combination. For matrix 9, star ([\*]{}) indicates stagnation of the solvers after 10,000 iterations.](figures/BAR_PLOT/matrix1_bar){width="100.00000%"} ![\[fig:Timing\]Timing results with convection-diffusion equation and Helmholtz equation. The encircled bars indicate the fastest solver-preconditioner combination. For matrix 9, star ([\*]{}) indicates stagnation of the solvers after 10,000 iterations.](figures/BAR_PLOT/matrix4_bar){width="100.00000%"} ![\[fig:Timing\]Timing results with convection-diffusion equation and Helmholtz equation. The encircled bars indicate the fastest solver-preconditioner combination. For matrix 9, star ([\*]{}) indicates stagnation of the solvers after 10,000 iterations.](figures/BAR_PLOT/matrix7_bar){width="100.00000%"} ![\[fig:Timing\]Timing results with convection-diffusion equation and Helmholtz equation. The encircled bars indicate the fastest solver-preconditioner combination. For matrix 9, star ([\*]{}) indicates stagnation of the solvers after 10,000 iterations.](figures/BAR_PLOT/matrix8_bar){width="100.00000%"} ![\[fig:Timing\]Timing results with convection-diffusion equation and Helmholtz equation. The encircled bars indicate the fastest solver-preconditioner combination. For matrix 9, star ([\*]{}) indicates stagnation of the solvers after 10,000 iterations.](figures/BAR_PLOT/matrix5_bar){width="100.00000%"} ![\[fig:Timing\]Timing results with convection-diffusion equation and Helmholtz equation. 
The encircled bars indicate the fastest solver-preconditioner combination. For matrix 9, star ([\*]{}) indicates stagnation of the solvers after 10,000 iterations.](figures/BAR_PLOT/matrix9_bar){width="100.00000%"} The convergence plots are helpful in revealing the intrinsic properties of the KSP methods, but for most applications, the overall runtime is the ultimate criterion. Figure \[fig:Timing\] compares the runtimes for six matrices: FEM 2D and 3D (matrices 1 and 4), AES-FEM 2D and 3D (matrices 5 and 6), GFD 2D (matrix 7), and Helmholtz 2D (matrix 9). The results of GFD 3D were qualitatively the same as those of FEM 3D, so we did not include them. We consider the combinations of all four KSP methods with the three preconditioners, and encircle the ones with the best performance for each matrix. It can be seen that ML accelerated all the KSP methods significantly better than Gauss-Seidel and ILU. With ML, GMRES(30) is slightly faster than the other KSP methods in five out of six cases. However, GMRES(30) is also significantly slower than the others when using Gauss-Seidel or ILU preconditioners. These observations are consistent with the convergence results in Figures \[fig:FEM-residual\] and \[fig:FD-residual\]. Therefore, the numbers of matrix-vector multiplications are fairly good predictors of the overall performance. Among the bi-Lanczos-based methods, BiCGSTAB is usually the most efficient, thanks to its lower cost per iteration, despite its less smooth convergence. QMRCGSTAB is a good alternative to BiCGSTAB if smoother convergence is desired, which may lead to earlier termination for relatively large convergence tolerances. TFQMR is less reliable due to its frequent stagnation. Between the two simpler preconditioners, ILU consistently outperformed Gauss-Seidel. Note that for the Helmholtz equation, none of the methods converged with Gauss-Seidel after 10,000 iterations. These results suggest that GMRES or BiCGSTAB with ML should be one's first choices. 
However, if an AMG preconditioner is unavailable, then BiCGSTAB with ILU may be a viable alternative for relatively small problems. ML Versus HYPRE Preconditioners\[sub:ML-VS-HYPRE\] -------------------------------------------------- ![\[fig:ML\_vs\_Hypre\]Convergence history (left) and runtimes (right) of preconditioned KSP methods with ML and Hypre for FEM 2D, FEM 3D and Helmholtz 2D. The encircled bars indicate the solver-preconditioner combination with the best performance.](\string"figures/Iteration_plots/ML V_S HYPRE/matrix1\string".pdf){width="100.00000%"} ![\[fig:ML\_vs\_Hypre\]Convergence history (left) and runtimes (right) of preconditioned KSP methods with ML and Hypre for FEM 2D, FEM 3D and Helmholtz 2D. The encircled bars indicate the solver-preconditioner combination with the best performance.](\string"figures/Iteration_plots/ML V_S HYPRE/matrix1_bar\string".pdf){width="100.00000%"} ![\[fig:ML\_vs\_Hypre\]Convergence history (left) and runtimes (right) of preconditioned KSP methods with ML and Hypre for FEM 2D, FEM 3D and Helmholtz 2D. The encircled bars indicate the solver-preconditioner combination with the best performance.](\string"figures/Iteration_plots/ML V_S HYPRE/matrix4\string".pdf){width="100.00000%"} ![\[fig:ML\_vs\_Hypre\]Convergence history (left) and runtimes (right) of preconditioned KSP methods with ML and Hypre for FEM 2D, FEM 3D and Helmholtz 2D. The encircled bars indicate the solver-preconditioner combination with the best performance.](\string"figures/Iteration_plots/ML V_S HYPRE/matrix4_bar\string".pdf){width="100.00000%"} ![\[fig:ML\_vs\_Hypre\]Convergence history (left) and runtimes (right) of preconditioned KSP methods with ML and Hypre for FEM 2D, FEM 3D and Helmholtz 2D. 
The encircled bars indicate the solver-preconditioner combination with the best performance.](\string"figures/Iteration_plots/ML V_S HYPRE/matrix9\string".pdf){width="100.00000%"} ![\[fig:ML\_vs\_Hypre\]Convergence history (left) and runtimes (right) of preconditioned KSP methods with ML and Hypre for FEM 2D, FEM 3D and Helmholtz 2D. The encircled bars indicate the solver-preconditioner combination with the best performance.](\string"figures/Iteration_plots/ML V_S HYPRE/matrix9_bar\string".pdf){width="100.00000%"} Our preceding results demonstrated the effectiveness of ML relative to ILU and Gauss-Seidel. There are two primary types of AMG: smoothed aggregation and classical AMG. We now compare their respective implementations in ML and Hypre. We consider three representative cases: FEM 2D (matrix 1), FEM 3D (matrix 4), and Helmholtz 2D (matrix 9). Figure \[fig:ML\_vs\_Hypre\] shows the convergence and runtimes of the four KSP methods with ML and Hypre. For ML, we used the default parameters. For Hypre, however, different “strong thresholds” are needed for 2D and 3D problems, as documented in the Hypre User’s Manual. This threshold controls the sparsity of the coarse levels. For 2D problems, we used the default threshold, which is 0.25. For 3D FEM, the recommended value in the Manual was 0.5, but we found that 0.8 delivered the best performance in our tests, which is what we used for the results in Figure \[fig:ML\_vs\_Hypre\]. ML outperformed Hypre for FEM 3D by about a factor of 2, because of its lower cost per iteration. For FEM 2D, ML also outperformed Hypre, but less significantly. However, for the ill-conditioned 2D Helmholtz equation, Hypre outperformed ML by a factor of 30. We tried various smoothers in ML, but the results were quantitatively the same. This indicates that Hypre performs better than ML for ill-conditioned systems, because its coarse-level matrices are denser and hence preserve more information than those of ML. 
Overall, there is not a clear winner between the two AMG methods. ML may be preferred, because it does not require manually tuning the parameters based on the dimension of the PDE, and it performs better for well-conditioned problems. These results also indicate that more research into multigrid preconditioners is needed.

Scalability Comparison
----------------------

![\[fig:Scalability\]Scalability results of the preconditioned solvers for FEM 3D.](figures/Scalability/scalability_gmres){width="100.00000%"} ![\[fig:Scalability\]Scalability results of the preconditioned solvers for FEM 3D.](figures/Scalability/scalability_bicgstab){width="100.00000%"} ![\[fig:Scalability\]Scalability results of the preconditioned solvers for FEM 3D.](figures/Scalability/scalability_qmrcgstab){width="100.00000%"} ![\[fig:Scalability\]Scalability results of the preconditioned solvers for FEM 3D.](figures/Scalability/scalability_tfqmr){width="100.00000%"} The relative performance of the preconditioned KSP methods may depend on the problem size. To assess the scalability of different methods, we consider matrices 2, 3 and 4 from FEM 3D, whose numbers of unknowns grow approximately by a factor of 8 between each adjacent pair. Figure \[fig:Scalability\] shows the timing results of the four Krylov subspace methods with Gauss-Seidel, ILU, ML, and Hypre preconditioners. The $x$ axis corresponds to the number of unknowns, and the $y$ axis corresponds to the runtimes, both in logarithmic scale. For a perfectly scalable method, the slope should be 1. We observe that with either ML or Hypre, the slopes for the four KSP methods were all nearly 1, with ML having slightly smaller slopes. The slopes for Gauss-Seidel and ILU are greater than 1, so their iteration counts grow as the problem size grows. Therefore, the performance advantage of multigrid preconditioners becomes even larger as the problem size increases. 
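The slopes in the scalability plots can be computed directly from pairs of (size, runtime) measurements; a slope of 1 in log-log scale corresponds to runtime growing linearly with the number of unknowns. The sketch below uses made-up timings for illustration:

```python
from math import log

def scaling_slope(n1, t1, n2, t2):
    """Slope of runtime versus problem size in log-log scale.

    A value of 1 means perfect (linear) scaling in the number of
    unknowns; values above 1 indicate growing per-unknown cost,
    e.g. iteration counts that increase with problem size.
    """
    return log(t2 / t1) / log(n2 / n1)

# Hypothetical timings: an 8x larger problem taking 8x longer gives slope 1.
print(round(scaling_slope(1e6, 1.0, 8e6, 8.0), 2))  # 1.0
```

Fitting a line through more than two points (e.g. by least squares in log-log coordinates) gives a more robust estimate, but the two-point slope already conveys the trend discussed above.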
Conclusions and Discussions\[sec:Conclusions-and-Future\]
=========================================================

In this paper, we presented a systematic comparison of several preconditioned Krylov subspace methods, including GMRES, TFQMR, BiCGSTAB and QMRCGSTAB, with Gauss-Seidel, ILU and AMG as right preconditioners. These methods are representative of the state-of-the-art methods for solving large, sparse, nonsymmetric linear systems arising from PDE discretizations. We compared the methods theoretically regarding their cost per iteration, and empirically regarding convergence, runtimes, and scalability. Our results show that GMRES with a smoothed-aggregation AMG preconditioner is often the most efficient method, because GMRES tends to be the most efficient when the iteration count is low, which is the case with an effective AMG preconditioner. However, GMRES is far less competitive than the other methods with Gauss-Seidel or ILU, because the restarts may cause slow convergence and even stagnation. Based on our analysis, we make the following primary recommendation:

> *For a very large, reasonably well-conditioned linear system, use GMRES with smoothed-aggregation AMG as right preconditioner.*

With an AMG preconditioner, BiCGSTAB converges almost as smoothly as the other methods, and it can be safely used in place of GMRES with only a slight loss of performance. However, GMRES should not be used for large systems without a multigrid preconditioner. The easiest way to implement the above recommendation is to use existing software packages. PETSc [@petsc-user-ref] is an excellent choice, since it supports both left and right preconditioning for GMRES and BiCGSTAB and supports smoothed aggregation through ML [@GeeSie06ML] as an optional external package. Note that PETSc uses left preconditioning by default. The user must explicitly set the option to use right preconditioning to avoid premature termination or false stagnation. 
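Concretely, under PETSc's runtime-options interface, this recommendation corresponds roughly to the options below. The executable name `./my_solver` is hypothetical; the option names should be verified against the installed PETSc version, and `-pc_type ml` requires PETSc to have been configured with the ML package:

```
# Hypothetical PETSc-based solver binary; the options are the relevant part.
./my_solver \
  -ksp_type gmres -ksp_gmres_restart 30 \
  -pc_type ml \
  -ksp_pc_side right \
  -ksp_rtol 1e-10
```

The `-ksp_pc_side right` option is the one that overrides PETSc's default of left preconditioning, as discussed above.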
Some software packages do not support right preconditioning or AMG preconditioners. For example, as of Release 2016a, the built-in GMRES solver in MATLAB only supports left preconditioning, and there is no built-in support for AMG. In these cases, we make a secondary recommendation:

> *If AMG is unavailable and the problem size is moderate, BiCGSTAB with ILU as right preconditioner is a reasonable choice.*

This choice may be good for MATLAB users, because MATLAB has built-in support for ILU, and the built-in BiCGSTAB uses right preconditioning. If smoother convergence is desired, a custom implementation of QMRCGSTAB with right preconditioning may be used. The built-in TFQMR in MATLAB supports right preconditioning, but it stagnates frequently, so we do not recommend it. The Gauss-Seidel preconditioner should be used only as a last resort if neither AMG nor ILU is available. We note that although smoothed aggregation is a good choice in many cases, it is by no means bulletproof, especially for ill-conditioned systems. For linear systems arising from elliptic PDEs, the condition number typically grows in inverse proportion to $h^{2}$, so ill-conditioning occurs quite frequently in practice for large-scale problems. For relatively ill-conditioned systems, the classical AMG, as implemented in Hypre [@falgout2002hypre], may be a better choice than smoothed aggregation. However, the classical AMG is not as scalable as smoothed aggregation, and it requires tuning parameters for 3D problems [@falgout2002hypre]. Further research and development are needed to combine the efficiency of smoothed aggregation with the robustness of classical AMG. A promising direction is hybrid geometric+algebraic multigrid [@LJM14HYGA], which we plan to explore in the future. One limitation of this work is that we did not consider parallel performance and the scalability of the iterative methods with respect to the number of cores. 
This omission was necessary to make the scope of this study manageable. Fortunately, for our primary recommendation, MPI-based parallel implementations of right-preconditioned GMRES and BiCGSTAB are available in PETSc, and a parallel implementation of smoothed-aggregation AMG is available in ML. They are excellent choices for distributed-memory machines. For shared-memory machines and GPU acceleration, some OpenMP- and CUDA-based implementations are available in software packages such as [@Paralution], whose current implementation, however, seems to support only left preconditioning. Further development and comparison of different parallel algorithms are still needed, which we plan to explore in the future. Another omission in this work was the solution of nonsymmetric, rank-deficient linear systems, which is a challenging problem in its own right.

Acknowledgements {#acknowledgements .unnumbered}
================

Results were obtained using the high-performance LI-RED computing system at the Institute for Advanced Computational Science of Stony Brook University, funded by the Empire State Development grant NYS \#28451.
--- author: - | J. Adamczewski-Musch$^{4}$, O. Arnold$^{10,9}$, E. T. Atomssa$^{15}$, C. Behnke$^{8}$, A. Belounnas$^{15}$, A. Belyaev$^{7}$, J.C. Berger-Chen$^{10,9}$, J. Biernat$^{3}$, A. Blanco$^{2}$, C.  Blume$^{8}$, M. Böhmer$^{10}$, P. Bordalo$^{2}$, S. Chernenko$^{7}$, L. Chlad$^{16}$, C.  Deveaux$^{11}$, J. Dreyer$^{6}$, A. Dybczak$^{3}$, E. Epple$^{10,9}$, L. Fabbietti$^{10,9}$, O. Fateev$^{7}$, P. Filip$^{1}$, P. Fonte$^{2,a}$, C. Franco$^{2}$, J. Friese$^{10}$, I. Fröhlich$^{8}$, T. Galatyuk$^{5,4}$, J. A. Garzón$^{17}$, R. Gernhäuser$^{10}$, M. Golubeva$^{12}$, F. Guber$^{12}$, M. Gumberidze$^{5,b}$, S. Harabasz$^{5,3}$, T. Heinz$^{4}$, T. Hennino$^{15}$, S. Hlavac$^{1}$, C. Höhne$^{11}$, R. Holzmann$^{4}$, A. Ierusalimov$^{7}$, A. Ivashkin$^{12}$, B. Kämpfer$^{6,c}$, T. Karavicheva$^{12}$, B. Kardan$^{8}$, I. Koenig$^{4}$, W. Koenig$^{4}$, B. W. Kolb$^{4}$, G. Korcyl$^{3}$, G. Kornakov$^{5}$, R. Kotte$^{6}$, W. Kühn$^{11}$, A. Kugler$^{16}$, T. Kunz$^{10}$, A. Kurepin$^{12}$, A. Kurilkin$^{7}$, P. Kurilkin$^{7}$, V. Ladygin$^{7}$, R. Lalik$^{10,9}$, K. Lapidus$^{10,9}$, A. Lebedev$^{13}$, T. Liu$^{15}$, L. Lopes$^{2}$, M. Lorenz$^{8,g}$, T. Mahmoud$^{11}$, L. Maier$^{10}$, A. Mangiarotti$^{2}$, J. Markert$^{4}$, S. Maurus$^{10}$, V. Metag$^{11}$, J. Michel$^{8}$, D.M. Mihaylov$^{10,9}$, E. Morinière$^{15}$, S. Morozov$^{12,d}$, C. Müntz$^{8}$, R. Münzer$^{10,9}$, L. Naumann$^{6}$, K. N. Nowakowski$^{3}$, M. Palka$^{3}$, Y. Parpottas$^{14,e}$, V. Pechenov$^{4}$, O. Pechenova$^{8}$, O. Petukhov$^{12,d}$, J. Pietraszko$^{4}$, W. Przygoda$^{3,*}$, S. Ramos$^{2}$, B. Ramstein$^{15}$, A. Reshetin$^{12}$, P. Rodriguez-Ramos$^{16}$, P. Rosier$^{15}$, A. Rost$^{5}$, A. Sadovsky$^{12}$, P. Salabura$^{3}$, T. Scheib$^{8}$, H. Schuldes$^{8}$, E. Schwab$^{4}$, F. Scozzi$^{5,15}$, F. Seck$^{5}$, P. Sellheim$^{8}$, J. Siebenson$^{10}$, L. Silva$^{2}$, Yu.G. Sobolev$^{16}$, S. Spataro$^{f}$, H. Ströbele$^{8}$, J. Stroth$^{8,4}$, P. Strzempek$^{3}$, C. Sturm$^{4}$, O. 
Svoboda$^{16}$, P. Tlusty$^{16}$, M. Traxler$^{4}$, H. Tsertos$^{14}$, E. Usenko$^{12}$, V. Wagner$^{16}$, C. Wendisch$^{4}$, M.G. Wiebusch$^{8}$, J. Wirth$^{10,9}$, Y. Zanevsky$^{7}$, P. Zumbruch$^{4}$ (HADES collaboration) and\ A. V. Sarantsev$^{18,h}$ date: 'Received: date / Revised version: date' title: | Analysis of the exclusive final state npe$^+$e$^-$\ in quasi-free np reaction ---

Introduction {#intro}
============

Dielectron production in nucleon-nucleon collisions at kinetic beam energies below the $\eta$ meson production threshold offers a unique possibility to study bremsstrahlung radiation with time-like virtual photons. The relevant final state is $NN\gamma^*(e^+e^-)$, resulting from the interaction between the nucleons and/or their excited states (such as $\Delta$) formed in the collisions. The production amplitude of the virtual photon $\gamma^*$ depends on the electromagnetic structure of the nucleons and on the excited baryon resonances. In the kinematic region of small positive (time-like) values of the squared four-momentum transfer $q^2$ ($q^2>0$), these electromagnetic amplitudes are related to off-shell light vector meson production [@mosel]. In general, the bremsstrahlung yield is given by a coherent sum of two types of amplitudes, originating from “pure” nucleon-nucleon interactions and from intermediate resonance excitation processes. The nucleon contribution provides information on the elastic time-like electromagnetic form factors in a region of four-momentum transfer squared $0<q^2 \ll 4m_p^2$, where $m_p$ is the proton mass, which is inaccessible to measurements in $e^+e^-$ or $\bar{p} p$ annihilation. The resonance contribution includes the production of baryon resonance ($N^*,\Delta$) states. One might visualize this contribution as the excitation of a resonance that subsequently decays into $Ne^+e^-$ via the Dalitz process (since momentum-space diagrams have no time ordering, other resonance-$Ne^+e^-$ vertices are also to be accounted for). 
This process gives access to the time-like electromagnetic form factors of baryonic transitions in a complementary way to meson photo- or electro-production experiments, where negative (i.e., space-like) values of $q^2$ are probed. Full quantum-mechanical calculations have been performed for $np\rightarrow npe^+e^-$ based on effective model Lagrangians [@schafer; @deJong; @kaptari; @shyam], describing the nucleon-nucleon interaction via the exchange of mesons ($\pi,\rho,\omega,\sigma,\ldots$). The virtual photon production happens at $\gamma^* NN$, $\gamma^* NN^{\star}$ and $\gamma^* N\Delta$ vertices and off meson exchange lines. In the energy range relevant for our study, the bremsstrahlung production in proton-proton collisions is dominated by the $\Delta$ resonance excitation. In neutron-proton collisions, however, the nucleon-nucleon contribution also plays a significant role, being much stronger (by a factor of 5-10) than in proton-proton collisions. The results of various calculations show some sensitivity to the electromagnetic form factors and to details of the implementation of gauge invariance in the calculations, in particular those related to the emission off the charged pion exchange (for details see the discussion in [@shyam]). The adjustment of the coupling constants to various effects is crucial, too. Consequently, the cross sections can differ substantially between the models (by up to a factor of 2-4) in some phase-space regions and need to be constrained further by experimental data. Another approach, often used in microscopic transport model calculations to account for the nucleon-nucleon bremsstrahlung, is the soft photon approximation [@soft; @soft1]. It assumes photon emission following elastic nucleon-nucleon interactions, with an appropriate phase-space modification induced by the produced virtual photon; any interference processes are neglected. 
Contributions from the $\Delta$ isobar and higher resonances are added incoherently and treated as a separate source of pairs. Data on inclusive $e^+e^-$ production in $p-p$, $d-p$ [@dls; @hades_np] and the quasi-free $n-p$ [@hades_np] collisions have been provided by the DLS (beam kinetic energy $T=1.04, 1.25$ GeV/u) and HADES ($T=1.25$ GeV/u) Collaborations. The $p-p$ data are well described by calculations with effective Lagrangian models, except for [@kaptari], which overestimates the measured yields. Various transport models [@gibuu; @hsd; @fuchs], adding incoherently contributions from the $\Delta$ Dalitz decay and from $p-p$ bremsstrahlung (calculated in the soft photon approximation), describe the data well. The dominant contribution is the $\Delta$ Dalitz decay, with the dielectron invariant mass distribution depending slightly on the choice of the corresponding transition form factors [@pena; @pena2016]. On the other hand, the $d-p$ and particularly the quasi-free $n-p$ data show a much stronger dielectron yield as compared to $p-p$ collisions at the same collision energy. While the yield at low invariant masses $M_{e^+e^-}<M_{\pi^0}$ could be understood by the larger cross section (by a factor of 2) for $\pi^0$ production in $n-p$ collisions, the differential cross section above the pion mass was underestimated by most of the above-mentioned calculations [@hades_np]. Even the calculations of [@kaptari], predicting a larger (by a factor of $2-4$) bremsstrahlung contribution, fall short of explaining the data in the high-mass region. Moreover, it has been demonstrated [@hades_np] that a properly scaled superposition of the $p-p$ and $n-p$ inclusive spectra explains the dielectron invariant mass distributions measured in $C+C$ collisions at similar energies, resolving, from an experimental point of view, the long-standing “DLS puzzle” but moving its solution to the understanding of the production in $n-p$ collisions. 
Recently, two alternative descriptions have been suggested to explain the enhanced dielectron production in the $npe^+e^-$ final state. The first calculation, by Shyam and Mosel [@shyam2], is based on the earlier results obtained within the One-Boson Exchange model [@shyam], which has been extended to include, in the nucleon diagrams, electromagnetic form factors based on the Vector Dominance Model (VDM) [@sakurai]. The results show a significant improvement in the description of the inclusive data, mainly due to the effect of the pion electromagnetic form factor in the emission of $e^+e^-$ from a charged exchange pion. Its presence enhances the dielectron yield at large invariant masses. Such a contribution can also be interpreted as the formation of a $\rho$-like final state via annihilation of the exchanged charged pion with a pion from the nucleon meson cloud. Since the charged pion exchange can only contribute to the $np\rightarrow npe^+e^-$ final state but not to $pp\rightarrow ppe^+e^-$ (note that this is valid only for the exclusive final states), it explains in a natural way the observed difference between the two reactions. The second calculation, by Bashkanov and Clement [@clement], also addresses the unique character of the $n-p$ reaction for the production of the $\rho$-like final state via the charged current. Here the mechanism of the $\rho$ production is different and proceeds via the interaction between two $\Delta$s created simultaneously by the excitation of the two nucleons. Indeed, such a double-$\Delta$ excitation is known to be an important channel for two-pion production at these energies [@hades_2pi; @wasa] and is governed by the t- or u-channel meson exchange. The amplitude for the transition of the $n-p$ system to the $NN\rho$ final state via a $\Delta-\Delta$ state is proportional to the respective isospin recoupling coefficients ($9j$-symbols), which vanish for the $p-p$ reactions. 
It is important to stress that all the aforementioned calculations were performed for the exclusive $npe^+e^-$ final state, whereas the experimental data were analysed in the inclusive $e^+e^-X$ channels. The comparisons were therefore not direct, since other channels, besides the exclusive $npe^+e^-$ channel, can also contribute. For example, the $\eta$ Dalitz decay in the $d-p$ collisions has to be considered in the calculations because the finite nucleon momentum distribution inside the deuteron provides an energy in the $np$ reference frame above the meson production threshold. Various calculations show, however, that the inclusion of this channel is not sufficient for a full description of the data. Moreover, other channels, such as $np\rightarrow de^+e^-$ proposed in [@martem] or bremsstrahlung radiation accompanied by one or two pions in the final state, can contribute to the inclusive production as well. The main goal of investigating the exclusive reaction $np \rightarrow npe^+e^-$ is two-fold: (i) to verify whether the observed enhancement of the inclusive dielectron production over $p-p$ data has its origin in the exclusive final state and (ii) to provide various multi-particle differential distributions of the exclusive final state to characterize the production mechanism and provide more constraints for the comparison to models. Our work is organized as follows. In Section \[exp\] we present the experimental conditions, the apparatus and the principles of particle identification and reconstruction. We also explain the method of selection of the exclusive channel and the normalization procedure. In Section \[sim\] we discuss our simulation chain, composed of the event generator and the modelling of the detector acceptance and the reconstruction efficiency. In Section \[results\] we present various differential distributions characterizing the $npe^+e^-$ final state and compare them to model predictions, followed by the conclusions and outlook in Section \[summary\_outlook\]. 
Experiment and data analysis {#exp}
============================

Detector overview
-----------------

The High Acceptance Dielectron Spectrometer (HADES) consists of six identical sectors, placed between the coils of a superconducting magnet, instrumented with various tracking and particle identification detectors. The fiducial volume of the spectrometer covers almost the full range of azimuthal angles and polar angles from $18^\circ$ to $85^\circ$ with respect to the beam axis. The momentum vectors of produced particles are reconstructed by means of four Multiwire Drift Chambers (MDC) placed before (two) and behind (two) the magnetic field region. The experimental momentum resolution typically amounts to $2-3\%$ for protons and $1-2\%$ for electrons, depending on the momentum and the polar emission angle. Particle identification (electron/pion/kaon/proton) is provided by a hadron blind Ring Imaging Cherenkov (RICH) detector centred around the target, two time-of-flight walls based on plastic scintillators covering polar angles $\theta>45^\circ$ (TOF) and $\theta<45^\circ$ (TOFino), respectively, and a Pre-Shower detector placed behind the TOFino. The magnetic spectrometer is complemented in the forward region ($0.5^\circ-7^\circ$) by a high granularity Forward Wall (FW) placed 7 meters downstream of the target. The Forward Wall consists of 320 plastic scintillators arranged in a matrix with cells of varying sizes and a time resolution of about 0.6 ns. In particular, it was used for the identification of the spectator proton from the deuteron break-up. A detailed description of the spectrometer, track reconstruction and particle identification methods can be found in [@hadesspec]. In the experiment a deuteron beam with a kinetic energy of $T=1.25$ GeV/u and intensities of up to $10^7$ particles/s was impinging on a $5$ cm long liquid-hydrogen target with a total thickness of $\rho d=0.35$ g/cm$^2$. 
The events with dielectron candidates were selected by a two-stage hardware trigger: (i) the first-level trigger (LVL1) demanding a hit multiplicity $\geq 2$ in the TOF/TOFino scintillators, in coincidence with a hit in the Forward Wall detector; (ii) the second-level trigger (LVL2) for electron identification requiring at least one ring in the RICH correlated with a fast particle hit in the TOF or an electromagnetic cascade in the Pre-Shower detector [@hadesspec].

Normalization {#elastic}
-------------

The normalization of experimental yields is based on the quasi-free proton-proton elastic scattering measured in the reaction $d+p \rightarrow pn p_{spect}$ within the HADES acceptance ($\theta_{CM}^p \in (46^\circ-134^\circ)$). The known cross section of the $p-p$ elastic scattering has been provided by the EDDA experiment [@edda]. The events were selected using a dedicated hardware trigger requesting two hits in the opposite TOF/TOFino sectors. The proton elastic scattering was identified using conditions defined on (a) the two-track co-planarity $\Delta \phi=180^\circ \pm 5^\circ$ and (b) the proton polar emission angles $\tan (\theta_1)\times \tan(\theta_2)=1/\gamma^2_{CM}= 0.596\pm 0.05$. These constraints account for the detector resolution and the momentum spread of the proton bound initially in the deuteron. The latter was simulated using realistic momentum distributions implemented in the PLUTO event generator [@pluto]. The measured yield was corrected for the detection and reconstruction inefficiencies and for losses in the HADES acceptance due to the incomplete azimuthal coverage. The overall normalization error (including the cross section deduced from the EDDA data) was estimated to be $7\%$ [@hades_2pi]. 
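The two elastic-tagging cuts described above can be sketched in a few lines. This is a minimal illustration written for this text, not the actual HADES analysis code; the function name, argument conventions and helper structure are ours, while the cut values follow the paper:

```python
import math

# Illustrative sketch of the p-p elastic-scattering tag: two reconstructed
# tracks with azimuthal angles phi1, phi2 (in degrees) and polar angles
# theta1, theta2 (in radians). Cut values follow the text:
# co-planarity 180 +- 5 degrees, tan(theta1)*tan(theta2) = 0.596 +- 0.05.
def is_pp_elastic(phi1_deg, phi2_deg, theta1, theta2,
                  coplanarity=180.0, dphi_tol=5.0,
                  tan_product=0.596, tan_tol=0.05):
    dphi = (phi1_deg - phi2_deg) % 360.0
    dphi = min(dphi, 360.0 - dphi)          # smallest opening angle
    coplanar = abs(dphi - coplanarity) <= dphi_tol
    kinematic = abs(math.tan(theta1) * math.tan(theta2) - tan_product) <= tan_tol
    return coplanar and kinematic
```

The tolerances absorb both the detector resolution and the Fermi-motion smearing of the target proton, as stated above.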
Acceptance and reconstruction efficiency {#eff}
----------------------------------------

To facilitate the comparison of the data with various reaction models, the geometrical acceptance of the HADES spectrometer has been computed and tabulated as three-dimensional matrices depending on the momentum and the polar and azimuthal emission angles for each particle species ($p$, $e^{+}$, $e^{-}$). The resolution effects are modelled by means of smearing functions acting on the generated momentum vectors (the matrices and smearing functions are available upon request from the authors). The efficiency correction factors were calculated individually as one-dimensional functions of all presented distributions. The calculations were performed using a full analysis chain consisting of three steps: (i) generation of events in the full solid angle according to a specific reaction model, described in Section \[sim\], (ii) processing of the events through the realistic detector acceptance using the GEANT package and (iii) applying the specific detector efficiencies and the reconstruction steps as for the real data. The respective correction functions are calculated as ratios of the distributions obtained after steps (ii) and (iii). In Section \[results\] we also present various angular distributions corrected for the detector acceptance. Those correction factors were calculated as two-dimensional functions of the dielectron invariant mass and the given angle using two reaction models (described in detail in Sec. \[sim\]). The difference between the two models was used to estimate the systematic errors related to the model corrections. The models were verified to describe the measured distributions within the HADES acceptance reasonably well. For those cases we also present the original distributions measured inside the acceptance. 
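As a minimal numerical illustration of this bin-by-bin procedure (the function and histogram names are ours; the real corrections are one- or two-dimensional functions built from the full simulation chain described above):

```python
# Illustrative bin-by-bin efficiency correction: the correction factor is
# the ratio of the simulated distribution after the acceptance filter
# (step ii) to the same distribution after the full reconstruction chain
# (step iii); the measured yield in each bin is multiplied by this factor.
def efficiency_correct(measured, n_accepted, n_reconstructed):
    corrected = []
    for meas, acc, reco in zip(measured, n_accepted, n_reconstructed):
        factor = acc / reco if reco > 0 else 0.0
        corrected.append(meas * factor)
    return corrected
```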
Selection of the npe$^+$e$^-$ final state {#channel_selection}
-----------------------------------------

The identification of the $npe^+e^-$ final state starts with an event selection requesting (i) at least one track with a positive charge, (ii) at least one dielectron pair (like-sign or unlike-sign) detected in the HADES, and (iii) at least one hit in the FW. The electron and positron tracks are identified by means of the RICH detector, which also provides emission angles for matching the rings with tracks reconstructed in the MDC, and by the time-of-flight difference of the tracks measured in the TOF/TOFino detectors. Proton identification is achieved by a two-dimensional selection on the velocity ($\beta=v/c$) and the momentum, reconstructed in the TOF/TOFino detectors and the tracking system, respectively. There was no dedicated start detector in our experiment; therefore, the reaction time was calculated from the time-of-flight of the identified electron track. The spectator proton was identified as the fastest hit in the FW within a time-of-flight window of 5 ns centred on the value of 26 ns expected for the proton from the deuteron break-up. Such a broad window takes into account both the detector resolution ($\pm 4\sigma$) and the much smaller effect of the spectator momentum distribution (about $\pm 8\sigma$). Further, for all $pe^+e^-$ candidates in an event, the missing mass for $np\to p e^+e^- X$ was calculated, assuming that the incident neutron carries half of the deuteron momentum. The exclusive $npe^+e^-$ final state was finally selected via a one-dimensional hard cut centred around the mass of the neutron, $0.8 < M_{pe^+e^-}^{miss} < 1.08$ GeV/c$^2$. A variation of this selection has no influence on the data at $M_{inv}(e^+e^-) >$ 0.14 GeV/c$^2$ and introduces a systematic error on the yield of about 10% for the $\pi^{0}$ region, as deduced from comparisons to Monte Carlo simulations. 
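The missing-mass selection can be sketched with plain four-vectors. This is a simplified illustration (masses in GeV/c$^2$; the neutron is given exactly half of the deuteron momentum, i.e. the idealized assumption quoted above without the Fermi smearing, and all helper names are ours):

```python
import math

M_N = 0.9396      # neutron mass, GeV/c^2
M_P = 0.9383      # proton mass, GeV/c^2

def four_vec(m, px, py, pz):
    """On-shell four-vector [E, px, py, pz] for a particle of mass m."""
    e = math.sqrt(m * m + px * px + py * py + pz * pz)
    return [e, px, py, pz]

def missing_mass(beam, target, *measured):
    """Invariant mass of (beam + target - sum of measured particles)."""
    tot = [b + t for b, t in zip(beam, target)]
    for p in measured:
        tot = [a - b for a, b in zip(tot, p)]
    m2 = tot[0] ** 2 - tot[1] ** 2 - tot[2] ** 2 - tot[3] ** 2
    return math.sqrt(m2) if m2 > 0 else 0.0

def accept_exclusive(mmiss, lo=0.8, hi=1.08):
    # one-dimensional cut around the neutron mass, as quoted in the text
    return lo < mmiss < hi
```

In the analysis the sum of the measured proton, $e^+$ and $e^-$ four-vectors would be subtracted from the beam-plus-target system, and the event is kept if the result falls inside the neutron-mass window.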
The same procedure was also applied to the $pe^-e^-$ and $pe^+e^+$ track combinations in order to estimate the combinatorial background (CB), originating mainly from multi-pion production followed by photon conversion in the detector material. The CB was estimated using the like-sign pair technique, calculated for every event with a proton: $dN_{CB}/dM = 2\sqrt{(dN/dM)_{++} (dN/dM)_{--}}$. The signal pairs are obtained by the CB subtraction: $dN^{e^+e^-}_{SIG}/dM = dN^{e^+e^-}_{ALL}/dM - dN_{CB}/dM$. The resulting $e^+e^-$ invariant mass distributions of the signal and the CB are shown in Fig. \[ee\_raw\] (left panel), together with the signal-to-background ratio (inset), for the identified $pe^+e^-$ events. In the invariant mass region above the prominent $\pi^0$ Dalitz decay peak, the signal is measured with a small background. In Fig. \[ee\_raw\] (right panel), the missing mass distribution of the $pe^+e^-$ system with respect to the projectile-target is shown for the events with the invariant mass $M_{e^+e^-}>0.14 $ GeV/c$^2$. The data are compared to a Monte Carlo simulation - green solid curve (model A, see Section \[sim\] for details). Its total yield has been normalized to the experimental yield to demonstrate the very good description of the shape of the distribution. One should note that a slight shift of the peak position (0.944 GeV/c$^2$) and, particularly, a broadening of the missing mass distribution ($\sigma=0.037$ GeV/c$^2$) is caused by the momentum distribution of the neutron in the deuteron, which is accounted for in the simulation. The spectrometer resolution accounts for half of the measured width.

Comparison to models: event generation and simulation {#sim}
=====================================================

The most recent calculations of Shyam and Mosel [@shyam2] and of Bashkanov and Clement [@clement] offer an explanation of the inclusive dielectron data measured in $n-p$ collisions at $T=1.25$ GeV. 
A characteristic feature of both models is an enhancement in the dielectron invariant mass spectrum for $M_{e^+e^-}>0.3$ GeV/c$^2$ due to the intermediate $\rho$-like state in the in-flight emission by the exchanged charged pions, which are present in the case of the $np \to npe^+e^-$ reaction, unlike in the $pp\to ppe^+e^-$ reaction. A major difference between the models is that the charged pions are exchanged between two $\Delta$s in [@clement] and between two nucleons in [@shyam2]. We have chosen these models as a basis for our simulation (described in detail below). The model [@clement] assumes a sub-threshold $\rho$-meson production, via an intermediate double-delta $\Delta^{+} \Delta^{0}$ or $\Delta^{++} \Delta^{-}$ excitation, and its subsequent $e^+e^-$ decay, according to a strict Vector Dominance Model (VDM) [@sakurai]. The total cross section for the $np\rightarrow \Delta\Delta$ channel has been predicted to be $\sigma_{\Delta\Delta}=170~\mu b$. Events generated with the theoretical differential distributions and characterized by the $np$ and the $\gamma^{\star}$ four-vectors have been provided by the authors [@clement_priv]. The dielectron decays of the $\gamma^*$ have been modelled in our simulations following the VDM prescription for the $\rho$-meson differential decay rate (see [@clement]) and assuming an isotropic electron decay in the virtual photon rest frame. The remaining dielectron sources ($\pi^0$, $\Delta$ and $\eta$ Dalitz decays) were computed using the PLUTO event generator. The detailed description of the procedure was published in [@hades_np; @pluto], and in fact the calculations in [@clement] use exactly the same method. For the $\Delta$ Dalitz decay, the “QED model” was used, with the constant electromagnetic Transition Form Factors (eTFF) fixed to their values at the real-photon point. 
As a consequence, the Coulomb form factor is neglected and the $e^+$ or $e^-$ angular distribution with respect to the $\gamma^*$, in the rest frame of the $\gamma^*$, is taken as $\propto 1+\cos^2 \theta$, in agreement with data [@witek]. The channels included in our simulations are the following: (i) $np \rightarrow \Delta^{+,0}(n,p)\rightarrow np \pi^0 \rightarrow np e^+e^-\gamma$, (ii) $np\rightarrow np\eta\rightarrow np e^+e^- \gamma$ and (iii) $np \rightarrow \Delta^{+,0} (n,p) \rightarrow (n,p) e^+e^- (n,p)$. One should note that the latter channel accounts for the part of the bremsstrahlung radiation related to the $\Delta$ excitation, since the pre-emission graphs associated with the $\Delta$ excitation have a small contribution [@shyam]. We assume that one-pion production is dominated by the $\Delta$ excitation, which saturates the $I=1$ component of the $n-p$ reaction. The iso-scalar component of the $n-p$ reaction at our energy is much smaller, as shown in [@andrej; @bystr], and has been neglected. The cross section $\sigma_{\Delta^{+,0}}$ for the production of the $\Delta^+$ and $\Delta^0$ resonances in the $n-p$ reactions has been deduced in [@teis] within the framework of the isobar model by a fit to the available data on one-pion production in nucleon-nucleon reactions and amounts to $\sigma_{\Delta^+}=\sigma_{\Delta^{0}}=5.7 $ mb. Furthermore, in the simulation we have included angular distributions for the production of the $\Delta$ excitation deduced from the partial wave analysis of the one-pion production in the $p-p$ collisions at the same energy [@hades_pwa]. These distributions provide a small correction with respect to the one-pion exchange model [@teis], which was originally included in the PLUTO generator. 
The contribution of the $\eta$ (see [@pluto] for details of the implementation) to the exclusive $npe^+e^-$ channel is negligible but was included for comparison with the calculations of the inclusive production [@hades_np], where it plays an important role. This model is referred to below as model A. The model of Shyam and Mosel [@shyam2] is based on a coherent sum of $NN$ bremsstrahlung and isobar contributions. It demonstrates a significant enhancement of the radiation in the high-mass region due to contributions from the charged internal pion line and the inclusion of the respective electromagnetic pion form factor. This mechanism modifies the contribution of the bremsstrahlung radiation from the nucleon charge-exchange graphs, which, as pointed out in the introduction, are absent in the case of the $pp \to ppe^+e^-$ reaction. The other part of the bremsstrahlung corresponds to the $\Delta$ excitation on one of the two nucleon lines and its subsequent Dalitz decay ($Ne^+e^-$). Although the latter dominates the total cross section at $M_{e^+e^-}<0.3$ GeV/c$^2$, the modified nucleon-nucleon contribution has a strong effect at higher masses. Unfortunately, the proposed model does not provide details about the angular distributions of the final state particles. In our simulation we use the bremsstrahlung generator included in the PLUTO package [@pluto] with the respective modification of the dielectron invariant mass distribution to account for the results of [@shyam2]. Since there is no guidance in the model on the angular distributions of the protons and of the virtual photons, we have assumed the distribution introduced in model A for the $\Delta$ production. We denote this model as model B. The modelling of the quasi-free $np$ collisions has been implemented in both models based on a spectator model [@pluto]. 
This model assumes that only one of the nucleons (in our case the neutron) takes part in the reaction, while the other one, the proton, does not interact with the projectile and is on its mass shell. The momenta of the nucleons in the deuteron rest frame are anti-parallel and generated from the known distribution [@benz].

Results
=======

The exclusive final state $np\gamma^*$ can be characterized by five independent variables selected in an arbitrary way. Assuming azimuthal symmetry in the production mechanism, only four variables are needed. The decay of the $\gamma^*$ into the $e^+e^-$ pair can be characterized by two additional variables. In this work we have chosen the following observables: (i) the three invariant masses of the $e^+e^-$ pair ($M_{e^+e^-}$, equivalent to the $\gamma^*$ mass), of the proton-$e^+e^-$ system ($M_{pe^+e^-}$) and of the proton-neutron system ($M_{np}$), respectively; (ii) the two polar angles of the proton ($\cos^{CM}(\theta_p)$) and of the virtual photon ($\cos^{CM}(\theta_{\gamma^*})$), defined in the center-of-mass system, and the polar angle of the lepton (electron or positron) in the $\gamma^*$ rest frame ($\cos(\theta^{e-\gamma^*}_{\gamma^*})$) with respect to the direction of the $\gamma^*$ in the c.m.s. In the next sections we present the corresponding distributions and compare them to the results of our simulations. The experimental distributions are corrected for the reconstruction inefficiencies (see paragraph \[eff\]) and are presented as differential cross sections within the HADES acceptance, after normalization, as described in paragraph \[elastic\]. We also present acceptance corrected angular distributions.

Invariant mass distributions
----------------------------

The dielectron invariant mass distribution is very sensitive to the coupling of the virtual photon to the $\rho$-meson. Therefore we start the presentation of our data with Fig. 
\[eep\_th\], which displays the dielectron invariant mass distribution and a comparison to the simulated spectra. As already observed in the case of the inclusive $e^+e^-$ production [@hades_np], the $e^+e^-$ yield in the $\pi^0$ region is found to be in very good agreement with the $\pi^0$ production cross section of 7.6 mb used as an input to the simulation (see Sec. \[sim\]). One should note that the contribution from the $np\rightarrow np\pi^0 (\pi^0\rightarrow e^+e^-\gamma$) channel could not be completely eliminated by the selection on the $pe^+e^-$ missing mass (paragraph \[channel\_selection\]) due to the finite detector mass resolution. This contribution is well described by our simulations, confirming the assumed cross section of the one-pion production. The good description obtained in the exclusive case demonstrates in addition that the acceptance for the detected proton and the resolution of the $pe^+e^-$ missing mass are well under control. The distribution for invariant masses larger than the $\pi^0$ mass ($M_{e^+e^-}>M_{\pi^0}$) is dominated by the exclusive $np\to npe^+e^-$ reaction (as also proven by the missing mass distribution in Fig. \[ee\_raw\] - right panel), which is of main interest for this study. In this mass region the general features of the dielectron yield are reproduced by model A. The $\Delta$ Dalitz decay dominates for $e^+e^-$ invariant masses between 0.14 GeV/c$^2$ and 0.28 GeV/c$^2$, while the $\rho$ contribution prevails at higher invariant masses. The $\eta$ Dalitz decay gives a negligible contribution. A closer inspection reveals that the $\Delta$ Dalitz decay alone cannot describe the yield in the mass region $0.14<M_{e^+e^-}<0.28$ GeV/c$^2$. This is not surprising, since the nucleon-nucleon bremsstrahlung is also expected to contribute in this region. On the other hand, the $\rho$ contribution overshoots the measured yield at higher masses, even more strongly than observed in the case of the inclusive data [@clement]. 
The low-mass cut-off of the $\rho$ contribution is due to the threshold at the double-pion mass, which should be absent in the case of the dielectron decay but is a feature of the applied decay model [@clement]. The simulation based on model B presents a rather different shape, with a smooth decrease of the yield as a function of the invariant mass. It was indeed shown [@shyam] that the introduction of the pion electromagnetic form factor at the charged pion line enhances significantly the yield above the $\pi^0$ peak, but does not produce any structure. The yield for $M_{e^+e^-}< $ 0.14 GeV/c$^2$ is strongly underestimated, which is expected due to the absence of the $\pi^0$ Dalitz process in the model, which aimed only at a description of the $np\to np e^+e^-$ final state. Above the $\pi^0$ peak, model B comes overall closer to the data than model A, but it underestimates the yield at the very end of the spectrum ($M_{e^+e^-} > $ 0.35 GeV/c$^2$). The exclusive yield calculated within model B might slightly depend on the hypothesis we have made on the angular distributions (see paragraph \[sim\]). The expected effect is however rather small, since the proton angular distribution is well described by the simulation, as will be shown in paragraph \[ang\]. The comparison of the simulations based on both models to the experimental dilepton invariant mass distributions seems to favour the explanation of the dielectron excess by the electromagnetic form factor on the charged pion line, as suggested in [@shyam2]. The exclusive invariant mass distribution can also be compared with the $ppe^+e^-$ final state measured by HADES at the same beam energy [@witek]. The latter is well described, as discussed in Section \[intro\], by various independent calculations which all show the dominance of the $\Delta$ Dalitz decay process for invariant masses larger than 0.14 GeV/c$^2$. 
Thus, it can serve as a reference for the identification of additional contributions appearing solely in the $npe^+e^-$ final state. Figure \[ee\_pp\] (left panel) shows the comparison of the $e^+e^-$ invariant mass distributions, normalized to the $\pi^0$ production, measured in the reaction $np\rightarrow npe^+e^-$. It reveals a different shape above the pion mass. The right panel of Fig. \[ee\_pp\] shows the ratio of both differential cross sections, with their absolute normalization, as a function of the invariant mass, in comparison to three different simulations. The error bars plotted for data and simulations are statistical only. First, we note that the ratio of the two cross sections in the $\pi^0$ region within the HADES acceptance and inside the $M_{pe^+e^-}$ missing mass window amounts to $\sigma_{\pi^0}^{np}/\sigma_{\pi^0}^{pp}=1.48 \pm 0.24$, which is well reproduced by the simulations for the $\pi^0$ Dalitz decay. The ratio of the cross sections in the full solid angle is 2, according to the measured data [@hades_pwa] and as expected from the isospin coefficients for the dominant $\Delta$ contribution. However, the ratio measured inside the HADES acceptance is smaller, because it is reduced by the larger probability to detect a proton in addition to the $e^+e^-$ pair for the $ppe^+e^-$ final state as compared to $npe^+e^-$. For $e^+e^-$ invariant masses larger than the pion mass, the ratio clearly demonstrates an excess of the dielectron yield in the exclusive $n-p$ channel over the one measured in $p-p$. It indicates an additional production process which is absent in the $p-p$ reactions, as proposed by the discussed models. In order to exclude trivial effects, like the different phase space volumes available in the $p-p$ and quasi-free $n-p$ collisions due to the neutron momentum spread in the deuteron, we first plot the ratio of the cross sections of the $\Delta$ channels in both reactions (red squares in the right panel of Fig. \[ee\_pp\]). 
An enhancement is indeed present, but only at the limits of the available phase space. This confirms that the phase space volume difference gives a very small contribution to the measured enhancement in the $npe^+e^-$ channel. The green triangles (model A) and blue dots (model B) in Fig. \[ee\_pp\] (right panel) represent the ratio of the respective model simulation to the $p-p$ Monte Carlo simulation: the sum of the $\pi^0$ and $\Delta$ Dalitz decays ($\Delta$ with a point-like eTFF) [@witek]. The ratios take into account the differences in the phase space volume between $n-p$ and $p-p$, as mentioned above. Similar to the comparison of the dielectron invariant mass distributions in Fig. \[eep\_th\], the calculation of [@shyam2] (model B) gives a better description of the data for invariant masses larger than the $\pi^0$ mass. Figure \[eep1\_th\] shows the two other invariant mass distributions, of the $pe^+e^-$ ($M_{pe^+e^-}$, left panel) and of the $np$ ($M_{np}$, right panel) systems. Both distributions are plotted for masses of the virtual photon $M_{e^+e^-}>0.14$ GeV/c$^2$ and are compared to models A and B. For model A, the $\Delta$ and $\rho$ contributions are shown separately. As expected, the distribution at low $M_{pe^+e^-}$ is dominated by low-mass dielectrons, originating mainly from the $\Delta$ decays (we note that the observed shape in the simulation is due to an interplay between the $\Delta^+\rightarrow pe^+e^-$ and $\Delta^0 \rightarrow ne^+e^-$ decays, both contributing with the same cross sections), and at higher masses by the $\rho$-like channel. On the other hand, the invariant mass distribution of the $np$ system is dominated at low masses by the $\rho$ contribution, which in model A slightly overshoots the data. In general, the high-mass enhancement visible in the $e^+e^-$ mass spectrum is consistently reflected in the shapes of the two other invariant mass distributions. 
Angular distributions {#ang}
---------------------

In the discussion of the angular distributions we consider separately two bins of the dielectron invariant mass: $0.14<M_{e^+e^-}<0.28$ GeV/c$^2$ and $M_{e^+e^-}>0.28$ GeV/c$^2$. The selection of the two mass bins is dictated by the calculations, which suggest two possibly different production regimes, with a dominance of the $\rho$-like contribution in the second bin. Figure \[cos\_p\] displays the differential angular distributions of the proton in the c.m.s., both within the HADES acceptance and after acceptance corrections. In the first case, the experimental distributions are compared to the predictions of the simulations on an absolute scale. In the second case, the simulated distributions are normalized to the experimental yield after acceptance corrections in order to compare the shapes. The acceptance correction applied to the data has been calculated as described in paragraph \[eff\]. As can be deduced from Fig. \[eep\_th\], according to model A, the low-mass bin is dominated in the simulation by the $\Delta$ Dalitz decay process, while the $\rho$-like contribution determines the dielectron production in the higher mass bin. In the first mass bin, the distribution exhibits a clear anisotropy, pointing to a peripheral mechanism. The simulated distributions for models A (dashed green curve) and B (dotted blue curve) differ in magnitude but have similar shapes. This is due to the fact that the angular distribution for model B is the same as that of the $\Delta$ contribution of model A, which dominates in this mass region (see Sec. \[sim\]) - both contributions have the same angular distribution in the full solid angle (solid green and superimposed dashed blue curves). 
The shape of the experimental angular distribution is rather well accounted for by both simulations, in which the angular distribution for the $\Delta$ production from the PWA analysis is used, leading to a symmetric forward/backward peaking. However, there is an indication for some enhancement above the simulation in the $npe^+e^-$ channel for the forward emitted protons, unfortunately cut at small angles by the HADES acceptance. It might be due to the charge exchange graphs involving nucleons, which are not properly taken into account by the symmetric angular distribution used as an input for the simulation. Indeed, in the case of the $\Delta$ excitation, charge exchange and non-charge exchange graphs have the same weight, which yields a symmetric angular distribution for the proton in the c.m.s. This is different for the nucleon graphs, where the contribution of the charge exchange graphs to the cross section is enhanced by a factor 4 due to the isospin coefficients and, therefore, forward emission of the proton is favoured. For the higher invariant $e^+e^-$ masses, the angular distribution is more isotropic and is described rather well by both simulations, which again exhibit similar characteristics. The flattening of the distributions reflects the different momentum transfers involved in the production of heavy virtual photons. However, as already mentioned, the angular distribution in model B follows the $\Delta$ production angular distribution, while in model A it is properly calculated for the $\rho$ production via the double-$\Delta$ mechanism. It is interesting to observe that the two angular distributions are very similar. 
In particular, the distribution with respect to $\cos_{p}^{CM}(\theta)$ from model A is symmetric, although graphs with emission of the neutron from a $\Delta^-$ excited on the incident neutron (and the corresponding emission of the proton from the excitation of a $\Delta^{++}$ on the proton at rest) are highly favoured by isospin factors and induce a strong asymmetry for the production of the $\Delta$s, as shown for example in [@Huber94]. Figure \[cos\_gam\] presents angular distributions similar to those discussed above, but for the virtual photon. The distributions are also strongly biased by the HADES acceptance, which suppresses virtual photon emission in the forward and, even more strongly, in the backward direction. In the lower mass bin, where the $\Delta$ contribution is dominant, a deviation from the isotropic distribution could be expected due to the polarization of the $\Delta$ resonance. However, the experimental distributions are compatible with an isotropic emission, as assumed in the simulation. In the larger mass bin, it is interesting to see that model A (solid green curve) predicts a significant anisotropy, related to the angular momentum in the double-$\Delta$ system for the $\rho$ emission by the charged pion line between the two $\Delta$s, which is the dominant contribution in this mass bin. However, our data present a different trend, which also seems to deviate from isotropy but with a smaller yield for the forward and the backward emission. Unfortunately, as already mentioned for the proton angular distributions, we cannot really verify these distributions under the hypothesis of an emission by the charged pion between two nucleons, since the calculations in [@shyam2] do not provide them and the distribution of model B remains here rather flat (dashed blue curve in Fig. \[cos\_gam\]). Finally, we present the distributions of the leptons in the rest frame of the virtual photon. 
These observables are predicted to be particularly sensitive to the time-like electromagnetic structure of the transitions [@titov]. Indeed, for the Dalitz decay of pseudo-scalar particles, like the pion or the $\eta$ meson, the angular distribution of the electron (or positron) with respect to the direction of the virtual photon in the meson rest frame is predicted to be proportional to $1+\cos^2(\theta_e)$. These predictions were confirmed by our measurements of the exclusive pion and $\eta$ meson decays in proton-proton reactions [@pp2GeV]. For the $\Delta$ Dalitz decay, the angular distribution has a stronger dependence on the electromagnetic form factors due to the wider range in the $e^+e^-$ invariant masses. Assuming the dominance of the magnetic transition in the $\Delta \rightarrow Ne^+e^-$ process, the authors of [@titov] arrive at the same distribution as for the pseudo-scalar mesons. Concerning the elastic bremsstrahlung process, only predictions based on the soft photon approximation exist in the literature [@titov]. According to this model, the corresponding angular distributions show at our energies a small anisotropy with some dependence on the dielectron invariant mass. On the other hand, the angular distribution of leptons from the $\rho$-meson decay from pion annihilation, measured with respect to the direction of the pion in the virtual photon rest frame, has a strong anisotropy, i.e. $\propto 1-\cos^2(\theta_e)$. Figure \[helicity\] presents the respective $e^+$ and $e^-$ angular distributions for the experimental data and the two bins of the dielectron invariant mass. The distributions are symmetric due to the fact that both angles, between the electron and the $\gamma^*$ as well as between the positron and the $\gamma^*$, in the rest frame of the virtual photon, have been plotted. 
For the left panel (bin with the smaller masses, $0.14<M_{e^+e^-}<0.28$ GeV/c$^2$) the distribution has been calculated with respect to the $\gamma^*$ direction, obtained in the $pe^+e^-$ rest frame, while for the right panel (bin with the larger masses, $M_{e^+e^-}>0.28$ GeV/c$^2$) it has been calculated with respect to the direction of the exchanged charged pion momentum. The latter has been calculated as the direction of the vector constructed from the difference between the vectors of the incident proton and the reconstructed emitted neutron, boosted to the rest frame of the virtual photon. The open red symbols present the data within the HADES acceptance, while the full black symbols show the acceptance corrected data. The solid green curve displays a prediction from the simulation in the full solid angle, while the dashed green curve is normalized to the experimental distributions within the HADES acceptance for a better comparison of the shape. The dashed blue curve shows a fit with a function $A (1+B\cos^2(\theta_e))$. In the lower mass bin the data follow the distribution expected for the $\Delta$, with $B=1.58 \pm 0.52$, and the fit almost overlays the simulated distribution. This seems to confirm the dominance of the $\Delta$ in this mass bin, in agreement with both models. However, it would be interesting to test the possible distortion that could arise due to the contribution of nucleon graphs, following [@shyam2]. For these graphs, the distribution of the $e^+$ or $e^-$ angle in the virtual photon rest frame should depend on the electric and magnetic nucleon form factors in a very similar way to the $e^+e^- \leftrightarrow \bar{p} p$ reactions, i.e. following $|G_M|^2 (1+\cos^2\theta)+(4m_p^2/q^2)|G_E|^2\sin^2\theta$, where $m_p$ is the proton mass. In the calculation of [@shyam2], the anisotropy of the $e^+$ ($e^-$) angular distribution should therefore derive from the VDM form factor model. 
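The anisotropy fit quoted above can be reproduced schematically: substituting $x=\cos^2(\theta_e)$ makes the model $A(1+B\cos^2(\theta_e))$ linear in $x$, so an ordinary least-squares line already yields $A$ and $B$. This is a sketch with illustrative inputs, not the actual fitting code used in the analysis (which also handles statistical weights and acceptance):

```python
# Linearized least-squares fit of y = A*(1 + B*cos^2(theta)):
# with x = cos^2(theta) the model reads y = a + b*x, where A = a, B = b/a.
def fit_anisotropy(cos_theta, yields):
    xs = [c * c for c in cos_theta]
    n = len(xs)
    sx, sy = sum(xs), sum(yields)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, yields))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b / a          # (A, B)
```

A positive $B$ corresponds to the $1+\cos^2\theta$ shape expected for the $\Delta$ Dalitz decay, a negative $B$ to the $1-\cos^2\theta$ shape expected for the $\rho$-like emission.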
A similar fit to the higher mass bin in the same reference frame (not shown) gives a significantly smaller anisotropy, $B=0.25 \pm 0.35$, which changes sign when the distribution of the lepton with respect to the exchanged charged pion is fitted ($B=-0.4 \pm 0.20$), as shown in Fig. \[helicity\] (right panel). The latter may indicate the dominance of the $\rho$ decay, as suggested by both models [@shyam2; @clement].

Summary and Outlook {#summary_outlook}
===================

We have shown results for the quasi-free exclusive $np\to np e^+ e^-$ channel measured with HADES using a deuterium beam with a kinetic energy $T=1.25$ GeV/nucleon. The $e^+e^-$ invariant-mass differential cross section presents a similar excess with respect to the one measured in the $pp\to ppe^+e^-$ channel as previously observed for the corresponding inclusive $e^+e^-$ distributions, hence confirming the baryonic origin of this effect. In addition, the detection of the proton provides additional observables (invariant masses, angular distributions) which bring strong constraints for the interpretation of the underlying process. We tested two models which provided an improved description of the inclusive $e^+e^-$ production in the $n-p$ reaction at large invariant masses. The first one consists of an incoherent cocktail of dielectron sources including (in addition to $\pi^0$, $\Delta$ and $\eta$ Dalitz decays) a contribution from the $\rho$-like emission via the double-$\Delta$ excitation, following the suggestion by Bashkanov and Clement [@clement]. The second model is based on the Lagrangian approach by Shyam and Mosel [@shyam2] and provides a coherent calculation of the $np\to npe^+e^-$ reaction including nucleon and resonant graphs. In both models, the enhancement at large invariant masses is due to the VDM electromagnetic form factor which is introduced for the production of the $e^+e^-$ pair from the exchanged pion.
The evolution of the shape of the experimental $e^+$ and $e^-$ angular distribution in the $\gamma^*$ rest frame seems to confirm the emission via an intermediate virtual $\rho$ at the largest invariant masses. Since this process is absent in the reaction $pp\to ppe^+e^-$, it provides a natural explanation for the observed excess. The different nature of the graphs at the origin of this $\rho$-like contribution in the two models is reflected in the invariant mass distributions. A better description of the experimental distributions is obtained with model B, where the effect is related to the nucleon charge-exchange graphs. However, this conclusion should be tempered by the fact that we had to introduce a hypothesis for the angular distributions of the final products, which were not provided by the models. The agreement is also not perfect, which points to missing contributions. On the other hand, it is clear that the double-$\Delta$ excitation process is expected to play a role in the $e^+e^-$ production. In [@clement], the corresponding amplitude is deduced from the modified Valencia model, which gave a fair description of the $np\to np\pi^+\pi^-$ reaction measured by HADES at the same energy [@hades_2pi]. A realistic test of the contribution of the double-$\Delta$ excitation to the $e^+e^-$ production can only be made once the effect is included as a coherent contribution in a full model comprising the nucleon and $\Delta$(1232) graphs, like the OBE calculation [@clement], and once all distributions are provided for a comparison with the differential distributions measured in the exclusive $np\to npe^+e^-$ channel. The present analysis should serve as a motivation for such a complete calculation. The first observation of an unexplained dielectron excess measured in the inclusive $n-p$ reaction with respect to the $p-p$ reaction triggered a lot of theoretical activity and raised interesting suggestions of mechanisms specific to the $n-p$ reaction.
Understanding in detail the $e^+e^-$ production in $n-p$ collisions is a necessary step towards the description of $e^+e^-$ production in heavy-ion collisions, where medium effects are investigated. On the other hand, the description of the $np\to npe^+e^-$ process is challenging because it involves many diagrams with unknown elastic and transition electromagnetic form factors of baryons in the time-like region. We have shown that our exclusive measurement of the quasi-free $np\to npe^+e^-$ reaction at $T=1.25$ GeV is sensitive to the various underlying mechanisms and in particular sheds more light on contributions which are specific to the $n-p$ reaction. While definite conclusions can only be drawn when more detailed calculations are available, we also expect additional experimental constraints from the on-going analysis of the $np\to de^+e^-$ reaction, also measured by HADES at the same energy.

Acknowledgements
================

The HADES Collaboration gratefully acknowledges the support by the grants LIP Coimbra, Coimbra (Portugal) PTDC/FIS/113339/2009; UJ Kraków (Poland) NCN 2013/10/M/ST2/00042; TU München, Garching (Germany) MLL München: DFG EClust 153, VH-NG-330, BMBF 06MT9156 TP5, GSI TMKrue 1012; NPI AS CR, Rez (Czech Republic) GACR 13-06759S; USC - S. de Compostela, Santiago de Compostela (Spain) CPAN:CSD2007-00042; Goethe University, Frankfurt (Germany): HA216/EMMI, HIC for FAIR (LOEWE), BMBF:06FY9100I, GSI F&E; IN2P3/CNRS (France). The work of A.V. Sarantsev is supported by the RSF grant 16-12-10267.

References
==========

U. Mosel, Ann. Rev. Nucl. Part. Sci. **41** (1991) 29.
M. Schäfer, H. C. Dönges, A. Engel and U. Mosel, Nucl. Phys. A **575** (1994) 429.
F. de Jong and U. Mosel, Phys. Lett. B **379** (1996) 45.
L. Kaptari and B. Kämpfer, Phys. Rev. C **80** (2009) 064003; L. Kaptari and B. Kämpfer, Nucl. Phys. A **764** (2006) 338.
R. Shyam and U. Mosel, Phys. Rev. C **67** (2003) 065202; R. Shyam and U. Mosel, Phys. Rev. C **79** (2009) 035203.
R. Rückl, Phys. Lett. B **64** (1976) 39.
C. Gale and J. Kapusta, Phys. Rev. C **35** (1987) 2107; C. Gale and J. Kapusta, Phys. Rev. C **40** (1989) 2397.
W. K. Wilson *et al.*, Phys. Rev. C **57** (1998) 1865.
G. Agakishiev *et al.*, Phys. Lett. B **690** (2010) 118.
J. Weil, H. van Hees and U. Mosel, Eur. Phys. J. A **48** (2012) 111.
E. L. Bratkovskaya, W. Cassing, R. Rapp, and J. Wambach, Nucl. Phys. A **634** (1998) 168.
K. Shekhter, C. Fuchs, A. Faessler, M. Krivoruchenko, and B. Martemyanov, Phys. Rev. C **68** (2003) 014904.
G. Ramalho and M. T. Peña, Phys. Rev. D **85** (2012) 113014.
G. Ramalho, M. T. Peña, J. Weil, H. van Hees and U. Mosel, Phys. Rev. D **93** (2016) 033004.
R. Shyam and U. Mosel, Phys. Rev. C **82** (2010) 062201.
J. J. Sakurai, Ann. Phys. **11** (1960) 1.
M. Bashkanov and H. Clement, Eur. Phys. J. A **50** (2014) 107.
G. Agakichiev *et al.* (HADES), Phys. Lett. B **750** (2015) 184.
P. Adlarson *et al.*, Phys. Lett. B **721** (2013) 229.
B. Martemyanov, M. Krivoruchenko and A. Faessler, Phys. Rev. C **84** (2011) 047601.
G. Agakichiev *et al.* (HADES), Eur. Phys. J. A **41** (2009) 243.
D. Albers *et al.* (EDDA), Eur. Phys. J. A **22** (2004) 125.
I. Fröhlich *et al.*, PoS **ACAT2007** (2007) 076; arXiv:0708.2382v2; I. Fröhlich *et al.*, Eur. Phys. J. A **45** (2010) 401.
M. Bashkanov, private communication.
V. V. Sarantsev *et al.*, Eur. Phys. J. A **21** (2004) 303.
J. Bystricky *et al.*, J. Phys. (Paris) **48** (1987) 901.
S. Teis, W. Cassing, M. Effenberger, A. Hombach, U. Mosel and Gy. Wolf, Z. Phys. A **356** (1997) 421.
G. Agakichiev *et al.* (HADES), Eur. Phys. J. A **50** (2014) 82.
P. Benz *et al.*, Nucl. Phys. B **65** (1973) 158.
G. Agakichiev *et al.* (HADES), arXiv:1703.07840.
S. Huber and J. Aichelin, Nucl. Phys. A **573** (1994) 587.
E. L. Bratkovskaya, O. V. Teryaev and V. D. Toneev, Phys. Lett. B **348** (1995) 283.
G. Agakichiev *et al.* (HADES), Eur. Phys. J. A **48** (2012) 74.
B. Martemyanov and M. Krivoruchenko, private communication.
E. L. Bratkovskaya and W. Cassing, Nucl. Phys. A **807** (2008) 214.
E. L. Bratkovskaya, J. Aichelin, M. Thomere, S. Vogel and M. Bleicher, Phys. Rev. C **87** (2013) 064907.
M. I. Krivoruchenko and B. V. Martemyanov, Ann. Phys. **296** (2002) 299.
---
abstract: 'New observations in the $VI$ bands along with archival data from the 2MASS and $WISE$ surveys have been used to generate a catalog of young stellar objects (YSOs) covering an area of about $6^\circ\times6^\circ$ in the Auriga region centered at $l\sim173^\circ$ and $b\sim1^\circ.5$. The nature of the identified YSOs and their spatial distribution are used to study the star formation in the region. The distribution of YSOs along with that of the ionized and molecular gas reveals two ring-like structures, each stretching over an area of a few degrees. We name these structures Auriga Bubbles 1 and 2. The centers of the Bubbles appear to lie above the Galactic mid-plane. The majority of Class[i]{} YSOs are associated with the Bubbles, whereas the relatively older population, i.e., Class[ii]{} objects, is rather randomly distributed. Using the minimum spanning tree analysis, we found 26 probable sub-clusters having 5 or more members. The sub-clusters are between $\sim$0.5 and $\sim$3 pc in size and are somewhat elongated. The star formation efficiency in most of the sub-cluster regions varies between 5$\%$ and 20$\%$, indicating that the sub-clusters could be bound regions. The radii of these sub-clusters also support this.'
author:
- |
  A. K. Pandey,$^{1}$[^1] Saurabh Sharma,$^{1}$ N. Kobayashi,$^{2,3}$ Y. Sarugaku,$^{3}$ and K. Ogura$^{4}$\
  \
  $^{1}$Aryabhatta Research Institute of Observational Sciences (ARIES), Manora Peak, Nainital, 263 002, India\
  $^{2}$Institute of Astronomy, University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015, Japan\
  $^{3}$Kiso Observatory, School of Science, University of Tokyo, Mitake-mura, Kiso-gun, Nagano 397-0101, Japan\
  $^{4}$Kokugakuin University, Higashi, Shibuya-ku, Tokyo 150-8440, Japan\
bibliography:
- 'auriga.bib'
date: 'Accepted XXX.
Received YYY; in original form ZZZ'
title: Large scale star formation in Auriga region
---

\[firstpage\]

stars: formation – stars: pre-main-sequence – (ISM:) H[ii]{} regions

Introduction
============

Massive stars (mass $\gtrsim$ 8 M$_\odot$, OB spectral types) are usually found in stellar groups, i.e., star clusters or OB associations. This is expected, as a majority of stars are born in groups embedded in molecular clouds [see e.g., @2003ARAA..41...57L]. However, the recent study by @2018MNRAS.475.5659W favors the hierarchical star formation model, in which a minority of stars form in bound clusters and large-scale, hierarchically-structured associations are formed in-situ. Recent mid-infrared (MIR) surveys have also shown that a significant number of young stellar objects (YSOs) form in the distributed mode [see e.g., @2008ApJ...688.1142K; @2019AJ....157..112P]. O stars are the major source of Lyman continuum (Lyc) ionizing radiation. In addition, strong stellar winds from O and B stars release kinetic energy of up to 10$^{51}$ ergs over their lifetimes [@1987ApJ...317..190M]. Since young star clusters/OB associations host several OB stars (mass $\gtrsim$ 8 M$_\odot$), the ionizing radiation and stellar winds of these stars are the dominant power input into the surroundings during the initial phase, creating a shell/bubble of radius $\leq$ 100 pc by the end of the wind-driven phase [@1987ApJ...317..190M]. These shells can be observed in H[i]{} 21 cm line observations; they can also be observed in X-rays from their hot interiors, in the optical through emission from ionized gas, and in the infrared (IR) from the swept-up dust in the shell [@2005AJ....129..393O; @2014MNRAS.438.1089C; @2019MNRAS.484.1800S]. Indeed, expanding shells of dense gas around H[ii]{} regions have been found in several previous studies, e.g., around $\lambda$ Orionis, Gemini OB1, W33, and the Cepheus Bubble [cf. @1998ApJ...507..241P and references therein].
In the course of the expansion of the shells, the remaining molecular cloud could be induced to form a second generation of stars. However, the energy input by these massive stars to the surroundings toward the end phase is probably dominated by their supernova (SN) explosions, which could release energy of $\sim$ 10$^{51}$ ergs; however, only a few cases are known which provide direct evidence of supernova-remnant (SNR)-molecular cloud interaction [@1987ApJ...317..190M; @1998ApJ...507..241P; @2006ApJ...649..759C]. The expansion of the SN blast waves could also trigger the formation of a new generation of stars. Often, only morphological signatures are invoked to prove a physical association between SNRs and star-forming molecular clouds [e.g., @2001AJ....121..347R and references therein]. The most convincing evidence is the line broadening due to shocked molecular gas [@1998ApJ...508..690F; @1999ApJ...511..836R]. @2012AJ....143...75K made H[i]{} 21 cm line observations and detected high velocity H[i]{} gas at $l\sim$173$^\circ$, $b\sim$1.5$^\circ$. They suggested a large scale star formation activity possibly associated with an SN explosion in the H[ii]{} complex G173+1.5. The low resolution H[i]{} survey reveals that this high velocity gas has velocities extending beyond those allowed by Galactic rotation. They designated this feature as Forbidden Velocity Wing 172.8+1.5, which is composed of knots, filaments, and Sharpless H[ii]{} regions distributed along a radio continuum loop of size $4^\circ.4 \times 3^\circ.4$. They concluded that the H[i]{} gas is well correlated with the radio continuum loop and both of them seem to trace an expanding shell. The expansion velocity and kinetic energy of the shell are estimated as 55 km s$^{-1}$ and $2.5\times10^{50}$ erg, respectively, suggesting that it could be due to an SN explosion. The authors proposed that the progenitor may have belonged to a stellar association near the center of the shell, and that this SN explosion triggered the formation of the H[ii]{} regions.
The H[ii]{} complex G173+1.5 associated with the Forbidden Velocity Wing features seems to be an excellent site to study SN-triggered star formation. A very general description of many star forming regions (SFRs) in Auriga is given in @2008hsf1.book..869R, and the individual regions have been discussed in several previous studies. By mapping the spatial distribution of YSOs around $l\sim$173$^\circ$, $b\sim 1.5^\circ$, we can study the star formation in this whole region; it is, however, a challenging task to procure uniform optical data over such a large region ($\sim 4^\circ.4 \times 3^\circ.4$). The Kiso Wide field Camera [KWFC, @2012SPIE.8446E..6LS], mounted on the Kiso Schmidt telescope operated by the Institute of Astronomy of the University of Tokyo, Japan, covers a FOV of $2^\circ.2\times2^\circ.2$ and makes it possible to obtain homogeneous optical data for a very wide area such as the region of the H[ii]{} complex G173+1.5. Thus, by combining $V\&I$ optical photometry obtained with this facility with archival data, we can analyze the distribution of the low-mass YSOs and gas/dust around $l\sim173^\circ$ in Auriga, aiming to study the star formation in this region. We also attempt to understand the origin and evolution of the expanding H[i]{} shell. As a result, we have found two ring-like structures in the YSO/gas/dust distribution, each extending over an area of a few degrees. We term them Auriga Bubbles 1 and 2. In this paper, however, we deal mainly with the former; Auriga Bubble 2 will be discussed in a forthcoming paper. Section 2 describes the observations, data reduction and completeness of the data. The YSO identification technique, the resulting YSO sample and their physical parameters are described in Section 3. In Section 4 we discuss the characteristics and distribution of the identified YSOs, the distribution of gas and dust, as well as the star formation scenario in the region. Section 5 summarizes our conclusions.
Observations and data reduction
===============================

Kiso Observations
------------------

Optical ($V\&I$) data for nine overlapping regions of Auriga (Auriga 1 - Auriga 9, cf. Fig. \[spa\]) were procured with KWFC [FOV $\sim$2.2$^\circ$ $\times$ 2.2$^\circ$; scale 0.946 arcsec/pixel; @2012SPIE.8446E..6LS] on the 1.05-m Schmidt telescope at Kiso Observatory, Japan, during the nights of 06, 07 December 2012 and 27 December 2014. KWFC, an optical wide-field 64 megapixel imager with 4 SITe and 4 MIT Lincoln Laboratory (MIT/LL) 15$\mu$m 2K$\times$4K CCDs, is attached to the prime focus of the Kiso Schmidt telescope. The details of the instrument can be found in @2012SPIE.8446E..6LS and @2014PASJ...66..114M. The observations were carried out in the unbinned and slow readout mode. In this mode, the readout noise is about 20 and 5-10 electrons for the SITe and MIT/LL CCDs, respectively. A number of short and deep exposures were taken. The detailed log of the observations is given in Table \[Tlog\]. Several bias and dome-flat frames were also taken each night. Initial processing (bias subtraction, flat-fielding and cosmic ray correction) of the data frames was done by using the IRAF[^2] data reduction package. Astrometric calibration and image co-addition were carried out by using the SCAMP[^3] and SWarp[^4] software, respectively. The total FOV observed in this study is $\sim6^\circ\times6^\circ$ and is shown in Fig. \[spa\]. Photometry of the cleaned frames was carried out by using the DAOPHOT-II software, which includes the [find, phot, psf]{} and [allstar]{} sub-routines [@1987PASP...99..191S; @1994PASP..106..250S]. The point spread function (PSF) was obtained for each frame by using several uncontaminated stars. When brighter stars were saturated on the deep exposure frames, their magnitudes were taken from the short exposure frames.
We used the DAOGROW program for the construction of the aperture growth curve required for determining the difference between the aperture and PSF magnitudes. To calibrate the data we used the following procedure. We first calibrated the data for the Auriga 1 field (see Fig. \[spa\]) using the photometric data of NGC 1960 taken from @2006AJ....132.1669S, which provides photometry in the $U,B,V,R$ and $I$ bands down to a limiting magnitude of $V$=20 mag (error$<$0.1 mag) in a FOV of $\sim 1^\circ \times 1^\circ$. This cluster lies in the Auriga 1 field. The area covered by the observations of NGC 1960 is also shown in Fig. \[spa\]. After calibrating the Auriga 1 field, stars lying in the common overlapping area of Auriga 1 and Auriga 2 were taken as the secondary standards to calibrate the Auriga 2 field. The common stars of Auriga 2 and Auriga 4 were taken as the secondary standards to calibrate the Auriga 4 field, and so on. To translate the instrumental magnitudes to the standard magnitudes, the following calibration equations, derived by using a least-square linear regression, were used:

$$V - v = M1\times(V-I) + C1$$

$$(V-I) = M2\times(v-i) + C2,$$

where $V$ and $(V - I)$ are the standard magnitudes and colours from @2006AJ....132.1669S, respectively; $v$ and $(v - i)$ are the instrumental magnitudes and colours, respectively; C1, C2 and M1, M2 are the zero-point constants and colour coefficients, respectively. Fig. \[calib\] plots, as an example, the $(V-I)$ vs. $(V-v)$ and $(v-i)$ vs. $(V-I)$ diagrams for the stars in NGC 1960. Fig. \[resd\] (top panels) plots the difference between the NGC 1960 data from @2006AJ....132.1669S and the calibrated $V$ magnitudes and $(V-I)$ colours of the NGC 1960 region obtained by using equations 1 and 2. The standard deviation in $\Delta V$ and $\Delta (V - I)$, in the magnitude range $V \sim$ 13.5 - 19.0 mag, is 0.09 and 0.08 mag, respectively.
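The calibration equations (1) and (2) amount to two independent linear least-squares fits. The sketch below solves them with `numpy.polyfit` on synthetic secondary standards; the coefficient values and the star list are invented for illustration and are not the values listed in Table \[coeff\].

```python
import numpy as np

# Hypothetical secondary standards: instrumental (v, i) and standard (V, V-I) values.
# The "true" coefficients below are invented, purely to illustrate solving
# Eqs. (1) and (2) by least-squares regression.
rng = np.random.default_rng(1)
n = 200
true_M1, true_C1 = 0.05, 2.3    # illustrative colour term and zero point (Eq. 1)
true_M2, true_C2 = 1.02, 0.35   # illustrative colour transformation (Eq. 2)

VI_std = rng.uniform(0.3, 2.5, n)             # standard (V - I) colours
vi_ins = (VI_std - true_C2) / true_M2         # instrumental colours, Eq. (2) inverted
v_ins = rng.uniform(12.0, 18.0, n)            # instrumental v magnitudes
V_std = v_ins + true_M1 * VI_std + true_C1    # standard V, from Eq. (1)
V_std += rng.normal(0.0, 0.02, n)             # add photometric scatter

# Eq. (1): V - v = M1*(V - I) + C1  ->  fit (V - v) against (V - I)
M1, C1 = np.polyfit(VI_std, V_std - v_ins, 1)
# Eq. (2): (V - I) = M2*(v - i) + C2  ->  fit (V - I) against (v - i)
M2, C2 = np.polyfit(vi_ins, VI_std, 1)
```

Chaining this fit from field to field, using the stars in the overlap regions as secondary standards, propagates the NGC 1960 calibration across all nine Auriga fields.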
The values of all the zero-point constants and colour coefficients for the various fields (Auriga 1 - Auriga 9) are given in Table \[coeff\]. The Auriga 3 field was calibrated by using the common stars in Auriga 5 and Auriga 3 as secondary standards, as described earlier in this section. We further used the common stars lying in Auriga 1 and Auriga 3 to get another set of the transformed standard magnitudes for the Auriga 1 stars. Fig. \[resd\] (middle panel) and Table \[cmpT\] show the residuals $\Delta$, between the two sets of calibrated data of Auriga 1, in $V$ magnitudes and $(V - I)$ colours. The standard deviation in $\Delta V$ and $\Delta (V - I)$, in the magnitude range $V \sim$ 13.5 - 19.0 mag, is $\sim$0.02 and $\sim$0.03 mag, respectively. Although there is some trend in the lower panel, the scatter is small. The comparison of these two transformed standard data sets indicates a fair agreement. Here we would like to point out that, although the uncertainty due to the calibration shown in Fig. \[calib\] is not negligible, the calibration uncertainty of the optical data is not crucial for the scientific results of the present study, as it can be safely assumed to be negligible compared to the uncertainties associated with the analysis of the Spectral Energy Distribution (SED) aimed at the derivation of stellar and disk parameters (cf. Section 3.3.1). To ensure the quality of the calibration of the data, we further calibrated the Auriga 3 field using the NGC 1960 data from @2006AJ....132.1669S and compared the calibrated photometric data of the common stars in the FOVs of Auriga 1 and Auriga 3. The comparison is shown in the lower panel of Fig. \[resd\]. Table \[cmpT1\] shows the residuals between the two photometric calibrations. The standard deviation in $\Delta V$ and $\Delta (V - I)$, in the magnitude range $V\sim$13.5 - 19.0 mag, is $\sim$0.07 and $\sim$0.08 mag, respectively.
The comparison of these two independently calibrated data sets indicates a fair agreement and ensures the reliability of the calibration. The typical DAOPHOT errors in magnitude as a function of the corresponding magnitudes are shown in Fig. \[err\]. It can be seen that the errors become large ($\sim$0.1 mag) for stars with $V \sim 20$ mag. More than 0.47 million stars have errors less than 0.1 mag in the $V$ and $I$ bands, and these stars have been used for the analysis in the ensuing sections.

Archival data: 2MASS and $WISE$ data \[obs-spit\]
-------------------------------------------------

To investigate the star formation in the Auriga Bubble region, we also used near/mid infra-red (NIR/MIR) data from the Two Micron All Sky Survey [2MASS, @2003yCat.2246....0C] and the Wide-field Infrared Survey Explorer [$WISE$, @2010AJ....140.1868W] archives in addition to the optical $VI$ data from the Kiso Schmidt telescope. $WISE$ is a 40-cm telescope in low-Earth orbit that surveyed the whole sky in four mid-IR bands at 3.4, 4.6, 12, and 22 $\mu$m (bands $W1$, $W2$, $W3$, and $W4$) with nominal angular resolutions of $6''.1$, $6''.4$, $6''.5$, and $12''.0$ in the respective bands [@2010AJ....140.1868W]. In this paper, we make use of the AllWISE source catalog, which provides accurate positions, apparent motion measurements, four-band fluxes and flux variability statistics for over 747 million objects. The online explanatory supplement[^5] and Marsh & Jarrett (2012) describe the $WISE$ source detection method in detail. The AllWISE catalog is searchable via the NASA/IPAC Infrared Science Archive (IRSA). It also provides information on 2MASS counterparts. We have also used the 2MASS Point Source Catalog [PSC, @2003yCat.2246....0C] for near-IR (NIR) $JHK_s$ photometry of all the sources (including those sources which do not have $WISE$ photometry) in the Auriga Bubble region.
This catalog is reported to be 99$\%$ complete down to the limiting magnitudes of 15.8, 15.1 and 14.3 in the $J$, $H$ and $K_s$ bands, respectively [^6].

Completeness of the data
------------------------

To have an unbiased study of star formation in the region, it is vital to know the completeness of the data in terms of magnitudes/masses for the sample of YSOs identified. The photometric data may be incomplete due to various reasons, e.g., nebulosity, crowding of stars, detection limits, etc. To check the completeness factor for the optical data we used the ADDSTAR routine of DAOPHOT II. This method has been used by various authors [cf. @2007MNRAS.380.1141S; @2008AJ....135.1934S and references therein]. Briefly, the method consists of randomly adding artificial stars of known magnitudes and positions into the original frame. The frames are then reduced by using the same procedure as used for the original frame. The ratio of the number of stars recovered to those added in each magnitude interval gives the completeness factor as a function of magnitude. The luminosity distribution of the artificial stars is chosen in such a way that more stars are inserted into the fainter magnitude bins. In all, about 15% of the total stars are added so that the crowding characteristics of the original frame do not change significantly. We found that the present optical data for the $I$ band are complete at the $\sim$90% level down to 17.75 mag (cf. Fig. \[fcft\]). The estimate of the completeness of the present YSO sample is rather difficult, as it is limited by several factors. For example, the bright nebulosity in the $WISE$ bands significantly limits the point source detection. The YSO identification on the basis of 2MASS - $WISE$ colours may be limited by the sensitivity of the 2MASS survey and the saturation of the $W3$ and $W4$ images. Variable reddening and stellar crowding characteristics across the region could also affect the local completeness limit.
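The completeness factor derived from the artificial-star test described above is simply the per-bin ratio of recovered to added stars. A toy sketch with invented magnitudes (not actual ADDSTAR output):

```python
import numpy as np

# Illustrative artificial-star test: magnitudes of stars added to the frame
# and of those recovered by the re-reduction (all values invented).
added = np.array([14.2, 14.8, 15.5, 16.1, 16.6, 17.0, 17.4, 17.8, 18.2, 18.6,
                  15.2, 16.3, 17.1, 17.6, 18.0, 18.4, 18.8, 19.1, 19.4, 19.7])
recovered = added[added < 18.0]          # pretend most stars fainter than 18 were lost
recovered = np.append(recovered, 18.2)   # ... with one faint recovery

bins = np.arange(14.0, 20.5, 1.0)        # 1-mag-wide magnitude intervals
n_added, _ = np.histogram(added, bins)
n_rec, _ = np.histogram(recovered, bins)

# Completeness factor per magnitude bin: recovered / added
with np.errstate(invalid="ignore", divide="ignore"):
    completeness = n_rec / n_added
```

The magnitude where this ratio drops below 0.9 defines the 90% completeness limit quoted above for the $I$-band data.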
The completeness of the YSO selection using the NIR data is also dictated by the amount of IR excess, the evolutionary status of the disks, etc. In the present study, the larger $WISE$ PSF may also hamper the detection of YSOs in the crowded regions. All these effects are difficult to quantify.

Results
=======

YSO identification \[idf\]
--------------------------

In the present study, the 2MASS data along with the $WISE$ data have been used to identify and classify the YSOs associated with the Auriga Bubbles 1 and 2 using the following classification schemes. YSOs are generally classified as Class[0]{}, Class[i]{}, Class[ii]{} or Class[iii]{} sources on the basis of the infrared slopes of their spectral energy distributions [@1994ASPC...65..197B; @2006AJ....131.1574L]. YSOs of the early stages (i.e., Class[0]{} and [i]{}) are usually deeply buried inside the molecular clouds, hence their detection at optical wavelengths is difficult. The most prominent feature of these YSOs is their accreting circumstellar disks and envelopes. The radiation from the central YSO is absorbed by the circumstellar material and re-emitted in the NIR/MIR. Therefore, Class[i]{} (and Class[ii]{} as well) sources can be probed through their IR excess (compared to normal stellar photospheres).

### $WISE$ classification

The four wave-bands of $WISE$ are useful to detect mid-IR emission from cold circumstellar disk/envelope material in YSOs. This makes the $WISE$ all-sky survey a readily available tool to identify and classify YSOs, in a similar way to what can be done with $Spitzer$ [@2004ApJS..154..363A; @2008ApJ...674..336G; @2009ApJS..184...18G and others], but over the entire sky. We use the AllWISE Source Catalog to search for YSOs adopting the criteria given by @2014ApJ...791..131K. We refer the reader to Figure 3 of @2014ApJ...791..131K for a summary of the entire scheme. This method includes the selection of candidate contaminants (AGN, AGB stars and star forming galaxies).
Out of the 40788 $WISE$ sources meeting the photometric quality criteria, 22238 sources were excluded from the data file as contaminants on the basis of the selection criteria of @2014ApJ...791..131K. In the present study we followed and applied the photometric quality criteria for the different $WISE$ bands as given in @2014ApJ...791..131K.

#### $WISE$ three-band classification

We applied this classification scheme to all the sources that were detected in three $WISE$ bands (namely $W1$, $W2$ and $W3$) and that satisfy the photometric quality criteria given in @2014ApJ...791..131K. The YSOs were selected on the basis of their $(W1 - W2)$ and $(W2 - W3)$ colours according to the criteria given by @2014ApJ...791..131K, which are based on the colours of known YSOs in Taurus, extragalactic sources, and Galactic contaminants. This approach efficiently separates out IR excess contaminants such as star forming galaxies, broad-line active galactic nuclei, unresolved shock emission knots, objects that suffer from polycyclic aromatic hydrocarbon (PAH) emission, etc. [see e.g., figure 2 of @2016ApJ...827...96F]. Fig. \[wise\] (top left-hand panel) shows the $(W2 - W3)$ versus $(W1 - W2)$ $WISE$ two-colour diagram (TCD) for all the sources in the region, where candidate YSOs classified as Class[i]{} and Class[ii]{} are shown by red stars and red squares, respectively. It is worthwhile to mention that there could be an overlap between Class[i]{} and Class[ii]{} objects, as can be seen in figure 5 of @2014ApJ...791..131K. However, this is not expected to affect our discussion on the bubbles, as we are using both Class[i]{} and Class[ii]{} YSOs having age $\leq$ 1 Myr to delineate the structure of the bubbles (cf. Section 4.3, Fig. \[gal\]).
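Schematically, the three-band selection reduces to cuts in the $(W1-W2)$ versus $(W2-W3)$ plane. The sketch below shows only the structure of such a classifier; the thresholds are illustrative placeholders, not the exact cuts of @2014ApJ...791..131K, which also involve the photometric errors and dedicated contaminant-rejection criteria.

```python
def classify_wise(w1, w2, w3):
    """Toy three-band WISE colour classifier.

    The numeric thresholds are illustrative placeholders only; the actual
    Koenig & Leisawitz (2014) scheme uses error-dependent cuts and extra
    contaminant filters (AGN, star-forming galaxies, shock knots, PAH).
    """
    c12 = w1 - w2   # (W1 - W2) colour
    c23 = w2 - w3   # (W2 - W3) colour
    if c12 > 1.0 and c23 > 2.0:
        return "Class I"          # very red in both colours: protostar-like
    if c12 > 0.25 and c23 > 1.0:
        return "Class II"         # moderate disk excess
    return "photosphere/contaminant"

# A very red source falls in the Class I region (W1-W2 = 1.5, W2-W3 = 3.0),
# a milder excess in the Class II region (W1-W2 = 0.5, W2-W3 = 1.5).
print(classify_wise(12.0, 10.5, 7.5))
print(classify_wise(12.0, 11.5, 10.0))
```

In practice the cuts are applied only to sources that pass the per-band photometric quality criteria, as described above.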
#### $WISE$ four-band classification

$W4$ photometry has been used to identify candidate transition disk objects and also to retrieve candidate protostars which might have been classified as AGN candidates on the basis of the $WISE$ three-band classification scheme (cf. Sec. 3.1.1.1). Fig. \[wise\] (top right-hand panel) shows the $(W3 - W4)$ versus $(W1 - W2)$ TCD for all the sources in the region, where probable YSOs classified as transition disk sources and protostars are shown by red pentagons and red circles, respectively. The classification discussed in this section yields one transition disk source and four protostars.

#### 2MASS and $WISE$ classification

Since the studied region has highly variable nebulosity, many sources in the region may not be detected at longer wavelengths due to the saturation of the detectors, hence the selection of YSOs on the basis of the $WISE$ band photometry alone may not be complete. Therefore, we use the 2MASS $H$, $K_s$ data along with the $W1$ and $W2$ band $WISE$ data, as some sources might not have been detected in the $W3$ band but may be detected in the 2MASS $H$ and $K_s$ bands [@2014ApJ...791..131K]. Fig. \[wise\] (bottom left-hand panel) shows the $(W1 - W2)$ versus $(H - K)$ TCD for all the sources in the region, where probable YSOs classified as Class[i]{} and Class[ii]{} are shown by red stars and red squares, respectively. All the YSOs identified above have been checked against the possibility of being AGB stars, as discussed by @2014ApJ...791..131K. Finally, on the basis of the $WISE$ data, we classified 154 and 331 sources as probable Class[i]{} and Class[ii]{} YSOs, respectively.

### 2MASS classification

The $(J - H)/(H - K)$ NIR TCD is also a useful tool to identify pre-main sequence (PMS) objects. It is possible that some of the candidate YSOs might not have been detected on the basis of the $WISE$ data, hence we have also used the 2MASS $JHK$ three-band data to identify additional YSOs. Fig.
\[wise\] (bottom right-hand panel) displays the NIR TCD for all the stars in the studied region which have not been detected in the $WISE$ survey or do not meet the photometric quality criterion (cf. Section 3.1.1). The 2MASS magnitudes and colours have been converted into the California Institute of Technology (CIT) system[^7]. The solid and long-dashed lines in Fig. \[wise\] (bottom right-hand panel) represent the unreddened main sequence (MS) and giant branch loci [@1988PASP..100.1134B], respectively. The dotted line indicates the intrinsic loci of CTTSs [@1997AJ....114..288M]. The parallel dashed lines are the reddening vectors drawn from the tip (spectral type M4) of the giant branch (‘left reddening line’), from the base (spectral type A0) of the MS branch (‘middle reddening line’) and from the tip of the intrinsic CTTSs line (‘right reddening line’). The extinction ratios $A_J /A_V$ = 0.265, $A_H /A_V$ = 0.155 and $A_K /A_V$ = 0.090 have been adopted from @1981ApJ...249..481C. The sources lying in the ‘F’ region could be either field stars (MS stars, giants), Class[iii]{} or Class[ii]{} sources with small NIR excesses. The sources lying in the ‘T’ region are considered to be mostly classical T-Tauri stars (CTTSs, i.e., Class[ii]{} objects). The sources lying in the ‘P’ region (redward of the right reddening line) are most likely Class[i]{} objects [protostellar-like objects; @2004ApJ...616.1042O]. In this scheme we consider as YSOs only those sources that lie above the intrinsic loci of CTTSs with margins larger than the errors in their colours. This classification criterion yields 21 and 204 probable Class[i]{} and Class[ii]{} YSOs, respectively.

YSO sample
----------

On the basis of the criteria discussed above, we have compiled a catalog of 710 YSOs in an area of $\sim$6$\times$6 degree$^2$, divided into 175 Class[i]{} and 535 Class[ii]{} YSOs.
A portion of the catalog is shown in Table \[data1\_yso\], which lists the positions of the YSOs, their magnitudes in various bands, and their classification. The complete catalog is available in electronic form only. The optical magnitudes of the nearest optical counterparts for the 176 YSOs which have been found within a match radius of 2 arcsec are also given in Table \[data1\_yso\]. Here, it is worthwhile to mention that no multiple identifications between the optical and IR counterparts of the YSOs were found within 2 arcsec. As assumed in various previous studies, the peak of the observed luminosity function can be considered as the 90% completeness limit of the data. We constructed the luminosity function for each band (see Fig. \[cft-hist\]). The resultant completeness limits of the optical and $WISE$ data are given in Table \[cftT\]. The completeness limit in the $I$ band obtained here agrees well with that obtained in Sec 2.3. As mentioned in Sec 2.2, we assume that the 2MASS $JHK$ data have a completeness of $\sim 90\%$ at the limiting magnitudes of 15.8, 15.1 and 14.3 for the $J, H$ and $K$ bands, respectively. ### Unidentified Class[iii]{} (diskless) sources In the present study we have not attempted to identify diskless YSOs. The Class[iii]{} sources may have a different spatial distribution in the Auriga region as compared to the Class[i]{} and [ii]{} YSOs, and the inclusion of these sources may have an impact on the parameters described in the ensuing sections. It is worthwhile to mention that disk fraction estimates in young clusters having ages $\leq 3$ Myr vary from 60-70% (M $\lesssim$ 2 M$_\odot$) to 35-40% (M$>$2 M$_\odot$), whereas disk half-life estimates vary from 1.3 Myr to 3.5 Myr [@2018MNRAS.477.5191R and references therein].
Since the missing YSO mass due to the incompleteness of our YSO search criteria as well as to unidentified Class[iii]{} sources may play a role in the analysis to be carried out in the ensuing sections, we will assume that 50% of the total YSO population is missed in the present YSO sample. Physical properties of the identified YSOs ------------------------------------------ ### Spectral energy distribution The YSOs can also be characterized from their SEDs. SED fitting provides evolutionary stages and physical parameters such as mass, age, disk mass, disk accretion rate and photospheric temperature of YSOs and hence is an ideal tool to study their evolutionary status. We constructed the SEDs of the YSOs using the grid models and the Python version of the SED fitting tools[^8] of @2006ApJS..167..256R [@2007ApJS..169..328R][^9]. The models were computed by means of 20000 Monte Carlo-based 2-D radiation transfer calculations from @2003ApJ...591.1049W [@2003ApJ...598.1079W; @2004ApJ...617.1177W], adopting several combinations of a central star, a disk, an infalling envelope, and a bipolar cavity in a reasonably large parameter space and with 10 viewing angles (inclinations). The SEDs were constructed by using the multiwavelength data (i.e. optical to MIR) with the condition that a minimum of 5 data points should be available. While fitting the models to the data we treated the extinction and the distance as free parameters. Considering the errors associated with the distance estimates available in the literature, the distance was allowed to vary between 2.0 kpc and 2.4 kpc. Since the extinction in the region is variable [cf. @2008MNRAS.384.1675J; @2013ApJ...764..172P], we used a range for $A_V$ of 1.6 to 30 mag. We further set photometric uncertainties of 10% for the optical and 20% for both the NIR and MIR data. These values are adopted instead of the formal errors in the catalog in order to avoid biases caused by underestimated flux uncertainties.
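The model-selection step applied to these fits (retaining models with $\chi^2 - \chi^2_{min} \leq 2 N_{data}$ and averaging their parameters, as described in the text) can be sketched as follows. This is a minimal illustration, not the actual SED fitting tool; the $\exp(-\chi^2/2)$ weighting is an assumption, since the exact weighting scheme is not specified here.

```python
import numpy as np

def select_well_fits(chi2, params, n_data):
    """Select models satisfying chi2 - chi2_min <= 2*N_data and return the
    weighted mean and weighted standard deviation of one physical parameter.
    The exp(-chi2/2) relative weights are an illustrative assumption."""
    chi2 = np.asarray(chi2, dtype=float)
    params = np.asarray(params, dtype=float)
    mask = chi2 - chi2.min() <= 2.0 * n_data      # well-fit criterion
    good = params[mask]
    w = np.exp(-0.5 * (chi2[mask] - chi2.min()))  # weight relative to best fit
    mean = np.average(good, weights=w)
    std = np.sqrt(np.average((good - mean) ** 2, weights=w))
    return mean, std
```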
In Fig. \[sed\], we show example SEDs of Class[i]{} and Class[ii]{} sources, where the solid black curves represent the best fit and the gray curves are the subsequent well-fitting models satisfying the goodness-of-fit criterion discussed below. Since the SED models are highly degenerate, the best-fit model is unlikely to give a unique solution, and the estimated physical parameters of the YSOs tabulated in Table \[data3\_yso\] are therefore the weighted means, with standard deviations, of the physical parameters obtained from the models that satisfy $\chi^2 - \chi^2_{min} \leq 2 N_{data}$, where $\chi^2_{min}$ is the goodness-of-fit parameter of the best-fit model and $N_{data}$ is the number of input data points. Discussion ========== Physical conditions in the Auriga region ---------------------------------------- In this Section, we present a brief description of the identified YSOs, the ionized gas and the molecular clouds, along with their mutual correlations. The region contains five Sharpless H[ii]{} regions, Sh2-231 to Sh2-235. In addition, it has been reported that there are about 14 embedded SFRs having ages $\sim$3-5 Myr [cf. @2012AJ....143...75K and references therein]. It has also been proposed that the formation of these young objects could have been triggered by an older generation of stars [@2012AJ....143...75K and references therein]. ### Characteristics of the identified YSOs in the Auriga region We have identified 710 YSOs in the Auriga Bubble region (cf. Table \[data1\_yso\]) and derived the physical parameters of 489 YSOs from the SED fitting analysis (cf. Table \[data3\_yso\]). These parameters were used in the further analysis. Histograms of the ages and masses of these YSOs are shown in Fig. \[histogram\]. The distribution of the ages estimated on the basis of the SEDs indicates that $\sim$76% (370/489) of the sources have ages $\le $ 3.5 Myr.
The masses of the YSOs are found to range from 0.75 to 9 M$_\odot$, and a majority ($\sim$86%) of them are in the range of 1.0 to 3.5 M$_\odot$. The $A_V$ distribution shows a long tail indicating a large spread, from $A_V$ = 1 to 27 mag, which is consistent with the nebulous nature of this region. The average age, mass and extinction ($A_V$) for this sample of YSOs are $2.5\pm1.7$ Myr, $2.4\pm1.1$ M$_\odot$ and $7.1\pm4.1$ mag, respectively. The evolutionary classes of the 710 identified YSOs given in Table \[data1\_yso\] reveal that $\sim$25% (175 out of 710) of the sources are Class[i]{} YSOs. This comparatively high percentage of Class[i]{} YSOs indicates the youth of the region. ### The distribution of gas, dust and YSOs Details on the environment and the distribution of YSOs can be used to probe the star formation scenario in the region. In Fig. \[Fiso\] (top left-hand panel), the H[i]{} contours (black contours) and the $^{12}$CO contour map (cyan contours) from @2001ApJ...547..792D, along with the distribution of the YSOs, are overlaid on the $WISE$ 12 $\mu$m image. The $WISE$ 12 $\mu$m band covers the prominent PAH feature at 11.3 $\mu$m, which is indicative of star formation activity [see e.g. @2004ApJ...613..986P]. This figure reveals that the ionized and molecular gas distributions are well correlated with that of the YSOs. The distribution of ionized gas, molecular gas and YSOs indicates a ring-like structure spread over an area of a few degrees. The distribution of YSOs also reveals that a majority of the Class[i]{} sources generally belong to this ring-like structure, whereas the comparatively older population, i.e. the Class[ii]{} objects, is rather randomly distributed throughout the region. As stated earlier, we term this structure Auriga Bubble1.
Furthermore, there is another, very well defined distribution of Class[i]{} and Class[ii]{} objects towards the north-west of the H[ii]{} complex G173+1.5, forming another bubble feature, which we call Auriga Bubble2. Its nature will be discussed in an ensuing study. ### Extinction and YSO surface density maps To quantify the extinction in the region and to characterize the structure of the molecular gas associated with the various SFRs in the Auriga Bubble, we derived $A_K$ extinction maps using the $(H - K)$ colours of field stars [cf. @2011ApJ...739...84G]. To produce the extinction map we excluded from the sample stars the candidate YSOs (cf. Section 3.2) and probable contaminating sources (AGNs, AGB stars and star-forming galaxies) using the procedure of @2014ApJ...791..131K. Similar approaches have been used in other studies as well [e.g., @2008ApJ...675..491A; @2009ApJS..184...18G; @2013MNRAS.432.3445J; @2016ApJ...822...49J; @2016AJ....151..126S and references therein]. Mean values of $A_K$ were derived by using the nearest neighbor (NN) method as described in detail by @2005ApJ...632..397G and @2009ApJS..184...18G. Briefly, the mean value of the $(H - K)$ colours of the five nearest stars at each position in a grid of 30 arcsec was calculated for the entire Auriga region ($\sim6^\circ\times6^\circ$). Sources deviating by more than 3$\sigma$ were excluded when calculating the final mean colour of each grid cell. The reddening law $A_K$ = 1.82 $\times$ ($(H - K)_{obs}- (H - K)_{int}$) of @2007ApJ...663.1069F was used to convert the $(H -K)$ colours into $A_K$, where $(H - K)_{int}$ = 0.2 was assumed as the average intrinsic colour of the field stars [see @2008ApJ...675..491A; @2009ApJS..184...18G]. To eliminate the foreground contribution in generating the extinction map we used only those stars which have $A_K >$ 0.15$\times$D, where D is the distance in kpc [@2005ApJ...619..931I]. The extinction map is sensitive up to $A_K\sim$2.8 mag (=$A_V\sim$30 mag).
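The NN-based extinction mapping just described (mean $(H - K)$ of the five nearest field stars per grid point, 3$\sigma$ clipping, and $A_K$ = 1.82$\times$($(H - K)_{obs}$ $-$ 0.2)) can be sketched as follows; the function name and the brute-force neighbour search are illustrative only, and small-angle Cartesian coordinates are assumed.

```python
import numpy as np

def ak_map(star_x, star_y, hk, grid_x, grid_y, n_nn=5, hk_int=0.2):
    """A_K at each grid point from the mean (H-K) colour of the n_nn nearest
    field stars, with 3-sigma clipping, using A_K = 1.82*((H-K)obs - hk_int)."""
    sx = np.asarray(star_x, float)
    sy = np.asarray(star_y, float)
    colors = np.asarray(hk, float)
    out = np.empty((len(grid_y), len(grid_x)))
    for j, gy in enumerate(grid_y):
        for i, gx in enumerate(grid_x):
            d2 = (sx - gx) ** 2 + (sy - gy) ** 2
            near = colors[np.argsort(d2)[:n_nn]]       # n_nn nearest stars
            keep = np.abs(near - near.mean()) <= 3 * near.std()
            out[j, i] = 1.82 * (near[keep].mean() - hk_int)
    return out
```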
However, the derived $A_K$ values are to be considered as lower limits, because sources in the region with higher extinction might not have been detected in the present sample. The extinction map, smoothed to a resolution of 18 arcmin, is shown in blue colour in Fig. \[Fiso\] (top right-hand panel). It is interesting to note that the extinction map resembles the general distribution of the molecular gas as outlined by the $^{12}$CO emission map shown in Fig. \[Fiso\] (top left-hand panel). It shows a concentration of molecular clouds towards Sh2-231 to Sh2-235. A comparison of the stellar density distribution and the morphology of the molecular material can provide a clue to the history of star formation in the region. The surface density maps of the YSOs were generated by using the NN method as described by @2005ApJ...632..397G. We used the radial distance that contains the 5 nearest YSOs to compute the local surface density in a grid of 30 arcsec. A grid size identical to that of the extinction map was used so that the stellar density and the gas column density could be compared. The density distribution of the YSOs is shown in red colour in Fig. \[Fiso\] (top right-hand panel). The distribution of the YSOs and that of the molecular material show a close correspondence. @2011ApJ...739...84G also found similar trends in eight nearby molecular clouds. @2008ApJ...674..336G have shown that the sources in each of the Class[i]{} and Class[ii]{} evolutionary stages have very different spatial distributions relative to that of the dense gas in their natal cloud. We have also compared the extinction map with the positions of the YSOs of different evolutionary stages and found that the Class[i]{} sources are located towards regions of higher extinction.
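The NN surface density estimate described above (the density implied by the radius enclosing the 5 nearest YSOs) reduces to a one-line formula per grid point; `nn_density` below is a hypothetical helper sketching it under the same small-angle Cartesian assumption.

```python
import numpy as np

def nn_density(star_x, star_y, gx, gy, n=5):
    """Local YSO surface density at a grid point (gx, gy): n / (pi * r_n^2),
    where r_n is the distance to the n-th nearest YSO."""
    d = np.sqrt((np.asarray(star_x, float) - gx) ** 2 +
                (np.asarray(star_y, float) - gy) ** 2)
    rn = np.sort(d)[n - 1]            # radius enclosing the n nearest YSOs
    return n / (np.pi * rn ** 2)
```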
These properties agree well with previous findings in several star-forming regions such as W5, i.e., the younger Class[i]{} sources are more clustered and closely associated with the densest molecular clouds in which they were presumably born, while the Class[ii]{} sources are more scattered, probably having drifted away from their birthplaces. Clustered population in the Auriga Bubble ----------------------------------------- ### Extraction of sub-clusters and the distribution of the scattered YSO population Many ground-based NIR surveys of molecular clouds [e.g., @1991ApJ...368..432L; @1993ApJ...412..233S; @2000ApJS..130..381C; @2003AJ....126.1916P; @2003ARAA..41...57L] have shown that molecular clouds host both a dense ‘clustered’ and a diffuse ‘distributed’ population. As discussed earlier, the Auriga Bubble region contains several star-forming subregions, hence a clustered distribution of YSOs is expected. @2009ApJS..184...18G have used an empirical method based on the minimum spanning tree (MST) technique to isolate groupings (sub-clusters) from the more diffuse distribution of YSOs in nebulous regions. This method effectively isolates sub-structures without any type of smoothing. The sub-groups detected in this way have no biases regarding the shapes of the distribution and preserve the underlying geometry of the distribution [@2009ApJS..184...18G]. In Fig. \[Fiso\] (bottom left-hand panel) we plot the derived MSTs for the YSOs in the region. In order to isolate the sub-structures, we adopted a surface density threshold expressed by a critical branch length. With the help of an adopted MST branch length threshold we can identify local surface density enhancements. To do so, we used an approach similar to that suggested by @2009ApJS..184...18G. In Fig.
\[Fcdf\], we plot the cumulative distribution of the MST branch lengths, which shows a three-segment curve: a steep segment at short lengths, a transition segment at intermediate lengths, and a shallow-sloped segment at long lengths. The majority of the sources are found in the steep segment, where the lengths are small (i.e., in sub-clusters). Therefore, to isolate the sub-cluster regions in the Auriga Bubble, we fitted two straight lines to the shallow and steep segments of the cumulative distribution function (CDF) and extended them to their intersection. This intersection is adopted as the MST critical branch length, as shown in Fig. \[Fcdf\] [see also, @2009ApJS..184...18G]. The sub-clusters of the SFRs were then isolated from the lower density distribution by clipping the MST branches longer than the critical length described above. Similarly, we defined the extended area of each SFR by selecting the point where the curved transition segment meets the shallow-sloped segment at longer spacings. This range represents the extended region of star formation, or the area into which YSOs might have moved away from the sub-clusters due to dynamical evolution, and we have named this region the active region. The black dots connected with black lines and the blue dots connected with blue lines in Fig. \[Fiso\] (bottom left-hand panel) are the branches shorter than the critical length for the sub-clusters and the active regions, respectively. We have also plotted the convex hulls [cf. @2009ApJS..184...18G] of the active regions in Fig. \[Fiso\] (bottom left-hand panel) with solid purple lines. The physical details of the sub-groups (sub-clusters) and the active regions are given in Tables \[Tp1\], \[Tp2\] and \[Tp3\]. In total, we have identified 9 active regions and 26 probable sub-clusters having at least 5 YSO members in the Auriga Bubble region.
Sub-clusters with small numbers of members have been reported in previous studies, e.g., @2009ApJS..184...18G [N=10], @2004MNRAS.348..589C [N=7-10] and @2016AJ....151..126S [N=10]; however, it is worthwhile to mention that the small number of YSOs in some of the probable sub-clusters may introduce relatively large errors in the derived physical parameters discussed in the ensuing sub-sections. The median values of the critical branch lengths for the sub-clusters and the active regions are 2 pc and 9 pc, respectively. In Fig. \[Fiso\] (bottom right-hand panel) we can see a correlation among the identified active regions, the extinction contours and the YSO locations in the Auriga Bubble. In many SFRs the YSOs are observed to have both diffuse and clustered spatial distributions. For example, @2008ApJ...688.1142K analyzed the clustering properties across the W5 region and found that 40-70% of the YSOs belong to groups with $\geq$10 members, while the remainder were described as a scattered population. Using the AllWISE database, @2016ApJ...827...96F identified 479 YSOs in a $10\times10$ degree$^2$ region centered on the Canis Major star-forming region. Their YSO sample contains 144 and 335 Class[i]{} and Class[ii]{} YSOs, respectively. On the basis of the MST of the YSO distribution, @2016ApJ...827...96F concluded that there were 16 groups with more than four members; of the 479 YSOs, 53% are in such groups. @2009ApJS..184...18G presented a uniform MIR imaging and photometric survey of 36 nearby young clusters and groups using Spitzer IRAC and MIPS. They found 39 clusters/sub-clusters with 10 or more YSO members. Of the 2548 YSOs identified, 1573 (62%) are members of one of these clusters/sub-clusters. Although the sub-clusters in the Auriga Bubble region have sizes of the order of a few parsecs, some of the member stars might have moved away during the last few Myr due to dynamical and environmental effects [@2006MNRAS.369..143M].
@2011MNRAS.410.1861W used N-body calculations to study the numbers and properties of stars escaping from young embedded star clusters during the first 5 Myr of their existence, prior to the removal of gas from the system. They found that these clusters can lose a substantial fraction (up to 20%) of their stars within 5 Myr. In the present sample, YSOs formed in the sub-clusters with a mean velocity of $\sim$2 km s$^{-1}$ [cf. @2011MNRAS.410.1861W] can travel a distance of $\sim$2-6 pc within 1-3 Myr of their formation. Therefore, we expect that the effect of escaping members on the sub-clusters/active regions should be insignificant. We have estimated that the fraction of the scattered YSO population (the YSOs outside the sub-clusters, but within the active regions) is $\sim$37% of the total YSOs in the whole of the active regions. Similar fractions ($\sim$40%) have also been reported in previous studies, in the case of 8 bright-rimmed clouds [@2016AJ....151..126S] and 5 embedded clusters [@2014MNRAS.439.3719C]. Possible explanations for the scattered population include the escape of sub-cluster/cluster members due to dynamical interactions, and isolated star formation [for details, @1997ApJ...480..235E; @2003ARAA..41...57L; @2008ApJ...688.1142K; @2009ApJS..181..321E; @2014ApJ...787L..15E; @2014MNRAS.439.3719C; @2019AJ....157..112P]. ### Sub-cluster morphology and structural parameters Known SFRs show a wide range of sizes, morphologies and numbers of stars [cf. @2008ApJ...674..336G; @2009ApJS..184...18G; @2011ApJ...739...84G; @2014MNRAS.439.3719C]. We use the clusters’ convex hull radii ($R_H$) and aspect ratios to investigate their morphology (see Table \[Tp2\] and Fig. \[Fhull\]). Here we would like to point out that the present procedure is applied to a sample which contains only Class[i]{} and Class[ii]{} YSOs and does not include Class[iii]{} YSOs. A similar approach has been adopted in previous studies as well [@2009ApJS..184...18G; @2016AJ....151..126S].
However, as discussed in Section 3.2.1, the contribution of diskless YSOs (i.e. Class[iii]{} sources) in star-forming regions with ages of 2-3 Myr may be about 50% of the total YSO population. This missing population may play a significant role in the estimation of the various parameters discussed in the ensuing sections. Hence we also estimate the parameters by assuming a contribution of Class[iii]{} sources of 50% of the total YSO population. A majority of the sub-clusters identified in the present sample show an elongated morphology, with a median aspect ratio of around 1.6. The median numbers of YSOs in the sub-clusters and the active regions are 9 and 38, respectively (cf. Table \[Tp3\]). The median MST branch length for these sub-clusters is found to be $\sim$0.5 pc. The total number of YSOs in the active regions is 546, out of which 345 (63%) fall in the sub-clusters. The YSOs in the Auriga Bubble have mean surface densities mostly between 0.3 and 7.5 pc$^{-2}$ (see Table \[Tp1\] and Fig. \[Fdensity\]). The median values of the surface densities for the sub-clusters and the active regions come out to be around 1.35 pc$^{-2}$ and 0.1 pc$^{-2}$, respectively. The peak surface densities vary between $\sim$ 0.5 - 18 pc$^{-2}$ with a median value of 4 pc$^{-2}$ for our sample (cf. Table \[Tp1\] and Fig. \[Fdensity\]). As in the case of the low-mass embedded clusters studied by @2014MNRAS.439.3719C, we found a weak proportionality between the peak surface density and the number of cluster members, suggesting that the clusters are better characterized by their peak YSO surface density. The spatial distribution of YSOs in an SFR can be investigated with the help of the structural $Q$ parameter. It is used to measure the level of hierarchical versus radial distributions of a set of points, and it is defined as the ratio of the normalized mean MST branch length to the normalized mean separation between the points [cf. @2014MNRAS.439.3719C for details].
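A minimal sketch of the $Q$ parameter just defined (mean MST branch length over mean pairwise separation, each normalized) is given below. The specific normalizations used here, by $\sqrt{\pi R^2/n}$ for the branch lengths and by the cluster radius $R$ for the separations, are one common convention and an assumption of this sketch; see @2004MNRAS.348..589C for the exact definitions.

```python
import numpy as np

def q_parameter(x, y):
    """Structural Q for a set of points: normalized mean MST branch length
    divided by normalized mean pairwise separation. Assumes n >= 3 points
    in a non-degenerate configuration."""
    pts = np.column_stack([x, y]).astype(float)
    n = len(pts)
    # all pairwise distances
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    # Prim's algorithm: grow the MST one cheapest connecting edge at a time
    in_tree = np.zeros(n, bool)
    in_tree[0] = True
    mst_edges = []
    for _ in range(n - 1):
        sub, rest = np.where(in_tree)[0], np.where(~in_tree)[0]
        block = d[np.ix_(sub, rest)]
        i, j = np.unravel_index(block.argmin(), block.shape)
        mst_edges.append(block[i, j])
        in_tree[rest[j]] = True
    # normalizations (assumed convention): circular-area scale and radius R
    centroid = pts.mean(0)
    R = np.sqrt(((pts - centroid) ** 2).sum(1)).max()
    m_bar = np.mean(mst_edges) / np.sqrt(np.pi * R * R / n)
    s_bar = d[np.triu_indices(n, 1)].mean() / R
    return m_bar / s_bar
```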
If normalized values are used, the $Q$ parameter becomes independent of the cluster size. A group of points distributed radially will have a high $Q$ value ($Q$ $>$ 0.8), while clusters with a more fractal distribution will have a low $Q$ value ($Q$ $<$ 0.8) [@2004MNRAS.348..589C]. We find that the groups of YSOs in the present study (including only Class[i]{} and [ii]{} sources) have median $Q$ values of less than 0.8 (0.71 in the sub-clusters and 0.52 in the active regions, cf. Tables \[Tp2\] and \[Tp3\]), indicating a more fractal distribution. @2014MNRAS.439.3719C found a weak trend in the distribution of $Q$ values with the number of members, suggesting a higher occurrence of sub-cluster merging in the most massive clusters, which reduces the $Q$ value. A similar trend can be noticed in Fig. \[Fq\] (left-hand panel). ### Associated molecular material, stellar mass and Jeans length The mean $A_K$ values for the identified sub-clusters are found to lie in the range of 0.6 to 1.3 mag, with a median value of 0.9 mag (cf. Table \[Tp3\] and Fig. \[Fak\]). A weak correlation (Spearman’s correlation coefficient ‘r’ = 0.6 with a 95% confidence interval of 0.3 to 0.8) between the peak $A_K$ and the number of cluster members can be noticed in Fig. \[Fak\]. The median $A_K$ value for the active regions is 0.6 mag, which is lower than the sub-cluster value, naturally indicating that denser YSO distributions are found towards denser molecular clouds. The observed correlation indicates that active regions/sub-clusters with larger numbers of YSOs have higher peak $A_K$ values. Fig. \[Fmass\] (left-hand panel) suggests that larger numbers of YSOs in active regions/sub-clusters are associated with more massive clouds; we presume that massive clouds have higher peak $A_K$ values. The extinction maps generated in §4.2.1 have been used to estimate the molecular mass associated with the identified sub-clusters/active regions.
The $A_V$ value (corrected for the foreground extinction) in each 30 arcsec grid cell was converted to an $H_2$ column density by using the relation given by @1978ApJS...37..407D and @1989ApJ...345..245C, i.e., $\rm N(H_2) = 1.25\times10^{21}\times A_V ~cm^{-2}mag^{-1}$. The $H_2$ column density was integrated over the convex hull of each region and multiplied by the $H_2$ molecular mass to obtain the cloud mass. The extinction law $A_K/A_V=0.09$ [@1981ApJ...249..481C] has been used to convert the $A_K$ values to $A_V$. The foreground contributions have been corrected for by using the relation $A_{K_{foreground}}$ = 0.15$\times$D [@2005ApJ...619..931I D is the distance in kpc]. The properties of the molecular clouds associated with the sub-clusters and active regions are listed in Table \[Tp2\]. The molecular gas associated with the sub-clusters in the present sample shows a wide range in the mass distribution ($\sim$5.8 to 7621 M$_\odot$), with a median value of around $\sim$146 M$_\odot$. The SED analysis allowed us to estimate the masses of 489 YSOs (cf. Section 3.3.1), while for the other 221 YSOs the available data points are not sufficient to properly constrain the stellar parameters. Hence, the total mass of all the candidate YSOs in the sub-clusters/active regions has been estimated by multiplying the total number of YSOs (i.e., 710) by the average SED mass (2.4 M$_\odot$, as discussed in Section 4.1). An analysis of the YSO spacings in the sub-clusters of 36 star-forming clusters by @2009ApJS..184...18G suggested that Jeans fragmentation is a starting point for understanding the primordial structure in SFRs. In order to investigate the fragmentation scale, we estimated the minimum radius required for the gravitational collapse of a homogeneous isothermal sphere (the Jeans length ‘$\lambda_J$’) by using the formula given in @2014MNRAS.439.3719C.
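The $A_V$-to-cloud-mass conversion described above can be sketched as follows. The sketch sums $\rm N(H_2) = 1.25\times10^{21}\, A_V$ over the pixels of a hull and multiplies by the $H_2$ mass; the pixel area is left as an input, and no helium correction is applied (none is mentioned in the text).

```python
import numpy as np

def cloud_mass(av_grid, pixel_area_cm2):
    """Cloud mass in solar masses from a grid of (foreground-corrected) A_V
    values, using N(H2) = 1.25e21 * A_V cm^-2 mag^-1 summed over pixels."""
    M_H2_G = 2 * 1.6735575e-24        # grams per H2 molecule (2 x m_H)
    MSUN_G = 1.989e33                 # grams per solar mass
    n_h2 = 1.25e21 * np.asarray(av_grid, float)   # column density per pixel
    return n_h2.sum() * pixel_area_cm2 * M_H2_G / MSUN_G
```

In practice `av_grid` would contain only the grid cells falling inside the convex hull of a sub-cluster or active region, with `pixel_area_cm2` the physical area of one 30 arcsec cell at the adopted distance.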
The Jeans length $\lambda_J$ estimated for the sub-clusters in the current study lies between 0.8 and 3.3 pc, both for the sample containing only Class[i]{} and Class[ii]{} sources and for the sample including the assumed missing mass of Class[iii]{} sources. The contribution of the missing mass of Class[iii]{} sources has been accounted for by assuming a disk fraction of 50%; the resulting number of missed stars was multiplied by the average SED mass (2.4 M$_\odot$, as discussed in Section 4.1) to obtain the contribution of the unidentified Class[iii]{} sources. The estimated values of the Jeans length ‘$\lambda_J$’ are given in Tables 9 and 10. The median value of ‘$\lambda_J$’ is estimated as $\sim$2 pc for both samples. We also compared $\lambda_J$ with the mean separation ‘$S_{YSO}$’ between the cluster members (Fig. \[Fq\], right-hand panel) and found that the ratio $\lambda_J/S_{YSO}$ has average values of $3.4\pm0.9$ and $3.2\pm0.8$ for the two samples, respectively. @2014MNRAS.439.3719C reported a ratio of $4.3 \pm 1.5$ for their sample of embedded clusters. The present results agree with non-thermally driven fragmentation, since it takes place at scales smaller than the Jeans length [@2014MNRAS.439.3719C]. @2010ApJ...724..687L have reported that the number of YSOs in a cluster is linearly related to the dense cloud mass M$_{0.8}$ (the mass above a column density equivalent to $A_K \sim$ 0.8 mag) with a slope equal to unity. Recently, @2014MNRAS.439.3719C have also found a similar relation for a sample of embedded clusters. This suggests that the star formation rate depends linearly on the mass of the dense cloud [@2010ApJ...724..687L]. We also estimated M$_{0.8}$ (cf. Table \[Tp2\]), accounting only for the Class[i]{} and [ii]{} objects, and found that the number of YSOs in a sub-cluster or active region is linearly correlated with the dense cloud mass, with a Spearman’s correlation coefficient ‘r’ = 0.8 and a 95% confidence interval of 0.6 to 0.9 (cf.
Fig. \[Fmass\], left-hand panel). The right-hand panel of Fig. \[Fmass\] shows the hull radius of the sub-clusters/active regions as a function of the total YSO mass, which shows that the hull radius is linearly correlated (r = 0.8) with the total YSO mass. The radius versus mass relation gives a clue as to whether a cluster will be bound or unbound. The radius limit of a group moving in the Galactic tidal field is defined as the distance from the center of the group at which the attraction of a given star towards the cluster is balanced by the tidal force of external masses [@1969SvA....12..625K]. The limiting radius $r_{lim}$ for a group that is moving in an elliptical orbit around the Galactic center is given by the relation [@1962AJ.....67..471K]: $$r_{lim} = R_p \Big(\frac{M_*}{3.5 M_G}\Big)^{1/3}$$ where $R_p$ is the perigalactic distance of the group, $M_*$ is the mass of the group and $M_G$ is the mass of the Galaxy. Assuming the $R_p$ of the Sun to be 8.5 kpc and $M_G \sim 2\times10^{11} M_\odot$, the limiting radius $r_{lim}$ as a function of the mass of the cluster is shown as the continuous curve in Fig. \[Fmass\] (right-hand panel). This figure indicates that the radii of all the sub-clusters are below the limiting radius, which suggests that all the sub-clusters will be bound systems, whereas all the active regions may be unbound systems. ### Star formation efficiency The star formation efficiency (SFE), defined as the percentage of the gas mass converted into stars, is an important parameter for determining whether a cluster will remain a bound or unbound system. Several studies have suggested that the formation of a bound system requires an SFE of $\geq$$\sim$ 50% when the gas dispersal is rapid, or $\geq$$\sim$ 30% when the gas dispersal time is $\sim$3 Myr [cf. @1983MNRAS.203.1011E; @1984ApJ...285..141L]. @1984ApJ...285..141L also concluded that in the case of slow dispersal a lower SFE of $\sim$ 15% may produce a bound system.
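The limiting-radius relation $r_{lim} = R_p (M_*/3.5 M_G)^{1/3}$ quoted above can be evaluated directly; a minimal sketch, with the adopted $R_p$ = 8.5 kpc and $M_G \sim 2\times10^{11}$ M$_\odot$ as defaults:

```python
def limiting_radius_pc(m_star, r_p_kpc=8.5, m_gal=2.0e11):
    """King (1962) tidal limiting radius r_lim = R_p * (M*/(3.5*M_G))^(1/3),
    returned in pc for a group mass m_star in solar masses."""
    return r_p_kpc * 1e3 * (m_star / (3.5 * m_gal)) ** (1.0 / 3.0)
```

For a sub-cluster of order 10^2 M$_\odot$ this gives a limiting radius of a few pc, consistent with the comparison made against the hull radii in Fig. \[Fmass\] (right-hand panel).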
The observed surface density of the YSOs in the sub-clusters and active regions provides an opportunity to study how this quantity is related to the observed SFE and other properties of the associated molecular cloud. Recent works indicate that the SFE increases with the stellar density; e.g., @2009ApJS..181..321E reported that YSO clusters of higher surface density have a higher SFE (30%) than their lower density surroundings (3%-6%). Similarly, @2008ApJ...688.1142K found SFEs of $>$10%-17% for high surface density clusters, whereas in lower density regions the SFEs are found to be $\sim$ 3%. We have calculated the SFE by using the cloud mass derived from $A_K$ inside the cluster convex hull areas and the number of YSOs found in the same areas [see also @2008ApJ...688.1142K]. [ The total mass of the stellar content was estimated as discussed in Section 4.2.3. It is found that for the sub-cluster regions the SFEs vary between 5% and 20% with an average of $\sim 10.2\pm 1.2$%. These SFE estimates must be considered as lower limits for these regions, as we are considering a stellar content composed of Class[i]{} and Class[ii]{} sources only. The inclusion of Class[iii]{} sources would increase the SFE values.]{} In the case of embedded clusters, @2014MNRAS.439.3719C obtained SFEs of 3-45% with an average of 20%. The SFE distribution as a function of the number of cluster members and of the mean surface density of each of our regions is shown in Fig. \[Fsfe\]. The SFE seems to be anti-correlated with the number of cluster members (Spearman’s correlation coefficient ‘r’ = -0.7 with a 95% confidence interval of -0.5 to -0.8) in the sense that the regions associated with high SFEs have smaller numbers of closely packed stars. The right-hand panel of Fig.
\[Fsfe\] also indicates a tight correlation between the SFE and the mean surface density (Spearman’s correlation coefficient ‘r’ = 0.9 with a 95% confidence interval of 0.8 to 1.0), in the sense that the regions having higher YSO densities exhibit higher SFEs. The left-hand panel of Fig. \[Fsfe\], however, indicates that the regions associated with high SFEs have smaller numbers of stars. The probable explanation for regions having a smaller number of stars but a higher SFE could be that these regions are closely packed, as revealed in Fig. \[Fdensity\]. Star formation history in the Auriga Bubble region: a series of triggered star formation ---------------------------------------------------------------------------------------- @2012AJ....143...75K estimated that the shell is expanding with a velocity of $\sim$55 km s$^{-1}$ and that the kinetic energy of the shell is $\sim2.5\times10^{50}$ ergs. They also detected hard X-ray emitting hot gas inside the shell, with a thermal energy of $\sim3\times10^{50}$ ergs. These authors discussed two possibilities for the formation of this shell, i.e., stellar winds from OB stars and a SN explosion. However, the wind energy of the eight O stars found in the shell cannot explain the large kinetic energy and the hard X-ray emitting hot gas, and so they concluded that a SN created the shell. As for the origin of the H[ii]{} regions (viz. $Sh2~231-235$ and $Sh2~237$) within the boundary of the H[i]{}/continuum structure, @2012AJ....143...75K discussed that these could have been triggered either by SN explosions, stellar winds, or expanding H[ii]{} regions driven by a previous generation of stars. However, since the ages of the H[ii]{} regions are of the order of a few Myr [see e.g., @2008MNRAS.384.1675J; @2013ApJ...764..172P] and the estimated age of the hot shell is only $\sim$0.33 Myr, it is not possible that the current expanding shell triggered the formation of these H[ii]{} regions.
They speculated that the first generation massive stars in the stellar association, to which the SN progenitor belonged, could have triggered the formation of the OB stars currently exciting the H[ii]{} regions. Along similar lines, we propose the following star formation scenario for the Auriga Bubble1 region. The H[ii]{} complex has several O9 type stars [cf. Table 6, @2012AJ....143...75K]. If one or more of the first generation O9 type stars in the association formed an expanding H[ii]{} region with a large shell around it, then after $\sim$ 8 Myr [the MS lifetime of a 20 M$_\odot$ O9 type star; @2005fost.book.....S] the dense material collected around it might have collapsed to form the second generation OB stars [the collect and collapse mechanism advocated by @1977ApJ...214..725E], which are exciting the current H[ii]{} regions (i.e., $Sh2~231-235$ and $Sh2~237$) around the shell and their associated clusters. On a somewhat similar timescale, a massive star of the first generation exploded as a SN, forming the shell/Auriga Bubble1. Fig. \[Fiso\] shows that the majority of the Class[i]{} and Class[ii]{} YSOs identified in the present study are located mostly along the boundary of Bubble1 and Bubble2. Their distribution is well correlated with that of the ionized and molecular gas as well as of the PAHs around the H[ii]{} complex G173+1.5. It is interesting to note that @2008MNRAS.384.1675J noticed several OB stars around the cluster Stock 8 (associated with Sh2 234) and inferred that these may belong to the group of the first generation in the region. They argued that the OB stars within Stock 8 have ages of 2-3 Myr and were formed by the action of these first generation stars. 
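The timescales invoked in this scenario can be laid out as a simple consistency check (a minimal sketch: the input numbers are those quoted in the text, while measuring all epochs backwards from the present day is our illustrative bookkeeping convention):

```python
# Epochs in the proposed triggered star formation sequence, in Myr
# before the present. Input numbers are taken from the text; the
# bookkeeping itself is only an illustrative sketch.

MS_LIFETIME_O9 = 8.0   # MS lifetime of a ~20 Msun O9-type star
AGE_HII = 3.0          # typical age of the current HII regions
AGE_SHELL = 0.33       # age of the expanding shell (Bubble1)

# First generation forms, sweeps up a shell during its ~8 Myr MS life,
# whose collapse produces the second generation ~3 Myr ago.
first_gen_formed = MS_LIFETIME_O9 + AGE_HII   # ~11 Myr ago
second_gen_formed = AGE_HII                   # ~3 Myr ago
sn_exploded = AGE_SHELL                       # ~0.33 Myr ago

# The shell is far younger than the HII regions, so it cannot have
# triggered them -- the point made in the text.
assert sn_exploded < second_gen_formed < first_gen_formed
```

The ordering check is the whole content of the argument: any scenario in which the SN post-dates the formation of the second generation is compatible with the observed ages.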
They also noticed a strange ‘nebulous stream’ towards the east of Stock 8, which has a group of very young YSOs ($\sim$1 Myr) around it, younger than the stars in Stock 8, and argued that the star formation in the nebulous stream region is independent, these very young YSOs belonging to a different generation from those in the Stock 8 cluster. They further inferred that these YSOs might have formed in the remaining clouds due to compression by a shock front from the north. Here it is worthwhile to mention that the nebulous stream is located on the Shell/Bubble1 and that north is the direction toward the center of the Shell/Bubble1 [see Fig. 5 of @2012AJ....143...75K]. In addition, a $^{12}$CO cloud is found just to the south of the stream [see Fig. 22 of @2008MNRAS.384.1675J]. Based on these facts, we propose that most of the youngest YSOs (having ages $\lesssim$1.0 Myr) identified in the present study, as well as those in the nebulous stream, are of different origin from the somewhat evolved YSO population located in and around the H[ii]{} regions and that they may be a SN-triggered population. Fig. \[gal\] (left-hand panel) shows the distribution of the YSOs having ages less than 1.0 Myr. The distribution shows that they are located at the periphery of the Bubbles. Keeping in mind the errors in the age estimation and the distribution of the youngest YSOs along the boundary of the H[i]{} and continuum emission, we propose that the expanding Auriga Bubble1 might have compressed the low density molecular material or pre-existing clouds to form a dense shell, which became gravitationally unstable and gave birth to a new generation of stars. Thus the region seems to have a complicated star formation history. It seems that the massive stars of the first generation have mostly completed their lives. One of them exploded as a SN and created Bubble1. 
The current ionizing sources of the H[ii]{} regions located at its periphery (viz. $Sh2~231-235$ and $Sh2~237$) are the result of triggered star formation through the collect and collapse process driven by the first generation population. The more or less evolved YSOs around the H[ii]{} regions are probably stars of the third generation, formed by the various star formation activities associated with these H[ii]{} regions. However, the majority of the younger YSOs (having ages $\lesssim$1.0 Myr) located at the periphery of Bubble1 may be of different origin from this YSO population, constituting another group of the third generation, presumably formed by SN-induced compression. The size of the Bubble is quite large (diameter $\sim$100 pc) and it could be one of the largest SNR-driven bubbles in the Galaxy. @2008ApJ...683..178K have also reported a SNR-driven shell in the extreme outer Galaxy with an extent of $\sim$100 pc, which may also have triggered star formation and which has survived for more than 3 Myr. The distribution of the YSOs associated with the Bubble1 and 2 region is shown in Fig. \[gal\] (right-hand panel) on the $l - z$ plane. It is interesting to note that a significant number of YSOs are located above the Galactic mid-plane. The center of Bubble1 appears to be $\sim$50 pc above the Galactic mid-plane. This seems to be consistent with the warping of the Galactic plane around $l\sim170^\circ - 173^\circ$ toward the northern side. The distribution of the blue plume population around $l\sim170^\circ$ in the Norma-Cygnus arm also indicates warping towards the north [@2006MNRAS.373..255P]. The distribution of the large-scale molecular gas [@2006PASJ...58..847N] also reveals the northward warp. The model-based distribution of integrated star light [@2001ApJ...556..181D] and of red-clump stars also supports the northward warping of the Galactic plane. 
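A $\sim$50 pc offset above the mid-plane is essentially projection geometry, $z = d\,\sin b$; a minimal sketch, where $b \approx +1.5^\circ$ is read off the designation G173+1.5 and the $\sim$2 kpc distance is an assumed, illustrative value:

```python
import math

b_deg = 1.5     # Galactic latitude of the G173+1.5 complex
d_pc = 2000.0   # assumed, illustrative distance to the region (pc)

# Height above the Galactic mid-plane from simple projection.
z_pc = d_pc * math.sin(math.radians(b_deg))
print(f"z = {z_pc:.0f} pc")  # of the order of the ~50 pc quoted above
```

At this latitude the result scales linearly with the adopted distance, so the $\sim$50 pc figure mainly constrains the product $d\,b$.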
Conclusion ========== Using optical observations and archival NIR and MIR data, we have compiled a catalog of 710 YSOs in an area of $6^\circ\times6^\circ$ around the Auriga Bubble region. Of the 710 YSOs, 175 and 535 are Class[i]{} and Class[ii]{} YSOs, respectively. The physical parameters of the YSOs were estimated by using SED model fitting. The spatial distribution of the YSOs and of the MIR and radio emission has been used to understand the star formation in the region. The following are the main results: - The ages of the majority of the YSOs are found to be $\leq$ 3 Myr, and the masses are in the range of $\sim$3-5 M$_\odot$. Twenty-five percent (175 out of 710) of the YSOs are Class[i]{} sources. - The spatial distribution of the ionized gas as well as of the molecular gas is found to be well correlated with that of the YSOs, which follows two ring-like structures (named Auriga Bubble 1 and 2) extending over an area of a few degrees each. The majority of the Class[i]{} sources are found to be distributed along this structure. Auriga Bubble 1 coincides spatially with the high velocity H[i]{} shell discovered by @2012AJ....143...75K. - Twenty-six probable sub-clusters of YSOs have been identified on the basis of an MST analysis. The sizes of the sub-clusters lie in the range of $\sim$0.5 pc to $\sim$ 3 pc. The SFE and the limiting radius of these sub-clusters suggest that these may be bound stellar groups. - We propose the following possible star formation history for the region: the first generation is an already dispersed OB association to which a SN progenitor belonged, whereas the second generation comprises the stars exciting the current H[ii]{} regions and their associated clusters; the more or less evolved YSOs distributed in and around the above H[ii]{} regions belong to the third generation. Further, there seems to be another group of the third generation, i.e., the youngest population of the region (ages $\lesssim$1 Myr). 
They are distributed along the periphery of the Bubbles and may have been triggered by the SN explosion. - The centers of the bubbles appear to be $\sim$50 pc above the Galactic mid-plane. Acknowledgments {#acknowledgments .unnumbered} =============== We are very thankful to the anonymous referee for the critical review of the contents and useful comments. The observations reported in this paper were carried out by using the Schmidt telescope at Kiso Observatory, Japan. We thank the staff members for their assistance during the observations. We are grateful to the DST (India) and JSPS (Japan) for providing financial support to carry out the present study. We are also thankful to Dr. Neelam Panwar for critical reading of the manuscript and useful discussions. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This publication also made use of data from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. [@rrr@]{} Date of observations/Filter& Exp. (sec)$\times$ No. 
of frames & Field\ 06 December 2012\ $V$ & $180\times3$,$10\times5$ & Auriga 1\ $I $ & $180\times3$,$10\times3$ & Auriga 1\ $V$ & $180\times3$,$10\times2$ & Auriga 2\ $I $ & $180\times3$,$10\times4$ & Auriga 2\ $V$ & $180\times3$,$10\times4$ & Auriga 3\ $I $ & $180\times3$,$10\times4$ & Auriga 3\ $V$ & $180\times3$,$10\times3$ & Auriga 5\ $I $ & $180\times3$,$10\times5$ & Auriga 5\ \ 10 December 2012\ $V$ & $180\times3$,$10\times3$ & Auriga 4\ $I $ & $180\times3$,$10\times3$ & Auriga 4\ $V$ & $180\times3$,$10\times4$ & Auriga 6\ $I $ & $180\times3$,$10\times3$ & Auriga 6\ \ 27 December 2014\ $V$ & $180\times5$ & Auriga 7\ $I $ & $180\times7$ & Auriga 7\ $V$ & $180\times3$ & Auriga 8\ $I $ & $180\times7$ & Auriga 8\ $V$ & $180\times10$& Auriga 9\ $I $ & $180\times9$ & Auriga 9\ \ ----------------------------------- ------------------- ---------------------- ------------------------- ----------------------- ----------------------------- --------------------- ------------------------ -------------------- Calibrated Field - Standard Field $M1$ $C1$ $M2$ $C2$ $M1$ $C1$ $M2$ $C2$ Auriga 1 - NGC 1960 $ 0.013\pm0.002$ $ 3.354\pm 0.003 $ $ 1.003\pm 0.002 $ $ 0.396\pm 0.002 $ $ 0.018 \pm0.002 $ $ 0.208 \pm0.002 $ $ 1.032\pm 0.002 $ $ 0.354\pm 0.002 $ Auriga 3 - NGC 1960 $ 0.022\pm0.002$ $ 3.346\pm 0.003 $ $ 1.029\pm 0.002 $ $ 0.342\pm 0.002 $ $ 0.028 \pm0.002 $ $ 0.157 \pm0.002 $ $ 1.058\pm 0.002 $ $ 0.305\pm 0.002 $ Auriga 2 - Auriga 1 $ 0.042\pm0.002$ $ 3.259\pm 0.003 $ $ 0.995\pm 0.004 $ $ 0.261\pm 0.005 $ $ 0.028 \pm0.003 $ $ 0.173 \pm0.003 $ $ 0.994\pm 0.005 $ $ 0.396\pm 0.004 $ Auriga 4 - Auriga 2 $ 0.046\pm0.001$ $ 3.156\pm 0.001 $ $ 0.986\pm 0.002 $ $ 0.164\pm 0.003 $ $ 0.057 \pm0.002 $ $ 0.143 \pm0.002 $ $ 0.990\pm 0.003 $ $ 0.357\pm 0.002 $ Auriga 6 - Auriga 4 $ 0.049\pm0.001$ $ 3.306\pm 0.001 $ $ 0.984\pm 0.002 $ $ 0.289\pm 0.002 $ $ 0.091 \pm0.003 $ $ 0.123 \pm0.002 $ $ 1.017\pm 0.003 $ $ 0.241\pm 0.002 $ Auriga 5 - Auriga 6 $ 0.054\pm0.005$ $ 2.970\pm 0.006 $ $ 
0.972\pm 0.003 $ $ 0.120\pm 0.003 $ $ 0.056 \pm0.007 $ $ -0.06 \pm0.007 $ $ 0.961\pm 0.006 $ $ 0.304\pm 0.004 $ Auriga 3 - Auriga 5 $ 0.083\pm0.004$ $ 3.289\pm 0.004 $ $ 0.957\pm 0.004 $ $ 0.373\pm 0.003 $ $ 0.056 \pm0.005 $ $ 0.095 \pm0.005 $ $ 0.967\pm 0.004 $ $ 0.328\pm 0.003 $ Auriga 8 - Auriga 4 $ 0.053\pm0.001$ $ 3.101\pm0.003$ $ 0.978\pm0.002$ $ 0.184\pm0.002$ $ - $ $ -$ $ -$ $ -$ Auriga 7 - Auriga 6 $ 0.101\pm0.001$ $ 3.194\pm0.001$ $ 0.988\pm0.001$ $ 0.288\pm0.002$ $ - $ $ -$ $ -$ $ -$ Auriga 9 - Auriga 2 $ 0.053\pm0.002$ $ 3.389\pm0.002$ $ 0.968\pm0.003$ $ 0.652\pm0.002$ $ - $ $ -$ $ -$ $ -$ ----------------------------------- ------------------- ---------------------- ------------------------- ----------------------- ----------------------------- --------------------- ------------------------ -------------------- $V - v = M1\times(V-I) + C1$\ $(V-I) = M2\times(v-i) + C2$\ V range N $\Delta (V) \pm \sigma$ $\Delta (V-I) \pm \sigma$ ----------- ------- ------------------------- --------------------------- 9.5-10.5 69 $ -0.018\pm 0.020 $ $ 0.046\pm 0.036$ 10.5-11.5 175 $ -0.014\pm 0.022 $ $ 0.039\pm 0.040$ 11.5-12.5 406 $ -0.006\pm 0.025 $ $ 0.025\pm 0.044$ 12.5-13.5 867 $ -0.001\pm 0.024 $ $ 0.017\pm 0.043$ 13.5-14.5 1584 $ 0.001\pm 0.020 $ $ 0.008\pm 0.025$ 14.5-15.5 3163 $ 0.008\pm 0.021 $ $ -0.002\pm 0.026$ 15.5-16.5 5486 $ 0.016\pm 0.021 $ $ -0.011\pm 0.026$ 16.5-17.5 8644 $ 0.023\pm 0.020 $ $ -0.020\pm 0.024$ 17.5-18.5 12098 $ 0.032\pm 0.018 $ $ -0.031\pm 0.023$ 18.5-19.5 13350 $ 0.043\pm 0.019 $ $ -0.044\pm 0.023$ V range N $\Delta (V) \pm \sigma$ $\Delta (V-I) \pm \sigma$ ----------- ------ ------------------------- --------------------------- 9.5-10.5 12 $ 0.034 \pm 0.042 $ $ -0.066 \pm 0.038 $ 10.5-11.5 31 $ 0.042 \pm 0.030 $ $ -0.046 \pm 0.031 $ 11.5-12.5 92 $ 0.029 \pm 0.045 $ $ -0.033 \pm 0.061 $ 12.5-13.5 182 $ 0.009 \pm 0.048 $ $ -0.033 \pm 0.071 $ 13.5-14.5 290 $-0.008 \pm 0.037 $ $ -0.011 \pm 0.043 $ 14.5-15.5 614 $-0.026 \pm 0.048 $ $ 
-0.009 \pm 0.054 $ 15.5-16.5 1039 $-0.039 \pm 0.054 $ $ 0.002 \pm 0.058 $ 16.5-17.5 1636 $-0.047 \pm 0.059 $ $ 0.012 \pm 0.066 $ 17.5-18.5 2317 $-0.040 \pm 0.067 $ $ 0.037 \pm 0.074 $ 18.5-19.5 2182 $-0.027 \pm 0.095 $ $ 0.074 \pm 0.108 $ ID RA Dec $V\pm \sigma$ $I\pm \sigma$ $J\pm \sigma$ $H\pm \sigma$ $K\pm \sigma$ $W1\pm \sigma$ $W2\pm \sigma$ $W3\pm \sigma$ $W4\pm \sigma$ Classification$^*$ ---- ----------- ------------ ------------------- -------------------- ------------------- -------------------- --------------------- ------------------ ------------------ ------------------- ------------------ -------------------- 1 79.530584 +38.373309 $ - $ $ - $ $ 15.507\pm0.080$ $ 14.092\pm - $ $ 13.460\pm - $ $12.487\pm0.025$ $11.475\pm0.022$ $ 8.064\pm0.022$ $ 5.554\pm0.042$ 1 2 79.549134 +37.034571 $ - $ $ - $ $ 16.615\pm0.131$ $ 15.082\pm0.077 $ $ 13.607\pm 0.042 $ $10.999\pm0.024$ $9.826 \pm0.021$ $ 7.136\pm0.017$ $ 4.802\pm0.032$ 1 3 79.710650 +38.903351 $ - $ $ - $ $ 16.282\pm0.101$ $ 15.184\pm0.089 $ $ 14.163\pm 0.057 $ $12.366\pm0.023$ $11.287\pm0.022$ $ 7.950\pm0.021$ $ 5.164\pm0.031$ 1 4 79.753427 +36.830059 $ - $ $ - $ $ 14.120\pm0.031$ $ 13.081\pm0.033 $ $ 12.348\pm 0.028 $ $11.000\pm0.023$ $9.762 \pm0.021$ $ 6.491\pm0.016$ $ 4.228\pm0.022$ 1 5 79.765052 +36.770991 $ - $ $ - $ $ 14.301\pm0.036$ $ 12.597\pm0.033 $ $ 11.018\pm 0.026 $ $9.051 \pm0.022$ $7.530 \pm0.021$ $ 4.630\pm0.014$ $ 2.655\pm0.018$ 1 $^*$: Classification of YSOs i.e., 1=Class[i]{} (WISE 3 Band), 2=Class[ii]{} (WISE 3 Band),3=Class[i]{} (WISE+2MASS, two Band each),4=Class[ii]{} (WISE+2MASS, two Band each),5=Class[i]{} (from AGN contamination list), 6=Class[ii]{} (WISE 4 BAND, Tr Disk),7=Class[i]{} (2MASS TCD),8=Class[ii]{} (2MASS TCD).\ ------ ---------------- ------------- -------------------------- Band Sources in the Detection 90% Auriga Region Limit (mag) Completeness Limit (mag) V 470876 21.21 19.00 I 470876 20.03 17.75 J 550210 17.31 15.80 H 538281 15.51 15.10 K 500720 14.55 14.30 W1 
696985 17.06 15.75 W2 671510 17.30 15.50 W3 50900 11.71 11.25 W4 1001 6.80 6.25 ------ ---------------- ------------- -------------------------- ----- --- ------- ---------- --------------- ------------- ------------- IDs N Model $\chi^2$ $A_V$ Mass Age (mag) (M$_\odot$) (Mys) 1 7 501 $1.8$ $ 6.5\pm4.1$ $1.3\pm1.2$ $1.2\pm2.2$ 2 7 213 $2.7$ $ 12.0\pm4.7$ $2.9\pm1.1$ $4.4\pm2.7$ 3 7 263 $1.5$ $ 3.5\pm3.0$ $0.9\pm1.0$ $0.6\pm2.3$ 4 7 243 $2.6$ $ 6.1\pm3.1$ $3.2\pm1.6$ $2.3\pm2.1$ 5 7 334 $1.3$ $ 8.0\pm3.0$ $4.6\pm1.6$ $2.7\pm2.2$ ----- --- ------- ---------- --------------- ------------- ------------- ---------------- ------------------- ------------------------------------- ------- -------- ---------------- --------------- -------- --------------------- --------------------- --------- --------- ------- ------- ---------- -- Name $\alpha_{(2000)}$ $\delta_{(2000)}$ N$^a$ V $^b$ $R_{\rm hull}$ $R_{\rm cir}$ Aspect $\sigma_{\rm mean}$ $\sigma_{\rm peak}$ MST$^c$ NN2$^c$ Class Class Frac$^d$ [$(^h:^m:^s)$]{} [$(^o:^\prime:^{\prime\prime)} $]{} (pc) (pc) Ratio (pc$^{-2}$) (pc$^{-2}$) (pc) (pc) I II (%) Sub-clusters g0 05:27:06.528 +38:31:00.69 7 5 0.57 0.80 1.96 6.81 4.00 0.18 0.13 2 5 29 g1 05:33:44.632 +37:18:09.48 36 7 2.98 4.13 1.92 1.29 12.59 0.47 0.34 6 30 17 g2 05:36:25.763 +36:39:57.65 11 7 1.15 1.21 1.11 2.64 7.58 0.39 0.36 1 10 9 g3 05:36:14.706 +36:29:12.13 5 4 1.51 1.72 1.30 0.70 0.49 0.41 0.22 2 3 40 g4 05:36:51.731 +36:10:47.44 7 5 0.83 0.95 1.30 3.24 8.19 0.25 0.22 3 4 43 g5 05:37:19.582 +36:24:39.27 5 4 0.46 1.24 7.22 7.45 0.00 0.17 0.12 1 4 20 g6 05:37:57.437 +36:01:19.43 17 7 2.19 4.37 4.00 1.13 5.07 0.68 0.41 12 5 71 g7 05:38:56.528 +36:01:34.84 15 7 3.28 2.84 0.75 0.44 1.63 1.10 0.89 5 10 33 g8 05:40:43.580 +35:55:49.86 13 6 2.21 2.03 0.85 0.85 2.79 0.71 0.54 4 9 31 g9 05:41:03.392 +35:44:52.26 79 7 5.05 6.52 1.67 0.99 17.70 0.48 0.28 18 61 23 g10 05:40:43.951 +35:26:31.58 7 5 1.90 1.53 0.65 0.62 1.84 0.78 0.50 1 6 14 g11 05:41:20.297 +36:09:24.94 7 6 
1.70 1.44 0.72 0.77 8.42 0.36 0.30 2 5 29 g12 05:41:23.041 +36:18:05.91 5 3 0.48 0.81 2.86 6.89 1.63 0.47 0.30 2 3 40 g13 05:38:51.456 +33:41:33.88 9 6 1.46 1.78 1.50 1.35 2.24 0.33 0.25 3 6 33 g14 05:24:39.753 +35:02:36.77 7 4 1.44 1.82 1.59 1.07 1.68 0.52 0.45 0 7 0 g15 05:25:25.517 +34:58:04.70 11 6 1.56 2.02 1.67 1.43 8.91 0.53 0.22 3 8 27 g16 05:25:49.676 +34:52:46.42 5 4 1.50 1.54 1.05 0.71 4.28 0.77 0.93 1 4 20 g17 05:26:47.987 +35:08:53.74 5 3 0.80 0.85 1.12 2.48 1.79 0.55 0.35 1 4 20 g18 05:28:06.715 +34:24:34.32 18 4 2.22 3.09 1.94 1.16 4.17 0.64 0.41 2 16 11 g19 05:28:57.746 +34:23:10.62 15 8 1.23 2.49 4.10 3.17 6.33 0.33 0.26 1 14 7 g20 05:30:48.071 +33:47:43.60 5 3 0.49 0.85 3.05 6.73 2.00 0.43 0.23 0 5 0 g21 05:31:25.302 +34:13:04.01 6 5 1.65 1.73 1.11 0.71 0.74 0.75 0.60 0 6 0 g22 05:22:55.492 +33:28:58.03 22 7 2.15 2.77 1.66 1.51 5.07 0.51 0.45 0 22 0 g23 05:21:53.771 +36:38:38.55 9 5 1.25 1.76 1.97 1.83 6.60 0.45 0.26 3 6 33 g24 05:21:06.340 +36:39:20.86 10 6 1.14 1.85 2.66 2.47 6.90 0.37 0.26 4 6 40 g25 05:20:20.546 +36:37:10.68 9 5 1.34 1.31 0.95 1.59 2.79 0.50 0.30 4 5 44 Active regions g26 05:30:11.112 +38:18:03.20 66 10 22.96 42.82 3.48 0.04 5.24 2.63 1.19 26 40 39 g27 05:33:42.774 +37:17:49.50 40 6 5.05 7.71 2.34 0.50 12.59 0.52 0.36 3 33 8 g28 05:39:39.651 +35:57:23.56 220 9 30.96 39.23 1.61 0.07 17.70 0.69 0.49 64 156 29 g29 05:39:42.971 +34:19:10.29 19 9 17.40 21.44 1.52 0.02 0.06 4.50 3.02 5 14 26 g30 05:39:09.134 +33:37:16.20 16 7 4.86 8.19 2.84 0.22 2.24 0.77 0.33 5 11 31 g31 05:30:12.869 +36:49:49.87 11 6 11.41 14.68 1.66 0.03 0.90 4.37 1.35 1 10 9 g32 05:26:58.099 +34:41:14.16 110 10 21.13 30.04 2.02 0.08 10.09 1.01 0.64 16 94 15 g33 05:22:52.650 +33:29:09.24 25 5 4.81 6.61 1.89 0.34 5.07 0.54 0.46 1 24 4 g34 05:20:44.558 +36:39:37.74 38 7 9.06 12.42 1.88 0.15 10.09 0.72 0.45 13 25 34 ---------------- ------------------- ------------------------------------- ------- -------- ---------------- --------------- -------- 
--------------------- --------------------- --------- --------- ------- ------- ---------- -- a: Number of YSOs enclosed in the group; b: Vertex of the convex hull; c: Median branch length; d: Ratio of Class[i]{}/(Class[i]{} + Class[ii]{}) ---------------- ---------------- ---------------- ------------- --------------- -------- ----------- ---------- ------------- ----------------- -------------------- ----- Name $A_{V_{mean}}$ $A_{V_{peak}}$ Mass Mass $_{0.8}$ Q SFE1$^*$ SFE2$^{**}$ J/$S_{YSO}$$^*$ J/$S_{YSO}$$^{**}$ (mag) (mag) (M$_\odot$) (M$_\odot$) J1$^*$ J2$^{**}$ Sub-clusters g0 6.7 8.7 10.3 - 1.76 1.48 0.60 61.5 76.2 5.6 4.7 g1 9.2 17.8 1688.8 708.2 2.10 2.08 0.59 4.8 9.1 3.9 3.9 g2 8.4 14.3 96.7 12.9 2.02 1.92 0.83 21.1 34.8 4.6 4.4 g3 11.3 15.8 127.8 88.3 2.73 2.68 0.67 8.4 15.5 2.8 2.8 g4 13.9 16.4 50.0 50.0 1.70 1.60 0.86 24.8 39.7 3.7 3.5 g5 - - - - - - - - - - - g6 10.8 18.8 867.2 611.4 1.85 1.83 0.53 4.4 8.4 2.3 2.3 g7 10.1 16.6 1576.7 829.5 2.53 2.52 0.79 2.2 4.3 2.4 2.4 g8 11.5 22.7 798.6 558.6 1.96 1.94 0.80 3.7 7.1 2.5 2.5 g9 12.1 22.5 7621.0 5930.7 2.20 2.18 0.61 2.4 4.6 4.0 3.9 g10 9.7 13.9 265.7 131.6 2.69 2.66 0.84 5.8 11.0 3.0 3.0 g11 10.8 19.0 131.6 92.4 3.20 3.12 0.66 11.1 20.0 5.3 5.1 g12 7.3 7.3 5.8 - 1.72 1.42 0.76 67.0 80.2 3.6 2.9 g13 7.6 10.7 140.5 18.4 2.45 2.38 0.66 13.1 23.1 3.9 3.7 g14 10.3 14.9 222.3 103.6 1.94 1.91 0.71 6.9 12.9 2.4 2.4 g15 9.6 13.7 268.6 148.0 1.98 1.94 0.66 8.8 16.1 3.2 3.1 g16 10.3 15.3 106.0 56.4 2.96 2.90 0.83 10.0 18.1 2.9 2.9 g17 10.6 12.9 73.3 42.3 1.37 1.33 0.81 13.8 24.3 2.2 2.2 g18 6.6 12.9 601.1 72.1 2.26 2.22 0.70 6.6 12.3 3.2 3.1 g19 9.9 17.4 195.6 101.2 1.60 1.54 0.45 15.3 26.5 3.8 3.7 g20 13.7 16.1 49.4 49.4 0.79 0.76 0.71 19.2 32.2 1.7 1.6 g21 8.4 11.9 110.6 40.3 3.33 3.25 0.70 11.3 20.3 3.9 3.8 g22 7.3 13.4 563.0 129.3 2.21 2.17 0.71 8.4 15.5 3.8 3.7 g23 10.1 15.1 199.3 98.5 1.64 1.61 0.71 9.6 17.5 2.8 2.7 g24 10.5 14.1 145.7 89.1 1.66 1.61 0.68 13.9 24.4 3.3 3.2 g25 9.6 14.6 196.8 111.8 1.84 
1.80 0.79 9.7 17.7 3.0 2.9 Active regions g26 5.8 17.6 64530.5 4674.1 7.35 7.35 0.40 0.2 0.5 2.6 2.6 g27 7.8 17.8 4185.3 925.2 2.96 2.95 0.75 2.2 4.3 3.7 3.7 g28 7.9 24.1 182462.2 53738.4 6.84 6.84 0.52 0.3 0.6 4.7 4.7 g29 6.1 15.4 22525.6 1537.7 8.21 8.20 0.67 0.2 0.4 1.7 1.7 g30 6.1 19.5 1905.0 176.1 4.15 4.13 0.50 1.9 3.8 2.9 2.9 g31 6.2 15.7 9304.3 649.5 6.78 6.78 0.67 0.3 0.6 1.5 1.5 g32 6.4 24.5 61706.7 5518.6 6.63 6.63 0.50 0.4 0.8 3.8 3.8 g33 6.8 15.1 3047.0 419.7 3.23 3.22 0.99 1.9 3.7 2.7 2.7 g34 6.4 17.1 10179.2 1095.2 4.58 4.57 0.48 0.9 1.7 3.5 3.5 ---------------- ---------------- ---------------- ------------- --------------- -------- ----------- ---------- ------------- ----------------- -------------------- ----- $^*$: For Class[i]{} and Class[ii]{} sources only. $^{**}$: With inclusion of assumed missing contribution of Class[iii]{} sources. Properties Sub-cluster Active region --------------------------------- ---------------- --------------- Number of YSOs 9 38 R$_{\rm hull}$ (pc) 1.5 11.4 Aspect Ratio 1.6 1.9 Mean number density (pc$^{-2}$) 1.4 0.1 Peak number density (pc$^{-2}$) 4 5.1 A$_K$ (mag) 0.9 0.6 Peak A$_K$(mag) 1.3 1.6 Cloud mass (M$_\odot$) 146 10179 Dense cloud mass (M$_\odot$) 99 1095 MST branch length (pc) 0.47 0.77 Structural $Q$ parameter 0.71 0.52 Jeans Length (pc) 2.0 (1.9)$^*$ 6.6 (6.6)$^*$ Star formation efficiency (%) 9.7 (17.7)$^*$ 0.4 (0.8)$^*$ $^*$: Values within parenthesis are the estimates with inclusion of assumed contribution of Class [iii]{} sources. 
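The SFE and Jeans-length columns of the tables above can be computed schematically as follows (a sketch only: the mean YSO mass, sound speed, and cloud density are assumed illustrative values, not those actually adopted in the paper):

```python
import math

G = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
M_H = 1.6735e-24  # hydrogen atom mass, g
PC = 3.086e18     # parsec, cm

def sfe_percent(n_yso, cloud_mass_msun, mean_yso_mass_msun=0.5):
    """SFE = M_stars / (M_stars + M_cloud), in percent.
    The mean YSO mass is an assumed, illustrative value."""
    m_stars = n_yso * mean_yso_mass_msun
    return 100.0 * m_stars / (m_stars + cloud_mass_msun)

def jeans_length_pc(n_h2=1e3, c_s_kms=0.2, mu=2.8):
    """Thermal Jeans length lambda_J = c_s * sqrt(pi / (G rho)), in pc,
    for an assumed H2 number density and isothermal sound speed."""
    rho = n_h2 * mu * M_H        # mass density, g cm^-3
    c_s = c_s_kms * 1e5          # sound speed, cm s^-1
    return c_s * math.sqrt(math.pi / (G * rho)) / PC

# e.g. a sub-cluster of 9 YSOs inside a ~146 Msun convex-hull cloud
print(round(sfe_percent(9, 146.0), 1), "% SFE")
print(round(jeans_length_pc(), 2), "pc Jeans length")
```

Comparing the Jeans length with the mean YSO separation (the J/$S_{YSO}$ column) is what motivates the fragmentation discussion in the text.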
![image](figures/fig1.eps){height="16cm" width="16cm"} ![image](figures/fig2.eps){height="10cm" width="16cm"} ![image](figures/fig3b.eps){height="5cm" width="16cm"} ![image](figures/fig4.eps){width="45.00000%"} ![image](figures/fig7.eps){width="45.00000%"} ![image](figures/fig5a.eps){height="8cm" width="8cm"} ![image](figures/fig5b.eps){height="8cm" width="8cm"} ![image](figures/fig5c.eps){height="8cm" width="8cm"} ![image](figures/fig5d.eps){height="8cm" width="8cm"} ![image](figures/fig6.eps){width="75.00000%"} ![image](figures/fig9b.eps){height="6cm" width="7cm"} ![image](figures/fig9a.eps){height="6cm" width="7cm"} ![image](figures/fig11a.eps){height="4cm" width="5.7cm"} ![image](figures/fig11b.eps){height="4cm" width="5.7cm"} ![image](figures/fig11c.eps){height="4cm" width="5.7cm"} ![image](figures/fig14a.eps){height="6cm" width="8cm"} ![image](figures/fig15.eps){width="75.00000%"} ![image](figures/fig16.eps){width="75.00000%"} ![image](figures/fig17a.eps){height="7cm" width="8cm"} ![image](figures/fig17b.eps){height="7cm" width="8cm"} ![image](figures/fig18.eps){width="75.00000%"} ![image](figures/fig19a.eps){width="45.00000%"} ![image](figures/fig19b.eps){width="45.00000%"} ![image](figures/fig20a.eps){width="45.00000%"} ![image](figures/fig20b.eps){width="45.00000%"} ![image](figures/fig21a.eps){width="45.00000%"} ![image](figures/fig21b.eps){width="45.00000%"} \[lastpage\] [^1]: E-mail: [email protected] [^2]: IRAF is distributed by National Optical Astronomy Observatories, USA [^3]: http://www.astromatic.net/software/scamp [^4]: http://www.astromatic.net/software/swarp [^5]: http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/ [^6]: http://tdc-www.harvard.edu/catalogs/tmpsc.html [^7]: http://www.astro.caltech.edu/$\sim$jmc/2mass/v3/transformations/ [^8]: https://sedfitter.readthedocs.io/en/stable/ [^9]: WISE fluxes were acquired from Dr. Thomas Robitaille through private communication
--- abstract: 'The reduced matrix elements $a_2$ and $d_2$ are computed in lattice QCD with $N_f=2$ flavors of light dynamical (sea) quarks. For proton and neutron targets we obtain as our best estimates $d_2^{(p)}=0.004(5)$ and $d_2^{(n)}=-0.001(3)$, respectively, in the $\overline{\mbox{MS}}$ scheme at $Q^2 = 5$ GeV$^2$, while for $a_2$ we find $a_2^{(p)}=0.077(12)$ and $a_2^{(n)}=-0.005(5)$, where the errors are purely statistical.' author: - 'M. Göckeler' - 'R. Horsley' - 'D. Pleiter' - 'P. E. L. Rakow' - 'A. Schäfer' - 'G. Schierholz' - 'H. Stüben' - 'J. M. Zanotti' date: 'September 5, 2005' title: 'Investigation of the Second Moment of the Nucleon’s $g_1$ and $g_2$ Structure Functions in Two-Flavor Lattice QCD' --- Introduction {#sec:intro} ============ The nucleon’s second spin-dependent structure function $g_2$ is of considerable phenomenological interest since at leading order in $Q^2$ it receives contributions from both twist-2 and twist-3 operators. Consideration of $g_2$ via the operator product expansion (OPE) [@Jaffe] offers the unique possibility of directly assessing higher-twist effects which go beyond a simple parton model interpretation. Neglecting quark masses and contributions of twist greater than two, one obtains the “Wandzura-Wilczek” relation [@WW] $$g_2(x,Q^2) \approx g_2^{WW}(x,Q^2) = -g_1(x,Q^2) + \int_x^1 \frac{\mbox{d}y}{y} g_1(y,Q^2)\, , \label{eq:g2WW}$$ depending only on the nucleon’s first spin-dependent structure function, $g_1(x,Q^2)$. Including mass and gluon dependent terms up to and including twist-3, $g_2$ can be written [@Cortes:1991ja] $$g_2(x,Q^2) = g_2^{WW}(x,Q^2) + \overline{g_2}(x,Q^2)\, , \label{eq:g2}$$ where $$\overline{g_2}(x,Q^2) = -\int_x^1 \frac{\mbox{d}y}{y} \frac{\mbox{d}}{\mbox{d}y} \left[\frac{m}{M}h_T(y,Q^2) + \xi(y,Q^2) \right]\, . \label{eq:g2bar}$$ The function $h_T(x,Q^2)$ denotes the transverse polarization density and has twist two. 
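The moments of the Wandzura-Wilczek term obey $\int_0^1 \mbox{d}x\, x^n g_2^{WW}(x,Q^2) = -\frac{n}{n+1}\int_0^1 \mbox{d}x\, x^n g_1(x,Q^2)$, which follows from Eq. (\[eq:g2WW\]) by interchanging the order of integration. As a quick numerical illustration with a toy $g_1$ (the functional form below is an arbitrary assumption, not a parametrization of data):

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule on a grid."""
    return float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(x)))

x = np.linspace(1e-6, 1.0, 20001)
g1 = np.sqrt(x) * (1.0 - x) ** 3   # toy structure function

# tail(x) = int_x^1 dy g1(y)/y, via a cumulative trapezoidal integral
f = g1 / x
cum = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) * 0.5 * np.diff(x))))
tail = cum[-1] - cum
g2_ww = -g1 + tail                 # Wandzura-Wilczek relation

n = 2
lhs = trap(x**n * g2_ww, x)
rhs = trap(x**n * g1, x)
assert abs(lhs + n / (n + 1) * rhs) < 1e-4   # lhs = -(n/(n+1)) * rhs
```

The $n=0$ case of the same identity is the Burkhardt-Cottingham sum rule, $\int_0^1 \mbox{d}x\, g_2^{WW} = 0$.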
The contribution from $h_T(x,Q^2)$ to $g_2$ is suppressed by the quark-to-nucleon mass ratio, $m/M$, and hence is small for physical up and down quarks. The twist-3 term $\xi$ arises from quark-gluon correlations. From Eqs. (\[eq:g2WW\])-(\[eq:g2bar\]), the moments of $g_2$ are $$\begin{aligned} \int_0^1\mbox{d}x\, x^n g_2(x,Q^2) = \frac{n}{n+1}\left\{ -\int_0^1\mbox{d}x\, x^n g_1(x,Q^2)\right. \nonumber\\ + \left.\int_0^1\mbox{d}x\, x^{n-1}\bigg[ \frac{m}{M}h_T(x,Q^2) + \xi(x,Q^2)\bigg] \right\}\, . \label{eq:mom-g2}\end{aligned}$$ A leading order OPE analysis with massless quarks shows that the moments of $g_1$ and $g_2$ are given by [@Jaffe] $$\begin{aligned} 2\int_0^1\mbox{d}x\, x^n g_1(x,Q^2) \!\!&=&\!\! \frac{1}{2} \sum_{f=u,d} e^{(f)}_{1,n}(\mu^2/Q^2,g(\mu)) a_n^{(f)}(\mu)\, , \nonumber\\ && \label{eq:ope-g1} \\ 2\int_0^1\mbox{d}x\, x^n g_2(x,Q^2) \!\!&=&\!\! \frac{1}{2}\frac{n}{n+1} \sum_{f=u,d} \big[e^{(f)}_{2,n}(\mu^2/Q^2,g(\mu)) \nonumber\\ \times d_n^{(f)}(\mu) \!\!&-&\!\! e^{(f)}_{1,n}(\mu^2/Q^2,g(\mu))\, a_n^{(f)}(\mu)\big]\, , \label{eq:ope-g2}\end{aligned}$$ for even $n \ge 0$ for Eq. (\[eq:ope-g1\]) and even $n \ge 2$ for Eq. (\[eq:ope-g2\]), where $f$ runs over the light quark flavors and $\mu$ denotes the renormalization scale. The reduced matrix elements $a_n^{(f)}(\mu)$ and $d_n^{(f)}(\mu)$ are defined by [@Jaffe] $$\begin{aligned} \langle \vec{p},\vec{s}| {\cal O}^{5 (f)}_{ \{ \sigma\mu_1\cdots\mu_n \} } | \vec{p},\vec{s} \rangle \!\!&=&\!\! \frac{1}{n+1}a_n^{(f)} \nonumber\\ &&\!\! \times [ s_\sigma p_{\mu_1} \cdots p_{\mu_n} + \cdots -\mbox{traces}], \nonumber\\ \label{eq:twist2} \\ \langle \vec{p},\vec{s}| {\cal O}^{5 (f)}_{ [ \sigma \{ \mu_1 ] \cdots \mu_n \} } | \vec{p},\vec{s} \rangle &=& \frac{1}{n+1}d_n^{(f)} \nonumber\\ \times [ (s_\sigma p_{\mu_1} \!\!&-&\!\! 
s_{\mu_1} p_\sigma) p_{\mu_2}\cdots p_{\mu_n} + \cdots -\mbox{traces}], \nonumber\\ \label{eq:twist3} \end{aligned}$$ $${\cal O}^{5 (f)}_{\sigma\mu_1\cdots\mu_n} = \left(\frac{\mathrm i}{2}\right)^n\bar{\psi}\gamma_{\sigma} \gamma_5 {\overset{\leftrightarrow}{D}}_{\mu_1} \cdots {\overset{\leftrightarrow}{D}}_{\mu_n} \psi -\mbox{traces}\, .$$ Here ${\overset{\leftrightarrow}{D}}={\overset{\rightarrow}{D}}-{\overset{\leftarrow}{D}}$ and $e_{1,n}^{(f)}$, $e_{2,n}^{(f)}$ are the Wilson coefficients which depend on the ratio of scales $\mu^2/Q^2$, the running coupling constant $g(\mu)$ and the quark charges ${\cal Q}^{(f)}$, $$e^{(f)}_{i,n}(\mu^2/Q^2,g(\mu)) = {\cal Q}^{(f)2} \big(1 + {\cal O}(g(\mu)^2) \big)\, . \label{eq:wilsoncoef}$$ The symbol$\{\cdots\}$ ($[\cdots]$) indicates symmetrization (antisymmetrization) of indices. The operator (\[eq:twist2\]) has twist two, whereas the operator (\[eq:twist3\]) has twist three. Note that our definitions of $a_2$ and $d_2$ differ by a factor of two from those in [@exp2; @exp]. Using the equations of motion of massless QCD one can rewrite the twist-3 operators $ {\cal O}^{5 (f)}_{ [ \sigma \{ \mu_1 ] \cdots \mu_n \} } $ such that the dual gluon field strength tensor $\tilde{G}_{\mu \nu}$ and the QCD coupling $g$ appear. For $n=2$ one finds $${\cal O}^{5 (f)}_{ [ \sigma \{ \mu_1 ] \mu_2 \} } = - \frac{g}{6} \bar{\psi} \left( \tilde{G}_{\sigma \mu_1} \gamma_{\mu_2} + \tilde{G}_{\sigma \mu_2} \gamma_{\mu_1} \right) \psi - \mbox{traces} \,,$$ so we can define the reduced matrix element $d_2$ in the chiral limit also by (see, e.g., Ref. 
[@schaefer]) $$\begin{aligned} & & - \frac{g}{6} \langle \vec{p},\vec{s}| \bar{\psi} \left( \tilde{G}_{\sigma \mu_1} \gamma_{\mu_2} + \tilde{G}_{\sigma \mu_2} \gamma_{\mu_1} \right) \psi - \mbox{traces} | \vec{p},\vec{s} \rangle \nonumber\\ & & = \frac{1}{3} d_2 [ (s_\sigma p_{\mu_1} - s_{\mu_1} p_\sigma) p_{\mu_2} + \cdots -\mbox{traces}] \,.\end{aligned}$$ This shows (setting $\mu_1 = \mu_2 = 0$) that $d_2$ parametrizes the magnetic field component of the gluon field strength tensor which is parallel to the nucleon spin. Furthermore we have $$d_2 = 4\int_0^1\mbox{d}x\, x \xi(x) \ . \label{eq:d2w2}$$ Hence, a calculation of $d_2$ (in the chiral limit) is especially interesting as it will provide insights into the size of the quark-gluon correlation term, $\xi(x)$. The Wilson coefficients (\[eq:wilsoncoef\]) can be computed perturbatively, while the reduced matrix elements $a_n^{(f)}$ and $d_n^{(f)}$ have to be computed non-perturbatively. In the following we shall drop the flavor indices, unless they are necessary. A few years ago we computed the lowest non-trivial moment of $g_2$ in the quenched approximation [@QCDSF1]. In this paper we give our results for the reduced matrix elements $a_2$ and $d_2$ in full QCD, including $N_f=2$ flavors of light dynamical (sea) quarks, using ${\cal O}(a)$-improved Wilson fermions. We employ the same methods as in the quenched case, in particular the renormalization of the lattice operators is done entirely non-perturbatively. Lattice Operators And Renormalization {#sec:operators} ===================================== The lattice calculation divides into two separate tasks. The first task is to compute the nucleon matrix elements of the appropriate lattice operators. This was described in detail in [@QCDSF2]. The second task is to renormalize the operators. 
In the case of multiplicative renormalizability, the renormalized operator ${\cal O}(\mu)$ is related to the bare operator ${\cal O}(a)$ by $${\cal O}(\mu) = Z_{\cal O}(a\mu)\, {\cal O}(a), \label{eq:op1}$$ where $a$ is the lattice spacing. In our earlier work [@QCDSF2; @QCDSF3], we computed the renormalization constants in perturbation theory to one-loop order. However, this does not account for mixing with lower-dimensional operators, which we encounter in the case of the reduced matrix elements $d_n^{(f)}$. In [@QCDSF1] an entirely non-perturbative solution to this problem was presented for quenched lattice QCD. Here we shall apply the same approach. We impose the (MOM-like) renormalization condition [@Martinelli; @QCDSF4] (which can also be used in the continuum) $${\mbox{\small $\frac{1}{4}$}}\,\mbox{Tr} \, \langle q(p)|{\cal O}(\mu)|q(p)\rangle \Big[\langle q(p)|{\cal O}(a)|q(p)\rangle\, |^{\rm tree}\Big]^{-1} \underset{p^2 =\mu^2}{=} 1,$$ where $|q(p)\rangle$ is a quark state of momentum $p$ in Landau gauge. In the following we shall restrict ourselves to the case $n = 2$. Furthermore, we consider quark-line connected diagrams only, as calculations of quark-line disconnected diagrams are extremely computationally expensive. In an attempt to improve on our earlier analysis [@QCDSF1], we simulate with two non-vanishing values for the nucleon momentum, $\vec{p}_1 = ( p, 0, 0 )$ and $\vec{p}_2 = ( 0, p, 0 )$, together with two different polarization directions, described by the matrices $\Gamma_1 = \frac{1}{2}(1+\gamma_4)\, {\mathrm i}\gamma_5\gamma_1$ and $\Gamma_2 = \frac{1}{2}(1+\gamma_4)\, {\mathrm i}\gamma_5\gamma_2$. Here $p=2\pi/L_S$ denotes the smallest non-zero momentum available on a periodic lattice of spatial extent $L_S$. We consider the two combinations $\vec{p}_1$/$\Gamma_2$ and $\vec{p}_2$/$\Gamma_1$. For the twist-2 matrix element $a_2$ we use in both cases the operator $${\cal O}^5_{\{214\}} =: {\cal O}^{\{5\}} \label{eq:os1}$$ as in [@QCDSF1]. 
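In the purely multiplicative case of Eq. (\[eq:op1\]), the MOM-like condition above determines $Z_{\cal O}$ from the ratio of the measured quark-state vertex to its tree-level form. A toy illustration, with $4\times4$ matrices standing in for the Dirac-space amputated vertices (all numbers invented):

```python
import numpy as np

# Toy tree-level vertex function in Dirac space (illustrative numbers).
Lambda_tree = np.diag([1.0, 1.0, -1.0, -1.0])

# Pretend the "measured" bare vertex at p^2 = mu^2 differs from the
# tree-level one by an overall factor 1/Z (multiplicative case only).
Z_true = 1.37
Lambda_bare = Lambda_tree / Z_true

# Renormalization condition:
#   (1/4) Tr[ Z * Lambda_bare * Lambda_tree^{-1} ] = 1
# => Z = 1 / ( (1/4) Tr[ Lambda_bare * Lambda_tree^{-1} ] )
Z = 1.0 / (0.25 * np.trace(Lambda_bare @ np.linalg.inv(Lambda_tree)))
assert abs(Z - Z_true) < 1e-12
```

When lower-dimensional operators mix in, a single constant is no longer enough and the condition generalizes to a matrix of renormalization and mixing coefficients, which is the situation for ${\cal O}^{[5]}$.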
For the twist-3 matrix element $d_2$ we need to use different operators for our two momentum/polarization combinations. For $\vec{p}_1$/$\Gamma_2$ and $\vec{p}_2$/$\Gamma_1$ we take $$\begin{aligned} {\cal O}^5_{[2\{1] 4\}} \!\!& =&\!\! {\mbox{\small $\frac{1}{3}$}}\left(2 {\cal O}^5_{2\{14\}} - {\cal O}^5_{1\{24\}} - {\cal O}^5_{4\{12\}}\right) \nonumber \\ \!\!& =&\!\!{\mbox{\small $\frac{1}{12}$}}\bar{\psi}\Big(\gamma_2 {\overset{\leftrightarrow}{D}}_{1} {\overset{\leftrightarrow}{D}}_{4} + \gamma_2 {\overset{\leftrightarrow}{D}}_{4} {\overset{\leftrightarrow}{D}}_{1} - {\mbox{\small $\frac{1}{2}$}}\gamma_1 {\overset{\leftrightarrow}{D}}_{2} {\overset{\leftrightarrow}{D}}_{4} \nonumber \\ \!\!&- &\!\!{\mbox{\small $\frac{1}{2}$}}\gamma_1 {\overset{\leftrightarrow}{D}}_{4} {\overset{\leftrightarrow}{D}}_{2} - {\mbox{\small $\frac{1}{2}$}}\gamma_4 {\overset{\leftrightarrow}{D}}_{1} {\overset{\leftrightarrow}{D}}_{2} - {\mbox{\small $\frac{1}{2}$}}\gamma_4 {\overset{\leftrightarrow}{D}}_{2} {\overset{\leftrightarrow}{D}}_{1}\Big) \gamma_5 \psi \nonumber \\ &=:& {\cal O}^{[5]}_1 \, , \label{eq:o5} \\ {\cal O}^5_{[1\{2] 4\}} \!\!& =&\!\! 
{\mbox{\small $\frac{1}{3}$}}\left(2 {\cal O}^5_{1\{24\}} - {\cal O}^5_{2\{14\}} - {\cal O}^5_{4\{21\}}\right) \nonumber \\ \!\!& =&\!\!{\mbox{\small $\frac{1}{12}$}}\bar{\psi}\Big(\gamma_1 {\overset{\leftrightarrow}{D}}_{2} {\overset{\leftrightarrow}{D}}_{4} + \gamma_1 {\overset{\leftrightarrow}{D}}_{4} {\overset{\leftrightarrow}{D}}_{2} - {\mbox{\small $\frac{1}{2}$}}\gamma_2 {\overset{\leftrightarrow}{D}}_{1} {\overset{\leftrightarrow}{D}}_{4} \nonumber \\ \!\!&- &\!\!{\mbox{\small $\frac{1}{2}$}}\gamma_2 {\overset{\leftrightarrow}{D}}_{4} {\overset{\leftrightarrow}{D}}_{1} - {\mbox{\small $\frac{1}{2}$}}\gamma_4 {\overset{\leftrightarrow}{D}}_{2} {\overset{\leftrightarrow}{D}}_{1} - {\mbox{\small $\frac{1}{2}$}}\gamma_4 {\overset{\leftrightarrow}{D}}_{1} {\overset{\leftrightarrow}{D}}_{2}\Big) \gamma_5 \psi \nonumber \\ &=:& {\cal O}^{[5]}_2 \, , \label{eq:o5-2}\end{aligned}$$ respectively. In the following we shall suppress the index of ${\cal O}^{[5]}$ unless it is needed. The operators ${\cal O}^{\{5\}}$ and ${\cal O}^{[5]}$ belong to the representations $\tau_3^{(4)}$ and $\tau_1^{(8)}$, respectively, of the hypercubic group $H(4)$ [@Mandula]. The operator ${\cal O}^{[5]}$ has dimension five and $C$-parity $+$. It turns out that there exist two operators of dimension four and five, respectively, transforming identically under $H(4)$ and having the same $C$-parity, with which ${\cal O}^{[5]}$ can mix: $$\begin{aligned} {\mbox{\small $\frac{1}{12}$}}{\mathrm i}\, \bar{\psi} \left(\sigma_{13} {\overset{\leftrightarrow}{D}}_{1} - \sigma_{43} {\overset{\leftrightarrow}{D}}_{4}\right) \psi \!\!&=:&\!\! {\cal O}^\sigma, \label{eq:osigma} \\ {\mbox{\small $\frac{1}{12}$}}\bar{\psi} \left(\gamma_1 {\overset{\leftrightarrow}{D}}_{3} {\overset{\leftrightarrow}{D}}_{1} - \gamma_1 {\overset{\leftrightarrow}{D}}_{1} {\overset{\leftrightarrow}{D}}_{3} \right. \qquad\qquad\quad && \nonumber \\ \left. 
-\ \gamma_4 {\overset{\leftrightarrow}{D}}_{3}{\overset{\leftrightarrow}{D}}_{4} + \gamma_4 {\overset{\leftrightarrow}{D}}_{4} {\overset{\leftrightarrow}{D}}_{3}\right) \psi \!\!&=:&\!\! {\cal O}^0 \, , \label{eq:o0}\end{aligned}$$ for $\vec{p}_1$/$\Gamma_2$, and similarly for $\vec{p}_2$/$\Gamma_1$ with $1\rightarrow 2$. We use the definition $\sigma_{\mu \nu} = (\mathrm i /2)[\gamma_\mu,\gamma_\nu]$. The operator (\[eq:o0\]) mixes with ${\cal O}^{[5]}$ with a coefficient of order $g^2$ and vanishes in the tree approximation between quark states. We therefore neglect its contribution to the renormalization of ${\cal O}^{[5]}$. The operator ${\cal O}^\sigma$, on the other hand, contributes with a coefficient $\propto a^{-1}$ and hence must be kept. We then remain with $${\cal O}^{[5]}(\mu) = Z^{[5]}(a\mu) {\cal O}^{[5]}(a) + \frac{1}{a} Z^\sigma(a\mu) {\cal O}^\sigma(a). \label{eq:renorm}$$ The renormalization constant $Z^{[5]}$ and the mixing coefficient $Z^\sigma$ are determined from $$\begin{aligned} {\mbox{\small $\frac{1}{4}$}}\,\mbox{Tr} \,\langle q(p)|{\cal O}^{[5]}(\mu)|q(p)\rangle \left[\langle q(p)|{\cal O}^{[5]}(a)|q(p)\rangle\, |^{\rm tree}\right]^{-1} \hspace*{-5mm}&\underset{p^2 =\mu^2}{=}&\hspace*{-3mm} 1, \nonumber \\ &&\label{eq:cond1} \\ {\mbox{\small $\frac{1}{4}$}}\,\mbox{Tr} \,\langle q(p)|{\cal O}^{[5]}(\mu)|q(p)\rangle \left[\langle q(p)|{\cal O}^{\phantom{[}\sigma\phantom{]}}(a)|q(p)\rangle\, |^{\rm tree}\right]^{-1} \hspace*{-5mm}&\underset{p^2 =\mu^2}{=}&\hspace*{-3mm} 0. \nonumber \\ &&\label{eq:cond2} \end{aligned}$$ Rewriting Eq. (\[eq:renorm\]) as $${\cal O}^{[5]}(\mu) = Z^{[5]}(a\mu)\left( {\cal O}^{[5]}(a) + \frac{1}{a} \frac{Z^\sigma(a\mu)}{Z^{[5]}(a\mu)} {\cal O}^\sigma(a)\right)\, , \label{eq:renorm2}$$ we see that ${\cal O}^{[5]}(\mu)$ will have a multiplicative dependence on $\mu$ only if the ratio $Z^\sigma(a\mu)/Z^{[5]}(a\mu)$ does not depend on $\mu$, which should happen for large enough values of the renormalization scale. 
The scale dependence will then completely reside in $Z^{[5]}$.

Simulation Details {#sec:details}
==================

To reduce cut-off effects, we use non-perturbatively $O(a)$ improved Wilson fermions. The calculation is done at four different values of the coupling, $\beta$, and at three different sea quark masses each. The latter are specified by the hopping parameter $\kappa_{\rm sea}$. We use the force parameter $r_0$ to set the scale, with $r_0 = 0.467$ fm. Our lattice spacings range from $a=0.07$ to $0.09$ fm. The actual parameters, as well as the corresponding values of $r_0/a$ and the pseudoscalar meson masses, are given in Table \[table:parameters\] and shown pictorially in Fig. \[fig:parameters\].

  $\beta$   $\kappa_{\rm sea}$   Volume            $N_{\rm traj}$   $r_0/a$     $am_{\rm PS}$
  --------- -------------------- ----------------- ---------------- ----------- ---------------
  5.20      0.13420              $16^3\times 32$   O(5000)          4.077(70)   0.5847(12)
  5.20      0.13500              $16^3\times 32$   O(8000)          4.754(45)   0.4148(13)
  5.20      0.13550              $16^3\times 32$   O(8000)          5.041(53)   0.2907(15)
  5.25      0.13460              $16^3\times 32$   O(5800)          4.737(50)   0.4932(10)
  5.25      0.13520              $16^3\times 32$   O(8000)          5.138(55)   0.3821(13)
  5.25      0.13575              $24^3\times 48$   O(5900)          5.532(40)   0.25638(70)
  5.29      0.13400              $16^3\times 32$   O(4000)          4.813(82)   0.5767(11)
  5.29      0.13500              $16^3\times 32$   O(5600)          5.227(75)   0.42057(92)
  5.29      0.13550              $24^3\times 48$   O(2000)          5.566(64)   0.32688(70)
  5.40      0.13500              $24^3\times 48$   O(3700)          6.092(67)   0.40301(43)
  5.40      0.13560              $24^3\times 48$   O(3500)          6.381(53)   0.31232(67)
  5.40      0.13610              $24^3\times 48$   O(3500)          6.714(64)   0.22120(80)

  : Lattice parameters: gauge coupling $\beta$, sea quark hopping parameter $\kappa_{\rm sea}$, lattice volume, number of trajectories, $r_0/a$ and pseudoscalar meson mass $am_{\rm PS}$.[]{data-label="table:parameters"}

The quark matrix elements for the renormalization constants are computed using a momentum source [@QCDSF4].
Performing the Fourier transform at the source suppresses the effect of fluctuations: The statistical error in this case is $\propto(VN_{\rm conf})^{-1/2}$ for $N_{\rm conf}$ configurations on a lattice of volume $V$, resulting in small statistical uncertainties even for a small number of configurations (at least five in our case). Hence, the main source of statistical uncertainty in our final results is the calculation of the bare matrix elements, not the $Z$ values. Nucleon matrix elements are determined from the ratio of three-point to two-point correlation functions $$\label{eq:ratio} {\cal R}(t,\tau;\vec{p};{\cal O})\, = \frac{C_\Gamma (t,\tau;\vec{p},{\cal O})} {C_2(t,\vec{p})} \ ,$$ where $C_2$ is the unpolarized baryon two-point function with a source at time 0 and sink at time $t$, while the three-point function $C_\Gamma$ has an operator ${\cal O}$ insertion at time $\tau$. To improve our signal for non-zero momentum we average over both polarization/momentum combinations. Correlation functions are calculated on configurations taken at a distance of 5-10 trajectories using 4-8 different locations of the fermion source. We use binning to obtain an effective distance of 20 trajectories. The size of the bins has little effect on the error, which indicates that auto-correlations are small.

Computation of Renormalization Constants {#sec:renorm}
========================================

The twist-2 operator defined in Eq. (\[eq:os1\]) is renormalized multiplicatively with the renormalization factor $Z^{\{5\}}(a\mu)$, while the renormalization of the twist-3 operators in Eqs. (\[eq:o5\]), (\[eq:o5-2\]) is more complicated due to the mixing effects described in Section \[sec:operators\]. Since the renormalization of ${\cal O}^{[5]}_1$ and ${\cal O}^{[5]}_2$ is identical (up to lattice artefacts) we consider only ${\cal O}^{[5]}_1$.
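The structure of Eq. (\[eq:renorm\]), multiplicative renormalization plus a power-divergent admixture of ${\cal O}^\sigma$, amounts to simple bookkeeping once the $Z$ factors are known. A sketch with illustrative placeholder numbers (the $Z$ values and bare matrix elements below are not our actual results):

```python
# Bookkeeping sketch for Eq. (renorm): the renormalized twist-3 matrix
# element combines the bare O^[5] matrix element with the power-divergent
# admixture of the lower-dimensional operator O^sigma.  The Z factors and
# bare values below are illustrative placeholders, not our actual numbers.

def renormalize_twist3(d2_bare_5, d2_bare_sigma, z5, z_sigma_over_a):
    """d2 = Z^[5] d2^[5] + (1/a) Z^sigma d2^sigma; the 1/a factor is
    absorbed into z_sigma_over_a (everything in lattice units)."""
    return z5 * d2_bare_5 + z_sigma_over_a * d2_bare_sigma

d2 = renormalize_twist3(d2_bare_5=-0.021, d2_bare_sigma=-0.301,
                        z5=1.42, z_sigma_over_a=-0.27)
# With numbers of this magnitude the mixing term is comparable to (here
# larger than) the direct term, so neglecting it would even flip the sign.
assert abs(d2 - 0.05145) < 1e-9
```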
The calculation of the non-perturbative renormalization factors is a non-trivial exercise, the full details of which are beyond the scope of this paper. Here we restrict ourselves to a short outline of the procedure. More details can be found in Section 5.2.3 of Ref. [@timid], and a fuller account will be given in a forthcoming publication. Firstly, a chiral extrapolation of the non-perturbative renormalization factors is performed at fixed $\beta$ and fixed momentum. The extrapolation is performed linearly in $(r_0 m_{\rm PS})^2 = ((r_0/a)am_{\rm PS})^2$, where for each value of $\beta$ we use the chirally extrapolated value of $r_0/a$ (see Table 3 of Ref. [@Gockeler:2005rv]). We then apply continuum perturbation theory to calculate the renormalization group invariant renormalization factor $Z_{\rm RGI}$ from the chirally extrapolated $Z$s [@timid]. This can be done in various schemes, e.g., the $\overline{\rm MS}$ scheme, and should lead for any scheme to the same momentum-independent value of $Z_{\rm RGI}$, at least for sufficiently large momenta. For this step, we use $r_0\Lambda_{\overline{\rm MS}}=0.617$ [@Gockeler:2005rv]. In Fig. \[fig:Za2\], we show the $\mu$-dependence of $Z^{\{5\}}_{\rm RGI}$ computed in the $\overline{\rm MS}$ scheme and in a continuum MOM scheme at $\beta=5.40$. While in both cases a reasonable plateau appears, the plateau values do not coincide exactly, and we take the difference as a measure of the uncertainty of our $Z$s, caused by our incomplete knowledge of the perturbative expansion. The final step requires $Z_{\rm RGI}$ to be converted to $Z_{\overline{\rm MS}}$ at some renormalization scale, which is done perturbatively, and the result depends on the value of $\Lambda_{\overline{\rm MS}}$ in physical units. From $r_0\Lambda_{\overline{\rm MS}}=0.617$ and $r_0 = 0.467$ fm we obtain $\Lambda_{\overline{\rm MS}} = 261$ MeV. 
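The conversion of the dimensionless $r_0\Lambda_{\overline{\rm MS}}$ to physical units is straightforward arithmetic; the following sketch reproduces the quoted 261 MeV (using the standard value $\hbar c = 197.327\,$MeV$\,$fm):

```python
# Unit conversion behind the quoted Lambda_MSbar: with r0*Lambda = 0.617
# and r0 = 0.467 fm, Lambda = (r0*Lambda) * hbar*c / r0.
HBARC_MEV_FM = 197.327  # hbar*c in MeV fm

def lambda_msbar_mev(r0_lambda, r0_fm):
    return r0_lambda * HBARC_MEV_FM / r0_fm

assert round(lambda_msbar_mev(0.617, 0.467)) == 261  # value quoted above
# The range 0.572 <= r0*Lambda <= 0.662 considered later in the text
# corresponds to roughly 242-280 MeV:
assert round(lambda_msbar_mev(0.572, 0.467)) == 242
assert round(lambda_msbar_mev(0.662, 0.467)) == 280
```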
[![$Z^{\{5\}}_{\rm RGI}$ calculated in the $\overline{\rm MS}$ scheme (circles) and in a MOM scheme (filled squares) at $\beta=5.40$. The scale is fixed using $r_0=0.467$fm. []{data-label="fig:Za2"}](./za2.eps "fig:"){width="0.98\hsize"}]{} As mentioned above, the renormalization of the twist-3 operator in Eqs. (\[eq:o5\]), (\[eq:o5-2\]) has further complications due to the mixing effects described in Section \[sec:operators\]. In this case it is unclear how to convert our MOM results to the $\overline{\mbox{MS}}$ scheme. So we shall stick to the MOM numbers. For the comparison of our results with experimental determinations this does not cause problems, because no QCD corrections have been taken into account in the analysis of the experiments and hence different schemes are not distinguished. In Fig. \[fig:Zd2\] we plot the ratio $Z^\sigma(a\mu)/Z^{[5]}(a\mu)$ as a function of $\mu$ for $\beta = 5.40$. As expected, a plateau develops for larger values of $\mu$, and therefore the operator ${\cal O}^{[5]}(\mu)$ only depends on $\mu$ multiplicatively. [![The ratio $Z^{\sigma}(a\mu)/Z^{[5]}(a\mu)$ at $\beta = 5.40$[]{data-label="fig:Zd2"}](./zrat.eps "fig:"){width="0.98\hsize"}]{} Results for Reduced Matrix Elements {#sec:ME} =================================== In order to compute the reduced matrix elements in Eqs. (\[eq:twist2\]) and (\[eq:twist3\]), we calculate the ratio of three- to two-point correlation functions ${\cal R}$, as given in Eq. (\[eq:ratio\]), for the operators defined in Eqs. (\[eq:os1\])-(\[eq:o0\]). The bare operator matrix elements are obtained from the ratio ${\cal R}$ by $${\cal R}_{a_2} = \frac{1}{2\kappa_{\rm sea}}\frac{1}{6} M\, p\, a_2\, ,\ {\cal R}_{d_2} = \frac{1}{2\kappa_{\rm sea}}\frac{1}{3} M\, p\, d_2\, .$$ We define the continuum quark fields by $\sqrt{2\kappa_{\rm sea}}$ times the lattice quark fields. The factor for ${\cal R}_{d_2}$ is the same for all three operators ${\cal O}^{[5]}$, ${\cal O}^{\sigma}$ and ${\cal O}^0$. 
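The bare reduced matrix elements follow by inverting the ratio relations above. A round-trip sketch (the values of $M$, $p$, $\kappa_{\rm sea}$ and the "true" $a_2$ are illustrative placeholders, not simulation data):

```python
# Inverting the ratio relations above to extract the bare reduced matrix
# elements from a plateau value of R.  All numerical inputs are
# illustrative placeholders in lattice units.
import math

def a2_from_ratio(R, M, p, kappa):
    # R_a2 = (1/(2 kappa)) (1/6) M p a2  =>  a2 = 12 kappa R / (M p)
    return 12.0 * kappa * R / (M * p)

def d2_from_ratio(R, M, p, kappa):
    # R_d2 = (1/(2 kappa)) (1/3) M p d2  =>  d2 = 6 kappa R / (M p)
    return 6.0 * kappa * R / (M * p)

# Round trip with made-up numbers; p = 2*pi/L_S on an L_S = 24 lattice:
M, p, kappa = 0.8, 2.0 * math.pi / 24, 0.1356
a2_true = 0.12
R = (1.0 / (2 * kappa)) * (1.0 / 6) * M * p * a2_true
assert abs(a2_from_ratio(R, M, p, kappa) - a2_true) < 1e-12
```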
In Tables \[table:bareME1\] and \[table:bareME2\] we present our results for the bare matrix elements of the operators ${\cal O}^{\{5\}}$, ${\cal O}^{[5]}$, ${\cal O}^{\sigma}$ and ${\cal O}^0$ defined in Eqs. (\[eq:os1\])-(\[eq:o0\]) for $u$ and $d$ quarks in the proton.

[ccdddddddd]{} $\beta$ & $\kappa_{\rm sea}$ & $a_2^{(u)}$ & $a_2^{(d)}$ & $d_2^{[5](u)}$ & $d_2^{[5](d)}$\
5.20 & 0.13420 & 0.142(18) & -0.0318(78) & -0.0143(23) & 0.0005(14)\
5.20 & 0.13500 & 0.123(22) & -0.032(11) & -0.0329(59) & 0.0094(35)\
5.20 & 0.13550 & 0.131(32) & -0.061(22) & -0.057(14) & 0.0064(59)\
5.25 & 0.13460 & 0.113(12) & -0.0389(51) & -0.0165(25) & 0.0023(13)\
5.25 & 0.13520 & 0.110(19) & -0.0281(74) & -0.0310(39) & 0.0069(17)\
5.25 & 0.13575 & 0.1107(74) & -0.0345(47) & -0.0575(28) & 0.0074(15)\
5.29 & 0.13400 & 0.1141(77) & -0.0255(35) & -0.0033(11) & -0.00009(63)\
5.29 & 0.13500 & 0.0989(90) & -0.0281(45) & -0.0252(19) & 0.0046(11)\
5.29 & 0.13550 & 0.1228(65) & -0.0302(26) & -0.0468(23) & 0.00783(92)\
5.40 & 0.13500 & 0.1195(44) & -0.0227(24) & -0.02135(99) & 0.00232(61)\
5.40 & 0.13560 & 0.1238(63) & -0.0331(34) & -0.0445(26) & 0.0069(11)\
5.40 & 0.13610 & 0.127(13) & -0.0277(60) & -0.0674(48) & 0.0103(25)

[ccdddddddd]{} $\beta$ & $\kappa_{\rm sea}$ & $d_2^{\sigma(u)}$ & $d_2^{\sigma(d)}$ & $d_2^{0(u)}$ & $d_2^{0(d)}$\
5.20 & 0.13420 & -0.220(19) & 0.046(8) & -0.0312(46) & 0.0096(22)\
5.20 & 0.13500 & -0.305(29) & 0.077(13) & -0.039(10) & 0.0145(49)\
5.20 & 0.13550 & -0.395(60) & 0.080(21) & -0.063(14) & 0.0194(75)\
5.25 & 0.13460 & -0.252(17) & 0.045(6) & -0.0371(34) & 0.0150(28)\
5.25 & 0.13520 & -0.239(23) & 0.063(10) & -0.0329(61) & 0.0131(42)\
5.25 & 0.13575 & -0.353(13) & 0.0638(44) & -0.0463(39) & 0.0141(20)\
5.29 & 0.13400 & -0.213(9) & 0.0379(35) & -0.0322(23) & 0.0086(12)\
5.29 & 0.13500 & -0.258(13) & 0.0518(42) & -0.0312(34) & 0.0118(21)\
5.29 & 0.13550 & -0.338(10) & 0.0651(36) & -0.0390(25) & 0.0120(13)\
5.40 & 0.13500 & -0.301(8) & 0.0595(33) & -0.0396(18) & 0.01231(84)\
5.40 & 0.13560 & -0.385(15) & 0.0723(50) & -0.0502(26) & 0.0137(15)\
5.40 & 0.13610 & -0.420(25) &
0.087(9) & -0.0411(60) & 0.0178(39) The corresponding renormalized (reduced) matrix elements for the renormalization scale $\mu^2 = 5 \, \mbox{GeV}^2$ are given in Tables \[table:renME1\] and \[table:renME2\]. While the superscripts $(u)$ and $(d)$ again refer to $u$ and $d$ quarks in the proton, the matrix elements for proton and neutron targets are denoted by $(p)$ and $(n)$, respectively. For $a_2$ the latter are given by $$\begin{aligned} a_2^{(p)} &=& {\cal Q}^{(u)\,2} a_2^{(u)} + {\cal Q}^{(d)\,2} a_2^{(d)} , \\ a_2^{(n)} &=& {\cal Q}^{(d)\,2} a_2^{(u)} + {\cal Q}^{(u)\,2} a_2^{(d)} \end{aligned}$$ and similarly for $d_2$. The renormalized values of $d_2^{(f)}$ for $f=u,d$ in the proton are calculated from $$d_2^{(f)} = Z^{[5]} d_2^{[5](f)} + \frac{1}{a} Z^\sigma d_2^{\sigma(f)} \, .$$ In the lines for $\kappa_{\rm sea} = \kappa_c$, Tables \[table:renME1\] and \[table:renME2\] contain results in the chiral limit, obtained by an extrapolation linear in $(r_0 m_{\rm PS})^2$. The scale has been fixed from the value of $r_0/a$ at the respective quark masses using $r_0 = 0.467 \, \mbox{fm}$. Alternatively, we could have worked with the chirally extrapolated values of $r_0/a$. This would increase $d_2^{(p)}$ and $d_2^{(u)}$ by up to twice the statistical error but would leave the other observables almost unaffected. On the other hand, setting $r_0 = 0.5 \, \mbox{fm}$ or varying $r_0\Lambda_{\overline{\rm MS}}$ between 0.572 and 0.662 (corresponding to the combined statistical and systematic errors given in Ref. [@Gockeler:2005rv]) leads only to rather small changes in the final results. [![The chirally extrapolated reduced matrix element $a_2$ for the proton target renormalized at the scale $\mu^2 \equiv Q^2 = 5$ GeV$^2$ as a function of the lattice spacing $a$. 
The crosses denote phenomenological determinations.[]{data-label="fig:a2p"}](./a2p.eps "fig:"){width="0.99\hsize"}]{} [![The chirally extrapolated reduced matrix element $a_2$ for the neutron target renormalized at the scale $\mu^2 \equiv Q^2 = 5$ GeV$^2$ as a function of the lattice spacing $a$. The cross denotes the phenomenological value.[]{data-label="fig:a2n"}](./a2n.eps "fig:"){width="0.99\hsize"}]{} Let us first focus on the results for the twist-2 matrix element $a_2$. In Fig. \[fig:a2p\] we show the chirally extrapolated renormalized results for $a_2$ in the proton in the $\overline{\mbox{MS}}$ scheme as a function of the lattice spacing $a$. It should however be noted that the data at $\beta = 5.20$, i.e., those for the largest lattice spacing are to be considered with caution, because potentially they are affected by lattice artefacts. For $a_2$ the dependence on the quark mass turns out to be rather small. On the other hand, we do not attempt a continuum extrapolation of the chirally extrapolated results. Instead we take the value at our smallest lattice spacing ($\beta = 5.4$) as our best estimate: $a_2^{(p)}=0.077(12)$. This is consistent with earlier quenched results [@QCDSF1], indicating that quenching effects are small. At the physical pion mass, we compare with two results taken from the literature which are obtained from an analysis of experimental data. The larger value is taken from an earlier analysis performed by Abe [*et al.*]{} [@exp2], while the lower point is extracted from a recent analysis by Osipenko [*et al.*]{} [@Osipenko:2005nx] with the help of the perturbative Wilson coefficient. In the $\overline{\mbox{MS}}$ scheme with anticommuting $\gamma_5$, we use the two-loop expression for the Wilson coefficient described in Ref. [@Zijlstra:1993sh]. 
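The proton and neutron combinations given above are fixed by the squared quark charges, which allows a quick numerical cross-check of the tabulated values (a sketch using central values only, so it cannot reproduce the quoted errors or correlations):

```python
# Cross-check of the charge combinations a2^(p) = Qu^2 a2^(u) + Qd^2 a2^(d)
# and a2^(n) = Qd^2 a2^(u) + Qu^2 a2^(d), using the chirally extrapolated
# quark values at beta = 5.40 from the tables.  Central values only;
# statistical correlations between u and d are ignored.
QU2, QD2 = (2.0 / 3.0) ** 2, (1.0 / 3.0) ** 2

def nucleon_combination(x_u, x_d):
    """Return the (proton, neutron) combination of a quark-basis quantity."""
    return QU2 * x_u + QD2 * x_d, QD2 * x_u + QU2 * x_d

a2_p, a2_n = nucleon_combination(0.187, -0.056)
assert abs(a2_p - 0.077) < 0.001  # reproduces a2^(p) = 0.077(12)
assert abs(a2_n) < 0.01           # a2^(n) small, consistent with -0.005(5)
```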
To avoid large logarithms, we set $Q^2=\mu^2= 5$ GeV$^2$ to obtain $$\label{eq:wilson} e^{(f)}_{1,2} = {\cal Q}^{(f)2} \times 1.03075\, .$$ We do not see exact agreement between our chirally extrapolated value and those obtained from experimental data, but there are still several sources of systematic error in our final number. Firstly, our simulation only involves the calculation of connected quark diagrams. That is, we do not consider the (computationally expensive) case where an operator couples to a disconnected quark loop, although such disconnected diagrams are not expected to contribute in the large $x$ region. Secondly, our results are restricted to the heavy pion world, $m_{\rm PS}>550$ MeV. In this region we observe a linear dependence of our results on $m_{\rm PS}^2$. A more advanced functional form guided by chiral perturbation theory, such as those proposed for the moments of unpolarized nucleon structure functions [@Detmold] or nucleon magnetic moments [@chiral], may be required. One such form has been suggested in [@Detmold:2002nf], but only for iso-vector matrix elements. We therefore estimate the systematic uncertainty of our linear extrapolation by comparing results for $a_2^{(u-d)}$ in the chiral limit obtained with a linear extrapolation and with the form proposed in [@Detmold:2002nf] $$\begin{aligned} a_2^{(u-d)}(m_\pi^2) &=& a_2^{(u-d)} \left( 1+c_{\rm LNA}\, m_\pi^2 \log\frac{m_\pi^2}{m_\pi^2 + \mu^2} \right) \nonumber \\ &&\qquad + b_2\frac{m_\pi^2}{m_\pi^2 + m_b^2}\ , \label{eq:chextrap}\end{aligned}$$ where the preferred value of the LNA coefficient is $c_{\rm LNA} = -(0.48 g_A^2 + 1)/(4\pi f_\pi)^2$ and $b_2$ is constrained by the heavy quark limit to be $$b_2^{(u-d)} = \frac{5}{27} - a_2^{(u-d)} (1-\mu^2 c_{\rm LNA} )\ .$$ We set $\mu=0.25$ GeV as proposed in [@Detmold:2002nf] and find at $\beta=5.29$ that $a_2^{(u-d)} = 0.214(29)$ from a linear extrapolation and $a_2^{(u-d)} = 0.183(9)$ using Eq.
(\[eq:chextrap\]), suggesting there is a $15\%$ systematic error in our linear extrapolation. Finally, we have not considered finite size effects [@Detmold:2005pt] in this work, and our data do not yet allow us to perform a decent continuum extrapolation.

[ccdddd]{} $\beta$ & $\kappa_{\rm sea}$ & $a_2^{(u)}$ & $a_2^{(d)}$ & $d_2^{(u)}$ & $d_2^{(d)}$\
5.20 & 0.13420 & 0.194(27) & -0.044(11) & 0.0360(59) & -0.0113(29)\
5.20 & 0.13500 & 0.168(32) & -0.044(15) & 0.039(12) & -0.0082(65)\
5.20 & 0.13550 & 0.179(45) & -0.083(30) & 0.034(28) & -0.015(11)\
5.20 & $\kappa_c$ & 0.154(65) & -0.079(37) & 0.040(31) & -0.011(14)\
5.25 & 0.13460 & 0.154(19) & -0.0532(76) & 0.0335(53) & -0.0070(24)\
5.25 & 0.13520 & 0.150(27) & -0.038(10) & 0.0109(79) & -0.0047(34)\
5.25 & 0.13575 & 0.151(13) & -0.0472(70) & 0.0024(54) & -0.0050(25)\
5.25 & $\kappa_c$ & 0.149(24) & -0.042(12) & -0.0169(89) & -0.0036(41)\
5.29 & 0.13400 & 0.159(14) & -0.0356(53) & 0.0468(27) & -0.0094(13)\
5.29 & 0.13500 & 0.138(15) & -0.0392(67) & 0.0284(43) & -0.0064(20)\
5.29 & 0.13550 & 0.171(13) & -0.0421(43) & 0.0201(44) & -0.0056(17)\
5.29 & $\kappa_c$ & 0.167(24) & -0.0469(84) & -0.0008(70) & -0.0026(28)\
5.40 & 0.13500 & 0.170(12) & -0.0323(39) & 0.0499(27) & -0.0127(13)\
5.40 & 0.13560 & 0.176(13) & -0.0471(55) & 0.0401(57) & -0.0097(22)\
5.40 & 0.13610 & 0.181(21) & -0.0394(88) & 0.019(10) & -0.0094(46)\
5.40 & $\kappa_c$ & 0.187(28) & -0.056(11) & 0.010(12) & -0.0056(50)

[ccdddd]{} $\beta$ & $\kappa_{\rm sea}$ & $a_2^{(p)}$ & $a_2^{(n)}$ & $d_2^{(p)}$ & $d_2^{(n)}$\
5.20 & 0.13420 & 0.081(12) & 0.0022(55) & 0.0148(26) & -0.0010(14)\
5.20 & 0.13500 & 0.070(14) & -0.0008(75) & 0.0166(55) & 0.0008(32)\
5.20 & 0.13550 & 0.070(20) & -0.017(14) & 0.013(13) & -0.0028(58)\
5.20 & $\kappa_c$ & 0.058(29) & -0.020(18) & 0.017(14) & -0.0002(71)\
5.25 & 0.13460 & 0.0627(82) & -0.0065(36) & 0.0141(24) & 0.0006(12)\
5.25 & 0.13520 & 0.063(12) & -0.0004(53) & 0.0043(36) & -0.0009(18)\
5.25 & 0.13575 & 0.0620(58) & -0.0041(31) & 0.0005(24) & -0.0019(13)\
5.25 & $\kappa_c$ & 0.062(10) & -0.0024(53) & -0.0079(40) &
-0.0035(21)\
5.29 & 0.13400 & 0.0668(61) & 0.0019(25) & 0.0198(12) & 0.00105(64)\
5.29 & 0.13500 & 0.0570(65) & -0.0021(31) & 0.0119(19) & 0.00031(99)\
5.29 & 0.13550 & 0.0715(57) & 0.0003(19) & 0.0083(20) & -0.00028(89)\
5.29 & $\kappa_c$ & 0.069(10) & -0.0015(38) & -0.0006(31) & -0.0012(15)\
5.40 & 0.13500 & 0.0720(50) & 0.0045(17) & 0.0208(12) & -0.00009(63)\
5.40 & 0.13560 & 0.0731(58) & -0.0014(24) & 0.0168(25) & 0.0001(11)\
5.40 & 0.13610 & 0.0760(93) & 0.0026(43) & 0.0072(46) & -0.0021(23)\
5.40 & $\kappa_c$ & 0.077(12) & -0.0048(53) & 0.0039(54) & -0.0013(26)

Our results for $a_2$ in the neutron are shown in Fig. \[fig:a2n\]. They are hardly different from zero. Taking again the value for $\beta = 5.4$ as our best estimate, we end up with $a_2^{(n)}=-0.005(5)$, in agreement with the result from the analysis of Abe [*et al.*]{} [@exp2]. From $a_2^{(p)}$ and $a_2^{(n)}$ in the chiral limit we calculate (see Eq. (\[eq:ope-g1\])) the second moment of the polarized structure function $g_1$ for the proton and neutron. Using the Wilson coefficient given in Eq. (\[eq:wilson\]) we find $$\begin{aligned} \int_0^1\mbox{d}x\, x^2 g_1^p(x,Q^2) \!\!&=&\!\! \frac{1.03075}{4} a_2^p = 0.0170(18) \, , \\ \int_0^1\mbox{d}x\, x^2 g_1^n(x,Q^2) \!\!&=&\!\! \frac{1.03075}{4} a_2^n = -0.0013(8) .\end{aligned}$$ [![The chiral extrapolation of the reduced matrix element $d_2$ for the proton target renormalized at the scale $\mu^2 \equiv Q^2 = 5$ GeV$^2$.[]{data-label="fig:d2pc"}](./d2p.chiex.eps "fig:"){width="0.99\hsize"}]{} [![The chiral extrapolation of the reduced matrix element $d_2$ for the neutron target renormalized at the scale $\mu^2 \equiv Q^2 = 5$ GeV$^2$.[]{data-label="fig:d2nc"}](./d2n.chiex.eps "fig:"){width="0.99\hsize"}]{} We now turn our attention to the second moment of $g_2$. We find that our data for $d_2$ also exhibit a linear behavior in $m_{\rm PS}^2$.
While this is not unexpected at the large pion masses where our simulations are performed, this linear behavior will not necessarily continue near the chiral limit. Unfortunately, the dependence of $d_2$ on the pion mass near the chiral limit is not yet known. Therefore in this work we perform only a linear extrapolation of $d_2$ to the chiral limit. In Figs. \[fig:d2pc\] and \[fig:d2nc\] we plot some of the data versus $(r_0 m_{\rm PS})^2$ together with the linear extrapolations. The chirally extrapolated results for $d_2$ in the proton and neutron are shown in Figs. \[fig:d2p\] and \[fig:d2n\], respectively. At our smallest lattice spacing we obtain in the chiral limit $$\begin{aligned} d_2^{(p)} &=& \phantom{-} 0.004(5), \\ d_2^{(n)} &=& - 0.001(3).\end{aligned}$$ The errors are statistical only. Taking the behavior of $a_2^{(u-d)}$ as a guide, the chiral extrapolation might introduce a $15\%$ systematic uncertainty. For $d_2^{(p)}$ the other systematic uncertainties discussed above would amount to an additional error of about 0.005, while $d_2^{(n)}$ is almost unaffected. Our result for the proton agrees very well with the experimental number [@exp], while for the neutron the experimental result differs from ours by two standard deviations. A more precise experimental value would be most desirable in the case of the neutron. [![The chirally extrapolated reduced matrix element $d_2$ for the proton target renormalized at the scale $\mu^2 \equiv Q^2 = 5$ GeV$^2$ as a function of the lattice spacing $a$. The cross denotes the phenomenological value.[]{data-label="fig:d2p"}](./d2p.eps "fig:"){width="0.99\hsize"}]{} [![The chirally extrapolated reduced matrix element $d_2$ for the neutron target renormalized at the scale $\mu^2 \equiv Q^2 = 5$ GeV$^2$ as a function of the lattice spacing $a$. The cross denotes the phenomenological value.[]{data-label="fig:d2n"}](./d2n.eps "fig:"){width="0.99\hsize"}]{} From Eq.
(\[eq:mom-g2\]), the moments of $g_2$ receive contributions from $g_1$ and $\overline{g_2}$, the second of which contains a mass-dependent term and a gluon-insertion-dependent term. From Eq. (\[eq:g2bar\]), the second moment of $\overline{g_2}$ is (dropping the explicit $Q^2$ dependence) $$\frac{1}{6}d_2 = \int_0^1\mbox{d}x\, x^2 \overline{g_2}(x) = \int_0^1\mbox{d}x\, x \frac{2}{3}\bigg[ \frac{m}{M}h_T(x) + \xi(x)\bigg] \, , \label{eq:mom2-g2bar}$$ so if $d_2$ vanishes in the chiral limit, then $\int_0^1 \mbox{d}x\, x\, \xi(x)$ must also vanish. Our results lead us to conclude that for the $n=2$ moment the Wandzura-Wilczek relation [@WW] $$\int_0^1 \mbox{d}x\, x^2 g_2(x,Q^2) = - \frac{2}{3} \int_0^1 \mbox{d}x\, x^2 g_1(x,Q^2)$$ is satisfied within errors for both proton and neutron targets. From the expression in Eq. (\[eq:g2bar\]), we also expect the first moment of $\overline{g_2}$ to vanish in the chiral limit. Combining these two observations with the Burkhardt-Cottingham sum rule [@Burkhardt:1970ti], $\int_0^1 g_2(x) \mbox{d}x=0$, and the knowledge that from elastic scattering processes $g_2$ receives non-trivial higher-twist contributions at $x=1$ (see, for example, Eqs. (4), (5) of [@Osipenko:2005nx]), we expect that there should be some sort of smooth transition at intermediate $x$, which presents an interesting challenge for the planned experiments at JLab [@JLab].

Conclusions {#sec:conclusions}
===========

We have calculated the second moments of the proton and neutron’s spin-dependent $g_1$ and $g_2$ structure functions in lattice QCD with two flavors of ${\cal O}(a)$-improved Wilson fermions. A key feature of our investigation is the use of non-perturbative renormalization and the inclusion of operator mixing in our extraction of the twist-2 and twist-3 matrix elements.
Our result for $a_2^{(p)}=0.077(12)$ for the proton is somewhat larger than what follows from analyses of experimental data, while for the corresponding result for the neutron, we find a small but negative value, $a_2^{(n)}=-0.005(5)$, in agreement with experiment. Note that the errors are purely statistical and do not include any systematic uncertainties, although we estimate a systematic uncertainty of approximately $15\%$ arising from the chiral extrapolation. For the twist-3 matrix element, $d_2$, our results agree very well with experiment and are consistent with zero, leading us to the conclusion that higher-twist effects occur only at large or intermediate $x$. Acknowledgments {#acknowledgments .unnumbered} =============== J.Z. would like to thank W. Detmold for useful discussions regarding the chiral extrapolation of $a_2$. The numerical calculations have been done on the Hitachi SR8000 at LRZ (Munich), on the Cray T3E at EPCC (Edinburgh) under PPARC grant PPA/G/S/1998/00777 [@UKQCD] and on the APE[*1000*]{} at DESY (Zeuthen). We thank the operating staff for support. This work was supported in part by the DFG (Forschergruppe Gitter-Hadronen-Phänomenologie) and by the EU Integrated Infrastructure Initiative Hadron Physics (I3HP) under contract RII3-CT-2004-506078. [99]{} R. L. Jaffe, Comments Nucl. Part. Phys.  [**19**]{}, 239 (1990); R. L. Jaffe and X. D. Ji, Phys. Rev. D [**43**]{}, 724 (1991); J. Blumlein and N. Kochelev, Nucl. Phys. B [**498**]{}, 285 (1997) \[arXiv:hep-ph/9612318\]. S. Wandzura and F. Wilczek, Phys. Lett. B [**72**]{}, 195 (1977). J. L. Cortes, B. Pire and J. P. Ralston, Z. Phys. C [**55**]{}, 409 (1992). K. Abe [*et al.*]{} \[E143 collaboration\], Phys. Rev. D [**58**]{}, 112003 (1998) \[arXiv:hep-ph/9802357\]. P. L. Anthony [*et al.*]{} \[E155 Collaboration\], Phys. Lett. B [**553**]{}, 18 (2003) \[arXiv:hep-ex/0204028\]. B. Ehrnsperger, L. Mankiewicz and A. Schäfer, Phys. Lett. B [**323**]{}, 439 (1994) \[arXiv:hep-ph/9311285\]. M. 
Göckeler [*et al.*]{}, Phys. Rev. D [**63**]{}, 074506 (2001) \[arXiv:hep-lat/0011091\]. M. Göckeler [*et al.*]{}, Phys. Rev. D [**53**]{}, 2317 (1996) \[arXiv:hep-lat/9508004\]. M. Göckeler [*et al.*]{}, Nucl. Phys. B [**472**]{}, 309 (1996) \[arXiv:hep-lat/9603006\]. G. Martinelli, C. Pittori, C. T. Sachrajda, M. Testa and A. Vladikas, Nucl. Phys. B [**445**]{}, 81 (1995) \[arXiv:hep-lat/9411010\]. M. Göckeler [*et al.*]{}, Nucl. Phys. B [**544**]{}, 699 (1999) \[arXiv:hep-lat/9807044\]. M. Baake, B. Gemünden and R. Oedingen, J. Math. Phys.  [**23**]{}, 944 (1982) \[Erratum-ibid.  [**23**]{}, 2595 (1982)\]; J. E. Mandula, G. Zweig and J. Govaerts, Nucl. Phys. B [**228**]{}, 109 (1983). M. Göckeler, R. Horsley, D. Pleiter, P. E. L. Rakow and G. Schierholz \[QCDSF Collaboration\], arXiv:hep-ph/0410187. M. Göckeler [*et al.*]{}, arXiv:hep-ph/0502212. M. Osipenko [*et al.*]{}, Phys. Rev. D [**71**]{}, 054007 (2005) \[arXiv:hep-ph/0503018\]. W. Detmold [*et al.*]{}, Phys. Rev. Lett.  [**87**]{}, 172001 (2001) \[arXiv:hep-lat/0103006\]; J. W. Chen and X. Ji, Phys. Rev. Lett.  [**87**]{}, 152002 (2001) \[Erratum-ibid.  [**88**]{}, 249901 (2002)\] \[arXiv:hep-ph/0107158\]; D. Arndt and M. J. Savage, Nucl. Phys. A [**697**]{}, 429 (2002) \[arXiv:nucl-th/0105045\]. M. Göckeler [*et al.*]{}, \[QCDSF Collaboration\], Phys. Rev. D [**71**]{}, 034508 (2005) \[arXiv:hep-lat/0303019\]; D. B. Leinweber, D. H. Lu and A. W. Thomas, Phys. Rev. D [**60**]{}, 034014 (1999) \[arXiv:hep-lat/9810005\]; E. J. Hackett-Jones, D. B. Leinweber and A. W. Thomas, Phys. Lett. B [**489**]{}, 143 (2000) \[arXiv:hep-lat/0004006\]; T. R. Hemmert and W. Weise, Eur. Phys. J. A [**15**]{}, 487 (2002) \[arXiv:hep-lat/0204005\]; R. D. Young, D. B. Leinweber and A. W. Thomas, Phys. Rev. D [**71**]{}, 014001 (2005) \[arXiv:hep-lat/0406001\]. W. Detmold, W. Melnitchouk and A. W. Thomas, Phys. Rev. D [**66**]{}, 054501 (2002) \[arXiv:hep-lat/0206001\]. W. Detmold and C. J. Lin, Phys. Rev. 
D [**71**]{}, 054510 (2005) \[arXiv:hep-lat/0501007\]. E. B. Zijlstra and W. L. van Neerven, Nucl. Phys. B [**417**]{}, 61 (1994) \[Erratum-ibid. B [**426**]{}, 245 (1994)\]. H. Burkhardt and W. N. Cottingham, Annals Phys.  [**56**]{} (1970) 453. Z.E. Meziani, private communication. C. R. Allton [*et al.*]{} \[UKQCD Collaboration\], Phys. Rev. D [**65**]{}, 054502 (2002) \[arXiv:hep-lat/0107021\].
--- abstract: 'We compute numerically the yrast line for harmonically trapped boson systems with a weak repulsive contact interaction, studying the transition to a vortex state as the angular momentum $L$ increases and approaches $N$, the number of bosons. The $L=N$ eigenstate is indeed dominated by particles with unit angular momentum, but the state has other significant components beyond the pure vortex configuration. There is a smooth crossover between low and high $L$ with no indication of a quantum phase transition. Most strikingly, the energy and wave function appear to be analytical functions of $L$ over the entire range $2\le L \le N$. We confirm the structure of low-$L$ states proposed by Mottelson, as mainly single-particle excitations with two or three units of angular momentum.' address: 'Institute for Nuclear Theory, Department of Physics, University of Washington, Seattle, WA 98195, USA' author: - 'George F. Bertsch and Thomas Papenbrock' title: Yrast line for weakly interacting trapped bosons --- The low-lying excitations of atomic Bose Einstein condensates in harmonic traps [@Anderson; @Bradley; @Ketterle] are of considerable experimental and theoretical interest [@Stringari]. Recently, Mottelson proposed a theory for the yrast line of weakly interacting $N$-boson systems [@Mottelson], i.e. the ground states at nonvanishing angular momentum $L$. Physical arguments led him to assume that the yrast states are excited upon acting on the ground state $|0\rangle$ of vanishing angular momentum with a collective operator $Q_\lambda=\sum_{p=1}^Nz_p^\lambda$ that is a sum of single-particle operators acting on the coordinates $z_p=x_p+iy_p$ of the $p^{\rm th}$ particle. For angular momenta $L\ll N$ the yrast states are found to be dominated by quadrupole ($\lambda=2$) and octupole ($\lambda=3$) modes. 
Assuming a vortex structure of the yrast states with $L\approx N$ then led to the prediction of a quantum phase transition in Fock space when passing from the low angular momentum regime $L\ll N$ to the regime of high angular momenta $L\approx N$. The reason for this behavior is the approximate orthogonality of the collective states $Q_\lambda|0\rangle$ and the single-particle oscillator states of the vortex line in the regime $N^{1/2}\ll L$. These results have been obtained for harmonically trapped bosons with a weak repulsive contact interaction. The case of an attractive interaction has been studied by Wilkin [*et al.*]{} [@Wilkin]. In this case the total angular momentum is carried by the center of mass motion, and there are no excitations corresponding to relative motion. This is not unexpected since internal excitations would increase the energy of the yrast state. It is the purpose of this letter to present an independent numerical computation of the yrast line and to compare with Mottelson’s results [@Mottelson]. In particular we want to focus on the transition from low to high angular momentum yrast states. The investigation of this transition is of interest not only for the physics of Bose-Einstein condensates. Localization in Fock space is under investigation also in molecular [@Wolynes] and condensed matter physics [@Altshuler; @Carlos]. The numerical computation has the advantage that it does not rely on the assumptions made in the analytical calculation. However, with our numerical methods it is limited to angular momenta below about $L\approx 50$. Most interestingly, our numerical results suggest that the yrast line and the corresponding wave functions can be represented by rather simple analytical expressions. Let us consider $N$ bosons in a two-dimensional harmonic trap interacting via a contact interaction[^1]. We are interested in the yrast line in the perturbative regime of weak interactions.
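The size of the space that limits the computation to $L\approx 50$ can be checked by counting: at fixed $N$ and $L$, the Fock basis states correspond to partitions of $L$ into at most $N$ parts, since $n_0$ absorbs the bosons carrying no angular momentum. A minimal counting sketch (our own illustration, not the paper's code):

```python
# Dimension d_L of the degenerate Fock space at angular momentum L for N bosons:
# occupation vectors (n_0, n_1, ...) with sum_j n_j = N and sum_j j*n_j = L.
# Dropping n_0, these are exactly the partitions of L into at most N parts,
# which (by conjugation) equal the partitions of L into parts of size <= N.

def fock_dimension(N, L):
    dp = [1] + [0] * L          # dp[l] = partitions of l into parts <= current bound
    for part in range(1, N + 1):
        for l in range(part, L + 1):
            dp[l] += dp[l - part]
    return dp[L]

print(fock_dimension(50, 50))   # 204226, i.e. about 2e5 as quoted in the text
```

For $N=L=50$ this is the partition number $p(50)=204226$, consistent with the quoted $d_L\approx 2\cdot 10^5$.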
Note however that experimental studies of trapped condensates are often in a regime where the interaction energy is comparable to the trapping potential, and this may introduce qualitatively different physics. We write the Hamiltonian as $$\label{ham}
\hat{H}=\hat{H}_0 + \hat{V}.$$ Here $$\label{H0}
\hat{H}_0=\hbar\omega\sum_j j\, \hat{a}_j^\dagger\hat{a}_j$$ is the one-body oscillator Hamiltonian and $$\label{Hint}
\hat{V}=g\sum_{i,j,k,l}V_{ijkl}\,\hat{a}_i^\dagger\hat{a}_j^\dagger\hat{a}_k \hat{a}_l$$ is the two-body interaction. The operators $\hat{a}_m$ and $\hat{a}_m^\dagger$ annihilate and create one boson in the single-particle oscillator state $|m\rangle$ with energy $m\hbar\omega$ and angular momentum $m\hbar$, respectively, and fulfill bosonic commutation rules. The ground state energy is set to zero. Up to some irrelevant overall constant the matrix elements are given by $V_{ijkl}=2^{-k-l}(k+l)!/(i!j!k!l!)^{1/2}$ and vanish for $i+j\ne k+l$. For total angular momentum $L$ the Fock space is spanned by states $|\alpha\rangle\equiv|n_0,n_1,\ldots,n_k\rangle$ with $\sum_{i=0,k}n_i=N$, $\hat{a}_j^\dagger\hat{a}_j|n_0,n_1,\ldots,n_k\rangle =n_j|n_0,n_1,\ldots,n_k\rangle$ and $\sum_{j=0,k}j n_j=L$. Here $n_j$ denotes the occupation of the $j^{\rm th}$ single particle state $|j\rangle$. For vanishing coupling $g$ the basis states are degenerate in energy, and the problem thus consists in diagonalizing the two-body interaction $\hat{V}$ inside the Fock space basis. To set up the matrix we act with the operator (\[Hint\]) onto one initial basis state with angular momentum $L$ and onto all states created by this procedure until the Fock space is exhausted [@PB]. The resulting matrix is sparse, and the yrast state is computed using a Lanczos algorithm [@Arpack]. We restrict ourselves to $L\le 50$ corresponding to a maximal Fock space dimension of about $d_L\approx 2\cdot 10^5$. The yrast line, i.e. the ground state energies as a function of the angular momentum, may be written as $E(L)=L\hbar\omega + g\epsilon_L$. Fig.
\[fig1\] shows the $L$-dependence of the energies $\epsilon_L$ for systems of $N=$ 25 and 50 bosons. The energies $\epsilon_L$ simply decrease linearly with increasing angular momentum for $L\le N$. In fact, to machine precision, the energy is found to be described by the algebraic expression $$\epsilon_L = \frac{N(2N-L-2)}{2}.$$ At fixed angular momentum $L$ and for $L\ll N$ the energies $g\epsilon_L$ increase as expected with the square of the number of bosons $N$. Notice in the figure that there is a kink in the slope at $N=L$. This is a hint of condensation into a vortex state: in macroscopic superfluids, the state for $L=N$ would have a condensate of unit angular momentum and would be lower in energy than neighboring yrast states. We next investigate the structure of the wave functions of yrast states. We would like to know how complex the states are and how well they can be described by single-particle operators acting on simple states. To address the question of the complexity of the states in the Fock basis, we take the wave function amplitudes $c_\alpha^{(L)}$ in the Fock representation of the state $$|L\rangle=\sum_{\alpha=1}^{d_L} c_\alpha^{(L)} |\alpha\rangle$$ and compute the inverse participation ratio [@Kaplan] $$I_L\equiv \sum_{\alpha=1}^{d_L} |c_\alpha^{(L)}|^4.$$ The $I_L$ is the first nontrivial moment of the distribution of wave function intensities $|c_\alpha^{(L)}|^2$. Its inverse $1/I_L$ measures the number of basis states $|\alpha\rangle$ that have significant overlap with the yrast state $|L\rangle$. Fig. \[fig2\] shows a plot of $1/I_L$ and the Fock space dimension $d_L$ as a function of angular momentum $L$ for a system of $N=50$ bosons. The $1/I_L$ is seen to be much smaller than the dimensionality of the Fock space. Even where the participation is greatest, at midvalues of $L$, only about 30 states are active participants. A similar behavior of quantum non-ergodicity has been found previously in numerical studies [@PB].
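For small systems the closed form for $\epsilon_L$ can be reproduced directly. The sketch below is our own illustration (dense diagonalization instead of the sparse Lanczos solver used in the letter): it enumerates the Fock basis, assembles $\langle\beta|\hat V/g|\alpha\rangle$ from the matrix elements $V_{ijkl}$ quoted above, and compares the lowest eigenvalue with $N(2N-L-2)/2$ for $2\le L\le N$:

```python
import numpy as np
from math import factorial, sqrt

def fock_basis(N, L):
    """Occupation vectors (n_0,...,n_L) with sum_j n_j = N and sum_j j*n_j = L."""
    states = []
    def rec(j, n_rem, l_rem, tail):
        if j == 0:
            if l_rem == 0:
                states.append((n_rem,) + tail)
            return
        for nj in range(min(n_rem, l_rem // j) + 1):
            rec(j - 1, n_rem - nj, l_rem - nj * j, (nj,) + tail)
    rec(L, N, L, ())
    return states

def interaction_matrix(N, L):
    """Matrix of V/g = sum_{ijkl} V_ijkl a_i^+ a_j^+ a_k a_l in the Fock basis."""
    states = fock_basis(N, L)
    index = {s: n for n, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for col, occ in enumerate(states):
        for l in range(L + 1):
            for k in range(L + 1):
                # annihilate a_k a_l; skip if the occupations do not allow it
                if occ[l] == 0 or occ[k] - (k == l) <= 0:
                    continue
                n = list(occ)
                amp = sqrt(n[l]); n[l] -= 1
                amp *= sqrt(n[k]); n[k] -= 1
                for i in range(min(L, k + l) + 1):
                    j = k + l - i          # V_ijkl vanishes unless i + j = k + l
                    if j > L:
                        continue
                    m = list(n)            # create a_i^+ a_j^+
                    a = amp * sqrt(m[j] + 1); m[j] += 1
                    a *= sqrt(m[i] + 1); m[i] += 1
                    V = 2.0 ** (-(k + l)) * factorial(k + l) / sqrt(
                        factorial(i) * factorial(j) * factorial(k) * factorial(l))
                    H[index[tuple(m)], col] += V * a
    return H

N = 5
for L in range(2, N + 1):
    eps = np.linalg.eigvalsh(interaction_matrix(N, L))[0]
    print(L, eps, N * (2 * N - L - 2) / 2)  # lowest eigenvalue vs. closed form
```

As a sanity check of the normalization, the $L=0$ ground state gives $\epsilon_0 = V_{0000}\,N(N-1) = N(N-1)$, which is what the closed form yields when extended to $L=0$.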
Notice that the inverse participation ratio decreases strongly as $N=L$ is approached. This shows that the yrast state becomes simpler, again hinting at the formation of a vortex condensate. Examining the coefficients for $N=25$ in detail, the largest amplitude at $L=N$ is in fact the vortex state, $|\alpha\rangle = |0\,N\,0\cdots 0\rangle$, but it has less than half the probability of the complete wave function. Interestingly, our numerically obtained yrast state $|L=N=25\rangle$ agrees with the conjecture given by Wilkin [*et al.*]{} [@Wilkin], i.e. $|L=N\rangle=\prod_{p=1}^N(z_p-z_c)\,|0\rangle$ with $z_c=N^{-1}\sum_{p=1}^N z_p $ being the center of mass. Based on our numerical wave functions, we can generalize this conjecture. We believe that all the yrast states for $2\le L\le N$ are given by the formula $$\label{MASTER}
|L\rangle= \sum_{p_1<p_2<\ldots<p_L} (z_{p_1}-z_c)(z_{p_2}-z_c)\cdots(z_{p_L}-z_c)\,|0\rangle.$$ We have verified that this formula is correct (up to machine precision) by comparison with the numerically obtained yrast states for $N=25$. Since the operator acting on the ground state is translationally invariant no quanta of the center of mass motion are excited. Notice that there is a natural termination of the construction at $L=N$. To further examine the structure of the yrast states $|L\rangle$ we show a plot of the occupation numbers $n_j^{(L)}\equiv\langle L|\hat{a}_j^\dagger\hat{a}_j|L\rangle$ for $j=0,1,2,3$ in Fig. \[fig3\] for a system of $N=50$ bosons. At very low angular momenta the yrast states are dominated by single particle oscillator states with two or three units of angular momentum. This is in agreement with Mottelson’s results [@Mottelson]. However, at larger angular momentum $L$ the dominant fraction is carried by single particle states with one unit of angular momentum. Note that the occupation numbers $n_j^{(L)}$ are very small for $j > 3$. This analysis confirms the results found for the inverse participation ratio.
Note also that the observables $n_j^{(L)}$ are very smooth functions of $L$. If there were a quantum phase transition at large $L\approx N/2$, we would expect to see some precursor in these observables. In conclusion, our numerical study strongly indicates that there is no quantum phase transition to a vortex state for trapped condensates in the limit that the interaction potential is small compared to the oscillator frequency. The strongest evidence is the apparent existence of analytic expressions for the energies and the wave functions on the yrast line for $2\le L\le N$. One might speculate that these states are contained in a dynamical symmetry group, but we have no idea how this might come about[^2]. We have also examined the structure of the yrast states and the matrix elements between them, finding that the observables vary smoothly with $L$, for $L$ not too small. We acknowledge conversations with B. Mottelson. This work was supported by the Dept. of Energy under Grant DE-FG-06-90ER40561. M. H. Anderson, J. R. Ensher, M. R. Matthews, C. E. Wieman, and E. A. Cornell, Science [**269**]{}, 198 (1995) C. C. Bradley, C. A. Sackett, J. J. Tollett, and R. G. Hulet, Phys. Rev. Lett. [**75**]{}, 1687 (1995) K. B. Davis, M.-O. Mewes, M. R. Andrews, N. J. van Druten, D. S. Durfee, D. M. Kurn, and W. Ketterle, Phys. Rev. Lett. [**75**]{}, 3969 (1995) For a review see, e.g., F. Dalfovo, S. Giorgini, L. P. Pitaevskii, and S. Stringari, Rev. Mod. Phys. [**71**]{}, 463 (1999) B. Mottelson, e-print cond-mat/9905053 N. K. Wilkin, J. M. Gunn, and R. A. Smith, Phys. Rev. Lett. [**80**]{}, 2265 (1998) D. M. Leitner and P. G. Wolynes, Chem. Phys. Lett. [**258**]{}, 18 (1996) B. L. Altshuler, Y. Gefen, A. Kamenev, and L. S. Levitov, Phys. Rev. Lett. [**78**]{}, 2803 (1997) C. Mej[í]{}a-Monasterio, J. Richert, T. Rupp, and H. A. Weidenmüller, , 5189 (1998) T. Papenbrock and G. F. Bertsch, , 4854 (1998) R. B. Lehoucq, D. C. Sorensen, and C.
Yang, [*ARPACK User’s Guide: Solution to large scale eigenvalue problems with implicitly restarted Arnoldi methods*]{}, FORTRAN code available under http://www.caam.rice.edu/software/ARPACK/ L. Kaplan, Nonlinearity [**12**]{}, R1 (1999) L. P. Pitaevskii and A. Rosch, Phys. Rev. A [**55**]{}, R853 (1998). [^1]: The results obtained below extend to the three dimensional problem for $L=L_z$. [^2]: We note also that there is another symmetry group [@pi98], $SO(2,1)$, that produces relationships between energies of different states within a single $L$ subspace.
--- abstract: 'In earlier work [@HR05b], we proposed a logic that extends the Logic of General Awareness of Fagin and Halpern [-@FH] by allowing quantification over primitive propositions. This makes it possible to express the fact that an agent knows that there are some facts of which he is unaware. In that logic, it is not possible to model an agent who is uncertain about whether he is aware of all formulas. To overcome this problem, we keep the syntax of the earlier paper, but allow models where, with each world, a possibly different language is associated. We provide a sound and complete axiomatization for this logic and show that, under natural assumptions, the quantifier-free fragment of the logic is characterized by exactly the same axioms as the logic of Heifetz, Meier, and Schipper [-@HMS08b].' author: - | [**Joseph Y. Halpern**]{}\ Computer Science Department\ Cornell University\ Ithaca, NY, 14853, U.S.A.\ [email protected] [**Leandro C. Rêgo**]{}\ Statistics Department\ Federal University of Pernambuco\ Recife, PE, 50740-040, Brazil\ [email protected] bibliography: - 'z.bib' - 'joe.bib' title: Reasoning About Knowledge of Unawareness Revisited --- \[THEOREM\][Fact]{} INTRODUCTION {#intro} ============ Adding awareness to standard models of epistemic logic has been shown to be useful in describing many situations (see [@FH; @HMS03] for some examples). One of the best-known models of awareness is due to Fagin and Halpern [-@FH] (FH from now on). They add an awareness operator to the language, and associate with each world in a standard possible-worlds model of knowledge a set of formulas that each agent is aware of. They then say that an agent *explicitly knows* a formula $\phi$ if $\phi$ is true in all worlds that the agent considers possible (the traditional definition of knowledge, going back to Hintikka [-@Hi1]) and the agent is aware of $\phi$. 
In the economics literature, going back to the work of Modica and Rustichini [-@MR94; -@MR99] (MR from now on), a somewhat different approach is taken. A possibly different set ${{\cal L}}(s)$ of primitive propositions is associated with each world $s$. Intuitively, at world $s$, the agent is aware only of formulas that use the primitive propositions in ${{\cal L}}(s)$. A definition of knowledge is given in this framework, and the agent is said to be aware of $\phi$ if, by definition, $K_i \phi \lor K_i \neg K_i \phi$ holds. Heifetz, Meier, and Schipper [-@HMS03; -@HMS08b] (HMS from now on), extend the ideas of MR to a multiagent setting. This extension is nontrivial, requiring lattices of state spaces, with projection functions between them. As we showed in earlier work [@Hal34; @HR05], the work of MR and HMS can be seen as a special case of the FH approach, where two assumptions are made on awareness: awareness is [*generated by primitive propositions*]{}, that is, an agent is aware of a formula iff he is aware of all primitive propositions occurring in it, and agents know what they are aware of (so that they are aware of the same formulas in all worlds that they consider possible). As we pointed out in [@HR05b] (referred to as HR from now on), if awareness is generated by primitive propositions, then it is impossible for an agent to (explicitly) know that he is unaware of a specific fact. Nevertheless, an agent may well be aware that there are relevant facts that he is unaware of. For example, primary-care physicians know that specialists are aware of things that could improve a patient’s treatment that they are not aware of; investors know that investment fund companies may be aware of issues involving the financial market that could result in higher profits that they are not aware of. It thus becomes of interest to model knowledge of lack of awareness. 
HR does this by extending the syntax of the FH approach to allow quantification, making it possible to say that an agent knows that there exists a formula of which the agent is unaware. A complete axiomatization is provided for the resulting logic. Unfortunately, the logic has a significant problem if we assume the standard properties of knowledge and awareness: it is impossible for an agent to be uncertain about whether he is aware of all formulas. In this paper, we deal with this problem by considering the same language as in HR (so that we can express the fact that an agent knows that he is not aware of all formulas, using quantification), but using the idea of MR that there is a different language associated with each world. As we show, this slight change makes it possible for an agent to be uncertain about whether he is aware of all formulas, while still being aware of exactly the same formulas in all worlds he considers possible. We provide a natural complete axiomatization for the resulting logic. Interestingly, knowledge in this logic acts much like explicit knowledge in the original FH framework, if we take “awareness of $\phi$” to mean $K_i(\phi \lor \neg\phi)$; intuitively, this is true if all the primitive propositions in $\phi$ are part of the language at all worlds that $i$ considers possible. Under minimal assumptions, $K_i(\phi \lor \neg \phi)$ is shown to be equivalent to $K_i \phi \lor K_i \neg K_i \phi$: in fact, the quantifier-free fragment of the logic that just uses the $K_i$ operator is shown to be characterized by exactly the same axioms as the HMS approach, and awareness can be defined the same way. Thus, we can capture the essence of the MR and HMS approaches using simple semantics, while still being able to reason about knowledge of lack of awareness. Board and Chung [-@BC09] independently pointed out the problem of the HR model and proposed the solution of allowing different languages at different worlds.
They also consider a model of awareness with quantification, but they use first-order modal logic, so their quantification is over domain elements. Moreover, they take awareness with respect to domain elements, not formulas; that is, agents are (un)aware of objects (i.e., domain elements), not formulas. They also allow different domains at different worlds; more precisely, they allow an agent to have a subjective view of what the set of objects is at each world. Sillari [-@Sil08] uses much the same approach as Board and Chung [-@BC09]. That is, he has a first-order logic of awareness, where the quantification and awareness is with respect to domain elements, and also allows for different subjective domains at each world. The rest of the paper is organized as follows. In Section \[sec:awaofunawa\], we review the HR model of knowledge of unawareness. In Section \[sec:newmodel\], we present our new logic and axiomatize it in Section \[sec:axioms\]. In Section \[sec:compare\], we compare our logic with that of HMS and discuss awareness more generally. All proofs are left to the full paper, which can be found at www.cs.cornell.edu/home/halpern/papers/tark09.pdf. THE HR MODEL {#sec:awaofunawa} ============ In this section, we briefly review the relevant results of [@HR05b]. The syntax of the logic is as follows: given a set $\{1, \ldots, n\}$ of agents, formulas are formed by starting with a countable set $\Phi = \{p, q, \ldots\}$ of primitive propositions and a countable set $\X$ of variables, and then closing off under conjunction ($\land$), negation ($\neg$), the modal operators $K_i, A_i, X_i$, $i = 1, \ldots, n$. We also allow for quantification over variables, so that if $\phi$ is a formula, then so is $\forall x \phi$. Let ${{\cal L}^{\forall,K,X,A}_n}(\Phi,\X)$ denote this language and let ${{\cal L}^{K,X,A}_n}(\Phi)$ be the subset of formulas that do not mention quantification or variables.
As usual, we define $\phi \lor \psi$, $\phi \rimp \psi$, and $\exists x\varphi$ as abbreviations of $\neg (\neg \phi \land \neg \psi)$, $\neg \phi \lor \psi$, and $\neg\forall x\neg\varphi$, respectively. The intended interpretation of $A_i\varphi$ is “$i$ is aware of $\varphi$”. Essentially as in first-order logic, we can define inductively what it means for a variable $x$ to be [*free*]{} in a formula $\varphi$. Intuitively, an occurrence of a variable is free in a formula if it is not bound by a quantifier. A formula that contains no free variables is called a [*sentence*]{}. We are ultimately interested in sentences. If $\psi$ is a formula, let $\varphi[x/\psi]$ denote the formula that results by replacing all free occurrences of the variable $x$ in $\phi$ by $\psi$. (If there is no free occurrence of $x$ in $\varphi$, then $\varphi[x/\psi]=\varphi$.) In quantified modal logic, the quantifiers are typically taken to range over propositions (intuitively, sets of worlds), but this does not work in our setting because awareness is syntactic; when we write, for example, $\forall x A_i x$, we essentially mean that $A_i \phi$ holds for all *formulas* $\phi$. However, there is another subtlety. If we define $\forall x \phi$ to be true if $\phi[x/\psi]$ is true for *all* formulas $\psi$, then there are problems giving semantics to a formula such as $\phi = \forall x (x)$, since $\phi[x/\phi] = \phi$. We avoid these difficulties by taking the quantification to be over quantifier-free sentences. (See [@HR05b] for further discussion.) We give semantics to sentences in ${{\cal L}^{\forall,K,X,A}_n}(\Phi,\X)$ in awareness structures. 
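The substitution $\varphi[x/\psi]$ can be sketched concretely. The encoding below is our own (a hypothetical tuple-based AST, not notation from the paper): it replaces exactly the free occurrences of $x$, leaving occurrences bound by a quantifier untouched. Since the logic quantifies only over quantifier-free *sentences*, the substituted $\psi$ has no free variables, so variable capture cannot arise; the shadowing case is handled anyway for safety.

```python
# Formulas as tuples: ('prop', name), ('var', name), ('not', f), ('and', f, g),
# ('K'|'A'|'X', agent, f), ('forall', var, f).  Hypothetical encoding.

def subst(phi, x, psi):
    """Return phi[x/psi]: replace free occurrences of variable x by psi."""
    op = phi[0]
    if op == 'var':
        return psi if phi[1] == x else phi
    if op == 'prop':
        return phi
    if op == 'forall':
        y, body = phi[1], phi[2]
        if y == x:                      # x is bound here: nothing below is free
            return phi
        return ('forall', y, subst(body, x, psi))
    if op == 'not':
        return ('not', subst(phi[1], x, psi))
    if op == 'and':
        return ('and', subst(phi[1], x, psi), subst(phi[2], x, psi))
    if op in ('K', 'A', 'X'):           # modal operators carry an agent index
        return (op, phi[1], subst(phi[2], x, psi))
    raise ValueError(op)

p = ('prop', 'p')
phi = ('forall', 'y', ('and', ('A', 1, ('var', 'x')), ('var', 'y')))
print(subst(phi, 'x', p))
# the free x is replaced by p; the bound y is untouched
```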
A tuple $M =({S}$, $\pi$, ${\cal K}_1$, $\ldots$, ${\cal K}_n$, ${\cal A}_1$, $\dots$, ${\cal A}_n)$ is an [*awareness structure for $n$ agents (over $\Phi$)*]{} if ${S}$ is a set of worlds, $\pi: {S}\times \Phi \rightarrow \{{\bf true},{\bf false}\}$ is an interpretation that determines which primitive propositions are true at each world, ${\cal K}_i$ is a binary relation on ${S}$ for each agent $i = 1, \ldots, n$, and ${\cal A}_i$ is a function associating a set of sentences with each world in $S$, for $i= 1,...,n$. Intuitively, if $(s,t) \in \K_i$, then agent $i$ considers world $t$ possible at world $s$, while ${\cal A}_i(s)$ is the set of sentences that agent $i$ is aware of at world $s$. We are often interested in awareness structures where the $\K_i$ relations satisfy some properties of interest, such as reflexivity, transitivity, or the *Euclidean* property (if $(s,t), (s,u) \in \K_i$, then $(t,u) \in \K_i$). It is well known that these properties of the relation correspond to properties of knowledge of interest (see Theorem \[thm:awofunaaxiomswithoutK\] and the following discussion). We often abuse notation and define $\K_i(s) = \{t: (s,t) \in \K_i\}$, thus writing $t \in \K_i(s)$ rather than $(s,t) \in \K_i$. This notation allows us to view a binary relation $\K_i$ on ${S}$ as a *possibility correspondence*, that is, a function from ${S}$ to $2^{{S}}$. (The use of possibility correspondences is more standard in the economics literature than binary relations, but they are clearly essentially equivalent.) Semantics is given to sentences in ${{\cal L}^{\forall,K,X,A}_n}(\Phi,\X)$ by induction on the number of quantifiers, with a subinduction on the length of the sentence. Truth for primitive propositions, for $\neg$, and for $\wedge$ is defined in the usual way. 
The other cases are defined as follows:[^1] $$\begin{array}{l} (M,s)\sat K_i \varphi \mbox{ if } (M,t)\sat \varphi \mbox{ for all }t \in \K_i(s) \\ (M,s)\sat A_i\varphi\mbox{ if }\varphi\in {\cal A}_i(s)\\ (M,s)\sat X_i\varphi\mbox{ if }(M,s)\sat A_i\varphi\mbox{ and }(M,s)\sat K_i\varphi \\ (M,s)\sat \forall x\varphi\mbox{ if }(M,s) \sat \phi[x/\psi], \forall\psi \in {{\cal L}^{K,X,A}_n}(\Phi). \end{array}$$ There are two standard restrictions on agents’ awareness that capture the assumptions typically made in the game-theoretic literature [@MR99; @HMS03; @HMS08b]. We describe these here in terms of the awareness function, and then characterize them axiomatically. - Awareness is [*generated by primitive propositions (agpp)* ]{} if, for all agents $i$, $\phi \in \A_i(s)$ iff all the primitive propositions that appear in $\phi$ are in $\A_i(s) \inter \Phi$. - [*Agents know what they are aware of (ka)* ]{} if, for all agents $i$ and all worlds $s,t$ such that $(s,t) \in \K_i$ we have that $\A_i(s)~=~\A_i(t)$. For ease of exposition, we restrict in this paper to structures that satisfy ${\mathit{agpp}}$ and ${\mathit{ka}}$. If $C$ is a (possibly empty) subset of $\{r,t,e\}$, then $\M_n^{C}(\Phi,\X)$ is the set of all awareness structures such that awareness satisfies ${\mathit{agpp}}$ and ${\mathit{ka}}$ and the possibility correspondence is reflexive ($r$), transitive ($t$), and Euclidean ($e$) if these properties are in $C$. A sentence $\varphi\in {{\cal L}^{\forall,K,X,A}_n}(\Phi,\X)$ is said to be [*valid*]{} in awareness structure $M$, written $M \sat \phi$, if $(M,s)\not\sat\neg\varphi$ for all $s\in S$. (This notion is called *weak validity* in [@HR05]. For the semantics we are considering here, weak validity is equivalent to the standard notion of validity, where a formula is valid in an awareness structure if it is true at all worlds in that structure. 
However, in the next section, we modify the semantics to allow some formulas to be undefined at some worlds; with this change, the two notions do not coincide. As we use weak validity in the next section, we use the same definition here for the sake of uniformity.) A sentence is valid in a class $\M$ of awareness structures, written $\M \sat \phi$, if it is valid for all awareness structures in $\M$, that is, if $M \sat \phi$ for all $M \in \M$. In [@HR05b], we gave sound and complete axiomatizations for both the language ${{\cal L}^{\forall,K,X,A}_n}(\Phi,\X)$ and the language ${{\cal L}^{\forall,X,A}_n}(\Phi,\X)$, which does not mention the implicit knowledge operator $K_i$ (and the quantification is thus only over sentences in ${{\cal L}^{X,A}_n}(\Phi)$). The latter language is arguably more natural (since agents do not have access to the implicit knowledge modeled by $K_i$), but some issues become clearer when considering both. We start by describing axioms for the language ${{\cal L}^{\forall,K,X,A}_n}(\Phi,\X)$, and then describe how they are modified to deal with ${{\cal L}^{\forall,X,A}_n}(\Phi,\X)$. Given a formula $\phi$, let $\Phi(\phi)$ be the set of primitive propositions in $\Phi$ that occur in $\phi$. [Prop.]{} : All substitution instances of valid formulas of propositional logic. [AGPP.]{} : $A_i\phi \dimp \land_{p\in\Phi(\phi)} A_i p.$[^2] [KA.]{} : $A_i \phi \rimp K_i A_i \phi$ [NKA.]{} : $\neg A_i \phi \rimp K_i \neg A_i \phi$ [K.]{} : $(K_i\varphi\land K_i(\varphi\rimp\psi))\rimp K_i\psi$. [T.]{} : $K_i\varphi\rimp \varphi$. [4.]{} : $K_i\varphi\rimp K_iK_i\varphi$. [5.]{} : $\neg K_i\varphi\rimp K_i\neg K_i\varphi$. [A0.]{} : $X_i\varphi\dimp K_i\varphi\land A_i\varphi$. [$1_{\forall}$.]{} : $\forall x\varphi\rimp\varphi[x/\psi]$ if $\psi$ is a quantifier-free sentence. [${\rm K}_{\forall}$.]{} : $\forall x(\varphi\rimp\psi)\rimp (\forall x \varphi\rimp\forall x\psi)$. 
[${\rm N}_{\forall}$.]{} : $\varphi\rimp\forall x\varphi$ if $x$ is not free in $\varphi$. [Barcan.]{} : $\forall xK_i\varphi\rimp K_i\forall x\varphi$. [MP.]{} : [F]{}rom $\varphi$ and $\varphi\rimp\psi$ infer $\psi$ (modus ponens). [Gen$_K$.]{} : [F]{}rom $\varphi$ infer $K_i \varphi$. [Gen$_{\forall}$.]{} : If $q$ is a primitive proposition, then from $\phi$ infer $\forall x\varphi[q/x]$. Axioms Prop, K, T, 4, 5 and inference rules MP and Gen$_K$ are standard in epistemic logics. A0 captures the relationship between explicit knowledge, implicit knowledge and awareness. Axioms 1$_\forall$, K$_\forall$, N$_\forall$ and inference rules Gen$_\forall$ are standard for propositional quantification.[^3] The Barcan axiom, which is well-known in first-order modal logic, captures the relationship between quantification and $K_i$. Axioms AGPP, KA, and NKA capture the properties of awareness being generated by primitive propositions and agents knowing which formulas they are aware of. Let ${\mathrm{AX}^{K,X,A,\forall}}$ be the axiom system consisting of all the axioms and inference rules in $\{$Prop, AGPP, KA, NKA, K, A0, 1$_\forall$, K$_\forall$, N$_\forall$, Barcan, MP, Gen$_K$, Gen$_\forall\}$. The language ${{\cal L}^{\forall,X,A}_n}$ without the modal operators $K_i$ has an axiomatization that is similar in spirit. Let K$_X$, T$_X$, 4$_X$, XA, and Barcan$_X$ be the axioms that result by replacing the $K_i$ in K, T, 4, KA, and Barcan, respectively, by $X_i$. Let 5$_X$ and Gen$_X$ be the axioms that result from adding awareness to 5 and Gen$_K$: [5$_{X}$.]{} : $(\neg X_i\varphi \land A_i\varphi) \rimp X_i\neg X_i\varphi$. [Gen$_X$.]{} : [F]{}rom $\varphi$ infer $A_i\varphi\Rightarrow X_i \varphi$. The analogue of axiom NKA written in terms of $X_i$, $\neg A_i \phi \rimp X_i \neg A_i \phi$, is not valid. 
To get completeness in models where agents know what they are aware of, we need the following axiom, FA$_X$, which can be viewed as a weakening of NKA: $\neg \forall x A_i x\rimp X_i \neg \forall x A_i x$. Finally, consider the following axiom that captures the relation between explicit knowledge and awareness: [A0$_X$.]{} : $X_i\phi\rimp A_i\phi$. Let ${\mathrm{AX}^{X,A,\forall}}$ be the axiom system consisting of all the axioms and inference rules in $\{$Prop, AGPP, XA, FA$_X$, K$_X$, A0$_X$, 1$_\forall$, K$_\forall$, N$_\forall$, Barcan$_X$, MP, Gen$_X$, Gen$_\forall\}$. The following result shows that the semantic properties $r, t, e$ are captured by the axioms T, 4, and 5, respectively in the language ${{\cal L}^{\forall,K,X,A}_n}$; similarly, these same properties are captured by T$_X$, 4$_X$, and 5$_X$ in the language ${{\cal L}^{\forall,X,A}_n}$. \[thm:awofunaaxiomswithoutK\] [[@HR05b]]{} If $\C$ (resp., $\C_X$) is a (possibly empty) subset of $\{\rm{T}, 4, 5\}$ (resp., $\{\rm{T}_X, 4_X, 5_X\}$) and if $C$ is the corresponding subset of $\{r, t, e\}$ then ${\mathrm{AX}^{K,X,A,\forall}}\union \C$ (resp., ${\mathrm{AX}^{X,A,\forall}}\union \C_X$) is a sound and complete axiomatization of the sentences in ${{\cal L}^{\forall,K,X,A}_n}(\Phi,\X)$ (resp. ${{\cal L}^{\forall,X,A}_n}(\Phi,\X)$) with respect to $\M_n^{C}(\Phi,\X)$. Consider the formula $\psi= \neg X_i \neg \forall x A_ix\land \neg X_i\forall x A_ix.$ The formula $\psi$ says that agent $i$ considers it possible that she is aware of all formulas and also considers it possible that she is not aware of all formulas. It is not hard to show that $\psi$ is not satisfiable in any structure in $\M(\Phi,\X)$, so $\neg \psi$ is valid in awareness structures in $\M(\Phi,\X)$. It seems reasonable that an agent can be uncertain about whether there are formulas he is unaware of.
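The unsatisfiability of $\psi$ can be checked mechanically on small models. The sketch below is our own encoding (three worlds, a fixed two-proposition language): under ${\mathit{agpp}}$, $\forall x A_i x$ holds at a world iff the agent is aware of every primitive proposition, and since $\forall x A_i x$ mentions no primitive propositions the agent is always aware of it, so $\neg X_i$ reduces to $\neg K_i$. Enumerating every awareness assignment and every accessibility relation satisfying ${\mathit{ka}}$ confirms that $\psi$ never holds:

```python
from itertools import product

PROPS = frozenset({'p', 'q'})   # one fixed language, as in the HR model
WORLDS = range(3)

def models():
    # all assignments of awareness sets (subsets of PROPS) and K-relations
    subsets = [frozenset(), frozenset({'p'}), frozenset({'q'}), PROPS]
    for aware in product(subsets, repeat=len(WORLDS)):
        for bits in product([False, True], repeat=len(WORLDS) ** 2):
            K = {s: {t for t in WORLDS if bits[s * len(WORLDS) + t]}
                 for s in WORLDS}
            yield aware, K

def satisfies_ka(aware, K):
    # agents know what they are aware of: same awareness at accessible worlds
    return all(aware[t] == aware[s] for s in WORLDS for t in K[s])

def psi_holds(aware, K, s):
    # under agpp, "forall x A_i x" holds at t iff aware[t] == PROPS; psi says
    # the agent considers possible both a world where it holds and one where not
    vals = [aware[t] == PROPS for t in K[s]]
    return any(vals) and not all(vals)

found = any(psi_holds(aware, K, s)
            for aware, K in models() if satisfies_ka(aware, K)
            for s in WORLDS)
print(found)  # False: psi is unsatisfiable when every world shares one language
```

The reason is visible in `psi_holds`: with ${\mathit{ka}}$ and a single language, the truth value of $\forall x A_i x$ is the same at every world the agent considers possible.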
In the next section, we show that a slight modification of the HR approach, using ideas of MR, allows this, while still maintaining the desirable properties of the HR approach. THE NEW MODEL {#sec:newmodel} ============= We keep the syntax of Section \[sec:awaofunawa\], but, following MR, we allow different languages to be associated with different worlds. Define an [*extended awareness structure for $n$ agents (over $\Phi$)*]{} to be a tuple $M =(S$, ${{\cal L}}$, $\pi$, ${\cal K}_1$, $\ldots$, ${\cal K}_n,{\cal A}_1$, $\dots$, ${\cal A}_n)$, where $M =(S$, $\pi$, ${\cal K}_1$, $\ldots$, ${\cal K}_n,{\cal A}_1$, $\dots$, ${\cal A}_n)$ is an awareness structure and ${{\cal L}}$ maps worlds in $S$ to nonempty subsets of $\Phi$. Intuitively, ${{\cal L}^{\forall,K,X,A}_n}({{\cal L}}(s),\X)$ is the language associated with world $s$. We require that $\A_i(s) \subseteq {{\cal L}^{\forall,K,X,A}_n}({{\cal L}}(s),\X)$, so that an agent can be aware only of sentences that are in the language of the current world. We still want to require that ${\mathit{agpp}}$ and ${\mathit{ka}}$ hold; this means that if $(s,t) \in \K_i$, then $\A_i(s) \subseteq {{\cal L}^{\forall,K,X,A}_n}({{\cal L}}(t),\X)$. But ${{\cal L}}(t)$ may well include primitive propositions that the agent is not aware of at $s$. It may at first seem strange that an agent considers possible a world whose language includes formulas of which he is not aware. (Note that, in general, this happens in the HR approach too, even though there we require that ${{\cal L}}(s) = {{\cal L}}(t)$.) But, in the context of knowledge of lack of awareness, there is an easy explanation for this: the fact that $\A_i(s)$ is a strict subset of the sentences in ${{\cal L}^{\forall,K,X,A}_n}({{\cal L}}(t),\X)$ is just our way of modeling that the agent considers it possible that there are formulas of which he is unaware; he can even “name” or “label” these formulas, although he may not understand what the names refer to.
If the agent considers possible a world $t$ where $\A_i(s)$ consists of every sentence in ${{\cal L}^{\forall,K,X,A}_n}({{\cal L}}(t),\X)$, then the agent considers it possible that he is aware of all formulas. The formula $\psi$ in Section \[sec:awaofunawa\] is satisfied at a world $s$ where agent $i$ considers possible a world $t_1$ such that $\A_i(s)$ consists of all sentences in ${{\cal L}^{\forall,K,X,A}_n}({{\cal L}}(t_1),\X)$ and a world $t_2$ such that $\A_i(s)$ does not contain some sentence in ${{\cal L}^{\forall,K,X,A}_n}({{\cal L}}(t_2),\X)$. Note that we can also describe worlds where agent 1 considers it possible that agents 2 and 3 are aware of the same formulas, although both are aware of formulas that he (1) is not aware of, and other more complicated relationships between the awareness of agents. See Section \[sec:compare\] for further discussion of awareness of unawareness in this setting. The truth relation is defined for formulas in ${{\cal L}^{\forall,K,X,A}_n}(\Phi,\X)$ just as in Section \[sec:awaofunawa\], except that for a formula $\phi$ to be true at a world $s$, we also require that $\phi \in {{\cal L}^{\forall,K,X,A}_n}({{\cal L}}(s),\X)$, so we just add this condition everywhere. Thus, for example, - $(M,s)\sat p$ if $p\in{{\cal L}}(s)$ and $\pi(s,p)= {\bf true}$; - $(M,s)\sat \neg \phi$ if $\phi\in {{\cal L}^{\forall,K,X,A}_n}({{\cal L}}(s),\X)$ and $(M,s)\not\sat \phi$. - $(M,s)\sat \forall x\phi$ if $\forall x\phi\in {{\cal L}^{\forall,K,X,A}_n}({{\cal L}}(s),\X)$ and $(M,s) \sat \phi[x/\psi]$ for all $\psi \in {{\cal L}^{K,X,A}_n}({{\cal L}}(s))$. We leave it to the reader to make the obvious changes to the remaining clauses. If $C$ is a (possibly empty) subset of $\{r,t,e\}$, let $\N_n^{C}(\Phi,\X)$ be the set of all extended awareness structures such that awareness satisfies ${\mathit{agpp}}$ and ${\mathit{ka}}$ and the possibility correspondence is reflexive, transitive, and Euclidean if these properties are in $C$.
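The truth clauses above can be checked mechanically on small finite structures. The following Python sketch is our own encoding, not part of the formal development: it implements the clauses for a fragment of the language, instantiates the quantifier with primitive propositions only (which suffices for bodies of the form $A_i x$, since awareness is generated by primitive propositions), and verifies that the formula $\psi$ of Section \[sec:awaofunawa\] is satisfied at a world $s$ with successors $t_1$ and $t_2$ as just described.

```python
# A miniature model checker for extended awareness structures (our own
# encoding, for illustration only).  Formulas are tuples; the fragment
# covers primitives, not, and, A_i, K_i, X_i, and forall x.  Simplifying
# assumption: forall x is instantiated with primitive propositions of
# L(s) only, which suffices for bodies of the form A_i x under agpp.

X_VAR = ('x',)  # the quantified variable

def props(f):
    """Primitive propositions occurring in formula f."""
    if f == X_VAR:
        return set()
    if f[0] == 'p':
        return {f[1]}
    return set().union(*(props(g) for g in f[1:] if isinstance(g, tuple)))

def subst(f, b):
    """f with the variable x replaced by formula b."""
    if f == X_VAR:
        return b
    return tuple(subst(g, b) if isinstance(g, tuple) else g for g in f)

def sat(M, s, f):
    """(M,s) |= f; truth requires f to be in the language of s."""
    if not props(f) <= M['L'][s]:
        return False
    op = f[0]
    if op == 'p':
        return M['pi'][s][f[1]]
    if op == 'not':
        return not sat(M, s, f[1])
    if op == 'and':
        return sat(M, s, f[1]) and sat(M, s, f[2])
    if op == 'A':                       # agpp: aware iff all primitives are
        return props(f[2]) <= M['Aw'][f[1]]
    if op == 'K':
        return all(sat(M, t, f[2]) for t in M['K'][f[1]][s])
    if op == 'X':                       # X_i phi  ==  A_i phi  and  K_i phi
        return sat(M, s, ('A', f[1], f[2])) and sat(M, s, ('K', f[1], f[2]))
    if op == 'forall':
        return all(sat(M, s, subst(f[1], ('p', q))) for q in M['L'][s])
    raise ValueError(op)

# L(s) = L(t1) = {p}, L(t2) = {p, q}; agent 1 is aware exactly of p at
# every world (so ka holds); s considers t1 and t2 possible.
M = {'L':  {'s': {'p'}, 't1': {'p'}, 't2': {'p', 'q'}},
     'pi': {'s': {'p': True}, 't1': {'p': True}, 't2': {'p': True, 'q': True}},
     'K':  {1: {'s': {'t1', 't2'}, 't1': set(), 't2': set()}},
     'Aw': {1: {'p'}}}

FORALL_A = ('forall', ('A', 1, X_VAR))          # forall x A_1 x
psi = ('and', ('not', ('X', 1, ('not', FORALL_A))),
              ('not', ('X', 1, FORALL_A)))

assert sat(M, 't1', FORALL_A) and not sat(M, 't2', FORALL_A)
assert sat(M, 's', psi)                          # psi is satisfiable here
```

Here $t_1$ plays the role of the world where the agent is aware of everything in the language, and $t_2$ the role of the world with an extra, unfamiliar label $q$.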
We say that a formula $\phi$ is *valid in a class $\N$ of extended awareness structures* if, for all extended awareness structures $M\in \N$ and worlds $s$ such that $\Phi(\phi) \subseteq {{\cal L}}(s)$, $(M,s) \sat \phi$. (This is essentially the notion of weak validity defined in [@HR05].) AXIOMATIZATION {#sec:axioms} ============== In this section, we provide a sound and complete axiomatization of the logics described in the previous section. It turns out to be easier to start with the language ${{\cal L}^{\forall,X,A}_n}(\Phi,\X)$. All the axioms and inference rules of ${\mathrm{AX}^{X,A,\forall}}$ continue to be sound in extended awareness structures, except for Barcan$_X$ and FA$_X$. In a world $s$ where ${{\cal L}}(s) = \{p\}$ and agent $i$ is aware of $p$, it is easy to see that $\forall x X_i A_i x$ holds. But if agent $i$ considers possible a world $t$ such that ${{\cal L}}(t) = \{p,q\}$, it is easy to see that $X_i \forall x A_i x$ does not hold at $s$. Similarly, if in world $t$, agent $i$ considers $s$ possible, then $\neg \forall x A_i x$ holds at $t$, but $X_i \neg \forall x A_i x$ does not. Thus, Barcan$_X$ does not hold at $s$, and FA$_X$ does not hold at $t$. We instead use the following variants of Barcan$_X$ and FA$_X$, which are sound in this framework: [Barcan$^*_X$.]{} : $(A_i (\forall x \phi) \land \forall x (A_i x \rimp X_i \phi)) \rimp X_i (\forall x A_i x \rimp \forall x \phi)$. [FA$^*_X$.]{} : $\forall x \neg A_i x \rimp X_i \forall x \neg A_i x$. Let ${\mathrm{AX}^{X,A,\forall}_e}$ be the result of replacing FA$_X$ and Barcan$_X$ in ${\mathrm{AX}^{X,A,\forall}}$ by FA$_X^*$ and Barcan$_X^*$ (the $e$ here stands for “extended”). \[thm:compwithoutK\] If $\C_X$ is a (possibly empty) subset of $\{\rm{T}_X, 4_X, 5_X\}$ and $C$ is the corresponding subset of $\{r,t, e\}$, then ${\mathrm{AX}^{X,A,\forall}_e}\union \C_X$ is a sound and complete axiomatization of the language ${{\cal L}^{\forall,X,A}_n}(\Phi,\X)$ with respect to $\N_n^{C}(\Phi,\X)$.
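The two-world countermodel for Barcan$_X$ and FA$_X$ just described can be verified by direct computation. The following Python sketch uses our own encoding (none of these names appear in the formal development); awareness is generated by primitive propositions and is the same at both worlds, so ${\mathit{ka}}$ holds, and quantifiers are instantiated with primitive propositions only.

```python
# Direct check of the countermodel: L(s) = {p}, L(t) = {p, q}, agent 1
# aware exactly of p at both worlds (ka), and s, t consider each other
# possible.  Quantifiers are instantiated with primitive propositions,
# which suffices for bodies of the form A_1 x under agpp.  Our encoding.

L  = {'s': {'p'}, 't': {'p', 'q'}}
Aw = {'p'}                               # A_1, the same at s and t
K1 = {'s': {'t'}, 't': {'s'}}

def forall_A1(w):
    """(M,w) |= forall x A_1 x."""
    return all(q in Aw for q in L[w])

def X1(pred, w):
    """X_1 phi for a phi with no primitive propositions: awareness of phi
    is automatic under agpp, so only the K_1 part matters."""
    return all(pred(v) for v in K1[w])

# forall x X_1 A_1 x holds at s: every q in L(s) is in Aw, and Aw is the
# same at the world t that agent 1 considers possible.
assert all(q in Aw for q in L['s'])                 # forall x X_1 A_1 x at s
assert not X1(forall_A1, 's')                       # X_1 forall x A_1 x fails at s
# ... so Barcan_X fails at s; and FA_X fails at t:
assert not forall_A1('t')                           # ~forall x A_1 x holds at t
assert not X1(lambda v: not forall_A1(v), 't')      # X_1 ~forall x A_1 x fails at t
```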
The completeness proof is similar in spirit to that of HR, with some additional complications arising from the interaction between quantification and the fact that different languages are associated with different worlds. What is surprisingly difficult in this case is soundness, specifically, for MP. For suppose that $M$ is a structure in $\N_n(\Phi,\X)$ such that neither $\neg \phi$ nor $\neg(\phi \rimp \psi)$ is true at any world in $M$. We want to show that $\neg \psi$ is not true at any world in $M$. This is easy to show if $\Phi(\phi) \subseteq \Phi(\psi)$. For if $s$ is a world such that $\Phi(\psi) \subseteq {{\cal L}}(s)$, it must be the case that both $\phi$ and $\phi \rimp \psi$ are true at $s$, and hence so is $\psi$. However, if $\phi$ has some primitive propositions that are not in $\psi$, it is a priori possible that $\neg \psi$ holds at a world where neither $\phi$ nor $\phi \rimp \psi$ is defined. Indeed, this can happen if $\Phi$ is finite. For example, if $\Phi = \{p,q\}$, then it is easy to construct a structure $M \in \N_n(\Phi,\X)$ where both $A_ip \land A_i q$ and $(A_i p \land A_i q) \rimp \forall x A_i x$ are never false, but $\forall x A_i x$ is false at some world in $M$. As we show, this cannot happen if $\Phi$ is infinite. This in turn involves proving a general substitution property: if $\phi$ is valid and $\psi$ is a quantifier-free sentence, then $\phi[q/\psi]$ is valid. (We remark that the substitution property also fails if $\Phi$ is finite.) See the appendix for details. Proofs for all other results stated in this abstract can also be found in the full paper. Using different languages has a greater impact on the axioms for $K_i$ than it does for $X_i$. For example, as we would expect, Barcan does not hold, for essentially the same reason that Barcan$_X$ does not hold. More interestingly, NKA, 5, and Gen$_K$ do not hold either.
For example, if $\neg K_i p$ is true at a world $s$ because $p \notin {{\cal L}}(t)$ for some world $t$ that $i$ considers possible at $s$, then $K_i \neg K_i p$ will not hold at $s$, even if the $\K_i$ relation is an equivalence relation. Indeed, the properties of $K_i$ in this framework become quite close to the properties of the explicit knowledge operator $X_i$ in the original FH framework, provided we define the appropriate variant of awareness. Let $A_i^*(\phi)$ be an abbreviation for the formula $K_i(\phi \lor \neg \phi)$. Intuitively, the formula $A_i^*(\phi)$ captures the property that $\phi$ is defined at all worlds considered possible by agent $i$. Let AGPP$^*$, XA$^*$, A0$^*$, 5$^*$, Barcan$^*$, FA$^*$, and Gen$^*$ be the result of replacing $X_i$ by $K_i$ and $A_i$ by $A_i^*$ in AGPP, XA, A0$_X$, 5$_X$, Barcan$^*_X$, FA$^*_X$, and Gen$_X$, respectively. It is easy to see that AGPP$^*$, A0$^*$, and Gen$^*$ are valid in extended awareness structures; XA$^*$, 5$^*$, Barcan$^*$, and FA$^*$ are not. For example, suppose that $p$ is defined in all worlds that agent $i$ considers possible at $s$, so that $A_i^*p$ holds at $s$. If there is some world $t$ that agent $i$ considers possible at $s$ and a world $u$ that agent $i$ considers possible at $t$ where $p$ is not defined, then $A_i^*p$ does not hold at $t$, so $K_i A_i^*p$ does not hold at $s$. It is easy to show that XA$^*$ holds if the $\K_i$ relation is transitive. Similar arguments show that 5$^*$, Barcan$^*$, and FA$^*$ do not hold in general, but are valid if $\K_i$ is Euclidean and (in the case of Barcan$^*$ and FA$^*$) reflexive. We summarize these observations in the following proposition: \[pro:soundness\] - XA$^*$ is valid in $\N_n^{t}(\Phi,\X)$. - Barcan$^*$ is valid in $\N_n^{r,e}(\Phi,\X)$. - FA$^*$ is valid in $\N_n^{r,e}(\Phi,\X)$. - 5$^*$ is valid in $\N_n^{e}(\Phi,\X)$. 
In light of Proposition \[pro:soundness\], for ease of exposition, we restrict attention for the rest of this section to structures in $\N_n^{r,t,e}(\Phi,\X)$. Assuming that the possibility relation is an equivalence relation is standard when modeling knowledge in any case. Let ${\mathrm{AX}^{K,X,A,A^*,\forall}_e}$ be the result of replacing Gen$_K$ and Barcan in ${\mathrm{AX}^{K,X,A,\forall}}$ by Gen$^*$ and Barcan$^*$, respectively, and adding the axioms AGPP$^*$, A0$^*$, and FA$^*$ for reasoning about $A_i^*$. (We do not need the axiom XA$^*$; it follows from 4 in transitive structures.) Let ${\mathrm{AX}^{K,A^*,\forall}_e}$ consist of the axioms in ${\mathrm{AX}^{K,X,A,A^*,\forall}_e}$ except for those that mention $X_i$ or $A_i$; that is, ${\mathrm{AX}^{K,A^*,\forall}_e}= {\mathrm{AX}^{K,X,A,A^*,\forall}_e}- \{$AGPP, KA, NKA, A0$\}$. Note that ${\mathrm{AX}^{K,A^*,\forall}_e}$ is the result of replacing $X_i$ by $K_i$ and $A_i$ by $A_i^*$ in ${\mathrm{AX}^{X,A,\forall}_e}$ (except that the analogue of XA is not needed). Finally, let ${\mathrm{AX}^{K,A^*}_e}$ consist of the axioms and rules in ${\mathrm{AX}^{K,A^*,\forall}_e}$ except for the ones that mention quantification; that is, ${\mathrm{AX}^{K,A^*}_e}= \{$Prop, AGPP$^*$, K, Gen$^*$, A0$^*\}$. We use ${\mathrm{AX}^{K,A^*}_e}$ to compare our results to those of HMS. \[thm:compwithK\] - ${\mathrm{AX}^{K,X,A,A^*,\forall}_e}\union\{\rm{T},4,5^*\}$ is a sound and complete axiomatization of the sentences in ${{\cal L}^{\forall,K,X,A}_n}(\Phi,\X)$ with respect to $\N_n^{r,e, t}(\Phi,\X)$. - ${\mathrm{AX}^{K,A^*,\forall}_e}\union\{\rm{T},4,5^*\}$ is a sound and complete axiomatization of the sentences in ${{\cal L}^{\forall,K}_n}(\Phi,\X)$ with respect to $\N_n^{r,t,e}(\Phi,\X)$. - ${\mathrm{AX}^{K,A^*}_e}\union\{\rm{T},4,5^*\}$ is a sound and complete axiomatization of ${{\cal L}^{K}_n}(\Phi)$ with respect to $\N_n^{r,t,e}(\Phi)$. 
Since, as we observed above, ${\mathrm{AX}^{K,A^*,\forall}_e}$ is essentially the result of replacing $X_i$ by $K_i$ and $A_i$ by $A_i^*$ in ${\mathrm{AX}^{X,A,\forall}_e}$, Theorem \[thm:compwithK\](b) makes precise the sense in which $K_i$ acts like $X_i$ with respect to $A_i^*$. DISCUSSION {#sec:compare} ========== Just as in our framework, in the HMS and MR approaches, a (propositional) language is associated with each world. However, HMS and MR define awareness of $\phi$ as an abbreviation of $K_i\phi \lor K_i\neg K_i\phi$. In order to compare our approach to that of HMS and MR, we first compare the definitions of awareness. Let $A_i'\phi$ be an abbreviation for the formula $K_i\phi \lor K_i\neg K_i\phi$. The following result says that for extended awareness structures that are Euclidean, $A_i^*\phi$ is equivalent to $A_i'\phi$. \[prop:equivdef\] If $M =(S,{{\cal L}},\pi,{\cal K}_1,...,{\cal K}_n,$ ${\cal A}_1,\dots,{\cal A}_n)$ is a Euclidean extended awareness structure, then for all $s\in S$ and all sentences $\phi\in{{\cal L}^{\forall,K,X,A}_n}(\Phi,\X)$, $$(M,s)\sat A_i^*\phi \dimp A_i'\phi.$$ First suppose that $(M,s)\sat A_i^*\phi$. If $(M,s)\sat K_i\phi$, then clearly $(M,s)\sat A_i'\phi$, so suppose that $(M,s)\sat K_i(\phi\lor\neg \phi) \land \neg K_i \phi$. It follows that $\Phi(\phi)\subseteq {{\cal L}}(s)$, that $\Phi(\phi)\subseteq {{\cal L}}(t)$ for all $t$ such that $(s,t) \in \K_i$, and that there exists a world $t$ such that $(s,t)\in\K_i$ and $(M,t)\sat \neg\phi$. Let $u$ be an arbitrary world such that $(s,u)\in \K_i$. Since $\K_i$ is Euclidean, it follows that $(u,t)\in\K_i$. Thus, $(M,u)\sat \neg K_i\phi$, so $(M,s)\sat K_i\neg K_i\phi$. It follows that $(M,s) \sat A_i' \phi$, as desired. For the converse, suppose that $(M,s)\sat A_i'\phi$. If either $(M,s)\sat K_i\phi$ or $(M,s)\sat K_i \neg K_i\phi$, then $\Phi(\phi)\subseteq {{\cal L}}(s)$, and if $(s,t)\in\K_i$, we have that $\Phi(\phi)\subseteq {{\cal L}}(t)$. Therefore, $(M,s)\sat A_i^*\phi$.
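Proposition \[prop:equivdef\] can be sanity-checked on small Euclidean structures. The encoding below is ours, for illustration only: each formula carries its set of primitive propositions so that the definedness condition can be enforced, and $\phi$ is taken to be a primitive proposition $p$.

```python
# Formulas are dicts carrying their primitive propositions, so that the
# definedness condition (props(f) contained in L(s)) can be enforced.

def sat(M, s, f):
    """Truth at s requires the primitives of f to be in L(s)."""
    if not f['props'] <= M['L'][s]:
        return False
    return f['eval'](M, s)

def prim(p):
    return {'props': {p}, 'eval': lambda M, s: M['pi'][s][p]}

def Not(f):
    return {'props': f['props'], 'eval': lambda M, s: not sat(M, s, f)}

def Or(f, g):
    return {'props': f['props'] | g['props'],
            'eval': lambda M, s: sat(M, s, f) or sat(M, s, g)}

def K1(f):
    return {'props': f['props'],
            'eval': lambda M, s: all(sat(M, t, f) for t in M['K'][s])}

def Astar(f):  return K1(Or(f, Not(f)))          # A*_1 f = K_1(f v ~f)
def Aprime(f): return Or(K1(f), K1(Not(K1(f))))  # A'_1 f = K_1 f v K_1 ~K_1 f

p = prim('p')
Krel = {'s': {'t', 'u'}, 't': {'t', 'u'}, 'u': {'t', 'u'}}   # Euclidean

# Model 1: p defined everywhere, true at t, false at u.
M1 = {'L': {w: {'p'} for w in 'stu'}, 'K': Krel,
      'pi': {'s': {'p': True}, 't': {'p': True}, 'u': {'p': False}}}
# Model 2: p undefined at u.
M2 = {'L': {'s': {'p'}, 't': {'p'}, 'u': {'q'}}, 'K': Krel,
      'pi': {'s': {'p': True}, 't': {'p': True}, 'u': {'q': True}}}

for M in (M1, M2):
    for w in 'stu':
        assert sat(M, w, Astar(p)) == sat(M, w, Aprime(p))
assert sat(M1, 's', Astar(p)) and not sat(M2, 's', Astar(p))
```

In the first model the two notions agree because $\neg K_1 p$ propagates along the Euclidean relation; in the second they agree because $p$ is undefined at a world agent 1 considers possible, so both notions fail at $s$.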
In [@HR05], we showed that ${\mathrm{AX}^{K,A^*}_e}\union \{\mathrm{T},4,5^*\}$ provides a sound and complete axiomatization of the structures used by HMS where the possibility relations are Euclidean, transitive, and reflexive, with one difference: $A_i'$ is used for awareness instead of $A_i^*$. However, by Proposition \[prop:equivdef\], in $\N_n^{e}$, $A_i^*$ and $A_i'$ are equivalent. Thus, for the class of structures of most interest, we are able to get all the properties of the HMS approach; moreover, we can extend the framework to allow reasoning about knowledge of unawareness. It is not clear how to capture knowledge of unawareness directly in the HMS approach. It remains to consider the relationship between $A_i$ and $A_i^*$. Let $\A_i^*(s)$ be the set of sentences that are defined at all worlds considered possible by agent $i$ in world $s$; that is, $\phi \in \A_i^*(s)$ iff $(M,s) \sat A_i^* \phi$. Assuming that agents know what they are aware of, we have that if $(s,t) \in \K_i$, then $\A_i(s) = \A_i(t)$. Thus, it follows that $\A_i(s)\subseteq \A_i^*(s)$. For if $\phi \in \A_i(s)$, then $\Phi(\phi)\subseteq{{\cal L}}(t)$ for all $t$ such that $(s,t) \in \K_i$, so $(M,s) \sat A_i^*(\phi)$. We get the opposite inclusion by assuming the following natural connection between an agent’s awareness function and the language in the worlds that he considers possible: - [**LA:**]{} If $p \notin \A_i(s)$, then $p \notin {{\cal L}}(t)$ for some $t$ such that $(s,t) \in \K_i$. It is immediate that in models that satisfy [**LA**]{} (and ${\mathit{agpp}}$), $\A_i(s) \supseteq \A_i^*(s)$ for all agents $i$ and worlds $s$. Thus, under minimal assumptions, $\A_i^*(s) = \A_i(s)$. The bottom line here is that under the standard assumptions in the economics literature, together with the minimal assumption [**LA**]{}, all the notions of awareness coincide. We do not need to consider a syntactic notion of awareness at all.
However, as pointed out by FH, there are other notions of awareness that may be relevant; in particular, a more computational notion of awareness is of interest. For such a notion, an axiom such as AGPP does not seem appropriate. We leave the problem of finding axioms that characterize a more computational notion of awareness in this framework to future work. We conclude with some comments on awareness and language. If we think of propositions $p \in {{\cal L}}(t) - \A_i(s)$ as just being labels or names for concepts that agent $i$ is not aware of but $i$ understands other agents might be aware of, [**LA**]{} is just saying that $i$ should not use the label $p$ in all the worlds that he considers possible. It is important that an agent can use different labels for formulas that he is unaware of. A world where an agent is unaware of two primitive propositions is different from a world where an agent is unaware of only one primitive proposition. For example, to express the fact that in world $s$ agent 1 considers it possible that (1) there is a formula that he is unaware of that agent 2 is aware of and (2) there is a formula that both he and agent 2 are unaware of that agent 3 is aware of, agent 1 needs to consider possible a world $t$ with at least two primitive propositions in ${{\cal L}}(t) - \A_1(s)$. Needless to say, reasoning about such lack of awareness might be critical in a decision-theoretic context. The fact that the primitive propositions that an agent is not aware of are simply labels means that switching the labels does not affect what the agent knows or believes. More precisely, given a model $M = ({S}, {{\cal L}},\K_1, \ldots, \K_n, \A_1, \ldots, \A_n, \pi)$, let $M'$ be identical to $M$ except that the roles of the primitive propositions $p$ and $p'$ are interchanged.
More formally, $M' = ({S}, {{\cal L}}', \K_1, \ldots, \K_n, \A_1', \ldots, \A_n',\pi')$, where, for all worlds $s \in {S}$, we have - ${{\cal L}}(s) - \{p,p'\} = {{\cal L}}'(s) - \{p,p'\}$; - $p \in {{\cal L}}'(s)$ iff $p' \in {{\cal L}}(s)$, and $p' \in {{\cal L}}'(s)$ iff $p \in {{\cal L}}(s)$; - $\pi(s,q) = \pi'(s,q)$ for all $q \in {{\cal L}}(s) - \{p,p'\}$; - if $p \in {{\cal L}}(s)$, then $\pi(s,p) = \pi'(s,p')$, and if $p' \in {{\cal L}}(s)$, then $\pi(s,p') = \pi'(s,p)$; - if $\phi$ is a formula that mentions neither $p$ nor $p'$, then $\phi \in \A_i(s)$ iff $\phi \in \A_i'(s)$; - for any formula $\phi$ that mentions either $p$ or $p'$, $\phi \in \A_i(s)$ iff $\phi[p \leftrightarrow p'] \in \A_i'(s)$, where $\phi[p \leftrightarrow p']$ is the result of replacing all occurrences of $p$ in $\phi$ by $p'$ and all occurrences of $p'$ by $p$. It is easy to see that for all worlds $s$, $(M,s) \sat \phi$ iff $(M',s) \sat \phi[p \leftrightarrow p']$. In particular, this means that if neither $p$ nor $p'$ is in ${{\cal L}}(s)$, then for all formulas, $(M,s) \sat \phi$ iff $(M',s) \sat \phi$. Thus, switching labels of propositions that are not in ${{\cal L}}(s)$ has no impact on what is true at $s$. We remark that the use of labels here is similar in spirit to our use of *virtual moves* in [@HR06] to model moves that a player is aware that he is unaware of. Although switching labels of propositions that are not in ${{\cal L}}(s)$ has no impact on what is true at $s$, changing the truth value of a primitive proposition that an agent is not aware of at $s$ may have some impact on what the agent explicitly knows at $s$. Note that we allow agents to have some partial information about formulas that they are unaware of. We certainly want to allow agent 1 to know that there is a formula that agent 2 is aware of that he (agent 1) is unaware of; indeed, capturing a situation like this was one of our primary motivations for introducing knowledge of lack of awareness.
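The label-swapping construction can also be checked mechanically. The following Python sketch (our own encoding; the fragment is restricted to primitives, negation, and $K_1$, and the label $p2$ plays the role of $p'$) swaps the two propositions in the model and, correspondingly, in formulas, and confirms that truth is preserved.

```python
# Interchanging two primitive propositions p and p2 (our names) in the
# model and, correspondingly, in formulas preserves truth at every world.

def props(f):
    if f[0] == 'p':
        return {f[1]}
    return set().union(*(props(g) for g in f[1:] if isinstance(g, tuple)))

def sat(M, s, f):
    if not props(f) <= M['L'][s]:     # truth requires definedness
        return False
    if f[0] == 'p':
        return M['pi'][s][f[1]]
    if f[0] == 'not':
        return not sat(M, s, f[1])
    if f[0] == 'K':
        return all(sat(M, t, f[1]) for t in M['K'][s])
    raise ValueError(f[0])

def swap_name(q, a, b):
    return b if q == a else a if q == b else q

def swap_formula(f, a, b):            # f[p <-> p2]
    if f[0] == 'p':
        return ('p', swap_name(f[1], a, b))
    return tuple(swap_formula(g, a, b) if isinstance(g, tuple) else g for g in f)

def swap_model(M, a, b):              # M with the roles of a and b interchanged
    return {'K': M['K'],
            'L': {w: {swap_name(q, a, b) for q in S} for w, S in M['L'].items()},
            'pi': {w: {swap_name(q, a, b): v for q, v in d.items()}
                   for w, d in M['pi'].items()}}

M  = {'L': {'s': {'r'}, 't': {'r', 'p'}},
      'pi': {'s': {'r': True}, 't': {'r': True, 'p': False}},
      'K': {'s': {'t'}, 't': {'t'}}}
Mp = swap_model(M, 'p', 'p2')

formulas = [('p', 'r'), ('p', 'p'),
            ('not', ('K', ('p', 'p'))), ('K', ('not', ('p', 'p')))]
for f in formulas:
    for w in ('s', 't'):
        assert sat(M, w, f) == sat(Mp, w, swap_formula(f, 'p', 'p2'))
# Neither p nor p2 is in L(s), so truths at s are unchanged by the swap:
assert sat(M, 's', ('K', ('p', 'r'))) == sat(Mp, 's', ('K', ('p', 'r')))
```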
But we also want to allow agent 1 to know that agent 2 is not only aware of the formula, but knows that it is true; that is, we want $X_1(\exists x (\neg A_1 (x) \land K_2(x)))$ to be consistent. There may come a point when an agent has so much partial information about a formula he is unaware of that, although he cannot talk about it explicitly in his language, he can describe it sufficiently well to communicate about it. When this happens in natural language, people will come up with a name for a concept and add it to their language. We have not addressed the dynamics of language change here, but we believe that this is a topic that deserves further research. PROOFS ====== We first prove Theorem \[thm:compwithoutK\]. As we said in the main text, proving soundness turns out to be nontrivial, so we begin by showing that MP, Barcan$^*_X$, and Gen$_\forall$ are sound. (Soundness of the remaining axioms is straightforward.) For MP, we need some preliminary lemmas. \[lem:satiswithoutq\] If $\phi$ is a sentence in ${{\cal L}^{\forall,K,X,A}_n}(\Phi,\X)$ that does not mention $q$ and is satisfiable in $\N_n(\Phi,\X)$, then it is satisfiable in an extended awareness structure $M=(S,{{\cal L}},\pi,\K_1,\ldots,\K_n,\A_1,\ldots,\A_n) \in \N_n(\Phi,\X)$ such that $q\notin {{\cal L}}(s)$ for every $s\in S$. Let $\tau:\Phi\rightarrow\Phi$ be a 1-1 function. For a sentence $\psi$, let $\tau(\psi)$ be the result of replacing every primitive proposition $p$ in $\psi$ by $\tau(p)$. Given an extended awareness structure $M=(S,{{\cal L}},\pi,\K_1,\ldots,\K_n,\A_1,\ldots,\A_n)$, let $M^\tau=(S,{{\cal L}}^\tau,\pi^\tau,\K_1,\ldots,\K_n,\A^\tau_1,\ldots,\A^\tau_n)$ be the extended awareness structure that results from “translating” $M$ by $\tau$; formally, ${{\cal L}}^\tau(s)=\{\tau(p):p\in {{\cal L}}(s)\}$, $\pi^\tau(s,\tau(p))=\pi(s,p)$, and $\A^\tau_i(s)=\{\tau(\psi):\psi\in\A_i(s)\}$.
We now prove that $(M,s)\sat \psi$ iff $(M^\tau,s)\sat \tau(\psi)$ by induction on the structure of $\psi$. All the cases are straightforward and left to the reader except the case where $\psi$ has the form $\forall x \psi'$. In this case, we have that $(M,s)\sat \psi$ iff $(M,s)\sat \psi'[x/\beta]$ for all $\beta\in{{\cal L}^{K,X,A}_n}({{\cal L}}(s))$. By the induction hypothesis, $(M,s)\sat \psi'[x/\beta]$ for all $\beta\in{{\cal L}^{K,X,A}_n}({{\cal L}}(s))$ iff $(M^\tau,s)\sat \tau(\psi'[x/\beta])$ for all $\beta\in{{\cal L}^{K,X,A}_n}({{\cal L}}(s))$. Since $\tau(\psi'[x/\beta])=\tau(\psi')[x/\tau(\beta)]$ and, by construction of ${{\cal L}}^\tau$, for all $\gamma\in{{\cal L}^{K,X,A}_n}({{\cal L}}^\tau(s))$ there exists $\beta\in{{\cal L}^{K,X,A}_n}({{\cal L}}(s))$ such that $\gamma=\tau(\beta)$, it follows that $(M^\tau,s)\sat \tau(\psi'[x/\beta])$ for all $\beta\in{{\cal L}^{K,X,A}_n}({{\cal L}}(s))$ iff $(M^\tau,s)\sat \tau(\psi')[x/\gamma]$ for all $\gamma\in{{\cal L}^{K,X,A}_n}({{\cal L}}^\tau(s))$. The latter statement is true iff $(M^\tau,s)\sat \tau(\psi)$. To complete the proof of the lemma, suppose that $\phi$ is a sentence that does not mention $q$ and that $(M,s)\sat \phi$. Let $\tau$ be a 1-1 function such that $\tau(p)=p$ for every $p$ that occurs in $\phi$ and such that there exists no $r\in \Phi$ such that $\tau(r)=q$. (Here we are using the fact that $\Phi$ is an infinite set.) Note that $\phi=\tau(\phi)$. Thus, the claim implies that $(M^\tau,s)\sat \phi$, and, by construction, $q\notin{{\cal L}}^\tau(s)$ for every $s\in S$. *Substitution* is a standard property of most propositional logics. It says that if $\phi$ is valid, then so is $\phi[q/\psi]$. Substitution in full generality is not valid in our framework, because of the semantics of quantification. For example, although $\forall x \neg A_i x \rimp \neg A_i q$ is valid, $\forall x \neg A_i x \rimp \neg A_i (\forall x A_i x)$ is not.
As we now show, if we restrict to quantifier-free substitutions, we preserve validity. But this result depends on the fact that $\Phi$ is infinite. For example, if $\Phi = \{p,q\}$, then $\phi = A_ip \land A_i q \rimp \forall x A_i x$ is valid, but $\phi[q/p] = A_ip \land A_i p \rimp \forall x A_i x$ is not valid. We first prove that a slightly weaker version of Substitution holds (in which $q$ cannot appear in $\psi$), and then prove Substitution. \[lem:wsub\] If $\phi$ is a sentence valid in $\N_n(\Phi,\X)$, $q$ is a primitive proposition, and $\psi$ is an arbitrary quantifier-free sentence that does not mention $q$, then $\phi[q/\psi]$ is valid in $\N_n(\Phi,\X)$. Suppose, by way of contradiction, that $\phi[q/\psi]$ is not valid. Then $\neg\phi[q/\psi]$ is satisfiable. By Lemma \[lem:satiswithoutq\], there exists an extended awareness structure $M=(S,{{\cal L}},\pi,\K_1,\ldots,\K_n,\A_1,\ldots,\A_n)$ and a world $s^*\in S$ such that $(M,s^*)\sat \neg \phi[q/\psi]$ and $q\notin {{\cal L}}(s)$ for every $s\in S$. Let $M'$ be the structure that extends $M$ by defining $q$ as $\psi$; more precisely, $M'=(S,{{\cal L}}',\pi',\K_1,\ldots,\K_n,\A'_1,\ldots,\A'_n)$, where - ${{\cal L}}'(s)={{\cal L}}(s)\union\{q\}$ if $\psi\in{{\cal L}^{K,X,A}_n}({{\cal L}}(s))$, and ${{\cal L}}'(s)={{\cal L}}(s)$ otherwise; - $\pi'(s,p)=\pi(s,p)$ for every $p\in{{\cal L}}(s)$ and if $q\in {{\cal L}}'(s)$, then $\pi'(s,q)={\bf true}$ iff $(M,s)\sat\psi$; - $\A'_i(s)=\A_i(s)$ if $\psi\notin\A_i(s)$, and $\A'_i(s)$ is the smallest set generated by primitive propositions that includes $\A_i(s)\union \{q\}$ otherwise. Intuitively, we are just extending $M$ by defining $q$ so that it agrees with $\psi$ everywhere.
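The construction of $M'$ can be illustrated computationally. The sketch below is our own encoding (the awareness components of the construction are omitted for brevity): it extends a toy model by defining a fresh $q$ as a quantifier-free $\psi$, and checks that $q$ and $\psi$ then agree wherever $\psi$ is defined.

```python
# Extend a structure whose worlds never mention q by "defining q as psi":
# q joins L(s) exactly where psi is defined, with pi'(s,q) = [[psi]]_s.

def props(f):
    if f[0] == 'p':
        return {f[1]}
    return set().union(*(props(g) for g in f[1:] if isinstance(g, tuple)))

def sat(M, s, f):
    if not props(f) <= M['L'][s]:     # truth requires definedness
        return False
    if f[0] == 'p':
        return M['pi'][s][f[1]]
    if f[0] == 'not':
        return not sat(M, s, f[1])
    if f[0] == 'and':
        return sat(M, s, f[1]) and sat(M, s, f[2])
    raise ValueError(f[0])

def define_q_as(M, q, psi):
    """The languages and valuation of the structure M' of Lemma [lem:wsub]."""
    Mp = {'L': {}, 'pi': {}}
    for s in M['L']:
        defined = props(psi) <= M['L'][s]
        Mp['L'][s] = M['L'][s] | ({q} if defined else set())
        Mp['pi'][s] = dict(M['pi'][s])
        if defined:
            Mp['pi'][s][q] = sat(M, s, psi)
    return Mp

psi = ('and', ('p', 'p1'), ('not', ('p', 'p2')))   # a quantifier-free psi
M = {'L': {'s': {'p1', 'p2'}, 't': {'p1'}},
     'pi': {'s': {'p1': True, 'p2': False}, 't': {'p1': True}}}
Mp = define_q_as(M, 'q', psi)

assert 'q' in Mp['L']['s'] and 'q' not in Mp['L']['t']   # psi is undefined at t
assert sat(Mp, 's', ('p', 'q')) == sat(M, 's', psi)      # q agrees with psi
```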
We claim that for every sentence $\sigma$, if $\psi\in{{\cal L}^{K,X,A}_n}({{\cal L}}(s))$, then the following are equivalent: - $(M',s)\sat \sigma$ - $(M',s)\sat \sigma[q/\psi]$ - $(M,s)\sat \sigma[q/\psi].$ We first observe that if $\sigma'$ is a quantifier-free sentence that does not mention $q$, then for all worlds $s \in S$, we have that $(M,s) \sat \sigma'$ iff $(M',s) \sat \sigma'$. (The formal proof is by a straightforward induction on $\sigma'$.) We now prove the claim by induction on the structure of $\sigma$. For the base case, note that if $\sigma$ is the primitive proposition $q$, then the equivalence between (b) and (c) follows from the observation above. All cases are straightforward except the case where $\sigma$ has the form $\forall x \sigma'$. To see that (a) implies (b), suppose that $(M',s)\sat \forall x\sigma'$. Then $(M',s)\sat \sigma'[x/\beta]$ for all $\beta\in{{\cal L}^{K,X,A}_n}({{\cal L}}'(s))$. By the induction hypothesis, $(M',s)\sat (\sigma'[x/\beta])[q/\psi]$. Note that $(\sigma'[x/\beta])[q/\psi] = ((\sigma'[q/\psi])[x/\beta])[q/\psi]$. Thus, applying the induction hypothesis again, it follows that $(M',s) \sat (\sigma'[q/\psi])[x/\beta]$ for all $\beta \in {{\cal L}^{K,X,A}_n}({{\cal L}}'(s))$. Therefore, $(M',s)\sat \forall x\sigma'[q/\psi]$. This shows that (a) implies (b). To see that (b) implies (c), suppose that $(M',s)\sat \forall x\sigma'[q/\psi]$. Thus, $(M',s)\sat (\sigma'[q/\psi])[x/\beta]$ for all $\beta\in{{\cal L}^{K,X,A}_n}({{\cal L}}'(s))$. Since ${{\cal L}^{K,X,A}_n}({{\cal L}}(s))\subseteq {{\cal L}^{K,X,A}_n}({{\cal L}}'(s))$, by the induction hypothesis, it follows that $(M,s)\sat (\sigma'[q/\psi])[x/\beta]$ for all $\beta\in{{\cal L}^{K,X,A}_n}({{\cal L}}(s))$. Thus, $(M,s)\sat \forall x\sigma'[q/\psi]$. Finally, to see that (c) implies (a), suppose that $(M,s)\sat \forall x\sigma'[q/\psi]$.
We want to show that $(M',s)\sat \forall x \sigma'$, or equivalently, that $(M',s)\sat \sigma'[x/\beta]$ for all $\beta \in {{\cal L}^{K,X,A}_n}({{\cal L}}'(s))$. So choose $\beta \in {{\cal L}^{K,X,A}_n}({{\cal L}}'(s))$. By the induction hypothesis, $(M',s)\sat \sigma'[x/\beta]$ iff $(M',s)\sat (\sigma'[x/\beta])[q/\psi]$ iff $(M,s)\sat (\sigma'[x/\beta])[q/\psi]$. Since $(\sigma'[x/\beta])[q/\psi] = (\sigma'[q/\psi])[x/\beta[q/\psi]]$, and $(M,s) \sat (\sigma'[q/\psi])[x/\beta[q/\psi]]$ since $(M,s) \sat \forall x \,\sigma'[q/\psi]$, by assumption, the desired result follows. Since, by assumption, $(M,s^*)\sat \neg\phi[q/\psi]$, it follows from the claim above that $(M',s^*)\sat \neg\phi$, a contradiction. \[cor:subst\] If $\phi$ is a sentence valid in $\N_n(\Phi,\X)$, $q$ is a primitive proposition, and $\psi$ is an arbitrary quantifier-free sentence, then $\phi[q/\psi]$ is valid in $\N_n(\Phi,\X)$. Choose a primitive proposition $r$ that does not appear in $\psi$ or $\phi$. By Weak Substitution (Proposition \[lem:wsub\]), $\phi' = \phi[q/r]$ is valid. Applying Weak Substitution again, $\phi'[r/\psi] = \phi[q/\psi]$ is valid. We are finally ready to prove the soundness of MP. \[MPsound\] If $\phi \rimp \psi$ and $\phi$ are both valid in an extended awareness structure $M$, then so is $\psi$. Suppose, by way of contradiction, that $\phi \rimp \psi$ and $\phi$ are valid in $M$, and, for some world $s$ in $M$, we have that $(M,s) \sat \neg \psi$. It must be the case that $\phi\notin {{\cal L}^{\forall,K,X,A}_n}({{\cal L}}(s),\X)$, while $\psi\in {{\cal L}^{\forall,K,X,A}_n}({{\cal L}}(s),\X)$; otherwise, both $\phi$ and $\phi \rimp \psi$ would be true at $s$, and hence so would $\psi$. Let $q_1, \ldots, q_k$ be the primitive propositions that are mentioned in $\phi$ but are not in ${{\cal L}}(s)$. Note that none of $q_1, \ldots, q_k$ can appear in $\psi$. Since, by assumption, ${{\cal L}}(s)$ is non-empty, let $p\in {{\cal L}}(s)$, and let $\phi' = \phi[q_1/p, \ldots, q_k/p]$.
By Weak Substitution, $\phi'$ and $\phi' \rimp \psi$ are valid. But $\phi'$ and $\psi$ are in ${{\cal L}^{\forall,K,X,A}_n}({{\cal L}}(s),\X)$. Thus, we must have $(M,s) \sat \phi'$ and $(M,s) \sat \phi' \rimp \psi$, so $(M,s) \sat \psi$, a contradiction. The following two results prove the soundness of Gen$_\forall$ and Barcan$^*_X$. \[thm:Genforall\] If $\phi$ is a valid sentence in $\N_n(\Phi,\X)$ and $q$ is an arbitrary primitive proposition, then $\forall x\phi[q/x]$ is valid in $\N_n(\Phi,\X)$. Suppose not. Then there exists an extended awareness structure $M\in \N_n(\Phi,\X)$ and a world $s$ such that $(M,s)\sat \neg \forall x \phi[q/x]$. Thus, there exists a formula $\psi\in {{\cal L}^{K,X,A}_n}({{\cal L}}(s))$ such that $(M,s)\sat \neg (\phi[q/x])[x/\psi]$. Thus, $\phi[q/\psi]$ is not valid. By Substitution, it follows that $\phi$ is not valid either, a contradiction. \[thm:WB\] $(A_i (\forall x \phi) \land \forall x (A_i x \rimp X_i \phi)) \rimp X_i (\forall x A_i x \rimp \forall x \phi)$ is valid in $\N_n(\Phi,\X)$. Suppose that $(M,s) \sat (A_i (\forall x \phi)\land \forall x (A_i x \rimp X_i \phi))$. Since awareness is generated by primitive propositions, $(M,s) \sat A_i (\forall x A_i x \rimp \forall x \phi)$. Suppose, by way of contradiction, that $ (M,s) \sat \neg X_i (\forall x A_i x \rimp \forall x \phi)$. Then there must exist some world $t$ such that $(s,t)\in\K_i$ and $ (M,t) \sat \neg (\forall x A_i x \rimp \forall x \phi)$. Thus, $(M,t) \sat \forall x A_i x$ and $ (M,t) \sat \neg \forall x \phi$. Since $ (M,t) \sat \neg \forall x \phi$, it follows that there exists $\psi\in{{\cal L}^{X,A}_n}({{\cal L}}(t))$ such that $(M,t) \sat \neg \phi[x/\psi]$. Since $(M,t) \sat \forall x A_i x$, we must have $(M,t) \sat A_i \psi$. Since $\A_i(s) = \A_i(t)$, we also have $(M,s) \sat A_i \psi$. Since $(M,s) \sat \forall x (A_i x \rimp X_i \phi)$, it follows that $(M,s) \sat X_i \phi[x/\psi]$. Thus, $(M,t) \sat \phi[x/\psi]$, a contradiction.
With these results in hand, we can now prove Theorem \[thm:compwithoutK\]. We repeat the theorem here for the convenience of the reader. If $\C_X$ is a (possibly empty) subset of $\{\rm{T}_X, 4_X, 5_X\}$ and $C$ is the corresponding subset of $\{r,t, e\}$, then ${\mathrm{AX}^{X,A,\forall}_e}\union \C_X$ is a sound and complete axiomatization of the language ${{\cal L}^{\forall,X,A}_n}(\Phi,\X)$ with respect to $\N_n^{C}(\Phi,\X)$. Corollary \[MPsound\] and Propositions \[thm:Genforall\] and \[thm:WB\] show the soundness of MP, Gen$_\forall$, and Barcan$^*_X$, respectively. The proof of soundness for the other axioms and rules is standard and left to the reader. The soundness of ${\mathrm{AX}^{X,A,\forall}_e}\union \C_X$ follows easily. We now consider completeness. As we said in the main text, the proof is quite similar in spirit to that of Theorem \[thm:awofunaaxiomswithoutK\] given in HR. We focus here on the differences. We give the remainder of the proof only for the case $\C_X=\emptyset$; the other cases follow using standard techniques (see, for example, [@FHMV; @HC96]). As usual, the idea of the completeness proof is to construct a canonical model $M^c$ where the worlds are maximal consistent sets of sentences. It is then shown that if $s_V$ is the world corresponding to the maximal consistent set $V$, then $(M^c,s_V)\sat \phi$ iff $\phi\in V$. As observed in HR, this will not quite work in the presence of quantification, since there may be a maximal consistent set $V$ of sentences such that $\neg\forall x\phi\in V$, but $\phi[x/\psi]\in V$ for all $\psi\in {{\cal L}^{K,X,A}_n}(\Phi)$. That is, there is no witness to the falsity of $\forall x \phi$ in $V$. This problem was dealt with in HR by restricting to maximal consistent sets $V$ that are *acceptable* in the sense that if $\neg \forall x \phi \in V$, then $\neg \phi[x/q] \in V$ for infinitely many primitive propositions $q \in \Phi$.
(Note that this notion of acceptability also requires $\Phi$ to be infinite.) Because here we have possibly different languages associated with different worlds, we need to consider acceptability and maximality with respect to a language. A set $\Gamma$ is [*acceptable with respect to $L \subseteq \Phi$*]{} if, whenever $\phi\in{{\cal L}^{\forall,X,A}_n}(L,\X)$ and $\Gamma\vdash \phi[x/q]$ for all but finitely many primitive propositions $q \in L$, then $\Gamma\vdash \forall x \phi$. If AX is an axiom system, a set $\Gamma$ is a [*maximal AX-consistent set of sentences with respect to $L \subseteq \Phi$*]{} if $\Gamma$ is a set of sentences contained in ${{\cal L}^{\forall,X,A}_n}(L,\X)$ and, for all sentences $\phi\in {{\cal L}^{\forall,X,A}_n}(L,\X)$, if $\Gamma\union\{\phi\}$ is $AX$-consistent, then $\phi\in\Gamma$. The following four lemmas are essentially Lemmas A.4, A.5, A.6, and A.7 in HR. Since the proofs are essentially identical, we do not repeat them here. \[Claim4K\] If $\Gamma$ is a finite set of sentences, then $\Gamma$ is acceptable with respect to every subset $L \subseteq \Phi$ that contains infinitely many primitive propositions. \[Claim5K\] If $\Gamma$ is acceptable with respect to $L$ and $\tau$ is a sentence in ${{\cal L}^{\forall,X,A}_n}(L,\X)$, then $\Gamma \union \{\tau\}$ is acceptable with respect to $L$. \[Claim6X\] If $\Gamma\subseteq {{\cal L}^{\forall,X,A}_n}(L,\X)$ is an acceptable ${\mathrm{AX}^{X,A,\forall}_e}$-consistent set of sentences with respect to $L$, then $\Gamma$ can be extended to a set of sentences that is acceptable and maximal ${\mathrm{AX}^{X,A,\forall}_e}$-consistent with respect to $L$. Let $\Gamma/X_i=\{\phi:X_i\phi\in\Gamma\}$. \[LemmaA3X\] If $\Gamma$ is a set of sentences that is maximal ${\mathrm{AX}^{X,A,\forall}_e}$-consistent with respect to $L$ containing $\neg X_i\phi$ and $A_i\phi$, then $\Gamma/X_i\cup\{\neg\phi\}$ is ${\mathrm{AX}^{X,A,\forall}_e}$-consistent.
Lemma A.14 in HR shows that if $\Gamma$ is an acceptable maximal consistent set that contains $A_i \phi$ and $\neg X_i \phi$, then $\Gamma/X_i \union \{\neg \phi\}$ can be extended to an acceptable maximal consistent set $\Delta$. (Lemma A.8 proves a similar result for the $K_i$ operator.) The following lemma proves an analogous result, but here we must work harder to take the language into account. That is, we have to define the language $L'$ with respect to which $\Delta$ is maximal and acceptable. As usual, we say that $L$ is *co-infinite* if $\Phi - L$ is infinite. \[LemmaA4X\] If $\Gamma$ is an acceptable maximal ${\mathrm{AX}^{X,A,\forall}_e}$-consistent set of sentences with respect to $L$, where $L$ is infinite and co-infinite, $\neg X_i\phi\in \Gamma$, and $A_i\phi\in\Gamma$, then there exists an infinite and co-infinite set $L'\subseteq\Phi$ and a set $\Delta$ of sentences that is acceptable, maximal ${\mathrm{AX}^{X,A,\forall}_e}$-consistent with respect to $L'$ and contains $\Gamma/X_i \union \{\neg\phi\}$. Moreover, $A_i\psi \in \Delta$ iff $A_i\psi\in\Gamma$ for all formulas $\psi$. By Lemma \[LemmaA3X\], $\Gamma/X_i\cup\{\neg\phi\}$ is ${\mathrm{AX}^{X,A,\forall}_e}$-consistent. We define a subset $L'\subseteq\Phi$ and construct a set $\Delta$ of sentences that is acceptable and maximal ${\mathrm{AX}^{X,A,\forall}_e}$-consistent with respect to $L'$ such that $\Delta$ contains $\Gamma/X_i\cup\{\neg\phi\}$ and $A_i\psi \in \Delta$ iff $A_i\psi\in\Gamma$ for all formulas $\psi$. We consider two cases: (1) $\Gamma/X_i \union \{\neg \phi \} \vdash \forall xA_i x$; and (2) $\Gamma/X_i \union \{\neg \phi \} \not\vdash \forall xA_i x$. If $\Gamma/X_i \union \{\neg \phi \} \vdash \forall xA_i x$, then define $L'=\{q:A_iq\in\Gamma\}$. Note that since $\Gamma \vdash A_i\phi$, it follows that every primitive proposition $q$ in $\phi$ must be in $L'$, as is every primitive proposition in a formula in $\Gamma/X_i$.
$L'$ must be infinite, for if it were finite, then we would have that $\Gamma \vdash A_i q$ for only finitely many primitive propositions in $L$. Since $\Gamma$ is a maximal ${\mathrm{AX}^{X,A,\forall}_e}$-consistent set, it must be the case that $\Gamma \vdash \neg A_i q $ for all but finitely many primitive propositions $q \in L$. Since $\Gamma$ is acceptable with respect to $L$, $\Gamma \vdash \forall x \neg A_i x$. Thus, axiom FA$_X^*$ implies that $\forall x \neg A_i x\in \Gamma/X_i$, which is a contradiction, since by assumption $\Gamma/X_i \union\{\neg \phi\}\vdash \forall x A_i x$. Thus, $L'$ must be infinite. Since $L'$ is a subset of $L$, it is clearly co-infinite, since $L$ is. We prove that $\Gamma/X_i \union \{\neg \phi \}$ is acceptable with respect to $L'$ in this case. Suppose that $\psi\in{{\cal L}^{\forall,X,A}_n}(L',\X)$ and $$\label{eq1} \mbox{$\Gamma/X_i \union \{\neg \phi \}\vdash \psi[x/q]$ for all but finitely many $q\in L'$.}$$ We want to show that $\Gamma/X_i \union \{\neg \phi \}\vdash \forall x \psi$. It follows from (\[eq1\]) that $\Gamma/X_i \vdash \neg\phi\rimp \psi[x/q]$ for all but finitely many $q\in L'$. Since every primitive proposition in $\psi$ is in $L' = \{q: A_iq\in \Gamma\}$, and $A_i \phi \in \Gamma$, it easily follows that $\Gamma \vdash X_i (\neg\phi\rimp \psi[x/q])$ for all but finitely many $q \in L'$. Since $L' = \{q: A_i q \in \Gamma\}$, it follows that $\Gamma \vdash A_i q \rimp X_i (\neg\phi\rimp \psi[x/q])$ for all but finitely many $q\in L$. Since $\Gamma$ is acceptable with respect to $L$, we have that $$\label{eq2}\Gamma \vdash \forall x (A_i x \rimp X_i(\neg\phi\rimp \psi)).$$ Again using the fact that $\Gamma \vdash A_i q$ for all $q$ in $\psi$ and $\Gamma \vdash A_i \phi$, from AGPP we have that $$\label{eq3} \Gamma \vdash A_i \forall x (\neg \phi \rimp \psi).$$ From Barcan$^*_X$, (\[eq2\]), and (\[eq3\]), it follows that $\Gamma \vdash X_i(\forall x A_i x \rimp \forall x (\neg\phi\rimp \psi))$. 
Thus, $\Gamma/X_i \vdash \forall x A_i x \rimp \forall x (\neg\phi\rimp \psi)$. Since $\Gamma/X_i \union \{\neg \phi \} \vdash \forall x A_i x$, it follows that $\Gamma/X_i \union \{\neg \phi \} \vdash \forall x (\neg\phi\rimp \psi)$. Since $\phi$ is a sentence, applying $K_\forall$ and $N_\forall$, it easily follows that $\Gamma/X_i \union \{\neg \phi \} \vdash \neg\phi\rimp \forall x\psi$. Thus, $\Gamma/X_i \union \{\neg \phi \}\vdash \forall x \psi$, as desired. Therefore, $\Gamma/X_i \union \{\neg \phi \}$ is a set of sentences that is acceptable with respect to $L'$ and ${\mathrm{AX}^{X,A,\forall}_e}$-consistent. Thus, by Lemma \[Claim6X\], there exists a set of sentences $\Delta$ containing $\Gamma/X_i \union \{\neg \phi \}$ that is acceptable and maximal ${\mathrm{AX}^{X,A,\forall}_e}$-consistent with respect to $L'$. Finally, we prove that $A_i\psi\in\Gamma$ iff $A_i\psi\in\Delta$. First, suppose that $A_i\psi\in\Gamma$. Then, XA implies that $X_iA_i\psi\in\Gamma$. Thus, $A_i\psi\in\Gamma/X_i\subseteq \Delta$. For the converse, suppose that $A_i\psi\in\Delta$. Since $\psi \in {{\cal L}^{\forall,X,A}_n}(L',\X)$, it must be the case that $\Gamma \vdash A_i q$ for every primitive proposition $q$ that appears in $\psi$; thus $\Gamma \vdash A_i \psi$ and so, since $\Gamma$ is maximal with respect to $L$, $A_i\psi\in\Gamma$. If $\Gamma/X_i\union \{\neg \phi \}\not\vdash \forall xA_i x$, define $L'=\{q:A_iq\in\Gamma\}\union L''$, where $L''$ is an infinite and co-infinite set of primitive propositions not occurring in $\Gamma \union \{\phi\}$ (which exists, since, by assumption, $\Phi-L$ is infinite). It can be easily seen that $L'$ is infinite and co-infinite. Since $\Gamma/X_i\union \{\neg \phi \}$ is ${\mathrm{AX}^{X,A,\forall}_e}$-consistent, $\Gamma/X_i\union \{\neg \phi \}\not\vdash \forall xA_i x$ implies that $\Gamma/X_i \union \{\neg\phi,\neg\forall xA_ix\}$ is ${\mathrm{AX}^{X,A,\forall}_e}$-consistent.
To see that $\Gamma/X_i \union \{\neg\phi\}$ is acceptable with respect to $L'$, suppose that $\psi\in{{\cal L}^{\forall,X,A}_n}(L',\X)$ and $\Gamma/X_i \union \{\neg\phi\} \vdash \psi[x/q]$ for all but finitely many $q\in L'$. There must be some $q\in L'$ not mentioned in $\Gamma/X_i$ or $\phi$ such that $\Gamma/X_i \union \{\neg\phi\} \vdash \psi[x/q]$. Since $\Gamma/X_i\union \{\neg\phi\}\vdash \psi[x/q]$, it follows that there exists a subset $\{\beta_1,\ldots,\beta_n\}\subseteq\Gamma/X_i\union \{\neg\phi\}$ such that ${\mathrm{AX}^{X,A,\forall}_e}\vdash \beta \rimp \psi[x/q]$, where $\beta=\beta_1\land\cdots\land\beta_n$. Since $q$ does not occur in $\beta$ or $\phi$, by Gen$_\forall$, we have ${\mathrm{AX}^{X,A,\forall}_e}\vdash \forall x(\beta\rimp \psi)$. Since $\beta$ is a sentence, applying $K_\forall$ and $N_\forall$, it easily follows that ${\mathrm{AX}^{X,A,\forall}_e}\vdash \beta\rimp \forall x\psi$, which implies that $\Gamma/X_i\union \{\neg\phi\}\vdash \forall x\psi$, as desired. Finally, since $\Gamma/X_i \union \{\neg\phi\}$ is acceptable with respect to $L'$, Lemma \[Claim5K\] implies that $\Gamma/X_i \union \{\neg\phi,\neg\forall xA_ix\}$ is acceptable with respect to $L'$. Let $\psi_1,\psi_2,\ldots$ be an enumeration of the set of sentences in ${{\cal L}^{\forall,X,A}_n}(L',\X)$ such that if $\psi_k$ is of the form $\neg\forall x\phi$, then there must exist a $j<k$ such that $\psi_j$ is of the form $\forall x\phi$ and if $\psi_k$ is a formula that contains a primitive proposition $q\in L''$, then there must exist a $j<k$ such that $\psi_j$ is of the form $\neg A_i q$. The construction continues exactly as in the proof of Lemma \[Claim6X\], where we take $\Delta_0=\Gamma/X_i \union \{\neg \phi,\neg \forall x A_ix\}$. Note that by construction, if $\psi_j=\neg A_i q$ for some $q\in L''$, then $q$ does not occur in $\Delta_{j-1}'$. We claim that $\Delta_{j-1}'\union\{\neg A_i q\}$ is ${\mathrm{AX}^{X,A,\forall}_e}$-consistent.
For suppose otherwise. Then, as above, there exists a subset $\{\beta_1,\ldots,\beta_n\}\subseteq \Delta_{j-1}'$ such that ${\mathrm{AX}^{X,A,\forall}_e}\vdash \beta\rimp A_i q$ and hence, since $q$ does not occur in $\beta$, ${\mathrm{AX}^{X,A,\forall}_e}\vdash \beta\rimp \forall x A_i x$. Since $\{\beta_1,\ldots,\beta_n,\neg\forall x A_ix\}\subseteq \Delta_{j-1}'$, it follows that $\Delta_{j-1}'$ is not ${\mathrm{AX}^{X,A,\forall}_e}$-consistent, a contradiction. Therefore, $\Delta$ is a set of sentences that is acceptable and maximal ${\mathrm{AX}^{X,A,\forall}_e}$-consistent with respect to $L'$ and includes $\Gamma/X_i\union \{\neg \phi\}\union\{\neg A_iq:q\in L''\}$. The proof that $A_i\psi\in\Gamma$ implies $A_i\psi\in\Delta$ is identical to the first case. For the converse, suppose that $A_i\psi\in\Delta$. Then, by AGPP, $A_iq \in \Delta$ for all primitive propositions $q$ that appear in $\psi$. The construction of $\Delta$ guarantees that, for all primitive propositions in $L'$, we have $A_iq \in \Delta$ iff $A_i q \in \Gamma$. Since $\Gamma$ is maximal ${\mathrm{AX}^{X,A,\forall}_e}$-consistent with respect to $L$, AGPP implies that $A_i\psi\in\Gamma$. \[LemmaA5X\] If $\varphi$ is a ${\mathrm{AX}^{X,A,\forall}_e}$-consistent sentence, then $\varphi$ is satisfiable in $\N_n^{{\mathit{agpp}}, {\mathit{ka}}, \emptyset}(\Phi,\X)$. As usual, we construct a canonical model where the worlds are maximal consistent sets of formulas. However, now the worlds must also explicitly include the language. For technical reasons, we also assume that the language is infinite and co-infinite.
Let $M^{c}= ({S},{{\cal L}},{\cal K}_1,...,{\cal K}_n,\A_1,\ldots,\A_n,\pi)$ be a canonical extended awareness structure constructed as follows: - ${S}=\{(s_V,L):V$ is a set of sentences that is acceptable and maximal ${\mathrm{AX}^{X,A,\forall}_e}$-consistent with respect to $L$, where $L\subseteq\Phi$ is infinite and co-infinite}; - ${{\cal L}}((s_V,L))=L$; - $ \pi((s_V,L),p)= \left\{ \begin{array}{ll} {\bf true} & \mbox{if $p \in V$}, \\ {\bf false} & \mbox{if $p\in (L-V)$}; \\ \end{array} \right. $ - $\A_i((s_V,L))=\{\phi:A_i\phi\in V\}$; - $\K_i((s_V,L))= \{(s_W,L'):V/X_i\subseteq W\mbox{ and } A_i\phi\in W\mbox{ iff }A_i\phi\in V\mbox{ for all formulas }\phi\}$. We show that if $\psi\in {{\cal L}^{\forall,X,A}_n}(L,\X)$ is a sentence, then $$\begin{aligned} \label{eq:consiffsatis1} (M^{c},(s_V,L))\sat\psi\mbox{\ \ iff\ \ }\psi\in V.\end{aligned}$$ Note that this claim suffices to prove Lemma \[LemmaA5X\] since, for any $L\subseteq\Phi$ that is infinite and co-infinite, if $\varphi\in {{\cal L}^{\forall,X,A}_n}(L,\X)$ is a ${\mathrm{AX}^{X,A,\forall}_e}$-consistent sentence, then by Lemmas \[Claim4K\] and \[Claim6X\], it is contained in a set of sentences that is acceptable and maximal ${\mathrm{AX}^{X,A,\forall}_e}$-consistent with respect to $L$. We prove (\[eq:consiffsatis1\]) by induction on the depth of nesting of $\forall$, with a subinduction on the length of the sentence. The details are standard and left to the reader. For the case of $X_i \phi$, we need Lemma \[LemmaA4X\]. If $\phi$ is consistent, then by Lemmas \[Claim4K\] and \[Claim6X\], there is a set $L \subseteq \Phi$ that is infinite and co-infinite and contains $\Phi(\phi)$, and a set $V$ of sentences that is acceptable and maximal ${\mathrm{AX}^{X,A,\forall}_e}$-consistent with respect to $L$ such that $\phi \in V$. By the argument above, $(M^c,(s_V,L)) \sat \phi$, showing that $\phi$ is satisfiable, as desired.
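To make the semantic clauses underlying this construction concrete, the following toy model checker evaluates formulas in a finite extended awareness structure. This is our own illustrative sketch, not part of HR or the present paper: the tuple encoding of formulas and the dictionary encoding of $\K_i$, $\A_i$, and $\pi$ are assumptions, we restrict to the quantifier-free fragment, and we treat $X_i\phi$ as holding when agent $i$ is aware of $\phi$ and $\phi$ holds at every $\K_i$-accessible world.

```python
# Toy model checker for a finite awareness structure (quantifier-free fragment).
# Formulas are nested tuples: ('prop', 'p'), ('not', f), ('and', f, g),
# ('A', i, f) for awareness, and ('X', i, f) for explicit knowledge.

def holds(M, s, f):
    """Return True iff (M, s) |= f under the sketched semantics."""
    kind = f[0]
    if kind == 'prop':
        return M['pi'][s].get(f[1], False)      # truth assignment pi at world s
    if kind == 'not':
        return not holds(M, s, f[1])
    if kind == 'and':
        return holds(M, s, f[1]) and holds(M, s, f[2])
    if kind == 'A':                             # A_i f: f is in i's awareness set
        _, i, g = f
        return g in M['aware'][i][s]
    if kind == 'X':                             # X_i f: aware of f, and f holds
        _, i, g = f                             # at every K_i-accessible world
        return g in M['aware'][i][s] and all(holds(M, t, g) for t in M['K'][i][s])
    raise ValueError('unknown connective: %r' % (kind,))
```

For instance, in a two-world model where $p$ holds everywhere but agent $0$ is aware of $p$ only at world $s$, $X_0 p$ holds at $s$ but fails at $t$, mirroring the role the awareness sets $\A_i$ play in the canonical model above.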
To finish the completeness proof, suppose that $\phi$ is valid in $\N^{{\mathit{agpp}}, {\mathit{ka}}, \emptyset}_{n}(\Phi,\X)$. Since $\phi$ is a sentence, it follows that $\neg\phi$ is a sentence and is not satisfiable in $\N^{{\mathit{agpp}}, {\mathit{ka}}, \emptyset}_{n}(\Phi,\X)$. So, by Lemma \[LemmaA5X\], $\neg\phi$ is not ${\mathrm{AX}^{X,A,\forall}_e}$-consistent. Thus, $\phi$ is provable in ${\mathrm{AX}^{X,A,\forall}_e}$. - XA$^*$ is valid in $\N_n^{t}(\Phi,\X)$. - Barcan$^*$ is valid in $\N_n^{r,e}(\Phi,\X)$. - FA$^*$ is valid in $\N_n^{r,e}(\Phi,\X)$. - 5$^*$ is valid in $\N_n^{e}(\Phi,\X)$. For part (a), suppose that $(M,s)\sat A_i^*\phi$, where $M \in \N_n^t(\Phi,\X)$. Thus, $(M,s) \sat K_i (\phi \lor \neg \phi)$. Since the axiom 4 is valid in structures in $\N_n^t(\Phi,\X)$, it follows that $(M,s) \sat K_i K_i (\phi \lor \neg \phi)$, that is, $(M,s) \sat K_i A_i^* \phi$. For part (b), suppose that $(M,s) \sat A_i^* (\forall x \phi)\land \forall x (A_i^* x \rimp K_i \phi)$, where $M \in \N_n^{r,e}(\Phi,\X)$. It easily follows that $(M,s) \sat A_i^* (\forall x A_i^* x \rimp \forall x \phi)$. Suppose, by way of contradiction, that $ (M,s) \sat \neg K_i (\forall x A_i^* x \rimp \forall x \phi)$. Then there must exist some world $t$ such that $(s,t)\in\K_i$ and $ (M,t) \sat \neg (\forall x A_i^* x \rimp \forall x \phi)$. Thus, $(M,t) \sat \forall x A_i^* x$ and $ (M,t) \sat \neg \forall x \phi$. Since $ (M,t) \sat \neg \forall x \phi$, it follows that there exists $\psi\in{{\cal L}^{K,X,A}_n}({{\cal L}}(t))$ such that $(M,t) \sat \neg \phi[x/\psi]$. Since $(M,t) \sat \forall x A_i^* x$, we must have $(M,t) \sat A_i^* \psi$. Thus, for every world $u$ such that $(t,u)\in \K_i$, it follows that $\psi\in {{\cal L}^{K,X,A}_n}({{\cal L}}(u))$. Suppose that $(s,v) \in \K_i$. Since $\K_i$ is Euclidean and $(s,t) \in \K_i$, it follows that $(t,v) \in \K_i$ and, by the observation above, that $\psi\in {{\cal L}^{K,X,A}_n}({{\cal L}}(v))$. 
Since $\K_i$ is reflexive and Euclidean, it follows that $(t,s) \in \K_i$, so the argument above also shows that $\psi\in {{\cal L}^{K,X,A}_n}({{\cal L}}(s))$. Thus, $(M,s) \sat A_i^* \psi$. Since $(M,s) \sat \forall x (A_i^* x \rimp K_i \phi)$, it follows that $(M,s) \sat K_i \phi[x/\psi]$. Thus, $(M,t) \sat \phi[x/\psi]$, a contradiction. Finally, for part (c), suppose that $(M,s)\sat \forall x \neg A_i^* x$, where $M \in \N_n^{r,e}(\Phi,\X)$. Thus, for every primitive proposition $p\in {{\cal L}}(s)$, there exists some $t_p$ such that $(s,t_p)\in \K_i$ and $p\notin {{\cal L}}(t_p)$. Let $u$ be an arbitrary world such that $(s,u)\in \K_i$. Let $\phi$ be an arbitrary quantifier-free sentence in ${{\cal L}^{\forall,K,X,A}_n}({{\cal L}}(u),\X)$. If $\Phi(\phi)\inter {{\cal L}}(s) \ne \emptyset$, suppose that $p \in \Phi(\phi) \inter {{\cal L}}(s)$. By assumption, $p \notin {{\cal L}}(t_p)$. Since $\K_i$ is Euclidean, $(u,t_p) \in \K_i$. Thus, $(M,u) \sat \neg A_i^* \phi$. If $\Phi(\phi)\inter {{\cal L}}(s) = \emptyset$, note that since $\K_i$ is reflexive and Euclidean, the fact that $(s,s)$ and $(s,u)$ are in $\K_i$ implies that $(u,s) \in \K_i$. Hence, we again have that $(M,u) \sat \neg A_i^* \phi$. The proof of part (d) is standard, and left to the reader. - ${\mathrm{AX}^{K,X,A,A^*,\forall}_e}\union\{\rm{T},4,5^*\}$ is a sound and complete axiomatization of the sentences in ${{\cal L}^{\forall,K,X,A}_n}(\Phi,\X)$ with respect to $\N_n^{r,t,e}(\Phi,\X)$. - ${\mathrm{AX}^{K,A^*,\forall}_e}\union\{\rm{T},4,5^*\}$ is a sound and complete axiomatization of the sentences in ${{\cal L}^{\forall,K}_n}(\Phi,\X)$ with respect to $\N_n^{r,t,e}(\Phi,\X)$. - ${\mathrm{AX}^{K,A^*}_e}\union\{\rm{T},4,5^*\}$ is a sound and complete axiomatization of ${{\cal L}^{K}_n}(\Phi)$ with respect to $\N_n^{r,t,e}(\Phi)$. 
The proof of part (a) is identical to the proof of Theorem \[thm:compwithoutK\], except that $X_i$ and $A_i$ are replaced by $K_i$ and $A_i^*$, respectively, and in Lemma \[LemmaA5X\], another step is needed in the induction to deal with $X_i$ that uses the extra axiom A0 in the standard way. For part (b), note that since $X_i$ and $A_i$ are not part of the language the axioms of ${\mathrm{AX}^{K,X,A,A^*,\forall}_e}$ that mention these operators are not needed in the induction of Lemma \[LemmaA5X\]. Therefore, the proof is the same. The proof of part (c) is similar to that of Theorem \[thm:compwithoutK\], except that the following lemma is used instead of Lemma \[LemmaA5X\]. \[LemmaA5K\] If $\varphi$ is a ${\mathrm{AX}^{K,A^*}_e}\union\{\rm{T},4,5^*\}$-consistent sentence in ${{\cal L}^{K}_n}(\Phi)$, then $\varphi$ is satisfiable in $\N_n^{r,t,e}(\Phi)$. Let $M^{c}= ({S},{{\cal L}}, {\cal K}_1,...,{\cal K}_n,\A_1,\ldots,\A_n,\pi)$ be a canonical extended awareness structure constructed as follows - ${S}=\{(s_V,L):V$ is a set of sentences in ${{\cal L}^{K}_n}(L)$ that is maximal ${\mathrm{AX}^{K,A^*}_e}\union\{\rm{T},4,5^*\}$-consistent with respect to $L$ and $L\subseteq\Phi$}; - ${{\cal L}}((s_V,L))=L$; - $ \pi((s_V,L),p)= \left\{ \begin{array}{ll} {\bf true} & \mbox{if $p \in V$}, \\ {\bf false} & \mbox{if $p\in (L-V)$}; \\ \end{array} \right. $ - $\A_i((s_V,L))$ is arbitrary; - ${\cal K}_i((s_V,L))= \{(s_W,L):V/K_i\subseteq W\}$. It is easy to see that $M^c\in \N_n^{r,t,e}(\Phi)$. As usual, to prove Lemma \[LemmaA5K\], we now show that for every $\psi\in {{\cal L}^{K}_n}(L)$, $$\begin{aligned} \label{eq:consiffsatis2} (M^{c},(s_V,L))\sat\psi\mbox{\ \ iff\ \ }\psi\in V.\end{aligned}$$ We prove (\[eq:consiffsatis2\]) by induction on the length of the formula. All the cases are standard, except for the case that $\psi=K_i\psi'$. In this case, if $\psi\in V$, then $\psi'\in W$ for every $W$ such that $(s_W,L')\in \K_i((s_V,L))$. 
By the induction hypothesis, $(M^c,(s_W,L'))\sat \psi'$ for every $(s_W,L')\in \K_i((s_V,L))$, so $(M^c,(s_V,L))\sat K_i\psi'$. If $\psi\notin V$, since $\psi\in{{\cal L}^{K}_n}(L)$, it follows that $\neg\psi\in V$. If $A_i^*\psi'\notin V$, then $\psi'$ is not defined at some world $(s_W,L')\in\K_i((s_V,L))$ which implies that $(M^c,(s_V,L))\not\sat \psi$. If $A_i^*\psi'\in V$, then we need to show that $V/K_i\cup\{\neg\psi'\}$ is ${\mathrm{AX}^{K,A^*}_e}\union\{\rm{T},4,5^*\}$-consistent. Suppose not. Then there exists a subset $\{\beta_1,\ldots,\beta_k\}\subseteq V/K_i$ such that $${\mathrm{AX}^{K,A^*}_e}\union\{\rm{T},4,5^*\}\vdash \beta\rimp \psi',$$ where $\beta=\beta_1\land\cdots\land\beta_k$. By Gen$^*$, it follows that $${\mathrm{AX}^{K,A^*}_e}\union\{\rm{T},4,5^*\}\vdash A_i^*(\beta\rimp \psi')\rimp K_i(\beta\rimp \psi').$$ Since $\{\beta_1,\ldots,\beta_k\}\subseteq V/K_i$, it follows that $\{K_i\beta_1,\ldots,K_i\beta_k\}\subseteq V$. Thus, by A0$^*$, we have $\{A_i^*\beta_1,\ldots,A_i^*\beta_k\}\subseteq V$. Thus, $A_i^*(\beta\rimp \psi')\in V$ and $K_i\beta \in V$. Therefore, $K_i\psi'\in V$, a contradiction. Since $V/K_i\cup\{\neg\psi'\}\subseteq {{\cal L}^{K}_n}(L)$ and is ${\mathrm{AX}^{K,A^*}_e}\union\{\rm{T},4,5^*\}$-consistent, it follows that there exists a set of sentences $W$ that is maximal ${\mathrm{AX}^{K,A^*}_e}\union\{\rm{T},4,5^*\}$-consistent with respect to $L$ and contains $V/K_i\cup\{\neg\psi'\}$. Thus, $(s_W,L)\in \K_i((s_V,L))$ and, by the induction hypothesis, $(M^c,(s_W,L))\not\sat \psi'$. Thus, $(M^c,(s_V,L))\not\sat \psi$. ### Acknowledgments {#acknowledgments .unnumbered} The first author is supported in part by NSF grants ITR-0325453, IIS-0534064, and IIS-0812045, and by AFOSR grants FA9550-08-1-0438 and FA9550-05-1-0055. The second author is supported in part by FACEPE under grants APQ-0150-1.02/06 and APQ-0219-3.08/08, and by MCT/CNPq under grant 475634/2007-1. 
[^1]: HR gives semantics to arbitrary formulas, including formulas with free variables. This requires using *valuations* that give meaning to free variables. By restricting to sentences, which is all we are ultimately interested in, we are able to dispense with valuations here, and thus simplify the presentation of the semantics. [^2]: As usual, the empty conjunction is taken to be the vacuously true formula $\true$, so that $A_i \phi$ is vacuously true if no primitive propositions occur in $\phi$. We remark that in the conference version of HR, an apparently weaker version of AGPP called *weak generation of awareness by primitive propositions* is used. However, this is shown in HR to be equivalent to AGPP if the agent is aware of at least one primitive proposition, so AGPP is used in the final version of HR, and we use it here as well. [^3]: Since we gave semantics not just to sentences, but also to formulas with free variables in [@HR05b], we were able to use a simpler version of Gen$_\forall$ that applies to arbitrary formulas: from $\phi$ infer $\forall x \phi$. Note that all the other axioms and inference rules apply without change to formulas as well as sentences.
--- abstract: 'Reinforcement learning (RL) agents optimize only the features specified in the reward function and are indifferent to anything left out inadvertently. This means that we must not only specify what *to* do, but also the much larger space of what *not* to do. It is easy to forget these preferences, since these preferences are *already* satisfied in our environment. This motivates our key insight: *when a robot is deployed in an environment that humans act in, the state of the environment is already optimized for what humans want*. We can therefore use this implicit preference information from the state to fill in the blanks. We develop an algorithm based on Maximum Causal Entropy IRL and use it to evaluate the idea in a suite of proof-of-concept environments designed to show its properties. We find that information from the initial state can be used to infer both side effects that should be avoided as well as preferences for how the environment should be organized. Our code can be found at <https://github.com/HumanCompatibleAI/rlsp>.' author: - | Rohin Shah [^1]\  UC Berkeley\ Dmitrii Krasheninnikov [^2]\ University of Amsterdam\ Jordan Alexander\ Stanford University\  \  \ Pieter Abbeel\ UC Berkeley\ Anca D. Dragan\ UC Berkeley\ bibliography: - 'references.bib' title: Preferences Implicit in the State of the World --- Introduction ============ Typically when learning about what people want and don’t want, we look to human action as evidence: what reward they specify [@IRD], how they perform a task [@ziebart2010modeling; @AIRL], what choices they make [@christiano2017deep; @ActivePreferenceBasedLearning], or how they rate certain options [@ActiveRewardLearning]. Here, we argue that there is an additional source of information that is potentially rather helpful, but that we have been ignoring thus far: > *The key insight of this paper is that when a robot is deployed in an environment that humans have been acting in, the state of the environment is already optimized for what humans want*.
For example, consider an environment in which a household robot must navigate to a goal location without breaking any vases in its path, illustrated in Figure \[fig:paper-summary\]. The human operator, Alice, asks the robot to go to the purple door, forgetting to specify that it should also avoid breaking vases along the way. However, since the robot has been deployed in a state that only contains unbroken vases, it can infer that while acting in the environment (prior to the robot’s deployment), Alice was using one of the relatively few policies that do not break vases, and so must have cared about keeping vases intact. ![An illustration of learning preferences from an initial state. Alice attempts to accomplish a goal in an environment with an easily breakable vase in the center. The robot observes the state of the environment, $s_0$, after Alice has acted for some time from an even earlier state $s_{-T}$. It considers multiple possible human reward functions, and infers that states where vases are intact usually occur when Alice’s reward penalizes breaking vases. In contrast, it doesn’t matter much what the reward function says about carpets, as we would observe the same final state either way. Note that while we consider a specific $s_{-T}$ for clarity here, the robot could also reason using a distribution over $s_{-T}$.[]{data-label="fig:paper-summary"}](images/alice-irl-figure-small.pdf){width="14cm"} The initial state $s_0$ can contain information about arbitrary preferences, including tasks that the robot should actively perform. For example, if the robot observes a basket full of apples near an apple tree, it can reasonably infer that Alice wants to harvest apples. However, $s_0$ is particularly useful for inferring which side effects humans care about. Recent approaches avoid unnecessary side effects by penalizing changes from an inaction baseline [@RelativeReachability; @AUP]. However, this penalizes *all* side effects.
The inaction baseline is appealing precisely because the initial state has already been optimized for human preferences, and action is more likely to ruin $s_0$ than inaction. If our robot infers preferences from $s_0$, it can avoid negative side effects while allowing positive ones. This work is about highlighting the potential of this observation, and as such makes unrealistic assumptions, such as known dynamics and hand-coded features. Given just $s_0$, these assumptions are necessary: without dynamics, it is hard to tell whether some feature of $s_0$ was created by humans or not. Nonetheless, we are optimistic that these assumptions can be relaxed, so that this insight can be used to improve deep RL systems. We suggest some approaches in our discussion. Our contributions are threefold. First, we identify the state of the world at initialization as a source of information about human preferences. Second, we leverage this insight to derive an algorithm, Reward Learning by Simulating the Past (RLSP), which infers reward from initial state based on a Maximum Causal Entropy [@ziebart2010modeling] model of human behavior. Third, we demonstrate the properties and limitations of this idea on a suite of proof-of-concept environments: we use it to avoid side effects, as well as to learn implicit preferences that require active action. In Figure \[fig:paper-summary\] the robot moves to the purple door without breaking the vase, despite the lack of an explicit penalty for breaking vases. Related work ============ [**Preference learning.**]{} Much recent work has learned preferences from different sources of data, such as demonstrations [@ziebart2010modeling; @ramachandran2007bayesian; @GAIL; @AIRL; @GCL], comparisons [@christiano2017deep; @ActivePreferenceBasedLearning; @SurveyPBRL], ratings [@ActiveRewardLearning], human reinforcement signals [@TAMER; @DeepTAMER; @COACH], proxy rewards [@IRD], etc. 
We suggest preference learning with a new source of data: the state of the environment when the robot is first deployed. It can also be seen as a variant of Maximum Causal Entropy Inverse Reinforcement Learning [@ziebart2010modeling]: while inverse reinforcement learning (IRL) requires demonstrations, or at least state sequences without actions [@IRLFromStates; @OneShotHumanImitation], we learn a reward function from a single state, albeit with the simplifying assumption of known dynamics. [**Frame properties.**]{} The frame problem in AI [@PhilosophyAndAI] refers to the issue that we must specify what stays the same in addition to what changes. In formal verification, this manifests as a requirement to explicitly specify the many quantities that the program does not change [@FrameProblemPL]. Analogously, rewards are likely to specify what to do (the task), but may forget to say what not to do (the frame properties). One of our goals is to infer frame properties automatically. [**Side effects.**]{} An impact penalty can mitigate reward specification problems, since it penalizes unnecessary “large” changes [@LowImpactAI]. Compared to an inaction baseline of doing nothing, we could penalize a reduction in the number of reachable states [@RelativeReachability] or attainable utility [@AUP]. However, such approaches will penalize all irreversible effects, including ones that humans *want* to happen. In contrast, by taking a preference inference approach, we can infer which effects humans care about. [**Goal states as specifications.**]{} Desired behavior in RL can be specified with an explicitly chosen goal state [@DynamicGoalLearning; @UVFA; @RIG; @AGILE; @HER]. In our setting, the robot observes the *initial* state $s_0$ where it *starts* acting, which is not explicitly chosen by the designer, but nonetheless contains preference information.
Preliminaries {#sec:preliminaries} ============= A finite-horizon Markov decision process (MDP) is a tuple $\mathcal M = \langle \mathcal S, \mathcal A, \mathcal T, r, T \rangle$, where $\mathcal S$ is the set of states, $\mathcal A$ is the set of actions, $\mathcal T: \mathcal S \times \mathcal A \times \mathcal S \rightarrow [0,1]$ is the transition probability function, $r : \mathcal S \rightarrow \mathbb{R}$ is the reward function, and $T \in \mathbb{Z}_{+}$ is the finite planning horizon. We consider MDPs where the reward is linear in features, and does not depend on action: $r(s ; \theta) = \theta^T f(s)$, where $\theta$ are the parameters defining the reward function and $f$ computes features of a given state. [**Inverse Reinforcement Learning (IRL).**]{} In IRL, the aim is to infer the reward function $r$ given an MDP without reward $\mathcal M \backslash r$ and expert demonstrations $\mathcal D= \{ \tau_1, ..., \tau_n \} $, where each $\tau_i = (s_{0}, a_{0}, ..., s_T, a_T)$ is a trajectory sampled from the expert policy acting in the MDP. It is assumed that each $\tau_i$ is feasible, so that $\mathcal{T}(s_{j+1} \mid s_j, a_j) > 0$ for every $j$. [**Maximum Causal Entropy IRL (MCEIRL).**]{} As human demonstrations are rarely optimal, @ziebart2010modeling models the expert as a Boltzmann-rational agent that maximizes total reward and causal entropy of the policy. This leads to the policy $\pi_t(a\mid s, \theta) = \exp(Q_t(s,a;\theta)-V_t(s;\theta))$, where $V_t(s;\theta)~=~\ln\sum_a \exp(Q_t(s,a;\theta))$ plays the role of a normalizing constant. Intuitively, the expert is assumed to act close to randomly when the difference in expected total reward across the actions is small, but nearly always chooses the best action when it leads to a substantially higher expected return. The soft Bellman backup for the state-action value function $Q$ is the same as usual, and is given by $Q_t(s,a;\theta) = \theta^T f(s) + \sum_{s'} \mathcal T(s' \mid s, a) V_{t+1}(s';\theta)$.
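For a tabular MDP, the Boltzmann policy and soft Bellman backup above can be computed by a short backward recursion. The sketch below is our own (the array shapes and function name are assumptions, not from the paper):

```python
import numpy as np

def soft_value_iteration(T_mat, features, theta, horizon):
    """Per-timestep Boltzmann-rational policies for Maximum Causal Entropy IRL.

    T_mat:    (S, A, S) transition probabilities T(s' | s, a)
    features: (S, F) state features; reward is r(s) = theta^T f(s)
    Returns a list of horizon+1 policies pi_t(a | s), each of shape (S, A).
    """
    n_states, n_actions, _ = T_mat.shape
    reward = features @ theta                      # r(s) for every state
    # At the final timestep the soft Q-value is just the state reward.
    Q = np.tile(reward[:, None], (1, n_actions))
    policies = []
    for _ in range(horizon + 1):
        V = np.log(np.exp(Q).sum(axis=1))          # soft (log-sum-exp) value
        policies.append(np.exp(Q - V[:, None]))    # pi_t(a|s) = exp(Q_t - V_t)
        # Soft Bellman backup: Q_t(s,a) = r(s) + E_{s'}[V_{t+1}(s')]
        Q = reward[:, None] + np.einsum('sap,p->sa', T_mat, V)
    policies.reverse()                             # index 0 = earliest timestep
    return policies
```

Note that at the final timestep all actions have equal soft Q-value (the reward is action-independent), so the last policy is uniform; the log-sum-exp plays the role of the normalizing constant $V_t$.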
The likelihood of a trajectory $\tau$ given the reward parameters $\theta$ is: $$\label{eq:mceirl-obs-model} p(\tau \mid \theta) = p(s_{0}) \bigg( \prod_{t=0}^{T-1} \mathcal T(s_{t+1} \mid s_t, a_t) \pi_t(a_t \mid s_t, \theta) \bigg) \pi_T(a_T \mid s_T, \theta).$$ MCEIRL finds the reward parameters $\theta^*$ that maximize the log-likelihood of the demonstrations: $$\label{eq:mceirl-mle} \theta^* = \text{argmax}_{\theta} \ln p(\mathcal{D} \mid \theta) = \text{argmax}_{\theta} \sum_i \sum_t \ln \pi_t(a_{i,t} \mid s_{i, t}, \theta).$$ $\theta^*$ gives rise to a policy whose feature expectations match those of the expert demonstrations. Reward Learning by Simulating the Past {#sec:algorithms} ====================================== We solve the problem of learning the reward function of an expert Alice given a single final state of her trajectory; we refer to this problem as *IRL from a single state*. Formally, we aim to infer Alice’s reward $\theta$ given an environment $\mathcal{M} \backslash r$ and the last state of the expert’s trajectory $s_0$. [**Formulation.**]{} To adapt MCEIRL to the one state setting we modify the observation model from [Equation \[eq:mceirl-obs-model\]]{}. Since we only have a single end state $s_0$ of the trajectory $\tau_0 = (s_{-T}, a_{-T}, ..., s_0, a_0)$, we marginalize over all of the other variables in the trajectory: $$p(s_0 \mid \theta) = \sum\limits_{s_{-T}, a_{-T}, \dots s_{-1}, a_{-1}, a_0} p(\tau_0 \mid \theta),$$ where $p(\tau_0 \mid \theta)$ is given in [Equation \[eq:mceirl-obs-model\]]{}. We could invert this and sample from $p(\theta \mid s_0)$; the resulting algorithm is presented in Appendix \[appendix:sampling-algo\], but is relatively noisy and slow. We instead find the MLE: $$\label{eq:one-state-mceirl-objective} \theta^* = \text{argmax}_{\theta} \ln p(s_0 \mid \theta). $$ [**Solution.**]{} Similarly to MCEIRL, we use a gradient ascent algorithm to solve the IRL from one state problem. 
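For tiny tabular problems, the maximum-likelihood objective in Equation \[eq:one-state-mceirl-objective\] can be checked by brute force: compute $p(s_0 \mid \theta)$ exactly with a forward pass that marginalizes out the unobserved past, and ascend $\ln p(s_0 \mid \theta)$ with central finite differences in place of the exact gradient. The following is our own sanity-check sketch (all names are ours), not the RLSP algorithm itself, which uses an efficient dynamic-programming gradient:

```python
import numpy as np

def log_lik_s0(theta, T_mat, features, s0, horizon, p_start):
    """log p(s_0 | theta): soft value iteration gives Alice's Boltzmann
    policies; a forward pass then marginalizes over the unobserved
    trajectory (s_{-T}, a_{-T}, ..., a_{-1})."""
    n_actions = T_mat.shape[1]
    reward = features @ theta
    Q = np.tile(reward[:, None], (1, n_actions))   # final-step soft Q-values
    policies = []
    for _ in range(horizon + 1):
        V = np.log(np.exp(Q).sum(axis=1))          # soft (log-sum-exp) value
        policies.append(np.exp(Q - V[:, None]))    # pi_t(a|s) = exp(Q - V)
        Q = reward[:, None] + np.einsum('sap,p->sa', T_mat, V)
    policies.reverse()
    p = np.asarray(p_start, dtype=float)           # distribution over s_{-T}
    for pi in policies[:-1]:                       # a_0 does not change s_0
        # joint over (state, action), pushed through the dynamics
        p = np.einsum('s,sa,sap->p', p, pi, T_mat)
    return np.log(p[s0])

def rlsp_finite_diff(T_mat, features, s0, horizon, p_start,
                     lr=0.5, iters=100, eps=1e-4):
    """Gradient ascent on log p(s_0 | theta) via central finite differences."""
    theta = np.zeros(features.shape[1])
    for _ in range(iters):
        grad = np.zeros_like(theta)
        for k in range(theta.size):
            hi, lo = theta.copy(), theta.copy()
            hi[k] += eps
            lo[k] -= eps
            grad[k] = (log_lik_s0(hi, T_mat, features, s0, horizon, p_start)
                       - log_lik_s0(lo, T_mat, features, s0, horizon, p_start)) / (2 * eps)
        theta += lr * grad
    return theta
```

On a two-state chain where state $1$ is absorbing and the observed $s_0$ is state $1$, this sketch assigns state $1$ a larger inferred reward weight than state $0$, the qualitative behavior one expects from marginalizing Equation \[eq:mceirl-obs-model\] over the past.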
We explain the key steps here and give the full derivation in Appendix \[appendix:deriving-grad\]. First, we express the gradient in terms of the gradients of trajectories: $$\nabla_{\theta} \ln p(s_0 \mid \theta) = \sum\limits_{\tau_{-T:-1}} p(\tau_{-T:-1} \mid s_0, \theta) \nabla_{\theta} \ln p(\tau_{-T:0} \mid \theta).$$ This has a nice interpretation – compute the Maximum Causal Entropy gradients for each trajectory, and then take their weighted sum, where each weight is the probability of the trajectory given the evidence $s_0$ and current reward $\theta$. We derive the exact gradient for a trajectory instead of the approximate one in @ziebart2010modeling in Appendix \[appendix:mceirl-grad\] and substitute it in to get: $$\label{eq:one-state-mceirl-grad} \nabla_{\theta} \ln p(s_0) = \frac{1}{p(s_0)} \sum\limits_{\tau_{-T:-1}} \left[ p(\tau_{-T:-1}, s_0) \sum_{t=-T}^{-1} \left( f(s_t) + {\mathbb{E}_{s'_{t+1}}\left[\mathcal{F}_{t+1}(s'_{t+1})\right]} - \mathcal{F}_t(s_t) \right) \right],$$ where we have suppressed the dependence on $\theta$ for readability. $\mathcal{F}_t(s_t)$ denotes the expected features when starting at $s_t$ at time $t$ and acting until time $0$ under the policy implied by $\theta$. Since we combine gradients from simulated past trajectories, we name our algorithm Reward Learning by Simulating the Past (RLSP). The algorithm computes the gradient using dynamic programming, detailed in Appendix \[appendix:deriving-grad\]. We can easily incorporate a prior on $\theta$ by adding the gradient of the log prior to the gradient in [Equation \[eq:one-state-mceirl-grad\]]{}. Evaluation {#sec:evaluation} ========== Evaluation of RLSP is non-trivial. The inferred reward is very likely to assign state $s_0$ maximal reward, since it was inferred under the assumption that when Alice optimized the reward she ended up at $s_0$. 
If the robot then starts in state $s_0$ and a no-op action is available (as it often is), the RLSP reward is likely to incentivize no-ops, which is not very interesting. Ultimately, we hope to use RLSP to correct badly specified instructions or reward functions. We therefore created a suite of environments with a true reward $R_{\text{true}}$, a specified reward $R_{\text{spec}}$, Alice’s first state $s_{-T}$, and the robot’s initial state $s_0$, where $R_{\text{spec}}$ ignores some aspect(s) of $R_{\text{true}}$. RLSP is used to infer a reward $\theta_{\text{Alice}}$ from $s_0$, which is then combined with the specified reward to get a final reward $\theta_{\text{final}} = \theta_{\text{Alice}} + \lambda \theta_{\text{spec}}$. (We considered another method for combining rewards; see Appendix \[appendix:tradeoff\] for details.) We inspect the inferred reward qualitatively, and measure the expected true reward obtained when planning with $\theta_{\text{final}}$ as a fraction of the expected true reward of the optimal policy. We tune the hyperparameter $\lambda$, which controls the tradeoff between $R_{\text{spec}}$ and the inferred human reward, for all algorithms, including baselines. We use a Gaussian prior over the reward parameters.

Baselines
---------

[**Specified reward policy $\pi_{\text{spec}}$.**]{} We act as if the true reward were exactly the specified reward.

[**Policy that penalizes deviations $\pi_{\text{deviation}}$.**]{} This baseline minimizes change by penalizing deviations from the observed features $f(s_0)$, giving $R_{\text{final}}(s) = \theta_{\text{spec}}^T f(s) - \lambda || f(s) - f(s_0) ||$.

[**Relative reachability policy $\pi_{\text{reachability}}$.**]{} Relative reachability [@RelativeReachability] considers a change to be negative when it decreases *coverage* relative to what would have happened had the agent done nothing. Here, coverage is a measure of how easily states can be reached from the current state.
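With linear features, both the reward-combination step and the deviation baseline above reduce to a few lines of vector arithmetic. This is an illustrative sketch; in particular, the sign convention making $\lambda \ge 0$ act as a penalty is our assumption.

```python
import numpy as np

def combine_rewards(theta_alice, theta_spec, lam):
    """theta_final = theta_Alice + lam * theta_spec: with linear reward
    features, combining rewards is just adding weight vectors."""
    return theta_alice + lam * theta_spec

def deviation_reward(theta_spec, f_s, f_s0, lam):
    """Deviation baseline: the specified reward minus a scaled penalty
    on how far the current features are from the observed f(s_0).
    (Sign convention assumed so that lam >= 0 penalizes deviation.)"""
    return theta_spec @ f_s - lam * np.linalg.norm(f_s - f_s0)
```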
We compare against the variant of relative reachability that uses undiscounted coverage and a baseline policy in which the agent takes no-op actions, as in the original paper. Relative reachability requires known dynamics but not a handcoded featurization; a version of relative reachability that operates in feature space instead of state space would behave similarly.

Comparison to baselines {#sec:envs}
-----------------------

We compare RLSP to our baselines under the assumption of known $s_{-T}$, since this makes RLSP’s properties easier to analyze; we consider the case of unknown $s_{-T}$ in Section \[sec:uniform-prior\]. We summarize the results in Table \[table:results\], and show the environments and trajectories in Figure \[workflow\].

|                               | Side effects: Room | Env effect: Toy train | Implicit reward: Apple collection | Unseen effect: Far away vase | Batteries (easy) | Batteries (hard) |
|-------------------------------|:---:|:---:|:---:|:---:|:---:|:---:|
| $\pi_{\text{spec}}$           | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ |
| $\pi_{\text{deviation}}$      | ✓ | ✗ | ✗ | ✓ | $\approx$ | ✗ |
| $\pi_{\text{reachability}}$   | ✓ | ✓ | ✗ | ✓ | $\approx$ | ✗ |
| $\pi_{\text{RLSP}}$           | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ |

: Performance of algorithms on environments designed to test particular properties: ✓ marks success, ✗ marks failure, and $\approx$ marks success only when the penalty weight is small.[]{data-label="table:results"}

![\[workflow\] Evaluation of RLSP on our environments. Silhouettes indicate the initial position of an object or agent, while filled-in versions indicate their positions after an agent has acted. The first row depicts the information given to RLSP. The second row shows the trajectory taken by the robot when following the policy $\pi_{\text{spec}}$ that is optimal for ${\theta_{\text{spec}}}$. The third row shows the trajectory taken when following the policy $\pi_{\text{RLSP}}$ that is optimal for $\theta_{\text{final}} = \theta_{\text{Alice}} + \lambda \theta_{\text{spec}}$. (a) Side effects: Room with vase (b) Distinguishing environment effects: Toy train (c) Implicit reward: Apple collection (d) Desirable side effect: Batteries (e) “Unseen” side effect: Room with far away vase.](images/envs-compressed/h-room.pdf "fig:"){width=".98\textwidth"}
![](images/envs-compressed/h-train.pdf "fig:"){width=".98\textwidth"}
![](images/envs-compressed/h-batteries.pdf "fig:"){width=".98\textwidth"}
![](images/envs-compressed/h-apples.pdf "fig:"){width=".98\textwidth"}
![](images/envs-compressed/h-room-bad.pdf "fig:"){width=".98\textwidth"}
![](images/envs-compressed/spec-room.pdf "fig:"){width=".98\textwidth"}
![](images/envs-compressed/spec-train.pdf "fig:"){width=".98\textwidth"}
![](images/envs-compressed/spec-batteries.pdf "fig:"){width=".98\textwidth"}
![](images/envs-compressed/spec-apples.pdf "fig:"){width=".98\textwidth"}
![](images/envs-compressed/spec-room-bad.pdf "fig:"){width=".98\textwidth"}
![](images/envs-compressed/irl-room.pdf "fig:"){width=".98\textwidth"}
![](images/envs-compressed/irl-train.pdf "fig:"){width=".98\textwidth"}
![](images/envs-compressed/irl-batteries-new.pdf "fig:"){width=".98\textwidth"}
![](images/envs-compressed/irl-apples.pdf "fig:"){width=".98\textwidth"}
![](images/envs-compressed/irl-room-bad.pdf "fig:"){width=".98\textwidth"}

[**Side effects: Room with vase**]{} *(Figure \[room-env\])*. The room tests whether the robot can avoid breaking a vase as a side effect of going to the purple door. There are features for the number of broken vases, standing on a carpet, and each door location. Since Alice didn’t walk over the vase, RLSP infers a negative reward on broken vases, and a small positive reward on carpets (since paths to the top door usually involve carpets). So, $\pi_{\text{RLSP}}$ successfully avoids breaking the vase. The penalty-based baselines also achieve the desired behavior: $\pi_{\text{deviation}}$ avoids breaking the vase since doing so would change the “number of broken vases” feature, while $\pi_{\text{reachability}}$ avoids breaking the vase since doing so would make all states with intact vases unreachable.

[**Distinguishing environment effects: Toy train**]{} *(Figure \[train-env\])*. To test whether algorithms can distinguish between effects caused by the agent and effects caused by the environment, as suggested in @RelativeReachability, we add a toy train that moves along a predefined track. The train breaks if the agent steps on it. We add a new feature indicating whether the train is broken and new features for each possible train location. As before, the specified reward only has a positive weight on the purple door, while the true reward also penalizes broken trains and vases. RLSP infers a negative reward on broken vases and broken trains, for the same reason as before. It also infers near-zero weights on the individual train locations, even though the train’s location changes frequently, because no particular location helps explain $s_0$. As a result, $\pi_{\text{RLSP}}$ walks over a carpet, but not a vase or a train.
$\pi_{\text{deviation}}$ immediately breaks the train to keep the train location the same. $\pi_{\text{reachability}}$ deduces that breaking the train is irreversible, and so follows the same trajectory as $\pi_{\text{RLSP}}$. [**Implicit reward: Apple collection**]{} *(Figure \[apples-env\])*. This environment tests whether the algorithms can learn tasks implicit in $s_0$. There are three trees that grow apples, as well as a basket for collecting apples, and the goal is for the robot to harvest apples. However, the specified reward is zero: the robot must infer the task from the observed state. We have features for the number of apples in baskets, the number of apples on trees, whether the robot is carrying an apple, and each location that the agent could be in. $s_0$ has two apples in the basket, while $s_{-T}$ has none. $\pi_{\text{spec}}$ is arbitrary since every policy is optimal for the zero reward. $\pi_{\text{deviation}}$ does nothing, achieving zero reward, since its reward can never be positive. $\pi_{\text{reachability}}$ also does not harvest apples. RLSP infers a positive reward on apples in baskets, a negative reward for apples on trees, and a small positive reward for carrying apples. Despite the spurious weights, $\pi_{\text{RLSP}}$ harvests apples as desired. [**Desirable side effect: Batteries**]{} *(Figure \[batteries-env\])*. This environment tests whether the algorithms can tell when a side effect is allowed. We take the toy train environment, remove vases and carpets, and add batteries. The robot can pick up batteries and put them into the (now unbreakable) toy train, but the batteries are never replenished. If the train runs for 10 timesteps without a new battery, it stops operating. There are features for the number of batteries, whether the train is operational, each train location, and each door location. There are two batteries at $s_{-T}$ but only one at $s_0$. 
The true reward incentivizes an operational train and being at the purple door. We consider two variants of the specified reward: an “easy” case, where it equals the true reward, and a “hard” case, where it only rewards being at the purple door. Unsurprisingly, $\pi_{\text{spec}}$ succeeds in the easy case, and fails in the hard case by allowing the train to run out of power. Both $\pi_{\text{deviation}}$ and $\pi_{\text{reachability}}$ see the action of putting a battery in the train as a side effect to be penalized, and so neither can solve the hard case. They penalize picking up the batteries, and so only solve the easy case if the penalty weight is small. RLSP sees that one battery is gone and that the train is operational, and infers that Alice wants the train to be operational and doesn’t want batteries (since a preference against batteries and a preference for an operational train are nearly indistinguishable here). So, it solves both the easy and the hard case, with $\pi_{\text{RLSP}}$ picking up the battery, then staying at the purple door except to deliver the battery to the train.

[**“Unseen” side effect: Room with far away vase**]{} *(Figure \[room-bad-env\])*. This environment demonstrates a limitation of our algorithm: it cannot identify side effects that Alice would never have triggered. In this room, the vase is nowhere close to the shortest path from Alice’s original position to her goal, but it is on the path to the robot’s goal. Since our baselines don’t care about the trajectory the human takes, they all perform as before: $\pi_{\text{spec}}$ walks over the vase, while $\pi_{\text{deviation}}$ and $\pi_{\text{reachability}}$ both avoid it. Our method infers a near-zero weight on the broken-vase feature, since no reasonable trajectory to Alice’s goal breaks the vase, and so the robot breaks the vase when moving to its own goal.
Note that this only applies when Alice is known to be at the bottom-left corner at $s_{-T}$: if we have a uniform prior over $s_{-T}$ (considered in Section \[sec:uniform-prior\]), then we do consider trajectories in which vases are broken.

Known $s_{-T}$ vs. a distribution over $s_{-T}$ {#sec:uniform-prior}
-----------------------------------------------

So far, we have considered the setting where the robot knows $s_{-T}$, since it is easier to analyze. However, typically we will not know $s_{-T}$, and will instead have some prior over it. Here, we compare RLSP in two settings: perfect knowledge of $s_{-T}$ (as in Section \[sec:envs\]), and a uniform distribution over all states.

[**Side effects: Room with vase**]{} *(Figure \[room-env\])* **and toy train** *(Figure \[train-env\])*. In both Room with vase and Toy train, RLSP learns a smaller negative reward on broken vases when using a uniform prior. This is because RLSP considers many more feasible trajectories under a uniform prior, many of which do not give Alice a chance to break the vase, as in Room with far away vase in Section \[sec:envs\]. In Room with vase, the small positive reward on carpets changes to a near-zero negative reward: with known $s_{-T}$, RLSP overfits to the few consistent trajectories, which usually go over carpets, whereas with a uniform prior it considers many more trajectories that often don’t go over carpets, and so it correctly infers a near-zero weight. In Toy train, the negative reward on broken trains becomes slightly more negative, while other features remain approximately the same. This may be because when Alice starts out closer to the toy train, she has more opportunity to break it than in the known-$s_{-T}$ case.

[**Implicit reward: Apple collection**]{} *(Figure \[apples-env\])*.
Here, a uniform prior leads to a smaller positive weight on the number of apples in baskets than in the case with known $s_{-T}$. Intuitively, this is because RLSP considers cases where $s_{-T}$ already has one or two apples in the basket, which would imply that Alice collected fewer apples and so must have been less interested in them. States where the basket starts with three or more apples are inconsistent with the observed $s_0$ and so are not considered. Following the inferred reward still leads to good apple-harvesting behavior.

[**Desirable side effect: Batteries**]{} *(Figure \[batteries-env\])*. With the uniform prior, we see the same behavior as in Apple collection: RLSP learns a slightly smaller negative reward on batteries, since it considers states $s_{-T}$ where the battery was already gone. In addition, due to the particular setup, the battery must have been given to the train two timesteps before $s_0$. This means that in any candidate past where the train started with very little charge, it was allowed to die even though a battery could have been provided earlier, leading to a near-zero *positive* weight on the train losing charge. Despite this, RLSP successfully delivers the battery to the train in both the easy and hard cases.

[**“Unseen” side effect: Room with far away vase**]{} *(Figure \[room-bad-env\])*. With a uniform prior, we do “see” the side effect: if Alice had started at the purple door, then the shortest trajectory to the black door would break a vase. As a result, $\pi_{\text{RLSP}}$ successfully avoids the vase (whereas it previously did not). Here, uncertainty over the initial state $s_{-T}$ can counterintuitively *improve* the results, because it increases the diversity of trajectories considered, which prevents RLSP from “overfitting” to the few trajectories consistent with a known $s_{-T}$ and $s_0$.
Overall, RLSP is quite robust to the use of a uniform prior over $s_{-T}$, suggesting that we do not need to be particularly careful in the design of that prior.

Robustness to the choice of Alice’s planning horizon
----------------------------------------------------

We investigate how RLSP performs when assuming the wrong value of Alice’s planning horizon $T$. We vary the value of $T$ assumed by RLSP, and report the true return achieved by $\pi_{\text{RLSP}}$ obtained using the inferred reward and a *fixed* horizon for the robot to act. For this experiment, we used a uniform prior over $s_{-T}$, since with known $s_{-T}$ and a misspecified $T$, RLSP often detects that the given $s_{-T}$ and $s_0$ are incompatible. The results are presented in Figure \[fig:horizon\].

![image](images/horizon_0.pdf){width="30.50000%"}

Performance worsens when RLSP assumes that Alice had a smaller planning horizon than she actually had. Intuitively, if we assume that Alice has only ever taken one or two actions, then even if we knew those actions, they could have been in service of many goals, so we end up quite uncertain about Alice’s reward. When the assumed $T$ is larger than the true horizon, RLSP correctly infers things the robot should *not* do: knowing that the vase was not broken for more than $T$ timesteps is further evidence that Alice cared about not breaking the vase. However, an overestimated $T$ leads to worse performance at inferring implicit preferences, as in the Apples environment: if we assume Alice has only collected two apples in 100 timesteps, she must not have cared about them much, since she could have collected many more. The batteries environment is unusual: assuming that Alice has been acting for 100 timesteps, the only explanation for the observed $s_0$ is that Alice waited until the 98th timestep to put the battery into the train. This is not particularly consistent with any reward function, and performance degrades.
Overall, $T$ is an important parameter and needs to be set appropriately. However, even when $T$ is misspecified, performance degrades gracefully to what would have happened if we optimized ${\theta_{\text{spec}}}$ by itself, so RLSP does not hurt. In addition, if $T$ is larger than it should be, then RLSP still tends to accurately infer parts of the reward that specify what not to do. Limitations and future work {#sec:discussion} =========================== [**Summary.**]{} Our key insight is that when a robot is deployed, the state that it observes has already been optimized to satisfy human preferences. This explains our preference for a policy that generally avoids side effects. We formalized this by assuming that Alice has been acting in the environment prior to the robot’s deployment. We developed an algorithm, RLSP, that computes a MAP estimate of Alice’s reward function. The robot then acts according to a tradeoff between Alice’s reward function and the specified reward function. Our evaluation showed that information from the initial state can be used to successfully infer side effects to avoid as well as tasks to complete, though there are cases in which we cannot infer the relevant preferences. While we believe this is an important step forward, there is still much work to be done to make this accurate and practical. [**Realistic environments.**]{} The primary avenue for future work is to scale to realistic environments, where we cannot enumerate states, we don’t know dynamics, and the reward function may be nonlinear. This could be done by adapting existing IRL algorithms [@AIRL; @GAIL; @GCL]. Unknown dynamics is particularly challenging, since we cannot learn dynamics from a single state observation. While acting in the environment, we would have to learn a dynamics model or an inverse dynamics model that can be used to simulate the past, and update the learned preferences as our model improves over time. 
Alternatively, if we use unsupervised skill learning [@VALOR; @DIAYN; @RIG] or exploration [@curiosity], or learn a goal-conditioned policy [@UVFA; @HER], we could compare the explored states with the observed $s_0$.

[**Hyperparameter choice.**]{} While our evaluation showed that RLSP is reasonably robust to the choice of planning horizon $T$ and prior over $s_{-T}$, this may be specific to our gridworlds. In the real world, we often make long-term hierarchical plans, and if we don’t observe the entire plan (corresponding to a choice of $T$ that is too small), we may infer bad rewards, especially if we have an uninformative prior over $s_{-T}$. We do not know whether this will be a problem, or how severe it would be, and hope to investigate it in future work with more realistic environments.

[**Conflicts between $\theta_{\text{spec}}$ and $\theta_{\text{Alice}}$.**]{} RLSP allows us to infer $\theta_{\text{Alice}}$ from $s_0$, which we must somehow combine with $\theta_{\text{spec}}$ to produce a reward $\theta_{\text{final}}$ for the robot to optimize. $\theta_{\text{Alice}}$ will usually prefer the status quo of keeping the state similar to $s_0$, while $\theta_{\text{spec}}$ will probably incentivize some *change* to the state, leading to conflict. We traded off between the two by optimizing their sum, but this is not very principled, and future work could improve upon it. For example, $\theta_{\text{Alice}}$ could be decomposed into $\theta_{\text{Alice,task}}$, which says which task Alice is performing (“go to the black door”), and $\theta_{\text{frame}}$, which consists of the frame conditions (“don’t break vases”). The robot would then optimize $\theta_{\text{frame}} + \lambda \theta_{\text{spec}}$. This requires some way of performing the decomposition. We could model the human as pursuing multiple different subgoals, or the environment as being created by multiple humans with different goals.
$\theta_{\text{frame}}$ would be shared, while $\theta_{\text{Alice,task}}$ would vary, allowing us to distinguish between them. However, combination may not be the answer: instead, perhaps the robot ought to use the inferred reward to inform Alice of any conflicts and actively query her for more information.

[**Learning tasks to perform.**]{} The apples and batteries environments demonstrate that RLSP can learn preferences that require the robot to actively perform a task. It is not clear that this is desirable, since the robot may perform an inferred task instead of the task Alice explicitly sets for it.

[**Preferences that are not a result of human optimization.**]{} While the initial state is optimized for human preferences, it may not have become so through *human* action, as assumed in this paper. For example, we prefer that the atmosphere contain oxygen for us to breathe. The atmosphere meets this preference *in spite of* human action, and so RLSP would not infer this preference. While this is of limited relevance for household robots, it may become important for more capable AI systems.

### Acknowledgments {#acknowledgments .unnumbered}

We thank the researchers at the Center for Human Compatible AI for valuable feedback. This work was supported by the Open Philanthropy Project and National Science Foundation Graduate Research Fellowship Grant No. DGE 1752814.

Exact gradient for the maximum causal entropy model {#appendix:mceirl-grad}
===================================================

Here, we derive an exact gradient for the maximum causal entropy distribution introduced in @ziebart2010modeling, as the existing approximation is insufficient for our purposes. Given a trajectory $\tau_T = s_0 a_0 \dots s_T a_T$, we seek the gradient $\nabla_{\theta} \ln p(\tau_T)$.
We assume that the expert has been acting according to the maximum causal entropy IRL model given in [Section \[sec:preliminaries\]]{} (where we have dropped $\theta$ from the notation for clarity): $$\begin{aligned} \pi_t(a\mid s) &= \exp(Q_t(s,a) - V_t(s)), \\ V_t(s) &= \ln \sum_a \exp(Q_t(s,a)) & \text{for } 0 \leq t \leq T, \\ Q_t(s,a) &= \theta^T f(s) + \sum_{s'} \mathcal T(s' \mid s, a) V_{t+1}(s') & \text{for } 0 \leq t \leq T, \\ V_{T+1}(s) &= 0.\end{aligned}$$ In the following, unless otherwise specified, all expectations over states and actions use the probability distribution over trajectories from the above model, starting from the state and action just prior. For example, ${\mathbb{E}_{s'_T,a'_T}\left[X(s'_T, a'_T)\right]} = \sum_{s'_T,a'_T} \mathcal{T}(s'_T \mid s_{T-1},a_{T-1}) \pi_T(a'_T \mid s'_T) X(s'_T, a'_T)$. In addition, for all probability distributions over states and actions, we drop the dependence on $\theta$ for readability, so the probability of reaching state $s_T$ is written as $p(s_T)$ instead of $p(s_T \mid \theta)$. First, we compute the gradient of $V_t(s)$.
We have $\nabla_{\theta} V_{T+1}(s) = 0$, and for $0 \leq t \leq T$: $$\begin{aligned} &\nabla_{\theta} V_t(s_t) \\&= \nabla_{\theta} \ln \sum\limits_{a'_t} \exp(Q_t(s_t, a'_t)) \\&= \frac{1}{\exp(V_t(s_t))} \sum\limits_{a'_t} \exp(Q_t(s_t, a'_t)) \nabla_{\theta} Q_t(s_t, a'_t) \\&= \frac{1}{\exp(V_t(s_t))} \sum\limits_{a'_t} \exp(Q_t(s_t, a'_t)) \nabla_{\theta} \left[ \theta^T f(s_t) + {\mathbb{E}_{s'_{t+1} \sim \mathcal{T}(\cdot \mid s_t, a'_t)}\left[V_{t+1}(s'_{t+1})\right]} \right] \\&= \sum\limits_{a'_t} \exp(Q_t(s_t, a'_t) - V_t(s_t)) \left[ f(s_t) + {\mathbb{E}_{s'_{t+1} \sim \mathcal{T}(\cdot \mid s_t, a'_t)}\left[\nabla_{\theta} V_{t+1}(s'_{t+1})\right]} \right] \\&= \sum\limits_{a'_t} \pi_t(a'_t \mid s_t) \left[ f(s_t) + {\mathbb{E}_{s'_{t+1} \sim \mathcal{T}(\cdot \mid s_t, a'_t)}\left[\nabla_{\theta} V_{t+1}(s'_{t+1})\right]} \right] \\&= f(s_t) + {\mathbb{E}_{a'_t, s'_{t+1}}\left[\nabla_{\theta} V_{t+1}(s'_{t+1})\right]}.\end{aligned}$$ Unrolling the recursion, we get that the gradient is the expected feature counts under the policy implied by $\theta$ from $s_t$ onwards, which we could prove using induction. 
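The claim that $\nabla_{\theta} V_t(s)$ equals the expected feature counts from $s$ onwards can be checked numerically. The sketch below implements the soft value iteration and feature-count recursions above on a small random MDP (the sizes and the random instance are purely illustrative, not from the paper) and compares a finite-difference gradient of $V_0$ against the recursion:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, K, T = 3, 2, 2, 4                     # states, actions, feature dim, horizon
f = rng.normal(size=(S, K))                 # feature vectors f(s)
trans = rng.random((S, A, S))
trans /= trans.sum(axis=-1, keepdims=True)  # transition probabilities T(s' | s, a)

def soft_vi(theta):
    """Backward recursion: V_{T+1} = 0, Q_t = theta^T f + E[V_{t+1}], V_t = lse_a Q_t."""
    V = np.zeros(S)
    pis = []
    for _ in range(T + 1):
        Q = (f @ theta)[:, None] + trans @ V    # Q_t(s, a)
        V = np.log(np.exp(Q).sum(axis=1))       # V_t(s)
        pis.append(np.exp(Q - V[:, None]))      # pi_t(a | s)
    return pis[::-1], V                         # policies for t = 0..T, and V_0

def expected_features(theta):
    """Expected feature counts from each state at t = 0 under the implied policy."""
    pis, _ = soft_vi(theta)
    F = f.copy()                                # at the final step, only f(s) remains
    for t in range(T - 1, -1, -1):
        F = f + np.einsum('sa,san,nk->sk', pis[t], trans, F)
    return F

theta = rng.normal(size=K)
eps = 1e-6
fd = np.stack([(soft_vi(theta + eps * np.eye(K)[k])[1]
              - soft_vi(theta - eps * np.eye(K)[k])[1]) / (2 * eps)
               for k in range(K)], axis=1)      # finite-difference grad of V_0(s)
print(np.abs(fd - expected_features(theta)).max())   # agrees up to finite-difference error
```

The two quantities agree to roughly the precision of the central difference, which is numerical evidence for the induction argument mentioned above.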
Define: $$\mathcal{F}_t(s_t) \equiv f(s_t) + {\mathbb{E}_{a'_{t:T-1}, s'_{t+1:T}}\left[\sum\limits_{t'=t+1}^{T} f(s'_{t'})\right]}.$$ Then we have: $$\nabla_{\theta} V_t(s_t) = \mathcal{F}_t(s_t).$$ We can now calculate the gradient we actually care about: $$\begin{aligned} &\nabla_{\theta} \ln p(\tau_{T}) \\&= \nabla_{\theta} \left[ \ln p(s_0) + \sum_{t=0}^{T} \ln \pi_t(a_t \mid s_t) + \sum_{t=0}^{T-1} \ln \mathcal{T}(s_{t+1} \mid s_t, a_t) \right] \\&= \sum_{t=0}^{T} \nabla_{\theta} \ln \pi_t(a_t \mid s_t) & \text{only } \pi_t \text{ depends on } \theta \\&= \sum_{t=0}^{T} \nabla_{\theta} \left[ Q_t(s_t, a_t) - V_t(s_t) \right] \\&= \sum_{t=0}^{T} \nabla_{\theta} \left[ \theta^T f(s_t) + {\mathbb{E}_{s'_{t+1}}\left[V_{t+1}(s'_{t+1})\right]} - V_t(s_t) \right] \\&= \sum_{t=0}^{T} \left( f(s_t) + {\mathbb{E}_{s'_{t+1}}\left[\nabla_{\theta} V_{t+1}(s'_{t+1})\right]} - \nabla_{\theta} V_t(s_t) \right).\end{aligned}$$ The last term of the summation is $f(s_T) + {\mathbb{E}_{s'_{T+1}}\left[\nabla_{\theta} V_{T+1}(s'_{T+1})\right]} - \nabla_{\theta} V_T(s_T)$, which simplifies to $f(s_T) + 0 - \mathcal{F}_T(s_T) = f(s_T) - f(s_T) = 0$, so we can drop it. Thus, our gradient is: $$\label{eq:mceirl-grad} \nabla_{\theta} \ln p(\tau_{T}) = \sum_{t=0}^{T-1} \left( f(s_t) + {\mathbb{E}_{s'_{t+1}}\left[\mathcal{F}_{t+1}(s'_{t+1})\right]} - \mathcal{F}_t(s_t) \right).$$ This is the gradient we will use in Appendix \[appendix:deriving-grad\], but a little more manipulation allows us to compare with the gradient in @ziebart2010modeling. 
We reintroduce the terms that we cancelled above: $$\begin{aligned} \nabla_{\theta} \ln p(\tau_{T}) &= \left( \sum_{t=0}^{T} f(s_t) \right) + \left( \sum_{t=0}^{T-1} {\mathbb{E}_{s'_{t+1}}\left[\mathcal{F}_{t+1}(s'_{t+1})\right]} \right) - \left( \mathcal{F}_{0}(s_{0}) + \sum_{t=0}^{T-1} \mathcal{F}_{t+1}(s_{t+1}) \right) \\&= \left( \sum_{t=0}^{T} f(s_t) \right) - \mathcal{F}_{0}(s_{0}) + \sum_{t=0}^{T-1} \left( {\mathbb{E}_{s'_{t+1}}\left[\mathcal{F}_{t+1}(s'_{t+1})\right]} - \mathcal{F}_{t+1}(s_{t+1}) \right).\end{aligned}$$ @ziebart2010modeling states that the gradient is given by the expert policy feature expectations minus the learned policy feature expectations, and in practice uses the feature expectations from demonstrations to approximate the expert policy feature expectations. Assuming we have $N$ trajectories $\{ \tau_i \}$, the gradient would be $\left( \frac{1}{N} \sum_i \sum_{t=0}^{T} f(s_{t,i}) \right) - {\mathbb{E}_{s_0}\left[\mathcal{F}_0(s_0)\right]}$. The first term matches our first term exactly. Our second term matches their second term in the limit of sufficiently many trajectories, so that the starting states $s_0$ follow the distribution $p(s_0)$. Our third term converges to zero with sufficiently many trajectories, since any $s_t, a_t$ pair in a demonstration will be present sufficiently often that the empirical counts of $s_{t+1}$ will match the expected proportions prescribed by $\mathcal{T}(\cdot \mid s_t, a_t)$. In a deterministic environment, we have $\mathcal{T}(s'_{t+1} \mid s_t, a_t) = 1[s'_{t+1} = s_{t+1}]$ since only one transition is possible. Thus, the third term is zero and even for one trajectory the gradient reduces to $\left( \sum_{t=0}^{T} f(s_t) \right) - \mathcal{F}_0(s_0)$. This differs from the gradient in @ziebart2010modeling only in that it computes feature expectations from the observed starting state $s_0$ instead of the MDP distribution over initial states $p(s_0)$.
In a stochastic environment, the third term need not be zero, and corrects for the “bias” in the observed states $s_{t+1}$. Intuitively, when the expert chose action $a_t$, she did not know which next state $s'_{t+1}$ would arise, but the first term of our gradient upweights the particular next state $s_{t+1}$ that we observed. The third term downweights the future value of the observed state and upweights the future value of all other states, all in proportion to their prior probability $\mathcal{T}(s'_{t+1} \mid s_t, a_t)$.

Derivation of the RLSP gradient {#appendix:deriving-grad}
===============================

This section provides a derivation of the gradient $\nabla_{\theta} \ln p(s_0)$, which is needed to solve $\text{argmax}_{\theta} \ln p(s_0)$ with gradient ascent. We provide the results first as a quick reference: $$\begin{aligned} \nabla_{\theta} \ln p(s_0) &= \frac{G_0(s_0)}{p(s_0)}, \\ p(s_{t+1}) &= \sum_{s_t, a_t} p(s_t) \pi_t(a_t \mid s_t) \mathcal{T}(s_{t+1} \mid s_t, a_t), \\ G_{t+1}(s_{t+1}) &= \sum\limits_{s_t, a_t} \mathcal{T}(s_{t+1} \mid s_{t}, a_{t}) \pi_t(a_t \mid s_t) \bigg( {p(s_t) g(s_t, a_t)} + G_{t}(s_t) \bigg), \\ g(s_t, a_t) &\equiv f(s_t) + {\mathbb{E}_{s'_{t+1}}\left[\mathcal{F}_{t+1}(s'_{t+1})\right]} - \mathcal{F}_t(s_t), \\ \mathcal{F}_{t-1}(s_{t-1}) &= f(s_{t-1}) + \sum\limits_{a'_{t-1}, s'_t} \pi_{t-1}(a'_{t-1} \mid s_{t-1}) \mathcal{T}(s'_t \mid s_{t-1}, a'_{t-1}) \mathcal{F}_t(s'_t).\end{aligned}$$ Base cases: first, $p(s_{-T})$ is given, second, $G_{-T}(s_{-T}) = 0$, and third, $\mathcal{F}_0(s_0) = f(s_0)$. For the derivation, we start by expressing the gradient in terms of gradients of trajectories, so that we can use the result from Appendix \[appendix:mceirl-grad\]. Note that, by inspecting the final form of the gradient in Appendix \[appendix:mceirl-grad\], we can see that $\nabla_{\theta} \ln p(\tau_{-T:0})$ is independent of $a_0$.
Then, we have: $$\begin{aligned} \nabla_{\theta} \ln p(s_0) &= \frac{1}{p(s_0)} \nabla_{\theta} p(s_0) \\&= \frac{1}{p(s_0)} \sum\limits_{s_{-T:-1},a_{-T:0}} \nabla_{\theta} p(\tau_{-T:0}) \\&= \frac{1}{p(s_0)} \sum\limits_{s_{-T:-1},a_{-T:0}} p(\tau_{-T:0})\nabla_{\theta} \ln p(\tau_{-T:0}) \\&= \frac{1}{p(s_0)} \sum\limits_{s_{-T:-1},a_{-T:-1}} \left( p(\tau_{-T:-1}, s_0) \nabla_{\theta} \ln p(\tau_{-T:0}) \left( \sum\limits_{a_0} \pi_0(a_0 \mid s_0) \right) \right) \\&= \sum\limits_{s_{-T:-1},a_{-T:-1}} p(\tau_{-T:-1} \mid s_0) \nabla_{\theta} \ln p(\tau_{-T:0}).\end{aligned}$$ This has a nice interpretation – compute the gradient for each trajectory and take the weighted sum, where each weight is the probability of the trajectory given the evidence $s_0$ and current reward $\theta$. We can rewrite the gradient in [Equation \[eq:mceirl-grad\]]{} as $\nabla_{\theta} \ln p(\tau_{T}) = \sum_{t=0}^{T-1} g(s_t, a_t)$, where $$g(s_t, a_t) \equiv f(s_t) + {\mathbb{E}_{s'_{t+1}}\left[\mathcal{F}_{t+1}(s'_{t+1})\right]} - \mathcal{F}_t(s_t).$$ We can now substitute this to get: $$\begin{aligned} \nabla_{\theta} \ln p(s_0) &= \sum\limits_{s_{-T:-1},a_{-T:-1}} p(\tau_{-T:-1} \mid s_0) \left(\sum_{t=-T}^{-1} g(s_t, a_t) \right) \\&= \frac{1}{p(s_0)} \sum\limits_{s_{-T:-1},a_{-T:-1}} \left[ p(\tau_{-T:-1}, s_0) \sum_{t=-T}^{-1} g(s_t, a_t) \right].\end{aligned}$$ Note that we can compute $p(s_t)$ since we are given the distribution $p(s_{-T})$ and we can use the recursive rule $p(s_{t+1}) = \sum_{s_t, a_t} p(s_t) \pi_t(a_t \mid s_t) \mathcal{T}(s_{t+1} \mid s_t, a_t)$.
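The recursions listed in the quick reference above can be implemented end-to-end and validated numerically. The sketch below builds a small random MDP (all sizes and the random instance are illustrative), computes $\nabla_{\theta} \ln p(s_0) = G_0(s_0)/p(s_0)$ via the forward recursions, and checks it against a finite-difference gradient of $\ln p(s_0)$:

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, K, T = 3, 2, 2, 4                     # states, actions, feature dim, horizon T
f = rng.normal(size=(S, K))                 # feature vectors f(s)
trans = rng.random((S, A, S))
trans /= trans.sum(axis=-1, keepdims=True)  # T(s' | s, a)
p_init = np.full(S, 1.0 / S)                # given prior over s_{-T}

def soft_vi(theta):
    """Policies pi_t of the MCE model; times -T..0 re-indexed here as 0..T."""
    V = np.zeros(S)
    pis = []
    for _ in range(T + 1):
        Q = (f @ theta)[:, None] + trans @ V
        V = np.log(np.exp(Q).sum(axis=1))
        pis.append(np.exp(Q - V[:, None]))
    return pis[::-1]

def log_p_s0(theta, s_obs):
    """ln p(s_0): forward pass p(s_{t+1}) = sum_{s_t,a_t} p(s_t) pi_t T."""
    pis, p = soft_vi(theta), p_init
    for t in range(T):
        p = np.einsum('s,sa,san->n', p, pis[t], trans)
    return np.log(p[s_obs])

def rlsp_grad(theta, s_obs):
    pis = soft_vi(theta)
    F = [None] * (T + 1)
    F[T] = f.copy()                         # base case: F at the final time is f
    for t in range(T - 1, -1, -1):          # backward recursion for F_t
        F[t] = f + np.einsum('sa,san,nk->sk', pis[t], trans, F[t + 1])
    p, G = p_init, np.zeros((S, K))         # base cases: p(s_{-T}) given, G_{-T} = 0
    for t in range(T):                      # forward recursions for p and G
        g = f[:, None, :] + np.einsum('san,nk->sak', trans, F[t + 1]) - F[t][:, None, :]
        G = np.einsum('san,sa,sak->nk', trans, pis[t], p[:, None, None] * g + G[:, None, :])
        p = np.einsum('s,sa,san->n', p, pis[t], trans)
    return G[s_obs] / p[s_obs]              # grad ln p(s_0) = G_0(s_0) / p(s_0)

theta, s_obs, eps = rng.normal(size=K), 1, 1e-6
fd = np.array([(log_p_s0(theta + eps * np.eye(K)[k], s_obs)
              - log_p_s0(theta - eps * np.eye(K)[k], s_obs)) / (2 * eps)
               for k in range(K)])
print(np.abs(rlsp_grad(theta, s_obs) - fd).max())   # agrees up to finite-difference error
```

The analytic and finite-difference gradients coincide to numerical precision, which exercises all of the recursions in the quick reference at once.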
In order to compute $g(s_t, a_t)$ we need to compute $\mathcal{F}_{t}(s_t)$, which has base case $\mathcal{F}_0(s_0) = f(s_0)$ and recursive rule: $$\begin{aligned} & \mathcal{F}_{t-1}(s_{t-1}) \\&= f(s_{t-1}) + {\mathbb{E}_{a'_{t-1:-1},s'_{t:0}}\left[\sum_{t'=t}^0 f(s'_{t'})\right]} \\&= f(s_{t-1}) + \sum\limits_{a'_{t-1}, s'_t} \pi_{t-1}(a'_{t-1} \mid s_{t-1}) \mathcal{T}(s'_t \mid s_{t-1}, a'_{t-1}) \left[ f(s'_t) + {\mathbb{E}_{a'_{t:-1},s'_{t+1:0}}\left[\sum_{t'=t+1}^0 f(s'_{t'})\right]} \right] \\&= f(s_{t-1}) + \sum\limits_{a'_{t-1}, s'_t} \pi_{t-1}(a'_{t-1} \mid s_{t-1}) \mathcal{T}(s'_t \mid s_{t-1}, a'_{t-1}) \mathcal{F}_t(s'_t).\end{aligned}$$ For the remaining part of the gradient, define $G_t$ such that $\nabla_{\theta} \ln p(s_0) = \frac{G_0(s_0)}{p(s_0)}$: $$G_t(s_t) \equiv \sum\limits_{s_{-T:t-1},a_{-T:t-1}} \left[ p(\tau_{-T:t-1}, s_t) \sum_{t'=-T}^{t-1} g(s_{t'}, a_{t'}) \right].$$ We now derive a recursive relation for $G$: $$\begin{aligned} & G_{t+1}(s_{t+1}) \\&= \sum\limits_{s_{-T:t},a_{-T:t}} \left[ p(\tau_{-T:t}, s_{t+1}) \sum_{t'=-T}^{t} g(s_{t'}, a_{t'}) \right] \\&= \sum\limits_{s_t, a_t} \sum\limits_{s_{-T:t-1},a_{-T:t-1}} \mathcal{T}(s_{t+1} \mid s_{t}, a_{t}) \pi_t(a_t \mid s_t) p(\tau_{-T:t-1}, s_{t}) \left( g(s_t, a_t) + \sum_{t'=-T}^{t-1} g(s_{t'}, a_{t'}) \right) \\&= \sum\limits_{s_t, a_t} \left[ \mathcal{T}(s_{t+1} \mid s_{t}, a_{t}) \pi_t(a_t \mid s_t) \left( \sum\limits_{s_{-T:t-1},a_{-T:t-1}} p(\tau_{-T:t-1}, s_{t}) \right) g(s_t, a_t) \right] \\&\quad + \sum\limits_{s_t, a_t} \left[ \mathcal{T}(s_{t+1} \mid s_{t}, a_{t}) \pi_t(a_t \mid s_t) \sum\limits_{s_{-T:t-1},a_{-T:t-1}} \left( p(\tau_{-T:t-1}, s_{t}) \sum_{t'=-T}^{t-1} g(s_{t'}, a_{t'}) \right) \right] \\&= \sum\limits_{s_t, a_t} \mathcal{T}(s_{t+1} \mid s_{t}, a_{t}) \pi_t(a_t \mid s_t) \bigg( {p(s_t) g(s_t, a_t)} + G_{t}(s_t) \bigg).\end{aligned}$$ For the base case, note that $$\begin{aligned} G_{-T+1}(s_{-T+1}) &= \sum\limits_{s_{-T},a_{-T}} \left[ p(s_{-T},
a_{-T}, s_{-T+1}) g(s_{-T}, a_{-T}) \right] \\&= \sum\limits_{s_{-T}, a_{-T}} \mathcal{T}(s_{-T+1} \mid s_{-T}, a_{-T}) \pi_{-T}(a_{-T} \mid s_{-T}) \bigg( p(s_{-T}) g(s_{-T}, a_{-T}) \bigg).\end{aligned}$$ Comparing this to the recursive rule, for the base case we can set $G_{-T}(s_{-T}) = 0$.

Sampling from the posterior over $\theta$ {#appendix:sampling-algo}
=========================================

Instead of estimating the MLE (or MAP if we have a prior) using RLSP, we could approximate the entire posterior distribution. One standard way to address the computational challenges involved with the continuous and high-dimensional nature of $\theta$ is to use MCMC sampling to sample from $p(\theta \mid s_0) \propto p(s_0 \mid \theta) p(\theta)$. The resulting algorithm resembles Bayesian IRL [@ramachandran2007bayesian] and is presented in Algorithm \[alg:sampling\]. While this algorithm is less efficient and noisier than RLSP, it gives us an estimate of the full posterior distribution. In our experiments, we collapsed the full distribution into a point estimate by taking the mean. Initial experiments showed that the algorithm was slower and noisier than the gradient-based RLSP, so we did not test it further. However, in future work we could better leverage the full distribution, for example to create risk-averse policies, to identify features that are uncertain, or to identify features that are certain but conflict with the specified reward, after which we could actively query Alice for more information.
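The Metropolis–Hastings loop behind this sampling scheme can be sketched compactly. In the sketch below, the expensive likelihood $\ln p(s_0 \mid \theta)$ (which in the real algorithm requires soft value iteration plus a forward pass) is replaced by an illustrative one-dimensional Gaussian stand-in, so that the chain's behavior can be checked against a known posterior; all constants are assumptions for the demo:

```python
import math
import random

random.seed(0)

def log_likelihood(theta):
    # stand-in for ln p(s_0 | theta); illustrative Gaussian with mean 1, sd 0.5
    return -0.5 * ((theta - 1.0) / 0.5) ** 2

def log_prior(theta):
    return -0.5 * theta ** 2                   # p(theta) = N(0, 1)

def mh_chain(n_samples, delta=0.5):
    theta = random.gauss(0.0, 1.0)             # initialize theta ~ p(theta)
    log_p = log_likelihood(theta) + log_prior(theta)
    chain = []
    for _ in range(n_samples):
        theta_prop = random.gauss(theta, delta)            # theta' ~ N(theta, delta)
        log_p_prop = log_likelihood(theta_prop) + log_prior(theta_prop)
        # accept the proposal with probability min(1, p'/p)
        if random.random() < math.exp(min(0.0, log_p_prop - log_p)):
            theta, log_p = theta_prop, log_p_prop
        chain.append(theta)                    # record the current theta as a sample
    return chain

samples = mh_chain(20000)[2000:]               # drop burn-in
post_mean = sum(samples) / len(samples)
print(round(post_mean, 2))                     # ≈ 0.8, the analytic posterior mean here
```

With this Gaussian prior and stand-in likelihood, the exact posterior mean is $0.8$, so the chain's sample mean collapsing to that value mirrors the point-estimate use described above.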
- **Input:** MDP $\mathcal{M}$, prior $p(\theta)$, step size $\delta$
- $\theta \gets \text{random sample}(p(\theta))$; $\pi, V = \text{soft value iteration}(\mathcal{M}, \theta)$; $p \gets p(s_0 \mid \theta)\, p(\theta)$
- **Repeat** until we have generated the desired number of samples:
    - $\theta' \gets \text{random sample}(\mathcal{N}(\theta, \delta))$
    - $\pi', V' = \text{soft value iteration}(\mathcal{M}, \theta')$
    - $p' \gets p(s_0 \mid \theta')\, p(\theta')$
    - With probability $\min(1, p'/p)$, accept the proposal: $\theta, \pi, p \gets \theta', \pi', p'$
    - Record $\theta$ as a sample

\[alg:sampling\]

Combining the specified reward with the inferred reward {#appendix:tradeoff}
=======================================================

**Comparison of the methods for combining** $\theta_{\text{spec}}$ **and** $\theta_{\text{Alice}}$

![Comparison of the Additive and Bayesian methods. We show how the percentage of true reward obtained by $\pi_{\text{RLSP}}$ varies as we change the tradeoff between ${\theta_{\text{Alice}}}$ and ${\theta_{\text{spec}}}$. The zero temperature case corresponds to traditional value iteration; this often leads to identical behavior and so the lines overlap. So, we also show the results when planning with soft value iteration, varying the softmax temperature, to introduce some noise into the policy. Overall, there is not much difference between the two methods. We did not include the Apples environment because ${\theta_{\text{spec}}}$ is uniformly zero and the Additive and Bayesian methods do exactly the same thing.[]{data-label="fig:prior-vs-addition"}](images/prior_vs_addition_all_x3_new.pdf "fig:"){width="14cm"}

In Section \[sec:evaluation\], we evaluated RLSP by combining the reward it infers with a specified reward to get a final reward $\theta_{\text{final}} = \theta_{\text{Alice}} + \lambda \theta_{\text{spec}}$. As discussed in Section \[sec:discussion\], the problem of combining $\theta_{\text{Alice}}$ and $\theta_{\text{spec}}$ is difficult, since the two rewards incentivize different behaviors and will conflict. The *Additive* method above is a simple way of trading off between the two.
Both RLSP and the sampling algorithm of Appendix \[appendix:sampling-algo\] can incorporate a prior over $\theta$. Another way to combine the two rewards is to condition the prior on $\theta_{\text{spec}}$ before running the algorithms. In particular, we could replace our prior $P(\theta_{\text{Alice}})$ with a new prior $P(\theta_{\text{Alice}} \mid \theta_{\text{spec}})$, such as a Gaussian distribution centered at $\theta_{\text{spec}}$. When we use this prior, the reward returned by RLSP can be used as the final reward $\theta_{\text{final}}$. It might seem like this is a principled *Bayesian* method that allows us to combine the two rewards. However, the conflict between the two reward functions still exists. In this formulation, it arises in the new prior $P(\theta_{\text{Alice}} \mid \theta_{\text{spec}})$. Modeling this as a Gaussian centered at $\theta_{\text{spec}}$ encodes the belief that, before observing $s_0$, $\theta_{\text{Alice}}$ is likely to be very similar to $\theta_{\text{spec}}$. However, this is not true – Alice is probably providing the reward $\theta_{\text{spec}}$ to the robot so that it causes some *change* to the state that she has optimized, and so it will be *predictably* different from $\theta_{\text{spec}}$. On the other hand, we do need to put high probability on $\theta_{\text{spec}}$, since otherwise $\theta_{\text{final}}$ will not incentivize any of the behaviors that $\theta_{\text{spec}}$ did. Nonetheless, this is another simple heuristic for how we might combine the two rewards, one that manages the tradeoff between ${\theta_{\text{spec}}}$ and ${\theta_{\text{Alice}}}$. We compared the Additive and Bayesian methods by evaluating their robustness. We vary the parameter that controls the tradeoff and report the true reward obtained by $\pi_{\text{RLSP}}$, as a fraction of the expected true reward under the optimal policy.
For the Bayesian method, we vary the standard deviation $\sigma$ of the Gaussian prior over ${\theta_{\text{Alice}}}$ that is centered at ${\theta_{\text{spec}}}$. For the Additive method, the natural choice would be to vary $\lambda$; however, in order to make the results more comparable, we instead set $\lambda = 1$ and vary the standard deviation of the Gaussian prior used while inferring ${\theta_{\text{Alice}}}$, which is centered at zero instead of at ${\theta_{\text{spec}}}$. A larger standard deviation allows ${\theta_{\text{Alice}}}$ to become larger in magnitude (since it is penalized less for deviating from the mean of zero reward), which effectively corresponds to a smaller $\lambda$. While we typically create $\pi_{\text{RLSP}}$ using value iteration, this leads to deterministic policies with very sharp changes in behavior that make it hard to see differences between methods, and so we also show results with soft value iteration, which creates stochastic policies that vary more continuously. As demonstrated in Figure \[fig:prior-vs-addition\], our experiments show that overall the two methods perform very similarly, with some evidence that the Additive method is slightly more robust. The Additive method also has the benefit that it can be applied in situations where the inferred reward and specified reward are over different feature spaces, by creating the final reward $R_{\text{final}}(s) = {{\theta_{\text{Alice}}}}^T f_{\text{Alice}}(s) + \lambda R_{\text{spec}}(s)$. [^1]: equal contribution; [^2]: work done at UC Berkeley
---
abstract: 'A new concept of an electromechanical nanodynamometer based on the relative displacement of layers of bilayer graphene is proposed. In this nanodynamometer, a force acting on one of the graphene layers causes a relative displacement of this layer and a related change of the conductance between the layers. Such a force can be determined by measurements of the tunneling conductance between the layers. Dependences of the interlayer interaction energy and the conductance between the graphene layers on their relative position are calculated within the first-principles approach corrected for van der Waals interactions and the Bardeen method, respectively. The characteristics of the nanodynamometer are determined and its possible applications are discussed.'
author:
- 'N.A. Poklonski'
- 'A.I. Siahlo'
- 'S.A. Vyrko'
- 'A.M. Popov'
- 'Yu.E. Lozovik'
- 'I.V. Lebedeva'
- 'A.A. Knizhnik'
bibliography:
- 'GrapheneBasedNanodynamometer.bib'
title: 'Graphene-based nanodynamometer'
---

Introduction
============

Due to their unique electrical and mechanical properties, carbon nanostructures (fullerenes, carbon nanotubes and graphene) are considered promising materials for use in nanoelectromechanical systems (NEMS). Since the conductance of carbon nanotubes depends on the relative displacement of nanotube walls at the sub-nanometer scale,[@Grace04; @Tunney06; @PoklonskiHieu08] a set of nanosensors based on such a displacement was proposed. This set includes a variable nanoresistor,[@LozovikMinogin03; @YanZhou06] a strain nanosensor [@Bichoutskaia06] and a nanothermometer [@Bichoutskaia07; @PopovBichoutskaia07] (see Ref.  for a review). A number of nanotube-based NEMS, such as a nanoresonator based on a suspended nanotube [@Sazonova04; @Peng06] and a nanoaccelerometer based on a telescoping nanotube,[@Wang08; @Kang09] were suggested as means for measurements of small forces and accelerations by detection of changes in the system capacitance.
In addition to zero-dimensional and one-dimensional carbon nanostructures, fullerenes and carbon nanotubes, a novel two-dimensional carbon nanostructure, graphene, was discovered recently.[@Novoselov04] By analogy with NEMS based on carbon nanotubes, nanodevices based on graphene were proposed.[@Zheng08; @Lebedeva11] A nanoresonator based on flexural vibrations of suspended graphene was implemented.[@Bunch07] Similar to devices based on the dependence of conductance of carbon nanotubes on the relative displacement of nanotube walls, NEMS based on the dependence of conductance of graphene on the relative displacement of graphene layers can be considered. In this paper, we propose a new concept of an electromechanical nanodynamometer based on the relative displacement of layers of bilayer graphene and investigate the operating characteristics of this sensor. The conceptual design of the nanodynamometer is shown in Fig. 1. The operation of the nanodynamometer is determined by the balance of *only two* forces: an external force, $F_\text{ext}$, applied to the movable layer of the bilayer graphene, which should be measured, and a force of interlayer interaction, $F_\text{int}$. The feedback sensing of the external force is based on the dependence of the tunneling conductance $G$ on the displacement of the movable layer under the action of the external force. ![Conceptual design of the nanodynamometer. The bottom graphene layers fixed on the electrodes are indicated as *1* and *2* and the movable top graphene layer is indicated as *3*.](fig1.eps) The paper is organized in the following way. Calculations of the dependence of the force of interlayer interaction on the relative displacement of graphene layers and estimations of accuracy of force measurements are presented in Section II. Section III is devoted to calculations of the tunneling conductance between graphene layers. 
Our conclusions and discussion of possible applications of the nanodynamometer are summarized in Section IV.

Interlayer interaction of bilayer graphene
==========================================

To study the dependence of the force of interlayer interaction, $F_\text{int}$, on the relative displacement of graphene layers, the interlayer interaction of bilayer graphene has been investigated in the framework of the density functional theory with the dispersion correction (DFT-D).[@Grimme06; @WuVargas01] Periodic boundary conditions are applied to a 4.26 Å $\times$ 2.46 Å $\times$ 20 Å model cell. The VASP code [@Kresse96] with the density functional of Perdew, Burke, and Ernzerhof[@Perdew96] corrected with the dispersion term (PBE-D) [@Barone09] is used. The basis set consists of plane waves with the maximum kinetic energy of 800 eV. The interaction of valence electrons with atomic cores is described using the projector augmented-wave method (PAW).[@Kresse99] Integration over the Brillouin zone is performed using the Monkhorst–Pack method [@Monkhorst76] with $24\times36\times1$ $k$-point sampling. In the calculations of the potential energy reliefs, one of the graphene layers is rigidly shifted parallel to the other. Taking the structure deformation induced by the interlayer interaction into account was previously shown to be inessential for the shape of the potential relief for the interaction between graphene-like layers, such as the interwall interaction of carbon nanotubes [@Kolmogorov00; @Belikov04] and the intershell interaction of carbon nanoparticles.[@LozovikPopov00; @LozovikPopov02] The DFT-D calculations show that the ground state of bilayer graphene corresponds to the AB stacking (Bernal structure) with the interlayer spacing $\delta Z = 3.25$ Å and the interlayer interaction energy $-50.6$ meV$/$atom.
The interaction of a single carbon atom in the graphene flake with the graphite surface was described using the simple approximation [@Verhoeven04; @Kerssemakers97] containing only the first Fourier components. Based on that expression, the interlayer interaction energy $U(\delta x, \delta y)$ as a function of the relative displacements $\delta x$ and $\delta y$ of the layers along the axes $x$ and $y$ chosen along the armchair and zigzag directions, respectively, at the equilibrium interlayer spacing can be roughly approximated in the form [@Lebedeva10] $$\begin{aligned} \label{eq101} U &= U_1 \biggl(1.5 + \cos\biggl(2k_x\delta x - \frac{2\pi}{3}\biggr) -{} \notag\\ &- 2\cos\biggl(k_x\delta x - \frac{\pi}{3}\biggr)\cos(k_y\delta y)\biggr) + U_0,\end{aligned}$$ where $k_y=2\pi/(\sqrt{3}a_\text{CC})$, $k_x=k_y/\sqrt{3}$, $a_\text{CC} = 1.42$ Å is the bond length of graphene (see Fig. 2), $\delta x = 0$ and $\delta y = 0$ at the AB stacking. The parameters $U_0 = -101.18$ meV and $U_1 = 8.48$ meV (per elementary unit cell) are fitted to reproduce the potential energy relief of bilayer graphene. The relative root-mean-square deviation $\delta U/U_1$ of approximation (\[eq101\]) from the potential energy relief obtained using the DFT-D calculations is found to be $\delta U/U_1=0.043$. The potential energy relief calculated using approximation (\[eq101\]) is shown in Fig. 3. ![Structure of a single layer of graphene (see, e.g., Ref. ) in real space (a) and in reciprocal space (b). The elementary unit cell is denoted by dotted lines. The hexagon in part (b) is the boundary of the first Brillouin zone. ${\mathbf{a}}_1$ and ${\mathbf{a}}_2$ are the translational vectors, ${\mathbf{k}}_1$ and ${\mathbf{k}}_2$ are vectors reciprocal to ${\mathbf{a}}_1$ and ${\mathbf{a}}_2$; $k_x$ and $k_y$ are projections of ${\mathbf{k}}_1$ and ${\mathbf{k}}_2$ on coordinate axes. 
Nonequivalent lattice sites are denoted by $A$ and $B$.](fig2.eps) ![Calculated interlayer interaction energy $U$ of bilayer graphene per elementary unit cell as a function of the relative position $\delta x$ and $\delta y$ of the layers. The energy is given relative to the global energy minimum $U_0$. SP is a saddle point in the potential relief. The regions where the stable equilibrium is possible and not possible on the displacement of the movable layer along the armchair direction are shown with the solid and dashed lines, respectively.](fig3.eps) For simplicity, we restrict the analysis of the operation of the nanodynamometer to the case where the external force $F_\text{ext}$ is directed along the $x$ (armchair) direction, i.e. along the path between adjacent energy minima. The dependence of the interlayer force on the displacement of the movable layer in this direction can be calculated using approximation (\[eq101\]), $$\begin{aligned} \label{eq102} F_\text{int} = -\frac{\partial U}{\partial x} = 2 U_1 k_x \biggl(\sin\biggl(2k_x\delta x - \frac{2\pi}{3}\biggr) -{} \notag\\ -\sin\biggl(k_x\delta x- \frac{\pi}{3}\biggr)\cos(k_y\delta y)\biggr).\end{aligned}$$ This dependence is shown in Fig. 4a. Under the action of the external force $F_\text{ext}$, the equilibrium position of the layers is determined by the condition $F_\text{ext} + F_\text{int} = 0$. This equilibrium is stable if the matrix of the second derivatives of the potential function $U(x,y)$ is positive definite. Differentiating Eq. (\[eq101\]), we find that the stable equilibrium is possible up to the displacement $|\delta x_1| \approx 0.229a_\text{CC}$ in the direction corresponding to transition from the AB stacking to the SP stacking and up to the displacement $|\delta x_2| = a_\text{CC}/4$ in the direction corresponding to transition from the AB stacking to the AA stacking.
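Approximation (\[eq101\]) can be evaluated directly. The short sketch below (parameter values taken from the DFT-D fit above) reproduces the stacking energetics implied by that formula: within this approximation, AB is the energy minimum, the SP saddle along the armchair path lies $0.5U_1$ above it, and AA lies $4.5U_1$ above it:

```python
import math

U0, U1 = -101.18, 8.48                  # meV per elementary unit cell (DFT-D fit)
a_cc = 1.42                             # graphene bond length, angstrom
k_y = 2 * math.pi / (math.sqrt(3) * a_cc)
k_x = k_y / math.sqrt(3)

def U(dx, dy):
    """Interlayer interaction energy per unit cell, approximation (eq101)."""
    return U1 * (1.5 + math.cos(2 * k_x * dx - 2 * math.pi / 3)
                 - 2 * math.cos(k_x * dx - math.pi / 3) * math.cos(k_y * dy)) + U0

print(U(0.0, 0.0) - U0)                 # ≈ 0: AB stacking is the energy minimum
print((U(a_cc / 2, 0.0) - U0) / U1)     # ≈ 0.5: SP saddle along the armchair path
print((U(-a_cc, 0.0) - U0) / U1)        # ≈ 4.5: AA stacking, the energy maximum
```

Note that in this parametrization the AA stacking is reached at $\delta x = -a_\text{CC}$ along the armchair direction, consistent with the asymmetry of the stability regions quoted above.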
The maximum forces that can be measured in these directions are $F_1 = 15$ pN and $F_2 = 40$ pN per elementary unit cell, respectively. In Figs. 3 and 4, the regions where the stable equilibrium is possible and is not possible are shown with the solid and dashed lines, respectively. ![(a) Calculated force $F_\text{int}$ of the interlayer interaction acting on the movable layer of bilayer graphene per elementary unit cell as a function of the relative displacement $\delta x$ of the movable layer in the $x$ (armchair) direction obtained using approximation (\[eq101\]). The regions where the stable equilibrium is possible and not possible are shown with the solid and dashed lines, respectively. (b) Calculated tunneling conductance $G/G_\text{AB}$ between the layers as a function of the relative displacement $\delta x$ of the movable layer in the $x$ direction.](fig4.eps) The upper limit of forces that can be measured using the graphene-based nanodynamometer is proportional to the overlap area of the graphene layers and is given by $F_\text{max} \approx F_2 N_\text{G}$, where $N_\text{G}$ is the number of the elementary unit cells in the overlap area. For example, for the overlap areas of $10^2$ and $10^4$ nm$^2$, the maximum forces that can be measured are 76 nN and 7.6 $\mu$N, respectively. The accuracy of the force measurements is limited by thermal vibrations of the graphene layers. The amplitude of these vibrations can be estimated as $$\label{eq103} \langle x^2 \rangle_T \approx \frac{k_\text{B} T}{N_\text{G}} \left(\frac{\partial^2 U}{\partial x^2} \right)^{-1},$$ where $k_\text{B}$ is the Boltzmann constant, $T$ is temperature and $\partial^2 U /\partial x^2$ is the second derivative of the interlayer interaction energy with respect to the displacement of the layers along the armchair direction at the energy minimum. The latter quantity is found to be equal to $\partial^2U/\partial x^2 = 3U_1k_x^2 = 55.3$ meV$/$Å$^2$ per elementary unit cell from Eq. (\[eq101\]).
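The numbers quoted above follow directly from Eqs. (\[eq101\])–(\[eq103\]); the sketch below reproduces them, using the conversion $1$ meV$/$Å $\approx 1.602$ pN and the graphene unit cell area $\sqrt{3}/2 \cdot (2.46\text{ Å})^2$:

```python
import math

U1 = 8.48                                # meV per elementary unit cell
a_cc = 1.42                              # angstrom
k_x = 2 * math.pi / (3 * a_cc)           # 1/angstrom
k_y = math.sqrt(3) * k_x
MEV_PER_A_TO_PN = 1.602177               # 1 meV/angstrom in pN
KB = 0.0861733                           # Boltzmann constant, meV/K

def F_int(dx, dy=0.0):
    """Interlayer force (eq102) along x, in meV/angstrom per unit cell."""
    return 2 * U1 * k_x * (math.sin(2 * k_x * dx - 2 * math.pi / 3)
                           - math.sin(k_x * dx - math.pi / 3) * math.cos(k_y * dy))

F1 = abs(F_int(0.229 * a_cc)) * MEV_PER_A_TO_PN   # force at the stability limit toward SP
F2 = abs(F_int(-a_cc / 4)) * MEV_PER_A_TO_PN      # force at the stability limit toward AA
curv = 3 * U1 * k_x ** 2                          # d^2U/dx^2 at the AB minimum, meV/A^2
N_G = 1e4 / (math.sqrt(3) / 2 * 2.46 ** 2)        # unit cells in a 100 nm^2 overlap
r_He = math.sqrt(KB * 4.2 / curv) / (a_cc / 4)    # thermal error ratio at 4.2 K, N_G = 1
r_300 = math.sqrt(KB * 300 / curv) / (a_cc / 4)   # same at room temperature
print(round(F1, 1), round(F2, 1))                 # ≈ 14.8 and ≈ 40.1 pN per unit cell
print(round(curv, 1))                             # ≈ 55.3 meV/angstrom^2
print(round(F2 * N_G / 1e3, 1))                   # ≈ 76.5 nN for 100 nm^2 overlap
print(round(r_He, 2), round(r_300, 2))            # ≈ 0.23 and ≈ 1.93 (scale by 1/sqrt(N_G))
```

The last two ratios divided by $\sqrt{N_\text{G}}$ give the relative measurement errors at liquid helium and room temperatures discussed next.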
The relative error of the force measurements can be estimated as the ratio of the amplitude of the thermal vibrations to the maximal displacement of graphene layers where the stable equilibrium is possible. At liquid helium and room temperatures, these ratios equal $\sqrt{\langle x^2\rangle_T}/\delta x_2 = 0.23/\sqrt{N_\text{G}}$ and $\sqrt{\langle x^2\rangle_T}/\delta x_2 = 1.9/\sqrt{N_\text{G}}$, respectively. So for the overlap area of 100 nm$^2$, these quantities are 0.005 and 0.044, respectively. It is seen that the relative error of the force measurements decreases with increasing overlap area of the graphene layers.

Conductance of bilayer graphene
===============================

Let us show that the tunneling conductance $G$ between graphene layers changes considerably with the relative displacement of the layers and therefore measurements of the conductance $G$ can be used to determine this displacement. We use the Bardeen method,[@Bardeen61] which was previously applied for calculation of the tunneling conductance between walls of double-walled carbon nanotubes.[@PoklonskiHieu08] It is known [@Tersoff85] that the tunneling conductance is proportional to the sum of squares of the amplitudes of the tunneling transition (tunneling matrix elements) for all electron states at both sides of the tunneling transition. This approach was used previously to study the electronic structure [@Bistritzer11] and conductance [@Bistritzer10] of the twisted two-layer graphene system. Here we use such an approach to calculate the relative changes of tunneling conductance $G$ between the layers at their relative displacement from the ground state corresponding to AB stacking.
In the framework of the Bardeen’s formalism,[@Bardeen61] the amplitude of the tunneling transition between states of the bottom ($\Psi_\text{bot}$) and top ($\Psi_\text{top}$) layers of bilayer graphene is given by $$\label{eq104} M_\text{bot,top}^{{\mathbf{k}}_\text{bot},{\mathbf{k}}_\text{top}} = \frac{\hbar^2}{2m_0}\int_S (\Psi_\text{bot}^{*} \nabla \Psi_\text{top} - \Psi_\text{top} \nabla \Psi_\text{bot}^{*})\,d{\mathbf{S}},$$ where $S$ is the overlap area between the graphene layers, ${\mathbf{k}}_\text{bot}$ and ${\mathbf{k}}_\text{top}$ are two-dimensional vectors in the reciprocal space of the graphene lattice corresponding to the bottom and top layers, $m_0$ is the electron mass in vacuum, $\hbar = h/2\pi$ is the Planck constant. In the tight-binding approximation for vectors ${\mathbf{k}}_\text{bot}$ (or ${\mathbf{k}}_\text{top}$) near the corners ($K$-points,[@CastroNeto09] ${\mathbf{K}} = (2\pi/(3a_\text{CC}), 2\pi/(3\sqrt{3}a_\text{CC}))$ and ${\mathbf{K}}' = (2\pi/(3a_\text{CC}), -2\pi/(3\sqrt{3}a_\text{CC}))$) of the Brillouin zone (Fig. 2b), the wave function of the bottom graphene layer takes the form [@Barnett05] $$\begin{aligned} \label{eq105} \Psi_\text{bot} &= \frac{1}{\sqrt{N_\text{G}}}\sum_{g=1}^{N_\text{G}}\exp(i{\mathbf{k}}_\text{bot}{\mathbf{R}}_g^\text{bot})\times{} \notag\\ &\times\frac{1}{\sqrt{2}}\bigl(\chi({\mathbf{r}} - {\mathbf{R}}_g^\text{bot}) \pm \chi({\mathbf{r}} - {\mathbf{R}}_g^\text{bot} - {\mathbf{d}})\bigr),\end{aligned}$$ and the same formula for $\Psi_\text{top}$. Here $N_\text{G}$ is the number of the elementary unit cells of graphene, ${\mathbf{d}}$ is the vector between two non-equivalent carbon atoms ($A$ and $B$) in the elementary unit cell, $d = a_\text{CC}$; signs $+$ and $-$ correspond to $\pi$- (bonding) and $\pi^*$- (antibonding) orbitals in graphene, respectively, ${\mathbf{R}}_g^\text{bot}$ is the radius vector of the $g$-th unit cell of the bottom graphene layer, ${\mathbf{r}}$ is the radius vector. 
The function $\chi({\mathbf{r}})$ is a Slater $2p_z$-orbital $$\label{eq106} \chi\left({\mathbf{r}}\right) = \left(\frac{\xi^5}{\pi}\right)^{1/2} z\,\exp\left(-\xi\sqrt{x^2 + y^2 + z^2}\right),$$ where [@Clementi63] $\xi=1.5679/a_\text{B}$ and $z$ is the axis perpendicular to the graphene plane; $a_\text{B} = 0.529$ Å is the Bohr radius and $r = \sqrt{x^2 + y^2 + z^2}$ is the magnitude of the radius vector ${\mathbf{r}}$ from the carbon atom center. Let us substitute the wave function (\[eq105\]) into Eq. (\[eq104\]). The product $\Psi_\text{bot}^*\nabla\Psi_\text{top}$ in Eq. (\[eq104\]) can be rewritten as $$\begin{aligned} \label{eq116} &\Psi_\text{bot}^*\nabla\Psi_\text{top} = \notag\\ &= \frac{1}{2N_\text{G}} \sum_{g=1}^{N_\text{G}}\exp(-i{\mathbf{k}}_\text{bot}{\mathbf{R}}_g^\text{bot})\bigl(\chi({\mathbf{r}} - {\mathbf{R}}_g^\text{bot}) \pm \chi({\mathbf{r}} - {\mathbf{R}}_g^\text{bot} - {\mathbf{d}})\bigr)\times{} \notag\\ &\times \nabla\sum_{g'=1}^{N_\text{G}}\exp(i{\mathbf{k}}_\text{top}{\mathbf{R}}_{g'}^\text{top}) \bigl(\chi({\mathbf{r}} - {\mathbf{R}}_{g'}^\text{top}) \pm \chi({\mathbf{r}} - {\mathbf{R}}_{g'}^\text{top} - {\mathbf{d}})\bigr)= \notag\\ &= \frac{1}{2N_\text{G}} \sum_{g=1}^{N_\text{G}}\sum_{h=1}^{N_\text{G}} \exp(i({\mathbf{k}}_\text{top}{\mathbf{R}}_g^\text{bot} - {\mathbf{k}}_\text{bot}{\mathbf{R}}_g^\text{bot}) + i{\mathbf{k}}_\text{top}\Delta{\mathbf{R}}_h)\times{} \notag\\ &\times \bigl(\chi({\mathbf{r}}') \pm \chi({\mathbf{r}}' - {\mathbf{d}})\bigr) \nabla\bigl(\chi({\mathbf{r}}' - \Delta{\mathbf{R}}_h) \pm \chi({\mathbf{r}}' - \Delta{\mathbf{R}}_h - {\mathbf{d}})\bigr),\end{aligned}$$ where ${\mathbf{r}}' = {\mathbf{r}} - {\mathbf{R}}_g^\text{bot}$. In Eq. 
(\[eq116\]) the coordinates of unit cells of the top layer ${\mathbf{R}}_{g'}^\text{top} = {\mathbf{R}}_g^\text{bot} + \Delta {\mathbf{R}}_h$ are expressed via the coordinates of unit cells of the bottom layer ${\mathbf{R}}_g^\text{bot}$ and displacements of unit cells of the top layer $\Delta {\mathbf{R}}_h$. Since all unit cells of graphene are identical, only one cell of the bottom layer can be considered in the calculation of $M_\text{bot,top}^{{\mathbf{k}}_\text{bot},{\mathbf{k}}_\text{top}}$. It should also be taken into account that for $N_\text{G} \gg 1$ the following relation is satisfied $$\label{eq107} \frac{1}{N_\text{G}} \sum_{g=1}^{N_\text{G}} \exp(i {\mathbf{k}}_\text{top} {\mathbf{R}}_g) \exp(-i {\mathbf{k}}_\text{bot} {\mathbf{R}}_g) = \delta_{{\mathbf{k}}_\text{bot},{\mathbf{k}}_\text{top}},$$ where $\delta_{{\mathbf{k}}_\text{bot},{\mathbf{k}}_\text{top}}$ is the Kronecker symbol. Taking into account Eqs. (\[eq116\]) and (\[eq107\]), Eq. (\[eq104\]) takes the form $$\begin{aligned} \label{eq108} &M_\text{bot,top}^{{\mathbf{k}}_\text{bot},{\mathbf{k}}_\text{top}} = \frac{\hbar^2}{2m_0} \sum_{g=1}^{N_\text{G}} \frac{1}{2N_\text{G}}\exp(i {\mathbf{k}}_\text{top}{\mathbf{R}}_g^\text{bot} - i {\mathbf{k}}_\text{bot}{\mathbf{R}}_g^\text{bot}) \times{} \notag\\ &\times \sum_{h=1}^{N_\text{G}}\exp(i {\mathbf{k}}_\text{top}\Delta{\mathbf{R}}_h) \int_S \bigl(\chi({\mathbf{r}} - \Delta{\mathbf{R}}_h) \pm \chi({\mathbf{r}} - \Delta{\mathbf{R}}_h - {\mathbf{d}})\bigr) \times{} \notag \\ &\times \nabla\bigl(\chi({\mathbf{r}}) \pm \chi({\mathbf{r}} - {\mathbf{d}})\bigr)\,d{\mathbf{S}} = M_\text{bot,top}^{{\mathbf{k}}_\text{top}} \delta_{{\mathbf{k}}_\text{bot},{\mathbf{k}}_\text{top}}.\end{aligned}$$ The Kronecker symbol $\delta_{{\mathbf{k}}_\text{bot},{\mathbf{k}}_\text{top}}$ in Eq. (\[eq108\]) retains only the terms with ${\mathbf{k}}_\text{top} = {\mathbf{k}}_\text{bot}$. 
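The lattice-sum identity (\[eq107\]) is easy to check numerically. A minimal one-dimensional illustration (a chain of $N$ cells with allowed momenta $k_m = 2\pi m/(Na)$; the chain geometry is illustrative, not that of graphene):

```python
import cmath
import math

N, a = 64, 1.0  # number of cells and lattice period (illustrative values)

def averaged_phase_sum(m_top, m_bot):
    """(1/N) sum_g exp(i k_top R_g) exp(-i k_bot R_g) with k_m = 2*pi*m/(N*a)."""
    dk = 2 * math.pi * (m_top - m_bot) / (N * a)
    return sum(cmath.exp(1j * dk * g * a) for g in range(N)) / N

same = abs(averaged_phase_sum(5, 5))       # k_top == k_bot -> 1
different = abs(averaged_phase_sum(5, 7))  # k_top != k_bot -> 0 (geometric sum)
print(same, different)
```

For equal momenta every term is 1; for unequal allowed momenta the geometric sum closes on itself and vanishes, which is the Kronecker symbol of Eq. (\[eq107\]).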
This yields for ${\mathbf{k}}_\text{top} = {\mathbf{k}}_\text{bot} = {\mathbf{K}}$ (or ${\mathbf{k}}_\text{top} = {\mathbf{k}}_\text{bot} = {\mathbf{K}}'$; see Fig. 2b): $$\begin{aligned} \label{eq109} &M_\text{bot,top}^{{\mathbf{k}}_\text{bot},{\mathbf{k}}_\text{top}} = M_\text{bot,top}^{{\mathbf{K}},{\mathbf{K}}} = M_\text{bot,top}^{{\mathbf{K}}',{\mathbf{K}}'} = \frac{\hbar^2}{2m_0} \sum_{h=1}^{N_\text{G}} \frac{1}{2}\exp(i {\mathbf{K}}\Delta{\mathbf{R}}_h) \times{} \notag\\ &\times \int_S \bigl(\chi({\mathbf{r}} - \Delta{\mathbf{R}}_h) \pm \chi({\mathbf{r}} - \Delta{\mathbf{R}}_h - {\mathbf{d}})\bigr) \nabla\bigl(\chi({\mathbf{r}}) \pm \chi({\mathbf{r}} - {\mathbf{d}})\bigr)\,d{\mathbf{S}} \approx{} \notag\\ &\approx \sum_{h=1}^{n_\text{G}}\exp(i {\mathbf{K}}\Delta{\mathbf{R}}_h) (\gamma_{A-A'_h}+\gamma_{A-B'_h}+\gamma_{B-A'_h}+\gamma_{B-B'_h}),\end{aligned}$$ where $n_\text{G} = [\pi\Delta R_\text{max}^2/(\sqrt{3}a_1^2/2)]$ is the number of unit cells of the top layer located at an in-plane distance less than $\Delta R_\text{max}$ from the considered unit cell of the bottom layer, $\gamma_{A(B)-A'_h(B'_h)}$ are the hopping integrals between atom $A$ (or $B$) in the considered unit cell of the bottom layer and atom $A'_h$ (or $B'_h$) in the $h$-th unit cell of the top layer. We use $\Delta R_\text{max} = 2a_1 = 2a_2 = 2\sqrt{3}a_\text{CC}$ and $n_\text{G} = [8\pi/\sqrt{3}] = 14$ in the calculations for both layers (Fig. 2a), taking into account that the interactions between atoms lying at longer distances change the value of the matrix element by less than 0.1%. The hopping integrals $\gamma_{A(B)-A'_h(B'_h)}$ in Eq. (\[eq109\]) 
are given by $$\begin{aligned} \label{eq110} \hspace{-6pt}\gamma_{A(B)-A'_h(B'_h)} &= \gamma_\rho(x, y) \notag\\ &= \frac{\hbar^2}{2m_0} \int_S \frac{1}{2}\left(\chi_\text{bot}\frac{d}{dz}\chi_\text{top} - \chi_\text{top}\frac{d}{dz}\chi_\text{bot}\right)dS,\end{aligned}$$ where the index $A(B)$ denotes atom $A$ (or atom $B$), $\chi_\text{bot} = \chi(x - X_{A(B)}, y - Y_{A(B)}, -\delta Z/2)$, $\chi_\text{top} = \chi\bigl(x - (X_{A'_h(B'_h)} + \delta x),$ $y - (Y_{A'_h(B'_h)} + \delta y), \delta Z/2\bigr)$, $X_{A(B)}$ and $Y_{A(B)}$ are the coordinates of atom $A$ (or atom $B$) in the elementary unit cell of the bottom layer ($X_A = 0$, $X_B = a_\text{CC}$, $Y_A = Y_B = 0$), $X_{A'_h(B'_h)}$ and $Y_{A'_h(B'_h)}$ are the coordinates of atoms of the top layer for bilayer graphene in the ground state (AB stacking), and $\delta Z/2 = 1.625$ Å is half of the interlayer distance, $\bm{\rho} = (X_{A'_h(B'_h)} + \delta x - X_{A(B)}$, $Y_{A'_h(B'_h)} + \delta y - Y_{A(B)}$, $0)$ is the projection on the graphene plane of the vector connecting the two selected atoms in bilayer graphene; for $\gamma_{A-A'_h}$ and $\gamma_{B-B'_h}$ vector $\bm{\rho}$ is the projection of $\Delta {\mathbf{R}}_h$, for $\gamma_{A-B'_h}$ vector $\bm{\rho}$ is the projection of $\Delta{\mathbf{R}}_h+{\mathbf{d}}$, and for $\gamma_{B-A'_h}$ vector $\bm{\rho}$ is the projection of $\Delta{\mathbf{R}}_h-{\mathbf{d}}$. Analogously to Refs. , the hopping integral $\gamma_\rho$ depends on the magnitude $\rho$ of vector $\bm{\rho}$ (see Fig. 5). The function $\gamma_\rho$ can be approximated (with an accuracy within 3%) by the expression $\gamma_\rho = \gamma_\text{max}\exp(-\zeta(\rho/a_\text{CC})^2)$, where $\gamma_\text{max} = 189$ meV and $\zeta = 0.8$. The calculation according to Eq. (\[eq110\]) for the AA stacking of bilayer graphene, in which equivalent atoms of the top and bottom layers are located opposite to each other, gives the values of the hopping integrals $\gamma_{A'-A} = \gamma_{B'-B} = 189.2$ meV. 
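The cell count $n_\text{G}$ and the Gaussian fit of the hopping integral quoted above can be reproduced directly. A short sketch; $\gamma_\text{max}$, $\zeta$ and the geometric parameters are the values given in the text:

```python
import math

a_CC = 1.42                      # C-C bond length, Angstrom
a1 = math.sqrt(3) * a_CC         # graphene lattice constant
dR_max = 2 * a1                  # interaction cutoff used in the text

# n_G = [pi*dR_max^2 / (sqrt(3)*a1^2/2)] = [8*pi/sqrt(3)]
n_G = math.floor(math.pi * dR_max ** 2 / (math.sqrt(3) * a1 ** 2 / 2))

# Gaussian fit of the hopping integral: gamma_rho = gamma_max*exp(-zeta*(rho/a_CC)^2)
gamma_max, zeta = 189.0, 0.8     # meV, dimensionless

def gamma(rho):
    """Interlayer hopping integral (meV) vs in-plane atom separation rho (Angstrom)."""
    return gamma_max * math.exp(-zeta * (rho / a_CC) ** 2)

print(n_G)           # 14 unit cells of the top layer are kept in the sum
print(gamma(0.0))    # atoms exactly opposite each other (cf. 189.2 meV computed above)
print(gamma(a_CC))   # already down to ~45% of the maximum at rho = a_CC
```

The rapid Gaussian decay is what justifies truncating Eq. (\[eq109\]) at $\Delta R_\text{max} = 2a_1$ with less than 0.1% error in the matrix element.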
The expressions (\[eq106\]), (\[eq109\]) and (\[eq110\]) allow us to calculate the tunneling matrix element. For the AB stacking (the Bernal structure, $\delta x=0$ and $\delta y=0$), the amplitude of the tunneling transition between the states $\Psi_\text{bot}$ and $\Psi_\text{top}$ of the bottom and top layers of bilayer graphene is found to be $|M_\text{bot,top}^{{\mathbf{K}},{\mathbf{K}}}| \approx 136$ meV, while for the AA stacking ($\delta x = -a_\text{CC}, \delta y = 0$), $|M_\text{bot,top}^{{\mathbf{K}},{\mathbf{K}}}| \approx 272$ meV (see Eq. (\[eq109\])). ![The hopping integral $\gamma_\rho$ as a function of the magnitude $\rho$ of vector $\bm{\rho}$ (see Eq. (\[eq110\]))](fig5.eps) The ratio of the tunneling conductance $G$ to the tunneling conductance $G_\text{AB}$ of bilayer graphene in the ground state ($\delta x = 0$) equals the ratio $|M_\text{bot,top}^{{\mathbf{K}},{\mathbf{K}}}|^2/|M_\text{bot,top}^{{\mathbf{K}},{\mathbf{K}}}|_{\delta x=0}^2$ determined by Eq. (\[eq109\]). The dependence of this ratio on the relative displacement of the layers along the $x$ (armchair) direction is shown in Fig. 4b. It is seen that the tunneling conductance between the graphene layers strongly depends on their relative position at the sub-nanometer scale, similar to the results obtained for double-walled carbon nanotubes.[@Grace04; @Tunney06; @PoklonskiHieu08] The conductance reaches its maximum for the AA stacking, in which atoms of the layers are located at the smallest distances from each other. The minimum of the tunneling conductance corresponds to the SP stacking. Figure 4 shows that the relative displacement of the graphene layers in the course of the operation of the nanodynamometer can result in changes of the tunneling conductance $G$ over the relatively wide range from 0.61$G_\text{AB}$ to 1.73$G_\text{AB}$. Thus it is seen that the relative displacement $\delta x$ of the layers (Fig. 4b) and, consequently, the external force acting on the layers (Fig. 
4a) can be determined by measurements of the electrical conductance between the layers. The model that we use to calculate the tunneling conductance adequately describes electron tunneling for relative positions of the graphene layers in which their atoms are not located exactly opposite to each other. In the case when the atoms are located exactly opposite to each other, hybridization of their wave functions occurs, leading to a significant increase of the conductance, which is then determined not by tunneling between the layers but rather by transitions between energy bands of the combined electron system of bilayer graphene. Therefore our calculations provide only a lower-bound estimate of the relative variation of the tunneling conductance upon the relative displacement of the graphene layers. Nevertheless even this estimate is sufficient to demonstrate the feasibility of force measurements using the proposed design of the nanodynamometer. Let us also consider the possibility of a nanodynamometer based on the relative rotation of graphene layers. At a relative translational displacement of the layers from the ground state corresponding to the AB stacking, the interlayer interaction energy increases and the tunneling conductance between the layers increases or decreases (depending on the direction of displacement) identically for all local areas of the overlap. In contrast, at a relative rotation of the layers, the interlayer interaction energy and the tunneling conductance change differently for different local areas of the overlap. While the interlayer interaction energy increases for any local area, since the AB stacking corresponds to the global energy minimum, the tunneling conductance increases for some local areas and decreases for others. As a result, contributions from different local areas to the total tunneling conductance compensate each other. 
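The contrast between the extreme stackings follows directly from the matrix elements quoted above: since $G \propto |M|^2$, the AB and AA values of 136 meV and 272 meV would give a fourfold conductance ratio for a full displacement to AA stacking, while the stable operating range of the device only spans 0.61$G_\text{AB}$ to 1.73$G_\text{AB}$ (Fig. 4b). A one-line check:

```python
# |M| values quoted in the text for the two stackings, in meV
M_AB, M_AA = 136.0, 272.0
ratio_AA = (M_AA / M_AB) ** 2   # G_AA / G_AB, since G is proportional to |M|^2
print(ratio_AA)
```

The resulting factor of 4 is an upper bound that lies outside the stable operating range; within that range the accessible contrast is the factor of ~2.8 between 0.61 and 1.73.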
Therefore changes in the total tunneling conductance at the relative rotation of the layers are much smaller than such changes at the relative translational displacement. Moreover, the force required for the relative rotation of the layers from the AB stacking to the incommensurate state is an order of magnitude greater than the force required for the displacement from the AB stacking to the SP stacking.[@Lebedeva10] Thus the scheme of the nanodynamometer based on the relative rotation of the graphene layers is less effective than the proposed scheme based on the relative displacement of the layers. For the proposed scheme of the nanodynamometer, the relative rotation of the layers should be avoided, i.e. only forces that do not produce a significant torque should be considered. This is the case when the measured force acts 1) uniformly on all atoms of the upper layer, 2) on adsorbates uniformly distributed on the surface or edges of the upper layer in the area between the first and second bottom layers, or 3) on a nanoobject placed near the center of the upper layer. Discussion and Conclusions ========================== We have proposed the concept of an electromechanical nanodynamometer based on bilayer graphene in which the force is determined by measurements of the conductance between the layers. In this nanodynamometer, the force acting on one of the graphene layers causes the relative displacement of this layer and a related change of the conductance between the layers. The calculations of the potential relief of the interlayer interaction energy within the dispersion-corrected density functional theory approach showed that the stable equilibrium of bilayer graphene is possible if the measured force acting on one of the layers along the armchair direction does not exceed 40 pN per elementary unit cell. The corresponding displacement of graphene layers lies within 0.36 Å. 
The calculations of the tunneling conductance of bilayer graphene using the Bardeen method allowed us to estimate that, upon the relative displacement of the layers, the tunneling conductance changes by at least a factor of 2, which provides an excellent possibility to determine the force by conductance measurements. The relative error of the force measurements is determined by the relative thermal vibrations of the layers. This error decreases with increasing overlap of the layers and with decreasing temperature. Let us discuss possible applications of the considered nanodynamometer. A molecule or a nanoobject can be adsorbed on the top layer of the nanodynamometer in the region where the top layer does not overlap with the bottom layers. Measurements of the force acting on the molecule or nanoobject in the presence of an electric or magnetic field would allow determination of their polarizability and their electric and magnetic dipole and quadrupole moments. In the pioneering work of Novoselov *et al.*, graphene flakes were placed on an insulating substrate and brought into contact with electrodes [@Novoselov04] (to create a field-effect transistor). Further considerable progress has since been achieved in the manipulation of individual graphene layers. Individual graphene flakes have been moved on a graphite surface by the tip of a friction force microscope.[@Dienwiebel04] The possibilities of cutting graphene nanoribbons with desirable geometrical parameters [@LiuZhang11] and of removing individual graphene layers in a controllable way for device patterning [@Dimiev11] have been demonstrated. The tunneling conductance can be measured for graphene in a way similar to the experiments for multiwall carbon nanotubes.[@Stetter10; @Bourlon04] All this gives us cause for optimism that the proposed graphene-based electromechanical nanodynamometer will be implemented in the near future. 
This work has been partially supported by the RFBR (grants Nos. 12-02-90041-Bel and 11-02-00604-a) and BFBR (grants Nos. F11V-001 and F12R-178). The DFT-D calculations have been performed on the SKIF MSU Chebyshev supercomputer and on the MVS-100K supercomputer at the Joint Supercomputer Center of the Russian Academy of Sciences.
--- abstract: 'Quantum theory is a mathematical formalism to compute probabilities for outcomes happening in physical experiments. These outcomes constitute events happening in space-time. One of these events represents the fact that a system located in the region of space where a physical device is situated has a certain value of a physical observable at the time when the device fires the outcome corresponding to that value of the observable. The causal structure of these events is customarily assumed fixed in an absolute way. In this paper we show that this assumption cannot be substantiated on operational grounds, by proving that two observers looking at the same quantum experiment can calculate the probabilities of the experiment assuming a different causal structure for the space-time events constituted by the outcomes. We will thus say that in quantum theory we have relativity of causal structure.' author: - Marco Zaopo title: Relativity of Causal Structure in Quantum Theory --- In the standard, classical theory of probability, joint probability distributions on the values of two random variables are defined independently of the existence of a causal relationship between the values of one of the variables and those of the other [@Defin]. This is the case since a pair of random variables on which a joint probability distribution is definable can represent something outside the domain of physics and thus not necessarily embedded in some given space-time (we will use the term space-time generically for a causal network of events, without specifying any other property such as discreteness, continuity, dimension etc.). In quantum theory, on the contrary, two random variables always represent two observables related to some physical system. The values of these variables are indeed associated with events embedded in some given space-time. 
In this case a given value of a random variable is in fact always perceived through a click of a detector that has revealed a property of a physical system in some position of space at some given instant of time. Hence the events on which joint probability distributions for two physical observables are defined always have a definite causal structure, since any space-time ultimately constitutes a causal network of events. Quite recently, several authors have explored the statistics of different quantum experiments that investigate the same observables and physical systems but differ in the causal arrangements of the devices involved [@Leif1; @Leif2; @Brukvedr; @EvPrWh; @MarcRez]. In these papers it is found that there are many formal analogies in describing two quantum experiments differing only in the causal relations of the devices involved. Exploiting this fact, quantum theory is formulated in [@Leif1] as a theory of Bayesian inference in which the different causal relations between correlated regions are treated in a unified way. In this paper we show that the mathematical structure of quantum theory is such that two observers looking at the same quantum experiment can calculate the probabilities of the experiment assuming a different causal structure for the events on which the probability distribution is defined. This means that the causal structure of the events happening in a quantum experiment may not be regarded as absolutely fixed. Two observers can indeed obtain the information contained in a given experiment while assuming a different causal structure for the events constituted by the outcomes involved. This result is likely to have implications in the search for a theory of quantum gravity. 
It suggests that to formulate a properly quantum theory of cosmological processes, we should look for a mathematical formalism to calculate probabilities of these processes such that the causal structure of the events on which the probability distribution of a process is defined can be regarded as a mathematical symmetry. This paper is organized as follows. We first introduce a general framework for quantum experiments whose information is contained in the joint probabilities of the values of a pair of physical observables. In this scenario, we formulate the property of relativity of causal structure. We show that in any quantum experiment performable in the above framework we have relativity of causal structure. We then discuss relativity of causal structure in relation to the no-signalling principle. We finally link our result to those obtained in [@Leif1]. Operational framework --------------------- In a generic quantum experiment one is interested in the joint probabilities of the outcomes happening on two devices $\mathcal{A}, \mathcal{B}$, which are able to analyze a set of physical observables generically pertaining to two quantum systems $\mathscr{S}_1$ and $\mathscr{S}_2$ respectively ($\mathscr{S}_1, \mathscr{S}_2$ can of course be the same system). The possible values of a given observable $A$ analyzed by device $\mathcal{A}$ constitute a set of outcomes of this device $\{a_i\}_{i \in A}$ while the possible values of another observable $B$, analyzed by device $\mathcal{B}$, constitute another set of outcomes $\{b_j\}_{j \in B}$. The information contained in the experiment is expressed by the probability distribution $\{p(a_i,b_j)\}_{i,j}$ for all $(a_i,b_j) \in A\times B $, where the normalization condition $ \sum_{i\in A, j \in B} p(a_i,b_j) = 1 $ holds. This clearly holds for all the possible experiments performable with devices $\mathcal{A}, \mathcal{B}$, namely, for all the possible observables that $\mathcal{A}, \mathcal{B}$ can analyze. 
A simple example of this is an experiment involving two Stern-Gerlach apparatuses SG$_1$, SG$_2$ analyzing the spin of an electron. In this case observables $A$ and $B$ represent two given orientations that the spin of an electron can have, for example $Z_1$ and $Z_2$. The possible outcomes happening on SG$_1$ are $\{$spin up along $Z_1$ ($Z_1\uparrow$), spin down along $Z_1 (Z_1\downarrow) \}$ and those that can happen on SG$_2$ are $\{$spin up along $Z_2 (Z_2\uparrow)$, spin down along $Z_2 (Z_2\downarrow)\}$. The information contained in the experiment is in the joint probability distribution $\{p(Z_1\uparrow,Z_2\uparrow), p(Z_1\uparrow,Z_2\downarrow), p(Z_1\downarrow, Z_2\uparrow), p(Z_1\downarrow, Z_2\downarrow )\}$. The events on which probability distributions are defined in quantum experiments always possess a definite causal structure [@Wald]. \[caustr\] Given a pair of events $\chi_a, \chi_b$, a *causal structure* is defined for these events if one of the following holds: - $\chi_a$ causes $\chi_b$ - $\chi_b$ causes $\chi_a$ - $\chi_a$ does not cause $\chi_b$ and $\chi_b$ does not cause $\chi_a$ This is the case since the random variables on which the probability distributions are defined refer to observables pertaining to physical systems. Consider an arbitrary pair of outcomes $(a_i,b_j) \in A\times B$ such that $p(a_i,b_j) \neq 0$. These outcomes constitute two events that can happen in space-time. Outcome $a_i$ has an associated event $\chi_{a_i}$ representing the fact that a system $\mathscr{S}_1$ has the value $a_i$ of observable $A$ in the region occupied by device $\mathcal{A}$ with space coordinates $x_{a_i}$ at time $t_{a_i}$. Similarly, outcome $b_j$ has an associated event $\chi_{b_j}$ stating that $\mathscr{S}_2$ has value $b_j$ of observable $B$ in a region occupied by $\mathcal{B}$ with space coordinates $x_{b_j}$ at time $t_{b_j}$. A specific dynamics of one or more systems $\mathscr{S}$ is associated with any quantum experiment. 
Whether any one of the alternatives in definition \[caustr\] holds for the events $\chi_{a_i}, \chi_{b_j}$ associated with the outcomes $(a_i,b_j)$ clearly depends on the dynamics of the systems involved in the experiment. From an operational point of view, the assignment of a dynamics to the systems in a quantum experiment consists of a specification of the inputs and outputs for the devices involved in it. This point of view was first illustrated in [@hardy]. If we are interested in the joint probabilities of outcomes happening on devices $\mathcal{A}, \mathcal{B}$, the possible input/output combinations assigned to these devices are responsible for the different alternatives in definition \[caustr\] that can be associated with the pair of events $\chi_{a_i}$, $\chi_{b_j}$ corresponding to the pair of outcomes $(a_i,b_j)$. If the dynamics of the experiment is such that system $\mathscr{S}_1$ is the output of device $\mathcal{A}$ and $\mathscr{S}_2$ is the input of device $\mathcal{B}$, we have that the pair of events $\chi_{a_i}, \chi_{b_j}$ associated with $(a_i,b_j)$ are such that $\chi_{a_i}$ causes $\chi_{b_j}$. If, conversely, the dynamics is the time reversal of the previous one, then system $\mathscr{S}_1$ is the input for $\mathcal{A}$, system $\mathscr{S}_2$ is the output for device $\mathcal{B}$, and the pair of events $\chi_{a_i}, \chi_{b_j}$ associated with $(a_i,b_j)$ are such that $\chi_{b_j}$ causes $\chi_{a_i}$. If the experiment is such that two causally independent systems are inputs (or outputs) for devices $\mathcal{A}$ and $\mathcal{B}$, then the causal structure of $\chi_{a_i}, \chi_{b_j}$ associated with outcomes $(a_i, b_j)$ is such that $\chi_{a_i}$ does not cause $\chi_{b_j}$ and $\chi_{b_j}$ does not cause $\chi_{a_i}$, namely, the two events are space-like. The assumption that these input/output associations can be made in an absolute way can hardly be motivated on operational grounds. 
In fact, there is no experiment that can verify that a quantum system has “escaped from a device” in a given state and has “entered another device”, causing an outcome to happen on it. This is the case since, if such an experiment existed, it would also have to make the system interact with another probe system; the interaction would perturb the dynamics of the original system, could in principle prevent it from entering the aperture of a physical device or even from escaping from it, and would make the state and the measurement outcome change. A similar reasoning on the impossibility of “probing causal structure” in quantum theory can be found in [@hardy1]. In what follows we will in fact show that two different observers can compute the joint probabilities $p(a_i,b_j), \forall (a_i,b_j) \in A\times B$ in an experiment involving devices $\mathcal{A}$ and $\mathcal{B}$, assuming different input/output configurations for these devices. Since, from an operational point of view, the specific causal structure of events $\chi_{a_i}, \chi_{b_j}$ associated with the outcomes $(a_i,b_j)$ derives from the specification of the inputs and outputs of the devices involved, we will say that in quantum theory we have relativity of causal structure. Relativity of Causal Structure ------------------------------ Consider an arbitrary pair of outcomes $(a_i, b_j)$ having non-zero probability $p(a_i,b_j)$ of jointly happening. An observer $O_{\alpha}$ assumes that a quantum system $\mathscr{S}_1$ is the output of the device $\mathcal{A}$ on which $a_i$ happens, is subject to an evolution $\mathscr{T}$ (possibly transforming $\mathscr{S}_1$ into system $\mathscr{S}_2$) and then constitutes the input of a measurement device $\mathcal{B}$ on which $b_j$ happens. This implies that the space-time events $\chi_{a_i}, \chi_{b_j}$ associated with outcomes $a_i, b_j$ are assumed to be such that $\chi_{a_i}$ causes $\chi_{b_j}$. 
A second observer $O_{\beta}$ looking at the same quantum experiment as $O_{\alpha}$ assumes that system $\mathscr{S}_2$ is the output of the device $\mathcal{B}$ where $b_j$ happens, is subject to an evolution $\mathscr{T}'$ (possibly transforming $\mathscr{S}_2$ into $\mathscr{S}_1$) and then constitutes the input of a measurement device $\mathcal{A}$ on which $a_i$ happens. Since this constitutes the time reversal of the dynamics assumed by observer $O_{\alpha}$, the space-time events $\chi_{a_i}, \chi_{b_j}$ are assumed by $O_{\beta}$ to be such that $\chi_{b_j}$ causes $\chi_{a_i}$. A third observer, $O_{\gamma}$, looking at the same experiment, assumes instead that systems $\mathscr{S}_1$ and $\mathscr{S}_2$ are two causally independent inputs of two measurement devices $\mathcal{A}$ and $\mathcal{B}$, respectively, on which $a_i$ and $b_j$ happen. The two systems are both outputs of a preparation device for the composite system $\mathscr{S}_1\mathscr{S}_2$ that prepares a state $\tau_{12}$. Observer $O_{\gamma}$ thus assumes that $\chi_{a_i}, \chi_{b_j}$ are two space-like events, namely, $\chi_{a_i}$ does not cause $\chi_{b_j}$ and $\chi_{b_j}$ does not cause $\chi_{a_i}$. \[rcs\] We have *relativity of causal structure* if, given a choice of mathematical objects performed by any one of $O_{\alpha}, O_{\beta}, O_{\gamma}$ to calculate the probability $p(a_i,b_j)$, there exist unique choices of mathematical objects for the remaining two observers that permit them to calculate $p(a_i,b_j)$. This must hold for all $(a_i,b_j) \in A\times B$ and for all $(A, B)$. In what follows we will prove that in quantum theory we have relativity of causal structure. Before doing this we will state a rule of transformation from the mathematical objects describing physical objects (i.e. evolutions, preparations and measurement outcomes) used by an observer $O$ to the corresponding mathematical objects used by another observer $O'$. 
$\bf{Transformation \;Rule}$ - *Whenever a system $\mathscr{S}$ for which a physical object (i.e. preparation, evolution or measurement outcome) is defined is seen as an input (output) by observer $O$ and as an output (input) by observer $O'$, the operator used to describe that object by $O$ is the transpose on the Hilbert space of system $\mathscr{S}$ of the operator used to describe the corresponding object seen by $O'$.* Observer $O_{\alpha}$ assumes that $a_i$ is an element of an ensemble of preparations represented by a density matrix $\rho$ and a POVM $\{\bf{a_i}\}_{i \in A}$ such that: $$\label{ro} \rho = \sum_{i\in A} \text{Tr}[\bf{a_i}\rho] \frac{\sqrt{\rho} \;\bf{a_i} \sqrt{\rho}}{\text{Tr}[\bf{a_i}\rho]}$$ It is easy to see that $\rho$ is a convex combination of density operators for system $\mathscr{S}_1$ since $\sqrt{\rho} \bf{a_i} \sqrt{\rho} / \text{Tr}[\bf{a_i}\rho]$ is a positive operator with unit trace defined on $\mathscr{S}_1$ for all $i$ while $\{\text{Tr}[\bf{a_i}\rho]\}_{i\in A}$ is a probability distribution. In what follows we assume that $\rho$ is not pure, i.e. that the sum contains more than one term. We will discuss this assumption in section \[disc\]. The transformation $\mathscr{T}$ transforming the ensemble $\rho$ for observer $O_{\alpha}$ is represented by a Completely Positive Trace Preserving (CPTP) map $\mathscr{T}= \sum_{m} K^m\otimes K^{m\dagger} $ where $K^m = \sum_{ab} K^m_{ab} |a\rangle_{2} {}_{1}\langle b|$ is a Kraus operator [@kraus]. For $O_{\alpha}$, $b_j$ is a measurement outcome represented by an element of a POVM $\{\bf{b_j}\}_{j \in B}$ for system $\mathscr{S}_2$. Using this information and the rule stated above we can obtain the mathematical objects used to describe the experiment by observers $O_{\beta}$ and $O_{\gamma}$. 
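Equation (\[ro\]) is a consequence of POVM completeness, $\sum_{i} \bf{a_i} = I$: conjugating the completeness relation by $\sqrt{\rho}$ recovers $\rho$. A numerical sketch with NumPy, using a random full-rank $\rho$ and a projective POVM (dimension 3 is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3  # arbitrary dimension

# Random full-rank density matrix (Ginibre construction)
g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = g @ g.conj().T
rho /= np.trace(rho).real

# A simple POVM: projectors onto a random orthonormal basis (sums to I)
q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
povm = [np.outer(q[:, i], q[:, i].conj()) for i in range(d)]

# Matrix square root of rho via its eigendecomposition
w, v = np.linalg.eigh(rho)
sqrho = v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T

# Eq. (ro): rho = sum_i Tr[a_i rho] * (sqrt(rho) a_i sqrt(rho) / Tr[a_i rho])
terms = [sqrho @ a @ sqrho for a in povm]
probs = [np.trace(t).real for t in terms]      # Tr[a_i rho]
assert np.allclose(sum(terms), rho)            # convex combination recovers rho
assert abs(sum(probs) - 1.0) < 1e-12           # the weights form a distribution
for t, p in zip(terms, probs):
    state = t / p                              # normalized post-selected state
    assert np.allclose(state, state.conj().T)  # Hermitian
    assert abs(np.trace(state).real - 1.0) < 1e-12
print("Eq. (ro) verified for a random rho and POVM")
```

Each normalized term is a valid density operator and the weights $\text{Tr}[\bf{a_i}\rho]$ sum to one, exactly as stated after Eq. (\[ro\]).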
In order to find the operators used to describe the experiment by observer $O_{\beta}$, we note that he assumes input systems in correspondence with systems that are outputs for $O_{\alpha}$, and output systems in correspondence with systems that are inputs for $O_{\alpha}$. Hence all the operators used by $O_{\beta}$ are the transposes of those used by $O_{\alpha}$, since we have to transpose on all the spaces on which the operators are defined. To find the operators used by $O_{\gamma}$ we first explicitly write the evolution, by means of transformation $\mathscr{T}$, of the ensemble $\rho$ seen by $O_{\alpha}$. The density matrix obtained after the evolution by $O_{\alpha}$ is: $$\label{trevol} \mathscr{T}({\rho}) = \sum_{m,ab,cd} K_{ab}^mK_{cd}^{m*} |a\rangle_2 {}_1\langle b|\rho |c\rangle_1{}_2\langle d|$$ Using the fact that $\sum_{m} K^m\otimes K^{m\dagger} $ can be written as: $$\sum_{m,ab,cd} K_{ab}^mK_{cd}^{m*} |c\rangle_1 {}_1\langle b| \otimes |a\rangle_2{}_2\langle d|$$ and the decomposition $\rho = \sqrt{\rho}\sqrt{\rho}$ we have: $$\mathscr{T}({\rho}) = \text{Tr}_1[\sum_{m,ab,cd} K_{ab}^mK_{cd}^{m*} \sqrt{\rho}|c\rangle_1 {}_1\langle b|\sqrt{\rho} \otimes |a\rangle_2{}_2\langle d|]$$ Note that, for this representation to be uniquely defined, one must assume $\rho$ to be full rank in the Hilbert space corresponding to $\mathscr{S}_1$. This assumption is consistent with the fact that observer $O_{\alpha}$ describes the possible values of an observable $A$ as an ensemble of preparations representing the system having all the different values of $A$. The density matrix obtained by $O_{\alpha}$ after the evolution can thus be written as $\mathscr{T}({\rho}) = \text{Tr}_1[\mathscr{T}_{\rho} ]$ where we define: $$\label{tr} \mathscr{T}_{\rho} : = \sqrt{\rho} \otimes I_2 [\sum_{m} (K^m \otimes K^{m\dagger})] \sqrt{\rho} \otimes I_2$$ where $I_2$ is the identity matrix on system $\mathscr{S}_2$. 
From (\[tr\]) we see that the evolution of the ensemble $\rho$ can be represented as an operator acting on the Hilbert spaces of systems $\mathscr{S}_1$ and $\mathscr{S}_2$. The evolution represented by $\mathscr{T}_{\rho}$ is seen as a bipartite state $\tau_{12}$ by $O_{\gamma}$, since he assumes that the output of device $\mathcal{A}$ seen by $O_{\alpha}$, $\mathscr{S}_1$, is indeed an input for $\mathcal{A}$. According to the transformation rule stated above, $O_{\gamma}$ uses the following mathematical object to represent the bipartite state $\tau_{12}$: $$\label{t12} \tau_{12} = \mathscr{T}_{\rho}^{T_1} = \sqrt{\rho}^T \otimes I_2 [\sum_{m} (K^m \otimes K^{m\dagger})^{T_1} ]\sqrt{\rho}^T \otimes I_2$$ where ${}^{T_1}$ denotes the partial transpose on space 1, corresponding to $\mathscr{S}_1$. To see that (\[t12\]) is a normalized bipartite state, we define the normalized bipartite state $|\Phi\rangle_{11'}$ on two copies of $\mathscr{S}_1$: $$\label{fi} |\Phi\rangle_{11'} = \sqrt{\rho}^T \otimes I_{1'} \sum_j | j \rangle_{1} \otimes |j \rangle_{1'}$$ where $\{|j\rangle\}_{j=1}^{d_1}$ is an orthonormal basis for the Hilbert space of system $\mathscr{S}_1$. Exploiting (\[fi\]) we can write: $$\label{isot} \mathscr{I} \otimes \mathscr{T} (|\Phi\rangle\langle\Phi|) = \tau_{12}$$ where $\mathscr{I}$ is the identity map on system $\mathscr{S}_1$ and $\mathscr{T}$ represents the evolution defined above. From (\[isot\]) we can see that $\tau_{12}$ is a normalized bipartite state, since $\mathscr{T}$ is a CPTP map and $|\Phi\rangle\langle\Phi|$ is a normalized bipartite state. The element of the ensemble of preparations corresponding to $a_i$ for $O_{\alpha}$ is seen by $O_{\gamma}$ as a measurement outcome. Consequently, it is represented as $\bf{a_i^T}$ by $O_{\gamma}$, since he assumes system $\mathscr{S}_1$ as an input, contrary to $O_{\alpha}$.
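Equation (\[isot\]) can also be confirmed numerically: the partial transpose of $\mathscr{T}_{\rho}$ on system 1 coincides with $\mathscr{I}\otimes\mathscr{T}(|\Phi\rangle\langle\Phi|)$ and is a positive, unit-trace bipartite state. A self-contained sketch (our own; random $\rho$ and Kraus operators, names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3
I = np.eye(d)

def sqrtm_psd(A):
    """Square root of a positive semidefinite Hermitian matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = X @ X.conj().T
rho /= np.trace(rho)
sq = sqrtm_psd(rho)

Ks = np.array([rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
               for _ in range(2)])
w, V = np.linalg.eigh(sum(K.conj().T @ K for K in Ks))
Ks = np.array([K @ (V @ np.diag(w ** -0.5) @ V.conj().T) for K in Ks])

# T_rho as in eq. (tr)
M = np.einsum('mab,mcd->dabc', Ks, Ks.conj()).reshape(d * d, d * d)
T_rho = np.kron(sq, I) @ M @ np.kron(sq, I)

# tau_12 = partial transpose of T_rho on system 1 (eq. (t12))
tau12 = T_rho.reshape(d, d, d, d).transpose(2, 1, 0, 3).reshape(d * d, d * d)

# right-hand side of eq. (isot): (I (x) T)(|Phi><Phi|), with
# |Phi> = (sqrt(rho)^T (x) I) sum_j |j>|j>
Phi = np.kron(sq.T, I) @ np.eye(d).reshape(-1)
rhs = sum(np.kron(I, K) @ np.outer(Phi, Phi.conj()) @ np.kron(I, K).conj().T
          for K in Ks)

print(np.allclose(tau12, rhs),                  # eq. (isot)
      abs(np.trace(tau12) - 1) < 1e-9,          # normalization
      np.linalg.eigvalsh(tau12).min() > -1e-9)  # positivity
```

Note that $\mathscr{T}_{\rho}$ itself need not be positive; positivity is restored exactly by the partial transpose, which is the content of (\[isot\]).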
The probability $p_{\alpha}(a_i,b_j)$ calculated by observer $O_{\alpha}$ is: $$\label{oalfa} p_{\alpha}(a_i,b_j) = \text{Tr}_2[\bf{b_j} \text{Tr}_1[\mathscr{T}_{\rho} \bf{a}_i]]$$ The probability calculated by $O_{\beta}$ is: $$\label{obeta} p_{\beta}(a_i,b_j) = \text{Tr}_1[\bf{a_i}^T \text{Tr}_2[\mathscr{T}_{\rho}^T \bf{b}_j^T]]$$ The probability calculated by $O_{\gamma}$ is: $$\label{ogamma} p_{\gamma}(a_i,b_j) = \text{Tr}_{12} [\bf{a_i}^T \otimes \bf{b_j} \mathscr{T}_{\rho}^{T_1}]$$ It can easily be verified that $p_{\alpha}(a_i,b_j) = p_{\beta}(a_i,b_j) = p_{\gamma}(a_i,b_j)$. Since, given an operator corresponding to a preparation, transformation or measurement outcome seen by a given observer, its transpose and its partial transpose on any of its subspaces are uniquely defined, and since we assumed $(a_i,b_j)$ arbitrary, we have relativity of causal structure by definition \[rcs\]. Discussion and related work {#disc} --------------------------- First we discuss the assumption, made after (\[ro\]), that $\rho$ is not a pure state. This is necessary for $\tau_{12}$ in (\[t12\]) to be normalized. If we have $\rho = |a\rangle\langle a|$, then observer $O_{\alpha}$ is interested only in joint probabilities of the type $p(a, b_j)$ with $b_j \in \{b_j\}_{j \in B}$ and $a$ a fixed value of observable $A$. This is equivalent to stating that the uncertainty in observable $A$ is 0 and $p(a, b_j) = p(b_j|a)$. Observers $O_{\gamma}$ and $O_{\beta}$ clearly cannot assume that the uncertainty in $A$ is 0, since from their point of view this represents a measurement whose outcomes are random. Nevertheless, they can compute the above probability as $$\label{relat} p(b_j|a) = p(a,b_j)/\sum_{b_j}p(a,b_j).$$ This is the fraction of times $b_j$ happens given that $a_i = a$ has happened, and it is the probability obtained by $O_{\alpha}$ assuming $\rho = |a\rangle\langle a|$.
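The equality $p_{\alpha}=p_{\beta}=p_{\gamma}$ asserted above can be confirmed numerically by evaluating (\[oalfa\])–(\[ogamma\]) literally. A self-contained sketch (our own; random $\rho$, Kraus operators and POVMs, with all names being assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3
I = np.eye(d)

def sqrtm_psd(A):
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def random_povm(n):
    """n random positive operators rescaled to sum to the identity."""
    Es = []
    for _ in range(n):
        Y = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        Es.append(Y @ Y.conj().T)
    w, V = np.linalg.eigh(sum(Es))
    Sm = V @ np.diag(w ** -0.5) @ V.conj().T
    return [Sm @ E @ Sm for E in Es]

def ptrace(T, sys):
    """Partial trace of a matrix on H1 (x) H2 over system sys (1 or 2)."""
    T4 = T.reshape(d, d, d, d)
    return np.einsum('iaib->ab', T4) if sys == 1 else np.einsum('aibi->ab', T4)

X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = X @ X.conj().T
rho /= np.trace(rho)
sq = sqrtm_psd(rho)

Ks = np.array([rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
               for _ in range(2)])
w, V = np.linalg.eigh(sum(K.conj().T @ K for K in Ks))
Ks = np.array([K @ (V @ np.diag(w ** -0.5) @ V.conj().T) for K in Ks])

M = np.einsum('mab,mcd->dabc', Ks, Ks.conj()).reshape(d * d, d * d)
T_rho = np.kron(sq, I) @ M @ np.kron(sq, I)
tau12 = T_rho.reshape(d, d, d, d).transpose(2, 1, 0, 3).reshape(d * d, d * d)

As, Bs = random_povm(3), random_povm(3)
for a in As:
    for b in Bs:
        p_alpha = np.trace(b @ ptrace(T_rho @ np.kron(a, I), 1)).real
        p_beta = np.trace(a.T @ ptrace(T_rho.T @ np.kron(I, b.T), 2)).real
        p_gamma = np.trace(np.kron(a.T, b) @ tau12).real
        assert np.allclose([p_alpha, p_beta], p_gamma)
print("all three observers agree")
```

The agreement follows from the cyclicity of the trace and from $\text{Tr}[X^{T_1}Y]=\text{Tr}[XY^{T_1}]$, which is why the transposition convention of the Transformation Rule leaves all joint probabilities invariant.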
The probability $p(a,b_j)$ written in (\[relat\]) for $O_{\beta}$ and $O_{\gamma}$ can be calculated for an arbitrary probability distribution on the values $\{a_i\}_{i\in A}$ of observable $A$. We should now discuss the relationship between relativity of causal structure and the “no-signalling principle”. \[nosig\] *No-signalling principle* If two devices $\mathcal{A}$ and $\mathcal{B}$ are space-like separated, originating outcomes corresponding to space-like separated events, then $$\label{ns} p(b_j) = \sum_{a_i} p(a_i,b_j) = \sum_{a'_i} p(a'_i,b_j) \;\;\;\forall\;\; \{a_i\}_{i\in A} , \{a'_i\}_{i \in A'}$$ for all $b_j \in \{b_j\}_{j \in B}$, where $B$ is a measurement on device $\mathcal{B}$ and $A$ and $A'$ are any two different measurements performed on device $\mathcal{A}$. Since this principle holds in quantum theory, the quantum correlations between space-like separated devices cannot be used by an agent operating on $\mathcal{B}$ to become aware of the actions of an agent operating on device $\mathcal{A}$. Note that (\[ns\]) is only a necessary condition that joint probabilities of outcomes happening on two space-like separated devices must satisfy. Hence there is no contradiction in an observer $O_{\alpha}$ assuming that for every pair of outcomes $(a_i,b_j) \in A\times B$ the associated space-time events $\chi_{a_i}, \chi_{b_j}$ are such that, say, $\chi_{a_i}$ causes $\chi_{b_j}$. On the other hand, if $\sum_{a_i} p(a_i,b_j) \neq \sum_{a'_i} p(a'_i,b_j)$ for some $b_j$ and a pair $\{a_i\}_{i\in A} , \{a'_i\}_{i \in A'}$, then an observer $O_{\alpha}$ establishes that an agent operating on $\mathcal{A}$ must have changed the ensemble of preparations from $\rho = \sum_{i\in A} \text{Tr}[\bf{a_i}\rho] \sqrt{\rho} \;\bf{a_i} \sqrt{\rho}/\text{Tr}[\bf{a_i}\rho]$ to $\rho' = \sum_{i\in A'} \text{Tr}[\bf{a'_i}\rho'] \sqrt{\rho'} \;\bf{a'_i} \sqrt{\rho'}/\text{Tr}[\bf{a'_i}\rho']$.
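Equation (\[ns\]) is easy to confirm in this framework: because every POVM sums to the identity, the marginal on $\mathcal{B}$ does not depend on which POVM is measured on $\mathcal{A}$. A small self-contained sketch (our own; a generic unit-trace positive operator stands in for the joint operator, since only linearity and $\sum_i \bf{a_i}=I$ matter for the marginal):

```python
import numpy as np

rng = np.random.default_rng(5)
d = 3

def psd(Y):
    return Y @ Y.conj().T

def random_povm(n):
    Es = [psd(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
          for _ in range(n)]
    w, V = np.linalg.eigh(sum(Es))
    Sm = V @ np.diag(w ** -0.5) @ V.conj().T
    return [Sm @ E @ Sm for E in Es]

# a unit-trace positive operator on S1 (x) S2 standing in for the joint object
T = psd(rng.normal(size=(d * d, d * d)) + 1j * rng.normal(size=(d * d, d * d)))
T /= np.trace(T)

A1, A2 = random_povm(3), random_povm(4)  # two different measurements on device A
Bs = random_povm(3)                      # a fixed measurement on device B

for b in Bs:
    m1 = sum(np.trace(np.kron(a, b) @ T).real for a in A1)
    m2 = sum(np.trace(np.kron(a, b) @ T).real for a in A2)
    assert abs(m1 - m2) < 1e-9  # p(b_j) is independent of the POVM chosen on A
print("no-signalling marginals agree")
```

This is exactly the sense in which (\[ns\]) is automatic here: a violation can only signal a change of the preparation ensemble itself, as discussed above.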
An observer $O_{\gamma}$ looking at the same experiments establishes that the correlations on devices $\mathcal{A}$ and $\mathcal{B}$ are in one case due to a bipartite state $\tau = \mathscr{T}_{\rho}^{T_1}$ (with $\mathscr{T}$ the evolution of the ensemble $\rho$ seen by $O_{\alpha}$ and $\mathscr{T}_{\rho}$ defined in (\[tr\])) and in the other case to a bipartite state $\tau' = \mathscr{T}_{\rho'}^{T_1}$; in both cases ${}^{T_1}$ means transposition on the Hilbert space on which $\rho, \rho'$ are defined. Equation (\[t12\]) introduces an isomorphism between bipartite states and evolutions of preparation ensembles via partial transposition of the corresponding operators. This isomorphism is also introduced in [@Leif1], where the formalism of *quantum conditional states* is developed. Quantum conditional states are used to formulate a theory of Bayesian inference for random variables representing physical observables pertaining to two regions that have a definite causal relationship. The peculiarity of this theory is a tool called the *star product*, which permits statistical inference for two correlated regions A and B in strict analogy with ordinary probability theory, where there is no dependence on the causal relationship between the regions. Quantum conditional states are divided into causal conditional states and acausal conditional states, depending on whether the two regions are causally related (an outcome in one region causes the outcome in the other region, or vice versa) or not.
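As an aside (our own illustration, not taken from [@Leif1]), the belief-propagation rule built on the star product, $\rho_{AB} = d_A(\sqrt{\rho_A}\otimes I_B)\,[\mathscr{I}\otimes\mathscr{T}(|\Phi^+\rangle\langle\Phi^+|)]\,(\sqrt{\rho_A}\otimes I_B)$, can be checked to produce a normalized bipartite state whose marginal on region A is the prior $\rho_A$:

```python
import numpy as np

rng = np.random.default_rng(6)
d = 3
I = np.eye(d)

def sqrtm_psd(A):
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

# prior rho_A for region A
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho_A = X @ X.conj().T
rho_A /= np.trace(rho_A)

# a CPTP map from A to B given by normalized Kraus operators
Ks = np.array([rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
               for _ in range(2)])
w, V = np.linalg.eigh(sum(K.conj().T @ K for K in Ks))
Ks = np.array([K @ (V @ np.diag(w ** -0.5) @ V.conj().T) for K in Ks])

# acausal conditional state: the Choi state (I (x) T)(|Phi+><Phi+|)
phi_plus = np.eye(d).reshape(-1) / np.sqrt(d)  # maximally entangled state
choi = sum(np.kron(I, K) @ np.outer(phi_plus, phi_plus.conj())
           @ np.kron(I, K).conj().T for K in Ks)

# star product / belief propagation
sq = sqrtm_psd(rho_A)
rho_AB = d * np.kron(sq, I) @ choi @ np.kron(sq, I)

marg_A = np.einsum('aibi->ab', rho_AB.reshape(d, d, d, d))  # trace out B
print(np.allclose(marg_A, rho_A),
      abs(np.trace(rho_AB) - 1) < 1e-9,
      np.linalg.eigvalsh(rho_AB).min() > -1e-9)
```

The factor $d$ compensates the $1/d_A$ from the normalization of $|\Phi^+\rangle$, as noted below; the marginal property holds because the map is trace preserving.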
A CPTP map $\mathscr{T}_{AB}$ from region A to region B is related to an *acausal* conditional state $\rho_{A|B}^s$ by means of the Choi isomorphism [@choi]: $$\mathscr{T}_{AB} \leftrightarrow \mathscr{I}_{A'} \otimes \mathscr{T}_{A''B} (|\Phi^+\rangle\langle\Phi^{+}|) = \rho_{\text{A}|\text{B}}^s$$ where $|\Phi^+\rangle = \frac{1}{\sqrt{d_A}}\sum_i | i \rangle_{A'} \otimes |i \rangle_{A''}$, $\{|i\rangle\}_{i=1}^{d_A}$ is a basis for the Hilbert space pertaining to the system in region A, and $A'$, $A''$ are two copies of the system in region A. The rule of belief propagation is used to find the joint state $\rho_{AB}^s$ for two systems in space-like separated regions, A and B, starting from the prior pertaining to one of the two regions, $\rho_A$; this is expressed via the star product: $$\rho_{AB}^s = \rho_A\star \rho_{A|B}^s = d_A \sqrt{\rho_A} \otimes I_B \; \rho_{A|B}^s \;\sqrt{\rho_A} \otimes I_B$$ The star product used here also involves a normalization factor $d_A$, which cancels the factor $1/d_A$ arising from the definition of the conditional state in terms of $|\Phi^+\rangle$. Note that $\rho_{AB}^s$ is equal to $\tau_{12}$ defined in (\[t12\]), with $\mathscr{S}_1$ and $\mathscr{S}_2$ pertaining to devices $\mathcal{A}$ and $\mathcal{B}$ respectively in regions A and B. The map $\mathscr{T}_{AB}$ is related to a *causal* conditional state $\rho_{A|B}^t$ by means of the Jamiolkowski isomorphism [@jamio]: $$\mathscr{T}_{AB} \leftrightarrow [\mathscr{I}_{A'} \otimes \mathscr{T}_{A''B} (|\Phi^+\rangle\langle\Phi^{+}|)]^{T_{A'}} = \rho_{A|B}^t$$ where ${}^{T_{A'}}$ denotes partial transposition on the Hilbert space of the system $A'$ pertaining to region A. The rule of belief propagation is used to find the joint state $\rho_{AB}^t$ for two systems in two causally related regions A and B (or equivalently for one system at two different times) starting from the prior pertaining to region A, $\rho_A$.
This is expressed with the star product as above: $$\rho_{AB}^t = \rho_A^T \star \rho_{A|B}^t = d_A \sqrt{\rho_A^T} \otimes I_B \; \rho_{A|B}^t \;\sqrt{\rho_A^T} \otimes I_B$$ where ${}^{T}$ denotes transposition. Note that here $\rho_{AB}^t$ is equal to $\mathscr{T}_{\rho}$ defined in (\[tr\]), with $\mathscr{S}_1$ and $\mathscr{S}_2$ pertaining to devices $\mathcal{A}$ and $\mathcal{B}$ respectively in regions A and B. Conclusion ---------- In conclusion, we have shown that the assumption of an absolute causal structure for the space-time events associated with the outcomes in a quantum experiment cannot be motivated on operational grounds. In fact, we showed that two observers looking at the same quantum experiment can compute the relevant probabilities while assuming different causal structures for the events on which the probability distribution is defined. This can have implications in quantum gravity. In light of this result, a possible way to conceive a theory of quantum gravity is to look for a formalism to compute probabilities of cosmological processes such that the causal structure of the events on which the probability distribution of a process is defined can be regarded as a mathematical symmetry. [99]{} B. de Finetti, Theory of Probability, vol. 1 (Wiley, 1974). M. S. Leifer, R. W. Spekkens, arXiv:1107.5849. M. S. Leifer, Phys. Rev. A 74, 042310 (2006), arXiv:quant-ph/0606022. S. Taylor, S. Cheung, C. Brukner, V. Vedral, in Proceedings of Quantum Communication, Measurement and Computing, AIP Conference Proceedings vol. 734 (AIP, 2004), arXiv:quant-ph/0611233. P. Evans, H. Price, K. B. Wharton, arXiv:1001.5057. S. Marchovitch, B. Reznik, arXiv:1103.2557. R. M. Wald, General Relativity (The University of Chicago Press, 1984). L. Hardy, arXiv:0912.4740. L. Hardy, arXiv:gr-qc/0509120. K. Kraus, States, Effects and Operations: Fundamental Notions of Quantum Theory (Springer-Verlag, 1983). M. Choi, Completely positive linear maps on complex matrices, Lin. Alg. and App. [**10**]{}, 285–290 (1975). A. Jamiolkowski, Rep. Math. Phys. [**3**]{}, 275 (1972).
--- abstract: | Let $G$ be an edge-colored graph. A rainbow (heterochromatic, or multicolored) path of $G$ is a path in which no two edges have the same color. Let the color degree of a vertex $v$ be the number of different colors that are used on the edges incident to $v$, and denote it by $d^c(v)$. It was shown that if $d^c(v)\geq k$ for every vertex $v$ of $G$, then $G$ has a rainbow path of length at least $\min\{\lceil\frac{2k+1}{3}\rceil,k-1\}$. In the present paper, we consider only the properly edge-colored complete graph $K_n$ and improve the lower bound on the length of the longest rainbow path by showing that if $n\geq 20$, there must be a rainbow path of length no less than $\displaystyle \frac{3}{4}n-\frac{1}{4}\sqrt{\frac{n}{2}-\frac{39}{16}}-\frac{11}{16}$.\ \[2mm\] [**Keywords:**]{} properly edge-colored graph, complete graph, rainbow (heterochromatic, or multicolored) path.\ \[2mm\] [**AMS Subject Classification (2010)**]{}: 05C38, 05C15 author: - | He Chen$^1$ and Xueliang Li$^2$\ \[2mm\] $^1$Department of Mathematics,\ Southeast University, Nanjing 210096, China\ [email protected]\ $^2$Center for Combinatorics and LPMC\ Nankai University, Tianjin 300071, China\ [email protected]\ title: '**Long rainbow path in properly edge-colored complete graphs[^1]**' --- Introduction ============ We use Bondy and Murty [@B-M] for terminology and notation not defined here and consider simple graphs only. Let $G=(V,E)$ be a graph. By an [*edge-coloring*]{} of $G$ we mean a function $C: E\rightarrow \mathbb{N} $, the set of natural numbers. If $G$ is assigned such a coloring, then we say that $G$ is an [*edge-colored graph*]{}. Denote the edge-colored graph by $(G,C)$, and call $C(e)$ the [*color*]{} of the edge $e\in E$. We say that $C(uv)=\emptyset$ if $uv\notin E(G)$ for $u,v\in V(G)$. For a subgraph $H$ of $G$, we denote $C(H)=\{C(e) \ | \ e\in E(H)\}$ and $c(H)=|C(H)|$.
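To make these definitions concrete, here is a small self-contained sketch (our own illustration; the round-robin construction and all names are assumptions, not part of the paper). It properly edge-colors $K_n$, checks that every color degree $d^c(v)$ equals $n-1$, and finds a longest rainbow path by exhaustive search:

```python
n = 8  # a small even order, so that exhaustive search stays fast

# Proper edge-coloring of K_n from the round-robin 1-factorization:
# C(i, j) = (i + j) mod (n-1) for i, j < n-1, and C(i, n-1) = 2i mod (n-1).
def color(i, j):
    i, j = min(i, j), max(i, j)
    return (2 * i) % (n - 1) if j == n - 1 else (i + j) % (n - 1)

# the coloring is proper, so d^c(v) = n - 1 for every vertex v
for v in range(n):
    incident = [color(v, u) for u in range(n) if u != v]
    assert len(set(incident)) == n - 1

# longest rainbow path, by depth-first search over rainbow paths only
def longest_rainbow_path():
    best = []

    def extend(path, used):
        nonlocal best
        if len(path) > len(best):
            best = list(path)
        for u in range(n):
            if u in path:
                continue
            c = color(path[-1], u)
            if c in used:
                continue
            used.add(c)
            path.append(u)
            extend(path, used)
            path.pop()
            used.remove(c)

    for v in range(n):
        extend([v], set())
    return best

P = longest_rainbow_path()
print(len(P) - 1)  # number of edges of a longest rainbow path
```

By the Gyárfás–Mhalla bound quoted later in this paper (a rainbow path on at least $(2n+1)/3$ vertices exists in any properly edge-colored $K_n$), the printed length is at least $5$ when $n=8$.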
For a vertex $v$ of $G$, the [*color neighborhood*]{} $CN(v)$ of $v$ is defined as the set $\{C(e)\ | \ e \mbox{ is incident with }v\}$, and the [*color degree*]{} is $d^c(v)=|CN(v)|$. A subgraph of $G$ is called [*rainbow (heterochromatic, or multicolored)*]{} if any two edges of it have different colors. If $u$ and $v$ are two vertices on a path $P$, $uPv$ denotes the segment of $P$ from $u$ to $v$, whereas $vP^{-1}u$ denotes the same segment but from $v$ to $u$. There are many existing publications dealing with the existence of paths and cycles with special properties in edge-colored graphs. The heterochromatic Hamiltonian cycle or path problem was studied by Hahn and Thomassen [@H-T], Rödl and Winkler (see [@F-R]), Frieze and Reed [@F-R], and Albert, Frieze and Reed [@A-F-R]. In [@A-J-Z], Axenovich, Jiang and Tuza gave the range of the maximum $k$ such that there exists a $k$-good coloring of $E(K_n)$ that contains no properly colored copy of a path with a fixed number of edges, no heterochromatic copy of a path with a fixed number of edges, no properly colored copy of a cycle with a fixed number of edges, and no heterochromatic copy of a cycle with a fixed number of edges, respectively. In [@E-T-1], Erdős and Tuza studied heterochromatic paths in the infinite complete graph $K_\omega$. In [@E-T-2], Erdős and Tuza studied the values of $k$ such that every $k$-good coloring of $K_n$ contains a heterochromatic copy of $F$, where $F$ is a given graph with $e$ edges ($e<n/k$). In [@M-S-T], Manoussakis, Spyratos and Tuza studied $(s,t)$-cycles in $2$-edge-colored graphs, where an $(s,t)$-cycle is a cycle of length $s+t$ in which $s$ consecutive edges are in one color and the remaining $t$ edges are in the other color.
In [@M-S-T-V], Manoussakis, Spyratos, Tuza and Voigt studied conditions on the minimum number $k$ of colors sufficient for the existence of given types (such as families of internally pairwise vertex-disjoint paths with common endpoints, hamiltonian paths and hamiltonian cycles, cycles with a given lower bound on their length, spanning trees, stars, and cliques) of properly edge-colored subgraphs in a $k$-edge-colored complete graph. In [@C-M-M], Chou, Manoussakis, Megalaki, Spyratos and Tuza showed that for a 2-edge-colored graph $G$ and three specified vertices $x, y$ and $z$, deciding whether there exists a color-alternating path from $x$ to $y$ passing through $z$ is NP-complete. Many results in these papers were proved by using probabilistic methods. In [@A-J-Z], Axenovich, Jiang and Tuza considered the local variation of the anti-Ramsey problem, namely, they studied the maximum $k$ such that there exists a $k$-good edge-coloring of $K_n$ containing no heterochromatic copy of a given graph $H$, and denoted it by $g(n,H)$. They showed that for a fixed integer $k\geq 2$, $k-1\leq g(n, P_{k+1})\leq 2k-3$, i.e., if $K_n$ is edge-colored by a $(2k-2)$-good coloring, then there must exist a heterochromatic path $P_{k+1}$, and there exists a $(k-1)$-good coloring of $K_n$ such that no heterochromatic path $P_{k+1}$ exists. In [@B-L], the authors considered long heterochromatic paths in general graphs with a $k$-good coloring and showed that if $G$ is an edge-colored graph with $d^c(v)\geq k$ (color degree condition) for every vertex $v$ of $G$, then $G$ has a heterochromatic path of length at least $\lceil\frac{k+1}{2}\rceil$. In [@C-L-1; @C-L-2], we obtained better bounds on the length of the longest heterochromatic path in general graphs with a $k$-good coloring.
In [@C-L-3], we showed that if $|CN(u)\cup CN(v)|\geq s$ (color neighborhood union condition) for every pair of vertices $u$ and $v$ of $G$, then $G$ has a heterochromatic path of length at least $\lceil\frac{s+1}{2}\rceil$, and gave examples to show that the lower bound is best possible in some sense. In [@G-M], Gyárfás and Mhalla showed that in any properly edge-colored complete graph $K_n$, there is a rainbow path with no less than $(2n+1)/3$ vertices. In [@C-L-2] we obtained a better result, showing that in any edge-colored graph $G$ in which at least $k$ colors appear at every vertex, the longest rainbow path is no shorter than $\lceil\frac{2k}{3}\rceil+1$. \[2/3 k\][@C-L-2] Let $G$ be an edge-colored graph. If $d^c(v)\geq k$ for every vertex $v\in V(G)$, then $G$ has a heterochromatic path of length at least $\min\{\lceil\frac{2k}{3}\rceil+1,k-1\}$. In this paper, we will improve the bound in [@G-M], and show that a longest rainbow path in a properly edge-colored $K_n$ is not shorter than $\displaystyle \left(\frac{3}{4}-o(1)\right)n$. Propositions of a longest rainbow path ====================================== Suppose $G$ is a properly edge-colored $K_n$, $P=v_0v_1v_2\cdots v_l$ is one of the longest rainbow paths in $G$, and $C(v_{i-1}v_i)=C_i$ ($i=1,2,\cdots,l$). Suppose $l<n-2$ and $u$ is an arbitrary vertex which does not belong to the path $P$. Then we can easily get the following propositions. \[P1\] $C(v_0u)\in C(P)$, $C(v_lu)\in C(P)$. Otherwise, $uv_0Pv_l$ or $uv_lP^{-1}v_0$ is a rainbow path of length $l+1$, a contradiction. \[P2\] If $C(uv_i)\notin C(P)$, then $C(uv_{i-1})\in C(P)$, $C(uv_{i+1})\in C(P)$. Otherwise, $v_0Pv_{i-1}uv_iPv_l$ or $v_0Pv_iuv_{i+1}Pv_l$ is a rainbow path of length $l+1$, a contradiction. \[P3\] If $C(uv_i)\notin C(P)$, then $\{C(v_0v_{i+1}), C(v_lv_{i-1})\}\subset C(P)\cup C(uv_i)$.
Otherwise, $uv_iP^{-1}v_0v_{i+1}Pv_l$ or $uv_iPv_lv_{i-1}P^{-1}v_0$ is a rainbow path of length $l+1$, a contradiction. \[P4\] If $C(uv_i)\notin C(P)$, then $C(v_0v_l)\in C(P)\setminus \{C_{i-1},C_i\}$. Otherwise, $uv_iPv_lv_0Pv_{i-1}$ or $uv_iP^{-1}v_0v_lP^{-1}v_{i+1}$ is a rainbow path of length $l+1$, a contradiction. \[P5\] If $C(v_0v_i)\notin C(P)$, then $C(v_lu)\in C(P)\setminus{C(v_{i-1}v_i)}$; if $C(v_lv_i)\notin C(P)$, then $C(v_0u)\in C(P)\setminus{C(v_iv_{i+1})}$. Otherwise, $v_{i-1}P^{-1}v_0v_iPv_lu$ or $v_{i+1}Pv_lv_iP^{-1}v_0u$ is a rainbow path of length $l+1$, a contradiction. \[P6\] If $C(v_0v_i)\notin C(P)$, then $C(v_{i-1}u)\in C(P)\cup{C(v_0v_i)}$; if $C(v_lv_i)\notin C(P)$, then $C(v_{i+1}u)\in C(P)\cup{C(v_lv_i)}$. Otherwise, $uv_{i-1}P^{-1}v_0v_iPv_l$ or $uv_{i+1}Pv_lv_iP^{-1}v_0$ is a rainbow path of length $l+1$, a contradiction. With these propositions, we can give a new lower bound on the length of a longest rainbow path. We will do that separately in the following two situations: the biggest rainbow cycle is of length $l+1$, and the biggest rainbow cycle is of length less than $l+1$. A longest rainbow path has the same number of vertices as a biggest rainbow cycle ================================================================================= If the longest rainbow path has the same number of vertices as the biggest rainbow cycle, then the biggest rainbow cycle is of length $l+1$, and there exists a rainbow path $P=v_0v_1\cdots v_l$ such that $C(v_0v_l)\notin C(P)$. Then, we can easily get the following conclusion from Proposition \[P4\]. \[L1\] If $C(v_0v_l)\notin C(P)$, then for an arbitrary $u\in V(G)\setminus V(P)$, $C(u,P)\subseteq C(P)\cup C(v_0v_l)$. By using this lemma, we can get one of our main conclusions. \[T1\] If $n\geq 20$ and $C(v_0v_l)\notin C(P)$, then $\displaystyle l\geq\frac{3}{4}n-1$. We will prove it by contradiction. Suppose a longest rainbow path in $G$ is of length $l< \frac{3}{4}n-1$. Then $|V(G)\setminus V(P)|=n-l-1> \frac{n}{4}\geq 5$.
We can conclude by Lemma \[L1\] that for any vertex $u\in V(G)\setminus V(P)$, $C(u,P)\subseteq C(P)\cup C(v_0v_l)$. On the other hand, $|V(P)|=|C(P)\cup C(v_0v_l)|=l+1$ and $G$ is a properly edge-colored $K_n$. Therefore, $C(u,P)=C(P)\cup C(v_0v_l)$ and $C(G\setminus P)\cap \left(C(P)\cup C(v_0v_l)\right)=\emptyset$. Since $P$ is one of the longest rainbow paths, by Proposition \[P1\], there exist $2\leq i_1<i_2<\cdots<i_{n-2-l}<l$, $1\leq j_1<j_2<\cdots<j_{n-2-l}<l-1$, such that $\begin{array}{ll} & |\{C(v_0v_{i_1}),C(v_0v_{i_2}), \cdots,C(v_0v_{i_{n-2-l}})\}\setminus(C(P)\cup \{C(v_0v_l)\})|\\ = & |\{C(v_lv_{j_1}),C(v_lv_{j_2}), \cdots,C(v_lv_{j_{n-2-l}})\}\setminus(C(P)\cup \{C(v_0v_l)\})|\\ = & n-l-2 \end{array}$\ Additionally, $C(uv_{i_k-1})\neq C(v_0v_l)$, $C(uv_{j_k+1})\neq C(v_0v_l)$, $k=1,2,\cdots, n-l-2$. Let $I=\{i-1| C(v_0v_i)\notin C(P)\cup C(v_0v_l), 2\leq i\leq l-1 \}$, $J=\{j+1|C(v_jv_l)\notin C(P)\cup C(v_0v_l), 1\leq j\leq l-1\}$. Now we distinguish the following two cases: [**Case 1. $I\cap J\neq\emptyset$.**]{} This implies that there exists some $t$ in $I \cap J$, i.e., $$\{C(v_0v_{t+1}), C(v_lv_{t-1})\}\cap (C(P)\cup C(v_0v_l))=\emptyset$$ [**Case 1.1. $C(v_0v_{t+1})\neq C(v_lv_{t-1})$.**]{} Since $n-l\geq 4$ and $C(u,P)=C(P)\cup C(v_0v_l)$, there are at least $3$ colors not in $C(P)\cup C(v_0v_l)$ that belong to the color set $C(u,V(G)\setminus V(P))$. Therefore, there exist $u_1,u_2 \in V(G)\setminus V(P)$ such that $C(u_1u_2)\notin C(P)\cup \{C(v_0v_l),C(v_0v_{t+1}),C(v_{t-1}v_l)\}$. By Lemma \[L1\], there exists some vertex $v\in V(P)$ such that $C(u_1v)=C(v_0v_l)$; denote it by $v_{i_0}$. We can conclude from Proposition \[P6\] that $i_0\neq t$. Note that $C'=v_0v_{t+1}Pv_lv_{t-1}P^{-1}v_0$ is a rainbow cycle of length $l$ on which the color $C(v_0v_l)$ does not appear. Therefore, $u_2u_1v_{i_0}C'$ contains a rainbow path of length $l+1$, a contradiction. [**Case 1.2.
$C(v_0v_{t+1})=C(v_lv_{t-1})\notin C(P)\cup C(v_0v_l)$.**]{} First, we can conclude that $C(v_{t-1}u)\neq C(v_0v_l)$ for any vertex $u \in V(G)\setminus V(P)$. Otherwise, suppose there exists some $u\in V(G)\setminus V(P)$ such that $C(v_{t-1}u)=C(v_0v_l)$. Since $|V(G)\setminus V(P)| >5$, there exists a vertex $u_1\in V(G)\setminus(V(P)\cup \{u\})$ such that $C(uu_1)\notin C(P)\cup \{C(v_0v_l), C(v_0v_{t+1})\}$. Therefore, $u_1uv_{t-1}P^{-1}v_0v_{t+1}Pv_l$ is a rainbow path of length $l+1$, a contradiction. Next, we will show that $t-1\notin I\cup J$. If $t-1\in I$, i.e., $C(v_0v_t)\notin C(P)\cup \{C(v_0v_l), C(v_0v_{t+1})\}$, then $C'=v_0Pv_{t-1}v_lP^{-1}v_tv_0$ is a rainbow cycle of length $l+1$ without the color $C(v_0v_l)$. On the other hand, by Lemma \[L1\] there exist a vertex $u\in V(G)\setminus V(P)$ and a vertex $v_{i_0}\in V(P)$ such that $C(uv_{i_0})=C(v_0v_l)$. Then $uv_{i_0}C'$ contains a rainbow path of length $l+1$, a contradiction. If $t-1\in J$, i.e., $C(v_{t-2}v_l)\notin C(P)\cup \{C(v_0v_l), C(v_{t-1}v_l)\}$, then $C'=v_0Pv_{t-2}v_lP^{-1}v_{t+1}v_0$ is a rainbow cycle of length $l$ without the color $C(v_0v_l)$. Since $|V(G)\setminus V(P)|>5$, for any vertex $u\in V(G)\setminus V(P)$, $d^c_{G\setminus P}(u)\geq 5$. So, by Theorem \[2/3 k\] there exists a rainbow path $u_1u_2u_3$ in $G\setminus P$ none of whose colors is in $C(P)\cup \{C(v_0v_l), C(v_0v_{t+1}), C(v_{t-2}v_l)\}$. Since $G$ is properly edge-colored, at least one edge in $\{v_tu_1, v_tu_3\}$ does not have color $C(v_0v_l)$; w.l.o.g., assume $C(v_tu_1)\neq C(v_0v_l)$. Then, because $C(v_{t-1}u_1)\neq C(v_0v_l)$ and $C(u_1,P)=C(P)\cup C(v_0v_l)$, by Lemma \[L1\] there exists some $i_0$, $0\leq i_0\leq l$, $i_0\neq t-1,t$, such that $C(u_1v_{i_0})=C(v_0v_l)$. Then $u_3u_2u_1v_{i_0}C'$ contains a rainbow path of length $l+1$, a contradiction. So, we have $t-1\notin I\cup J$. Let $K=I\cap J$, $I'=(I\setminus K)\cup \{t-1|t\in K\}$. Then $|I'|=|I|$ and $I'\cap J =\emptyset$.
Additionally, for any $t\in I'\cup J$ and any $u\in V(G)\setminus V(P)$, $C(v_tu)\neq C(v_0v_l)$. Otherwise, there exist some $t_0\in K$ and some vertex $u\in V(G)\setminus V(P)$ such that $C(v_{t_0-1}u)=C(v_0v_l)$. Since $|V(G)\setminus V(P)|\geq 6$, there exists some vertex $u_1\in V(G)\setminus V(P)$ such that $C(uu_1)\notin C(P)\cup \{C(v_0v_l), C(v_0v_{t_0+1})\}$. Then $u_1uv_{t_0-1}P^{-1}v_0v_{t_0+1}Pv_l$ is a rainbow path of length $l+1$, a contradiction. On the other hand, $$|I'\cup J|=|I'|+|J|=|I|+|J|\geq 2[(n-1)-(l+1)]=2(n-l-2),$$ and $|V(G)\setminus V(P)|=n-(l+1)=n-l-1$. So there are at least $n-l-1$ $i$’s ($1\leq i\leq l-1$) such that $C(uv_i)=C(v_0v_l)$ for some $u\in V(G)\setminus V(P)$. So we have $|I'\cup J|+(n-l-1)\leq l-1$, and then $2(n-l-2)+n-l-1\leq l-1$, which implies $l\geq \frac{3}{4}n-1$, a contradiction. [**Case 2. $I\cap J=\emptyset$.**]{} By Proposition \[P6\], we have that for any $t\in I\cup J$ and any $u\in V(G)\setminus V(P)$, $C(v_tu)\neq C(v_0v_l)$. On the other hand, there are at least $|V(G)\setminus V(P)|=n-l-1$ $i$’s ($1\leq i\leq l-1$) such that $C(uv_i)=C(v_0v_l)$ for some $u\in V(G)\setminus V(P)$. So we have $|I\cup J|+(n-l-1)\leq l-1$, and then $2(n-l-2)+n-l-1\leq l-1$, which implies $l\geq \frac{3}{4}n-1$, a contradiction. This completes the proof. A biggest rainbow cycle has fewer vertices than a longest rainbow path ===================================================================== Since a biggest rainbow cycle has fewer vertices than a longest rainbow path, we have $C(v_0v_l)\in C(P)$. For any longest rainbow path $P$, by Proposition \[P1\] and Theorem \[T1\], there exist $2\leq i_1<i_2<\cdots<i_{t_1}<l$ ($t_1\geq n-1-l$) such that $$|\{C(v_0v_{i_1}), C(v_0v_{i_2}),\cdots, C(v_0v_{i_{t_1}})\}|= |CN(v_0)\setminus C(P)|=t_1.$$ Now we will distinguish two cases: the case when there is a vertex $u\in V(G)\setminus V(P)$ such that $C(v_lu)=C_1$, and the case when there is no such vertex.
We first consider the case when there is a vertex $u\in V(G)\setminus V(P)$ such that $C(v_lu)=C_1$. \[T2\] If $C(v_0v_l)\in C(P)$ and there is a vertex $u\in V(G)\setminus V(P)$ such that $C(v_lu)=C_1$, then $l\geq \displaystyle \frac{3}{4}n-\frac{1}{4}\sqrt{\frac{n}{2}-\frac{39}{16}}-\frac{11}{16}$. Suppose $P$ is a longest rainbow path that minimizes $t_1$. We can conclude from Proposition \[P5\] that $C_{i_k}\notin C(v_l,V(G)\setminus V(P))$, $k=1,2,\cdots, t_1$. Let $C^0=\{C_{i_k}| k=1,2,\cdots, t_1\}$, $C_j^0=CN(v_{i_j-1})\setminus(C(P)\cup C(v_0v_{i_j}))$. Let the color sets $C_j^1$, $C_j^*$ ($j=1,2,\cdots,t_1$) be defined by the following procedure: for $j=1$ to $t_1$, set $C_j^*=\emptyset$; for $s=1$ to $i_j-3$, if $C(v_{i_j-1}v_s)\in C_j^0$, let $C_j^*=C_j^*\cup\{C_{s+1}\}$; for $s=i_{j+1}$ to $l-1$, if $C(v_{i_j-1}v_s)\in C_j^0$, let $C_j^*=C_j^*\cup\{C_s\}$; finally, let $C_j^1=C_{j-1}^1\cup C_j^*$. Then we can conclude that $|C_j^*|=|C_j^0|\geq t_1-1$ by Proposition \[P1\]. Suppose $|C_{t_1}^1|-|C^0|=j_0$ and $j\geq j_0+2$. Let $C_{j,1}=\{C(v_{i_t-1}v_{i_t})| t>j \mbox{~ and~} C(v_{i_j-1}v_{i_t})\in C_j^0\}$, $C_{j,2}=\{C(v_{i_t-1}v_{i_t})|t<j \mbox{~ and ~} C(v_{i_j-1}v_{i_t-1})=C(v_0v_{i_t})\}$, $C_{j,3}=\{C(v_{i_t-1}v_{i_t})| t<j \mbox{~ and ~} C(v_{i_j-1}v_{i_t-1})\in C_j^1\setminus C(v_0v_{i_t})\}$. Then $C_{j,1}$, $C_{j,2}$, $C_{j,3}$ are pairwise disjoint and $C_j^*\cap C^0=C_{j,1}\cup C_{j,2}\cup C_{j,3}$. By the definition, $|C_{j,1}|\leq t_1-j$, $\displaystyle \bigcup\limits_{j=j_0+2}^{t_1} C_{j,2}\subseteq \{C_{i_1}, C_{i_2},\cdots, C_{i_{t_1-1}}\}$ and $C_{j,2}\cap C_{j',2}=\emptyset$, since $G$ is properly edge-colored. Since $C(v_lu)=C_1$, we have $C_{j,3}=\emptyset$; otherwise, $v_2Pv_{i_t-1}v_{i_j-1}P^{-1}v_{i_t}v_0v_{i_j}Pv_lu$ is a rainbow path of length $l+1$, a contradiction. Therefore, $C_j^*\cap C^0=C_{j,1}\cup C_{j,2}$. On the other hand, $|C_j^*\setminus C^0|\leq |C_j^1\setminus C^0|\leq j_0$.
So, $|C_{j,2}|=|C_j^*\cap C^0|-|C_{j,1}|\geq (t_1-1-j_0)-(t_1-j)=j-j_0-1$. Notice that $\displaystyle \sum\limits_{j=j_0+2}^{t_1}|C_{j,2}|=\left|\bigcup\limits_{j=j_0+2}^{t_1}C_{j,2} \right|\leq t_1-1$. Then, we have $\displaystyle \sum\limits_{j=j_0+2}^{t_1} (j-j_0-1)\leq t_1-1$, i.e., $\displaystyle \frac{1}{2} \left( t_1^2 -2j_0t_1 -t_1 + j_0^2 +j_0 \right) \leq t_1-1$. Therefore, $\displaystyle j_0\geq t_1-\frac{1}{2}-\sqrt{2t_1-\frac{7}{4}}$ and $\displaystyle |C_{t_1}^1|=t_1+j_0\geq 2t_1-\frac{1}{2}-\sqrt{2t_1-\frac{7}{4}}$. Since $C(v_l, V(G)\setminus V(P))\subseteq C(P)\setminus (C_{t_1}^1\cup \{C_l\})$ and $G$ is properly edge-colored, $\displaystyle |V(G)\setminus V(P)|\leq l-\left(2t_1-\frac{1}{2}-\sqrt{2t_1-\frac{7}{4}}\right) -1$, i.e., $\displaystyle n-(l+1)\leq l-\left(2t_1-\frac{1}{2}-\sqrt{2t_1-\frac{7}{4}}\right) -1$. So, $\displaystyle 2t_1-\sqrt{2t_1-\frac{7}{4}}\leq 2l-n+\frac{1}{2}$. Since $f(x)=2x-\sqrt{2x-\frac{7}{4}}$ increases when $x>2$ and $t_1\geq n-l-1>2$, we have $$2(n-l-1)-\sqrt{2(n-l-1)-\frac{7}{4}}\leq 2l-n+\frac{1}{2}.$$ Therefore, $\displaystyle l\geq \frac{3}{4}n - \frac{1}{4}\sqrt{\frac{n}{2}-\frac{39}{16}}-\frac{11}{16}$. This completes the proof. Now we consider the case when for any longest rainbow path $P=v_0v_1v_2\cdots v_l$ and any $u\in V(G)\setminus V(P)$, $C(v_lu)\neq C_1$. \[L2\] If for any longest rainbow path $P=v_0v_1v_2\cdots v_l$ and any $u\in V(G)\setminus V(P)$, $C(v_lu)\neq C_1$, and there are at most two $j$’s satisfying $2\leq j\leq t_1$, $i_j-i_{j-1}\geq 2$, then $l\geq \frac{3n-4}{4}$. For any $j$ ($1\leq j\leq t_1$), $v_{i_j-1}P^{-1}v_0v_{i_j}Pv_l$ is a rainbow path. So by Proposition \[P5\] and the condition of this lemma we get that $\{C_{i_j-1}, C_{i_j}\}\cap C(v_l, V(G)\setminus V(P))=\emptyset$. Let $\displaystyle C^*=\bigcup\limits_{j=1}^{t_1}\{C_{i_j-1},C_{i_j}\}$. Then $|C^*|\geq 2t_1-2$, since there are at most two $j$’s satisfying $2\leq j\leq t_1$, $i_j-i_{j-1}\geq 2$.
On the other hand, $C(v_l, V(G)\setminus V(P))\subseteq C(P)\setminus (C^*\cup \{C_l\})$. So we have $$n-l-1\leq l-(2t_1-2)-1=l-2t_1+1\leq l-2(n-l-1)+1.$$ This implies that $\displaystyle l\geq \frac{3n-4}{4}$ and completes the proof. Then we can get the following conclusion. \[T3\] If $C(v_0v_l)\in C(P)$ and for any vertex $u\in V(G)\setminus V(P)$, $C(v_lu)\neq C_1$, then $l\geq \displaystyle \frac{3}{4}n-\frac{1}{4}\sqrt{\frac{n}{2}-\frac{39}{16}}-\frac{11}{16}$. Let $i_0=\min\{i| \exists u\notin V(P) \mbox{~s.t.~} C(v_lu)=C_i\}$. Suppose $P$ is one of the longest rainbow paths such that $i_0$ is the smallest. Let $j^*=\max\{j|i_j-i_{j-1}=1\}$. Then we have $i_0>i_{j^*}$; otherwise, $v_1Pv_{i_{j^*-1}}v_0v_{i_{j^*}}Pv_l$ is also a rainbow path of length $l$, but $C_{i_0}$ appears on the ($i_0-1$)-th edge of the path, a contradiction. Now we distinguish the following two cases. [**Case 1. $i_0< i_{t_1}$.**]{} Let the integer $j_0$ and the color sets $C_j^0$, $C_j^*$, $C_{j,1}$, $C_{j,2}$, $C_{j,3}$ be defined as in Theorem \[T2\]. Suppose $i_{j_1-1}<i_0<i_{j_1}$. Then we have that for any $j_1\leq j_2\leq t_1$, $\{C(v_{i_{j_2}-1}v_{i_t-1})| 1\leq t<j_1\}\cap C_{j_2}^0=\emptyset$. Otherwise, there exists $j_3<j_1\leq j_2$ such that $C(v_{i_{j_3}-1}v_{i_{j_2}-1})\in C_{j_2}^0$. Then, $v_{i_{j_3}}Pv_{i_{j_2}-1}v_{i_{j_3}-1}P^{-1}v_0v_{i_{j_2}}Pv_l$ is a rainbow path of length $l$, but the color $C_{i_0}$ appears on the $(i_0-i_{j_3})$-th edge of this path, a contradiction to the choice of $P$. If there exists $j_1\leq j_2<j_3$ such that $C(v_{i_{j_3}-1}v_{i_{j_2}-1})\notin \{C_{j_3}^0\cup C(v_0v_{i_{j_2}})\}$, then $v_1Pv_{i_{j_2}-1}v_{i_{j_3}-1}P^{-1}v_{i_{j_2}}v_0v_{i_{j_3}}Pv_l$ is a rainbow path of length $l$, but $C_{i_0}$ appears on the $(i_0-1)$-th edge of this path, a contradiction. Therefore, for any $j\geq j_1$, $C_{j,3}=\emptyset$, $C_{j,2}\subseteq\{C_{i_t}|j_1\leq t<t_1\}$. [**Case 1.1.
$j_1>j_0$.**]{} As in Theorem \[T2\], we can get that $\displaystyle \sum\limits_{j=j_1}^{t_1} (j-j_0-1)\leq \sum\limits_{j=j_1}^{t_1} |C_{j,2}| = \left|\bigcup\limits_{j=j_1}^{t_1} C_{j,2}\right|\leq t_1-j_1$. This implies that $(t_1-j_1+1)(j_1+t_1-2j_0-2)\leq 2(t_1-j_1)$. Therefore, $\displaystyle j_0\geq \frac{(t_1^2-3t_1)-(j_1^2-3j_1)+2j_1-2}{2(t_1-j_1+1)}>j_1-1$, a contradiction. [**Case 1.2. $j_1\leq j_0$.**]{} By the same calculation we did in Theorem \[T2\], we can conclude that $l\geq \displaystyle \frac{3}{4}n-\frac{1}{4}\sqrt{\frac{n}{2}-\frac{39}{16}}-\frac{11}{16}$. [**Case 2. $i_0>i_{t_1}$.**]{} If there are at most two $j$’s satisfying $2\leq j\leq t_1$, $i_j-i_{j-1}\geq 2$, then by Lemma \[L2\], $\displaystyle l\geq \frac{3n-4}{4}\geq \displaystyle \frac{3}{4}n-\frac{1}{4}\sqrt{\frac{n}{2}-\frac{39}{16}}-\frac{11}{16}$. So we will only consider the case when there are at least three $j$’s satisfying $2\leq j\leq t_1$, $i_j-i_{j-1}\geq 2$. Suppose there are exactly $k$ ($k\geq 3$) such $j$’s, say $s_1<s_2<\cdots<s_k$. Then for any integer $p$ ($1\leq p\leq k$), $v_1Pv_{i_{s_p}-1}v_0v_{i_{s_p}}Pv_l$ is a rainbow path of length $l$. Therefore, $$C(v_1,V(G)\setminus V(P))\subseteq (C(P)\setminus\{C_{i_{s_p}}\})\cup \{C(v_0v_{i_{s_p}-1}), C(v_0v_{i_{s_p}})\}.$$ Notice that $k\geq 3$, and so $\displaystyle \bigcap\limits_{p=1}^k \{C(v_0v_{i_{s_p}-1}), C(v_0v_{i_{s_p}})\}=\emptyset$, and then $C(v_1,V(G)\setminus V(P))\subseteq C(P)\setminus\{C_{i_{s_p}}\}$ for any $p$. Let $C^*=C(P)\setminus \left( \bigcup\limits_{p=1}^k \{C_{i_{s_p}}\}\cup\{C_1,C_2\}\right)$. Then $C(v_1,V(G)\setminus V(P))\subset C^*$. [**Case 2.1.
$|C^*\cap \{C_1,C_2,\cdots, C_{i_{t_1}}\}|<t_1$.**]{} $|C^*\cap \{C_1,C_2,\cdots, C_{i_{t_1}}\}|<t_1$ implies that $i_{t_1}-k-2<t_1$ and there exists a vertex $u\in V(G)\setminus V(P)$ such that $C(v_1u)=C_t$, where $t\geq i_{t_1}+[t_1-(i_{t_1}-k-2)]=t_1+k+2$ and it appears on the $(l-t+1)$-th edge of the rainbow path $v_lP^{-1}v_{i_{s_1}}v_0v_{i_{s_1}-1}P^{-1}v_1$ of length $l+1$. By the choice of $P$, we can conclude that $l-t+1\geq i_0>i_{t_1}$, i.e., $t\leq l-i_{t_1}+1$. Remember that $i_{t_1}\geq 2t_1-k$ and $t_1\geq n-l-1$, and so we have $t_1+k+2\leq t\leq l-i_{t_1}+1\leq l-2t_1+k+1$, i.e., $l\geq3t_1+1\geq 3n-3l-2$, and therefore $\displaystyle l\geq\frac{3n-2}{4}\geq \displaystyle \frac{3}{4}n-\frac{1}{4}\sqrt{\frac{n}{2}-\frac{39}{16}}-\frac{11}{16}$. [**Case 2.2. $|C^*\cap \{C_1,C_2,\cdots, C_{i_{t_1}}\}|\geq t_1$.**]{} Suppose $C_t$ is the $t_1$-th color in $C^*$, $i_{j_0-1}< t\leq i_{j_0}$ and there are $k_1$ $j$’s in the set $\{2,\cdots,j_0-1\}$ satisfying $i_j-i_{j-1}=1$. Then we can conclude that $t=t_1+k_1+2$ and $t> 2(j_0-1)-k_1-2=2j_0-k_1-4$. Since if $i_p-i_{p-1}>1$ then $|C^*\cap\{C_{i_{p-1}+1},\cdots, C_{i_p}\}|\leq i_p-i_{p-1}$, we have $$\displaystyle t_1=i_1-2+\sum\limits_{\begin{subarray}{c} p\leq j_0-1\\i_p-i_{p-1}>1\end{subarray}}(i_p-i_{p-1})+t-i_{j_0-1}=t-k_1-2.$$ On the other hand, $i_{t_1}\geq i_{j_0}+2(t_1-j_0)-(k-k_1)=i_{j_0}+2t_1-2j_0-k+k_1\geq t+2t_1-2j_0-k+k_1$. By Lemma \[L2\] there is some integer $j$ satisfying $i_j-i_{j-1}=1$, and so $v_lP^{-1}v_{i_j}v_0v_{i_{j-1}}P^{-1}v_1$ is a rainbow path of length $l+1$ and $C_t$ appears on the $(l-t)$-th or $(l-t+1)$-th edge. Therefore, we have $i_0\leq l-t$ by the choice of $P$.
Then we have $l-t\geq i_0>i_{t_1}\geq t+2t_1-2j_0-k+k_1$, i.e., $$\begin{aligned} l-t_1-k-2 &\geq& 3t_1+2k_1-2j_0+2\\ & >& 3t_1+2k_1+(-t-k_1-4)+2\\ &= &3t_1-t+k_1-2\\ &= &3t_1-(t_1+k_1+2)+k_1-2\\ &= &2t_1-4\end{aligned}$$ So, $l\geq 3t_1+k-2\geq 3t_1-2\geq 3(n-l-1)-2$, which implies that $$\displaystyle l\geq \frac{3n-5}{4} \geq \displaystyle \frac{3}{4}n-\frac{1}{4}\sqrt{\frac{n}{2}-\frac{39}{16}}-\frac{11}{16}.$$ This completes the proof. Conclusion ========== By Theorems \[T1\], \[T2\] and \[T3\], we can easily get the following conclusions. For any properly edge-colored complete graph $K_n$ ($n\geq 20$), there is a rainbow path of length no less than $\displaystyle \frac{3}{4}n-\frac{1}{4}\sqrt{\frac{n}{2}-\frac{39}{16}}-\frac{11}{16}$. For any properly edge-colored complete graph $K_n$ ($n\geq 20$), there is a rainbow path of length no less than $\displaystyle (\frac{3}{4}-o(1))n$. [99]{} M. Albert, A. Frieze and B. Reed, Multicolored Hamilton cycles, [*Electronic J. Combin.*]{} [**2**]{}(1995), $\sharp$R10. M. Axenovich, T. Jiang and Zs. Tuza, Local anti-Ramsey numbers of graphs, [*Combin. Probab. Comput.*]{} [**12**]{}(2003), 495-511. J.A. Bondy and U.S.R. Murty, Graph Theory with Applications, Macmillan London and Elsevier, New York (1976). H.J. Broersma, X. Li, G. Woeginger and S. Zhang, Paths and cycles in colored graphs, [*Australasian J. Combin.*]{} [**31**]{}(2005), 297-309. H. Chen and X. Li, Long heterochromatic paths in edge-colored graphs, [*Electron. J. Combin.*]{} [**12(1)**]{}(2005), $\sharp$R33. H. Chen and X. Li, Color degree and color neighborhood union conditions for long heterochromatic paths in edge-colored graphs, arXiv:math.CO/0512144 v1 7 Dec 2005. H. Chen and X. Li, Color neighborhood union conditions for long heterochromatic paths in edge-colored graphs, [*Electron. J. Combin.*]{} [**14**]{}(2007), $\sharp$R77. W.S. Chou, Y. Manoussakis, O. Megalaki, M. Spyratos and Zs. Tuza, Paths through fixed vertices in edge-colored graphs, [*Math.
Inf. Sci. Hun.*]{} [**32**]{}(1994), 49-58. P. Erdös and Zs. Tuza, Rainbow Hamiltonian paths and canonically colored subgraphs in infinite complete graphs, [*Mathematica Pannonica*]{} [**1**]{}(1990), 5-13. P. Erdös and Zs. Tuza, Rainbow subgraphs in edge-colorings of complete graphs, [*Ann. Discrete Math.*]{} [**55**]{}(1993), 81-88. A.M. Frieze and B.A. Reed, Polychromatic Hamilton cycles, [*Discrete Math.*]{} [**118**]{}(1993), 69-74. A. Gyárfás and M. Mhalla, Rainbow and orthogonal paths in factorizations of $K_n$, [*J. Combin. Designs*]{} [**18**]{}(2010), 167-176. A. Gyárfás and G. Simonyi, Edge colorings of complete graphs without tricolored triangles, [*J. Graph Theory*]{} [**46**]{}(2004), 211-216. G. Hahn and C. Thomassen, Path and cycle sub-Ramsey numbers and edge-coloring conjecture, [*Discrete Math.*]{} [**62(1)**]{}(1986), 29-33. Y. Manoussakis, M. Spyratos and Zs. Tuza, Cycles of given color patterns, [*J. Graph Theory*]{} [**21**]{}(1996), 153-162. Y. Manoussakis, M. Spyratos, Zs. Tuza and M. Voigt, Minimal colorings for properly colored subgraphs, [*Graphs and Combin.*]{} [**12**]{}(1996), 345-360. [^1]: Supported by NSFC No.10901035 and No.11371205.
--- abstract: 'We study the reduced fidelity susceptibility $\chi_{r}$ for an $M$-body subsystem of an $N$-body Lipkin-Meshkov-Glick model with $\tau=M/N$ fixed. The reduced fidelity susceptibility can be viewed as the response of the subsystem to a certain driving parameter. In the noncritical region, the inner correlation of the system is weak, and $\chi_{r}$ behaves similarly to the global fidelity susceptibility $\chi_{g}$: the ratio $\eta=\chi_{r}/\chi_{g}$ depends on $\tau$ but not on $N$. However, at the critical point, the inner correlation tends to be divergent; we then find that $\chi_{r}$ approaches $\chi_{g}$ as $N$ increases, with $\eta=1$ in the thermodynamic limit. The analytical predictions are in perfect agreement with the numerical results.' author: - Jian Ma - Xiaoguang Wang - 'Shi-Jian Gu' title: 'Many-body reduced fidelity susceptibility in Lipkin-Meshkov-Glick model' --- Introduction ============ A quantum phase transition (QPT) [@sachdev] occurs at absolute zero temperature and is driven purely by quantum fluctuations. Conventionally, it was studied within the Landau paradigm of order parameters, in the framework of statistical and condensed matter physics. Recently, two quantum-information [@nilesen] concepts, entanglement [@xgwPRA64; @vidal03; @Latorre04; @Osterloh; @vidal04; @Sebastien04; @Latorre05; @BarthelPRA74; @BarthelPRL97; @Roman08; @Cui08] and fidelity [@HTQuan06; @PZanardi06; @Buonsante07; @PZanardi0606130; @PZanardi07; @WLYou07; @HQZhou07; @LCVenuti07; @SJGu07; @SChen07; @WQNing07; @MFYang07; @NPaunkovic07; @KwokPRE78; @MaPRE78], have been investigated extensively in QPTs and are recognized as effective and powerful tools for detecting the critical point. The former measures quantum correlations between partitions, while the latter measures the distance in quantum state space.
Therefore, their success in characterizing QPTs can be understood through the universality of the critical behavior itself, that is, the divergence of correlations and the dramatic change of the ground-state structure. Furthermore, as the fidelity depends on an arbitrarily small change of the driving parameter, Zanardi *et al*. suggested the Riemannian metric tensor [@PZanardi07], while You *et al*. suggested the fidelity susceptibility [@WLYou07]; both focus on the leading term of the fidelity. In the following, we mainly consider the fidelity susceptibility (FS). Until now, most efforts have been devoted to the study of the global ground-state fidelity susceptibility (GFS), denoted by $\chi_{g}$, which reflects the susceptibility of the system in response to the change of a certain driving parameter. In this work, we study the response of a subsystem through its FS, the so-called reduced fidelity susceptibility (RFS), denoted by $\chi_{r}$. Some special cases have been studied in Refs. [@HQZhou07; @NPaunkovic07; @KwokPRE78; @MaPRE78], where the subsystems are only one-body or two-body, while in this paper we will study an arbitrary $M$-body subsystem. The physical motivation for investigating the RFS is clear. Firstly, it reveals information about the change of the inner structure of a system that undergoes a QPT. Secondly, owing to interactions and correlations, a general quantum system is not the simple sum of its different parts, especially in the critical region, where the entanglement entropy is divergent [@vidal03; @Latorre04; @BarthelPRL97]. Therefore it is significant to investigate the behavior of the RFS, as well as the effects of entanglement on it, in both critical and noncritical regions. Our study can thus be viewed as a connection between the FS and the entanglement entropy.
To study this question, we consider an $N$-body Lipkin-Meshkov-Glick (LMG) model [@Lipkin] and study the RFS of its $M$-body subsystem. As $0\leq\chi_{r}\leq\chi_{g}$ [@MaPRE78], we consider a more useful quantity, $\eta=\chi_{r}/\chi_{g}$, and thus $\eta\in\lbrack0,1]$. We find that the behaviors of the RFS, as well as $\eta$, are quite different in the noncritical and critical regions. In the noncritical region, the entanglement entropy is saturated by a finite upper bound and the inner correlation is small; thus the RFS behaves similarly to the GFS, and the ratio $\eta$ depends on $\tau=M/N$ but not on $N$. However, at the critical point, the entanglement entropy tends to diverge with increasing system size, and the inner correlations are very strong. We then find that the RFS approaches the GFS as $N$ increases, and $\eta=1$ in the thermodynamic limit for $\tau\neq 0$. These results can be understood by considering the divergence of correlations in second-order QPTs, which is reflected by the entanglement entropy. This paper is organized as follows. In Sec. II, we introduce the LMG model and give a brief review of the GFS studied in [@KwokPRE78]. Then in Sec. III, we derive the RFS in the thermodynamic limit and obtain its divergent form in the vicinity of the critical point. We then perform numerical computations, and the results are in perfect agreement with our analytical predictions. LMG model and global fidelity susceptibility ============================================ The LMG model was originally introduced in nuclear physics and has found applications in a broad range of other topics: the statistical mechanics of quantum spin systems [@BotetPRL49], Bose-Einstein condensates [@Cirac], magnetic molecules such as Mn$_{12}$ acetate [@Garanin], as well as quantum entanglement [@VidalPRA69] and quantum fidelity [@KwokPRE78; @MaPRE78].
It is an exactly solvable [@PanPLB451; @LinksPRA36] many-body interacting quantum system, as well as one of the simplest to show a quantum phase transition in the regime of strong coupling. The quantum phase transition of this model can be described by the symmetry-breaking mechanism; the two phases are associated with either collective or single-particle behavior. The Hamiltonian of the LMG model reads$$H=-\frac{\lambda}{N}\left( S_{x}^{2}+\gamma S_{y}^{2}\right) -hS_{z},$$ where $S_{\alpha}=\sum_{i=1}^{N}\sigma_{\alpha}^{i}/2$ ($\alpha=x,y,z$) are the collective spin operators; $\sigma_{\alpha}^{i}$ are the Pauli matrices; $N$ is the total spin number; $\gamma$ is the anisotropic parameter. $\lambda$ and $h$ are the spin-spin interaction strength and the effective external field, respectively. Here, we focus on the ferromagnetic case ($\lambda>0$), and without loss of generality, we set $\lambda=1$ and $0\leq\gamma\leq1$. As the spectrum is invariant under the transformation $h\leftrightarrow-h$, we only consider $h\geq0$. This system undergoes a second-order QPT at $h=1$, between a symmetric (polarized, $h>1$) phase and a broken (collective, $h<1$) phase, which is well described by a mean-field approach [@DusuelPRB71]. The classical state is fully polarized in the field direction $\left( \left\langle \sigma_{z}^{i}\right\rangle =1\right) $ for $h>1$, and is twofold degenerate with $\left\langle \sigma_{z}^{i}\right\rangle =h$ for $h<1$. Before deriving the RFS, we give a brief review of the GFS of the LMG model that has been studied in Ref.
[@KwokPRE78], where the authors employed the Holstein-Primakoff transformation and derived the GFS for both phases in the thermodynamic limit, $$\chi_{g}\left( h,\gamma\right) =\left\{ \begin{aligned} &\frac{N}{4\sqrt{\left( 1-h^{2}\right) \left( 1-\gamma\right) }} +\frac{h^{2}\left( h^{2}-\gamma\right) ^{2}}{32\left( 1-\gamma\right) ^{2}\left( 1-h^{2}\right) ^{2}}, &\text{for}\quad&0\leq h<1,\\ &\frac{\left( 1-\gamma\right) ^{2}}{32\left( h-\gamma\right) ^{2}\left( h-1\right) ^{2}}, &\text{for}\quad& h \ge 1. \end{aligned}\right. \label{gfs}$$ It has been found that when $h<1$ the GFS increases with $N$ and can be viewed as an extensive quantity; when $h>1$, however, the GFS saturates at an upper bound, i.e., it is intensive. Reduced fidelity susceptibility =============================== Thermodynamic limit ------------------- Now we give some basic formulas for the fidelity and its susceptibility. As the subsystem is represented by a mixed state, we introduce the Uhlmann fidelity [@Uhlmann], $$F\left( \rho,\tilde{\rho}\right) \equiv \text{tr}\sqrt{\rho^{1/2}\tilde{\rho}\rho^{1/2}},$$ where $\rho\equiv\rho\left( h\right) $ and $\tilde{\rho}\equiv\rho\left( h+dh\right) $ with a certain parameter $h$. If $dh$ tends to zero, the two states are close in parameter space, and their Bures distance [@Bures69] is$$ds_{B}^{2}=2\left[ 1-F\left( \rho,\tilde{\rho}\right) \right] .$$ In the basis of $\rho$, denoted by $\left\{ |\psi_{i}\rangle\right\} $, the Bures distance can be written as [@SommerJPA]$$ds_{B}^{2}=\frac{1}{4}\sum_{n=1}^{N}\frac{dp_{n}^{2}}{p_{n}}+\frac{1}{2}\sum_{n\neq m}^{N}\frac{\left( p_{n}-p_{m}\right) ^{2}}{p_{n}+p_{m}}\left\vert \langle\psi_{n}|d\psi_{m}\rangle\right\vert ^{2},$$ where $p_{i}$ are the eigenvalues of $\rho$ and $N$ is the dimension of $\rho$.
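The Uhlmann fidelity and the Bures expansion above can be checked numerically on a toy commuting family of states (the qubit family $\rho(h)$ below is an illustrative assumption, not an LMG state; for commuting $\rho$ and $\tilde{\rho}$ only the first sum in $ds_{B}^{2}$ survives):

```python
import numpy as np

def psd_sqrt(m):
    # square root of a Hermitian positive semidefinite matrix via eigendecomposition
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def uhlmann_fidelity(rho, sigma):
    # F(rho, sigma) = tr sqrt(rho^{1/2} sigma rho^{1/2})
    s = psd_sqrt(rho)
    return np.trace(psd_sqrt(s @ sigma @ s)).real

def rho(h):
    # toy commuting family: rho(h) = diag(p, 1-p) with p = (1 + tanh h)/2
    p = 0.5 * (1.0 + np.tanh(h))
    return np.diag([p, 1.0 - p])

h, dh = 0.3, 1e-4
F = uhlmann_fidelity(rho(h), rho(h + dh))
bures2 = 2.0 * (1.0 - F)                       # ds_B^2 = 2(1 - F)

# commuting case: ds_B^2 is approximately (1/4) * sum_n dp_n^2 / p_n
p = 0.5 * (1.0 + np.tanh(h))
dp = 0.5 / np.cosh(h)**2 * dh
expected = 0.25 * (dp**2 / p + dp**2 / (1.0 - p))
print(bures2, expected)
```

For non-commuting families the second sum contributes as well; this sketch only validates the classical part of the metric.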
As the FS is the leading term of the fidelity, i.e., $F=1-\chi\delta^{2}/2$ with $\delta=dh$, we can get the FS for $h$ immediately,$$\chi\left( h\right) =\frac{1}{4}\sum_{n=1}^{N}\frac{\left( \partial _{h}p_{n}\right) ^{2}}{p_{n}}+\frac{1}{2}\sum_{n\neq m}^{N}\frac{\left( p_{n}-p_{m}\right) ^{2}}{p_{n}+p_{m}}\left\vert \langle\psi_{n}|\partial _{h}\psi_{m}\rangle\right\vert ^{2}, \label{chi}$$ where $\partial_{h}:=\partial/\partial h$. In our study, $\rho$ and $\tilde{\rho}$ are just the reduced density matrices of ground states. In what follows, the $N$-body LMG model is divided into two parts, $A$ and $B$, with sizes $M$ and $N-M$, respectively. We will study the RFS of subsystem $A$, whose reduced density matrix is $\rho_{A}$. This study gives a connection between the RFS and the entanglement entropy [@BarthelPRL97]. As we know, entanglement reflects the correlation among inner partitions, and our study will reveal the effects of these correlations on the RFS, especially at the critical point. Now we introduce the total spin operators for the two subsystems, $S_{\alpha}^{A,B}=\sum_{i\in A,B}\sigma_{\alpha}^{i}/2$. To describe quantum fluctuations, it is convenient to use the Holstein-Primakoff representation of the spin operators [@HolsteinPR58], and the first step is to rotate the $z$ axis along the semiclassical magnetization$$\begin{pmatrix} S_{x}\\ S_{y}\\ S_{z}\end{pmatrix} =\begin{pmatrix} \cos\theta_{0} & 0 & \sin\theta_{0}\\ 0 & 1 & 0\\ -\sin\theta_{0} & 0 & \cos\theta_{0}\end{pmatrix}\begin{pmatrix} \tilde{S}_{x}\\ \tilde{S}_{y}\\ \tilde{S}_{z}\end{pmatrix} .\label{rotate}$$ As presented in [@DusuelPRB71], $\theta_{0}=0$ for $h>1$ so that $\mathbf{S}=\mathbf{\tilde{S}}$, and $\theta_{0}=\arccos h$ for $h\leq1$.
The Holstein-Primakoff representation is then applied to the rotated spin operators$$\begin{aligned} \tilde{S}_{z}^{A} & =M/2-a^{\dagger}a,\nonumber\\ \tilde{S}_{-}^{A} & =\sqrt{M}a^{\dagger}\sqrt{1-a^{\dagger}a/M}=\left( \tilde{S}_{+}^{A}\right) ^{\dagger},\nonumber\\ \tilde{S}_{z}^{B} & =\left( N-M\right) /2-b^{\dagger}b,\nonumber\\ \tilde{S}_{-}^{B} & =\sqrt{N-M}b^{\dagger}\sqrt{1-b^{\dagger}b/\left( N-M\right) }=\left( \tilde{S}_{+}^{B}\right) ^{\dagger},\end{aligned}$$ where $a$ ($a^{\dagger}$) and $b$ ($b^{\dagger}$) are bosonic annihilation (creation) operators for subsystems $A$ and $B$, respectively, and $S_{\pm}^{A,B}=S_{x}^{A,B}\pm iS_{y}^{A,B}$. After this transformation, the LMG Hamiltonian is mapped onto a system of two interacting bosonic modes $a$ and $b$. For fixed $\tau=M/N$, the Hamiltonian can be expanded in $1/N$. Up to the order $\left( 1/N\right) ^{0}$, one gets $H=NH^{(-1)}+H^{\left( 0\right) }+O\left( 1/N\right) $ with $H^{\left( -1\right) }=(m^{2}-1-2hm)/4$, where $m=\cos\theta_{0}$, and$$H^{(0)}=-\frac{1+\gamma}{4}+\mathbf{A}^{\dagger}\mathbf{VA}^{T}+\frac{1}{2}\left[ \mathbf{A}^{\dagger}\mathbf{W}\left( \mathbf{A}^{\dagger}\right) ^{T}+h.c.\right] \label{bosonic_ham}$$ where $\mathbf{A=}\left( a,b\right) $, and$$\begin{aligned} \mathbf{V} & \mathbf{=}\frac{2hm+2-3m^{2}-\gamma}{2}\mathbb{I}\nonumber\\ \mathbf{W} & \mathbf{=}\frac{\gamma-m^{2}}{2}\begin{pmatrix} \tau & \sqrt{\tau\left( 1-\tau\right) }\\ \sqrt{\tau\left( 1-\tau\right) } & 1-\tau \end{pmatrix} ,\end{aligned}$$ where $\mathbb{I}$ is a $2\times2$ identity matrix; $m=h$ in the broken phase and $m=1$ in the symmetric phase. The bosonic Hamiltonian can be diagonalized by a Bogoliubov transformation and is useful in deriving the reduced density matrix.
As shown in [@bombelli; @PreschelJPA32; @PeschelJPA36], the reduced density matrix for eigenstates of a quadratic form can always be written as $\rho_{A}=e^{-\mathcal{H}}$ with$$\mathcal{H}=\kappa_{0}+\kappa_{1}a^{\dagger}a+\kappa_{2}\left( a^{\dagger 2}+a^{2}\right) .$$ $\kappa_{i}$ ($i=0,1,2$) can be determined by using [@BarthelPRL97]$$\text{tr}\rho_{A}=1,~\text{tr}\left( \rho_{A}a^{\dagger}a\right) =\left\langle a^{\dagger}a\right\rangle ~\text{and}~\text{tr}\left( \rho _{A}a^{\dagger2}\right) =\left\langle a^{\dagger2}\right\rangle ,$$ where $\left\langle \Omega\right\rangle =\left\langle \psi_{g}|\Omega|\psi _{g}\right\rangle $ and $|\psi_{g}\rangle$ is the ground state. Then we can diagonalize $\rho_{A}$ by a Bogoliubov transformation. However, in this paper we will adopt another method to diagonalize $\rho_{A}$; as shown in Ref. [@BarthelPRA74], $\rho_{A}$ is written in the bosonic coherent state representation$$\begin{aligned} \langle\phi|\rho_{A}|\phi^{\prime}\rangle & =K\exp\left[ \frac{1}{4}\left( \phi^{\ast}+\phi^{\prime}\right) \frac{G^{++}-1}{G^{++}+1}\left( \phi^{\ast }+\phi^{\prime}\right) \right] \\ & \times\exp\left[ \frac{1}{4}\left( \phi^{\ast}-\phi^{\prime}\right) \frac{G^{--}+1}{G^{--}-1}\left( \phi^{\ast}-\phi^{\prime}\right) \right] ,\end{aligned}$$ where $a|\phi\rangle=\phi|\phi\rangle$; $K=\sqrt{\left( 1+G^{++}\right) \left( 1-G^{--}\right) }$ is determined by the normalization of $\rho_{A}$; $G^{++}$ and $G^{--}$ are Green’s functions defined as$$\begin{aligned} G^{++} & =\langle\left( a^{\dagger}+a\right) ^{2}\rangle,\nonumber\\ G^{--} & =\langle\left( a^{\dagger}-a\right) ^{2}\rangle.\end{aligned}$$ Then $\rho_{A}$ can be diagonalized by the following Bogoliubov transformation, $$\begin{aligned} g & =\cosh\varphi a+\sinh\varphi a^{\dagger}\nonumber\\ & =\frac{P+Q}{2}a+\frac{P-Q}{2}a^{\dagger}\end{aligned}$$ with $PQ=1$, $PG^{++}=\mu Q$, and $QG^{--}=-\mu P$.
The Green’s functions can be obtained by diagonalizing the bosonic represented Hamiltonian (\[bosonic\_ham\]),$$\begin{aligned} G^{++} & =1+\left( 1/\alpha-1\right) \tau,\nonumber\\ G^{--} & =\left( 1-\alpha\right) \tau-1,\end{aligned}$$ where$$\alpha=\left\{ \begin{aligned} &\sqrt{\frac{h-1}{h-\gamma}} &\text{for}\quad&h\ge1,\\ &\sqrt{\frac{1-h^{2}}{1-\gamma}} &\text{for}\quad&0\le h<1. \end{aligned}\right.$$ The diagonalized $\rho_{A}$ reads$$\rho_{A}=\frac{2}{\mu+1}e^{-\varepsilon g^{\dagger}g},$$ where the pseudoenergy is $\varepsilon=\ln\left[ \left( \mu+1\right) /\left( \mu-1\right) \right] $ with $\mu=\alpha^{-1/2}\sqrt{\left[ \tau \alpha+\left( 1-\tau\right) \right] \left[ \tau+\alpha\left( 1-\tau\right) \right] }$. Now we can derive the RFS, of which the first term involves only the eigenvalues of $\rho_{A}$, while the second term involves both the eigenvalues and the eigenvectors. The eigenvectors of $\rho_{A}$ are the number states $|n\rangle$: $g^{\dagger}g|n\rangle=n|n\rangle$, and the term $\left\vert \langle\psi_{n}|\partial_{h}\psi_{m}\rangle\right\vert ^{2}=\left\vert \langle n|\partial_{h}m\rangle\right\vert ^{2}$ can be calculated by using$$\left\vert \langle n|\partial_{h}m\rangle\right\vert ^{2}=\frac{\left\vert \left\langle n|\partial_{h}g^{\dagger}g|m\right\rangle \right\vert ^{2}}{\left( m-n\right) ^{2}}.$$ Then we write the RFS explicitly,$$\chi_{r}\left( h,\gamma,\tau\right) =\frac{\left( \partial_{h}\mu\right) ^{2}}{4\left( \mu^{2}-1\right) }+\frac{\left( \mu\partial_{h}\varphi\right) ^{2}}{\mu^{2}+1}+\frac{N\tau}{4\mu}\left( \partial_{h}\theta_{0}\exp\varphi\right) ^{2}, \label{rfs}$$ where $\varphi=\text{arctanh}\left[ \left( \mu-G^{++}\right) /\left( \mu+G^{++}\right) \right] $, $\theta_{0}=\arccos h$ for $h\leq1$, and $\theta_{0}\equiv0$ for $h>1$. Thus the last term of the above expression only takes effect in the broken phase. We emphasize that in the broken phase $h<1$ the rotation (\[rotate\]) must be performed first.
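Eq. (\[rfs\]) can be evaluated directly in the symmetric phase, where $\theta_{0}\equiv0$ and only the first two terms survive. The plain-Python sketch below (finite-difference derivatives; the values of $\gamma$, $\tau$ and the step sizes are illustrative choices) extracts the divergence exponent of $\chi_{r}$ as $h\rightarrow1^{+}$:

```python
import math

def alpha(h, gamma):
    # symmetric phase h > 1 is assumed throughout this sketch
    return math.sqrt((h - 1.0) / (h - gamma))

def chi_r(h, gamma, tau):
    # RFS of Eq. (rfs) for h > 1 (theta_0 = 0, so the last term vanishes)
    def mu(x):
        a = alpha(x, gamma)
        return math.sqrt((tau * a + 1.0 - tau) * (tau + a * (1.0 - tau)) / a)
    def phi(x):
        a = alpha(x, gamma)
        gpp = 1.0 + (1.0 / a - 1.0) * tau
        return math.atanh((mu(x) - gpp) / (mu(x) + gpp))
    d = (h - 1.0) * 1e-5                     # finite-difference step << h - 1
    dmu = (mu(h + d) - mu(h - d)) / (2.0 * d)
    dphi = (phi(h + d) - phi(h - d)) / (2.0 * d)
    m = mu(h)
    return dmu**2 / (4.0 * (m**2 - 1.0)) + (m * dphi)**2 / (m**2 + 1.0)

# exponent of the divergence chi_r ~ (h-1)^x as h -> 1+; the expected value is -2
r = chi_r(1.0 + 1e-3, 0.0, 0.5) / chi_r(1.0 + 1e-2, 0.0, 0.5)
x = math.log(r) / math.log(1e-3 / 1e-2)
print(x)
```

The fitted exponent comes out close to $-2$, matching the intensive-phase scaling of the GFS in Eq. (\[gfs\]).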
\[ptb\] [fig1.eps]{} We can express it further as$$\chi_{r}\left( h,\gamma,\tau\right) =\left\{ \begin{aligned} &\chi+\frac{N\tau}{4G^{++}\left( 1-h^{2}\right) } &\text{for}\quad&0\leq h<1,\\ &\chi &\text{for}\quad& h \ge 1, \end{aligned}\right.$$ where$$\chi=\frac{\left( \partial_{h}\mu\right) ^{2}}{4\left( \mu^{2}-1\right) }+\frac{\mu^{2}}{4\left( \mu^{2}+1\right) }\left[ \partial_{h}\ln\left( -\frac{\mu}{G^{++}}\right) \right] ^{2}.$$ In the vicinity of the critical point, the RFS diverges as$$\begin{aligned} \chi_{r}/N\propto\left( 1-h\right) ^{-1/2}\text{, } & \text{for }0\leq h<1,\\ \chi_{r}\propto\left( 1-h\right) ^{-2}\text{, } & \text{for }h\geq1,\end{aligned}$$ which is the same as for $\chi_{g}$. Additionally, we show the entanglement entropy $\mathcal{E}=-$tr$\left( \rho\ln\rho\right) $ that was derived in [@BarthelPRL97; @BarthelPRA74], $$\mathcal{E}=\frac{\mu+1}{2}\ln\frac{\mu+1}{2}-\frac{\mu-1}{2}\ln\frac{\mu -1}{2}+x\ln2,$$ where $x=1$ when $h<1$ and $x=0$ when $h>1$; the $\ln2$ term comes from the two-fold degeneracy of the ground state in the broken phase, and this degeneracy is lifted for finite $N$. The entanglement entropy diverges as $\left( 1/4\right) \ln\left\vert h-1\right\vert $ around the critical point, and is nearly independent of $N$ in the noncritical region. \[ptbh\] [fig2.eps]{} Finite size cases ----------------- To perform numerical computations, we must derive the reduced density matrix $\rho_{A}$ in the finite-size case. The interactions of the LMG model are highly symmetric, and the ground state, which is a superposition of Dicke states, lies in the $J=N/2$ sector, $$|\psi_{g}\rangle=\sum_{m=0}^{N}C_{m}|J,-J+m\rangle,$$ where the coefficients $C_{m}$ are to be determined numerically. We wish to write $|J,-J+m\rangle$ in the form $|J_{A},m_{A}\rangle|J_{B},m_{B}\rangle $, where $J_{A}=M/2$ and $J_{B}=\left( N-M\right) /2$ correspond to the two local subsystems.
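This finite-size computation can be sketched in a few lines of numpy (an illustrative implementation; the values of $N$, $M$, $\gamma$ and $h$ are arbitrary choices): build the LMG Hamiltonian in the $J=N/2$ Dicke basis, take the lowest eigenvector as the coefficients $C_{m}$, and assemble $\rho_{A}$ from them with hypergeometric weights.

```python
import math
import numpy as np

def lmg_ground_state(N, gamma, h):
    # H = -(1/N)(Sx^2 + gamma*Sy^2) - h*Sz in the J = N/2 Dicke basis (lambda = 1)
    J = N / 2.0
    m = np.arange(-J, J + 1.0)                        # S_z eigenvalues, dimension N+1
    cp = np.sqrt((J - m[:-1]) * (J + m[:-1] + 1.0))   # S_+ |J,m> amplitudes
    Sp = np.diag(cp, -1)                              # entry (m+1, m) raises m
    Sx = 0.5 * (Sp + Sp.T)
    D = Sp - Sp.T                                     # = 2i Sy, so Sy^2 = -D@D/4
    H = -(Sx @ Sx - 0.25 * gamma * (D @ D)) / N - h * np.diag(m)
    w, v = np.linalg.eigh(H)
    return v[:, 0]                                    # C_m, m = 0..N (lowest state)

def reduced_density_matrix(C, N, M):
    # (rho_A)_{pq} = sum_m C_m C_{q+m-p} sqrt(H(p;N,M,m) H(q;N,M,q+m-p))
    def hyp(p, m):
        if p < 0 or p > M or m - p < 0 or m - p > N - M:
            return 0.0
        return math.comb(M, p) * math.comb(N - M, m - p) / math.comb(N, m)
    rho = np.zeros((M + 1, M + 1))
    for p in range(M + 1):
        for q in range(M + 1):
            s = 0.0
            for m in range(N + 1):
                mp = q + m - p
                if 0 <= mp <= N:
                    s += C[m] * C[mp] * math.sqrt(hyp(p, m) * hyp(q, mp))
            rho[p, q] = s
    return rho

N, M = 16, 8
C = lmg_ground_state(N, gamma=0.5, h=1.5)
rho_A = reduced_density_matrix(C, N, M)
print(np.trace(rho_A))   # should be 1 up to rounding
```

The trace of $\rho_{A}$ equals $\sum_{m}C_{m}^{2}=1$ because the hypergeometric weights sum to unity, which provides a convenient sanity check.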
Since $|J,-J+m\rangle=\sqrt{\left( 2J-m\right) !/\left[ \left( 2J\right) !m!\right] }\left( S_{+}\right) ^{m}|J,-J\rangle$ and the ladder operator $S_{+}=S_{+}^{A}+S_{+}^{B}$, the ground state is $$\begin{aligned} |\psi_{g}\rangle= & \sum_{m=0}^{N}\sum_{p=0}^{2J_{A}}C_{m}\sqrt {\text{H}\left( p;2J,2J_{A},m\right) }|J_{A},-J_{A}+p\rangle\nonumber\\ & \otimes|J_{B},-J_{B}+m-p\rangle\label{reducing of ground state}\end{aligned}$$ where $$\text{H}\left( p;2j,2j_{1},m\right) =\frac{\binom{2j_{1}}{p}\binom{2j_{2}}{m-p}}{\binom{2j}{m}}, \label{hypergeometric distrbution}$$ with $j_{2}=j-j_{1}$, is the so-called hypergeometric distribution function. The matrix elements of $\rho_{A}$ are $$\begin{aligned} \left( \rho_{A}\right) _{p,q}= & \sum_{m=0}^{N}C_{m}C_{q+m-p}^{\ast}\sqrt{\text{H}\left( p;2J,2J_{A},m\right) }\nonumber\\ & \times\sqrt{\text{H}\left( q;2J,2J_{A},q+m-p\right) }.\end{aligned}$$ By using the exact diagonalization method, the RFS as a function of $h$ for fixed $\tau$ is computed and shown in Fig. (\[fig1\]). As one can see, the peaks of the RFS approach the critical point and become sharper and sharper as $N$ increases. The RFS in the symmetric phase ($h>1$) has an upper bound; however, in the broken phase ($h<1$) the RFS increases with the total spin number $N$. Thus the RFS is extensive in the broken phase, in which the LMG model shows collective behavior, while it is intensive in the symmetric phase, in which the LMG model behaves like a single particle. This is similar to the GFS [@KwokPRE78]. As $0\leq\chi_{r}\leq\chi_{g}$, we will focus on a more useful quantity $\eta\left( \tau,h\right) \equiv\chi_{r}\left( h,\gamma,\tau\right) /\chi_{g}\left( h,\gamma\right) $ and study its properties in the critical and noncritical regions. With Eqs. (\[gfs\]) and (\[rfs\]), we find that in the thermodynamic limit$$\lim_{h\rightarrow1}\eta\left( \tau,h\right) =1,$$ for any non-vanishing $\tau$. To verify our prediction, we show the analytical and numerical results in Fig.
(\[fig2\]). As one can see, at the critical point the RFS approaches the global one, i.e., $\eta$ tends to $1$, and at the same time the entanglement entropy, i.e., the inner correlation between subsystems $A$ and $B$, diverges as $N$ increases. When $h$ is away from the critical region, the inner correlation decreases dramatically, and then $\eta$ depends on $\tau$ but not on the total system size $N$, as shown in Fig. (\[fig3\]). \[ptb\] [fig3.eps]{} As demonstrated in Ref. [@MaPRE78], when there are no correlations between partitions of a system, for example an $N$-body system represented by a product state that reads $$|\psi\left( h\right) \rangle=\bigotimes_{i=1}^{N}|\phi_{i}\left( h\right) \rangle,$$ if we denote a one-body reduced fidelity as $F_{r}$, the relation between the global and the reduced fidelities is$$F_{g}\left( h,\delta\right) =\prod_{i=1}^{N}F_{r}^{i}\left( h,\delta \right) ,$$ and thus we have $\chi_{g}=\sum_{i=1}^{N}\chi_{r}^{i}$; moreover, if the system is translation invariant, we have $\chi_{g}=N\chi_{r}$. If there is entanglement between the partitions, such results no longer hold; in particular, at the critical point the entanglement diverges, and then $\chi_{g}/\chi_{r}=1$ in the thermodynamic limit. This illustrates the effect of the inner correlations on the susceptibility of the system states. However, we stress that our results are based on a model of effectively infinite dimensionality: there are interactions between any two particles in the LMG model. It would be worthwhile to study the RFS of a contiguous block in a low-dimensional model, for example the $XY$ model, in which the interaction is only between neighboring sites. There, the correlation between a block and its complementary part acts only through the boundary, and the results for $\eta$ may be different. Conclusion ========== In conclusion, we have derived the RFS analytically in the thermodynamic limit for fixed $\tau$.
To analyze the effects of the inner correlations on the RFS, we study the ratio $\eta=\chi_{r}/\chi_{g}$ combined with the entanglement entropy in both the critical and noncritical regions. Our results give a clear picture for understanding the effects of correlations on the response. In the critical region, as $N$ increases the entanglement entropy tends to diverge and $\eta$ approaches $1$, while in the thermodynamic limit $\eta\equiv1$ for $\tau\neq0$. This indicates that the sensitivity of the subsystem is equal to that of the whole system. In the noncritical region, the RFS behaves similarly to the GFS, and $\eta$ depends on $\tau$ but not on $N$. [99]{} S. Sachdev, *Quantum Phase Transitions* (Cambridge University Press, Cambridge, England, 1999); M. Vojta, Rep. Prog. Phys. **66**, 2069 (2003). M. A. Nielsen and I. L. Chuang, *Quantum Computation and Quantum Information* (Cambridge University Press, Cambridge, England, 2000). Xiaoguang Wang, Phys. Rev. A **64**, 012313 (2001). A. Osterloh, L. Amico, G. Falci, and R. Fazio, Nature (London) **416**, 608 (2002). G. Vidal, J. I. Latorre, E. Rico, and A. Kitaev, Phys. Rev. Lett. **90**, 227902 (2003). J. I. Latorre, E. Rico, and G. Vidal, Quantum Inf. Comput. **4**, 048 (2004). J. Vidal, G. Palacios, and C. Aslangul, Phys. Rev. A **70**, 062304 (2004). Sébastien Dusuel and Julien Vidal, Phys. Rev. Lett. **93**, 237204 (2004). J. I. Latorre, R. Orús, E. Rico, and J. Vidal, Phys. Rev. A **71**, 064101 (2005). T. Barthel, S. Dusuel, and J. Vidal, Phys. Rev. Lett. **97**, 220402 (2006). T. Barthel, M. C. Chung, and U. Schollwöck, Phys. Rev. A **74**, 022329 (2006). R. Orús, S. Dusuel, and Julien Vidal, Phys. Rev. Lett. **101**, 025701 (2008). H. T. Cui, Phys. Rev. A **77**, 052105 (2008). H. T. Quan, Z. Song, X. F. Liu, P. Zanardi, and C. P. Sun, Phys. Rev. Lett. **96**, 140604 (2006). P. Zanardi and N. Paunkovic, Phys. Rev. E **74**, 031123 (2006). P. Buonsante and A. Vezzani, Phys. Rev. Lett. **98**, 110601 (2007).
P. Zanardi, M. Cozzini, and P. Giorda, J. Stat. Mech. **2**, L02002 (2007); M. Cozzini, P. Giorda, and P. Zanardi, Phys. Rev. B **75**, 014439 (2007); M. Cozzini, R. Ionicioiu, and P. Zanardi, *ibid.* **76**, 104420 (2007). P. Zanardi, P. Giorda, and M. Cozzini, Phys. Rev. Lett. **99**, 100603 (2007). W. L. You, Y. W. Li, and S. J. Gu, Phys. Rev. E **76**, 022101 (2007). H. Q. Zhou and J. P. Barjaktarevic, J. Phys. A: Math. Theor. **41**, 412001 (2008); H. Q. Zhou, J. H. Zhao, and B. Li, arXiv:0704.2940; H. Q. Zhou, arXiv:0704.2945. L. Campos Venuti and P. Zanardi, Phys. Rev. Lett. **99**, 095701 (2007). S. J. Gu, H. M. Kwok, W. Q. Ning, and H. Q. Lin, Phys. Rev. B **77**, 245109 (2008). S. Chen, L. Wang, S. J. Gu, and Y. Wang, Phys. Rev. E **76**, 061108 (2007). W. Q. Ning, S. J. Gu, Y. G. Chen, C. Q. Wu, and H. Q. Lin, J. Phys.: Condens. Matter **20**, 235236 (2008). M. F. Yang, Phys. Rev. B **76**, 180403(R) (2007); Y. C. Tzeng and M. F. Yang, Phys. Rev. A **77**, 012311 (2008). N. Paunkovic, P. D. Sacramento, P. Nogueira, V. R. Vieira, and V. K. Dugaev, Phys. Rev. A **77**, 052302 (2008). H. M. Kwok, W. Q. Ning, S. J. Gu, and H. Q. Lin, Phys. Rev. E **78**, 032103 (2008). J. Ma, L. Xu, H. N. Xiong, and X. Wang, Phys. Rev. E **78**, 051126 (2008). H. J. Lipkin, N. Meshkov, and A. J. Glick, Nucl. Phys. **62**, 188 (1965); **62**, 211 (1965). R. Botet, R. Jullien, and P. Pfeuty, Phys. Rev. Lett. **49**, 478 (1982). J. I. Cirac, M. Lewenstein, K. M[ø]{}lmer, and P. Zoller, Phys. Rev. A **57**, 1208 (1998). D. A. Garanin, X. Martinez Hídalgo, and E. M. Chudnovsky, Phys. Rev. B **57**, 13639 (1998). J. Vidal, G. Palacios, and R. Mosseri, Phys. Rev. A **69**, 022107 (2004). F. Pan and J. P. Draayer, Phys. Lett. B **451**, 1 (1999). J. Links, H. Q. Zhou, R. H. McKenzie, and M. D. Gould, J. Phys. A **36**, R63 (2003). S. Dusuel and J. Vidal, Phys. Rev. B **71**, 224420 (2005). T. Holstein and H. Primakoff, Phys. Rev. **58**, 1098 (1940). L. Bombelli, R. K. Koul, J. Lee and R. D.
Sorkin, Phys. Rev. D **34**, 373 (1986). I. Peschel and M. C. Chung, J. Phys. A **32**, 8419 (1999). I. Peschel, J. Phys. A **36**, L205 (2003). D. Bures, Trans. Am. Math. Soc. **135**, 199 (1969). A. Uhlmann, Rep. Math. Phys. **9**, 273 (1976); **24**, 229 (1986). H. J. Sommers and K. Zyczkowski, J. Phys. A **36**, 10083 (2003).
--- abstract: 'The problem of a particle confined in a box with moving walls is studied, focusing on the case of small perturbations which do not alter the shape of the boundary (pantography). The presence of resonant transitions involving the natural transition frequencies of the system and the Fourier transform of the velocity of the walls of the box is brought to light. The special case of a pantographic change of a circular box is analyzed in depth, also bringing to light the fact that the movement of the boundary cannot affect the angular momentum of the particle.' address: - 'Dipartimento di Fisica e Chimica, Università di Palermo, I-90123 Palermo, Italy' - 'Dipartimento di Fisica e Chimica, Università di Palermo, I-90123 Palermo, Italy' - 'Dipartimento di Fisica e Chimica, Università di Palermo, I-90123 Palermo, Italy' author: - Fabio Anzà - Antonino Messina - Benedetto Militello title: Resonant transitions due to changing boundaries --- Introduction {#sec:Introduction} ============ The resolution of Schrödinger equations with time-dependent Hamiltonian operators is challenging. In fact, exact resolutions are rare and limited to specific classes of problems [@ref:Barnes2012; @ref:Messina2014; @ref:Simeonov2014]. In most cases, instead, one can solve the dynamical problem only under special assumptions and with some approximations, as happens for an adiabatically changing Hamiltonian [@ref:Messiah] or in the presence of weak interactions which justify the use of a perturbative approach [@ref:Aniello2005; @ref:Militello2007; @ref:Rigolin2008; @ref:Zagury2010]. When the Hamiltonian time-dependence is periodic, special recipes based on Floquet theory can be used [@ref:Traversa2013; @ref:Moskalets2002; @ref:Shirley1965]. In the panorama of systems with time-dependent Hamiltonians, a special class is that of systems whose boundary conditions are time-dependent.
Such problems have been widely studied over the last decades in connection with the Casimir effect [@ref:CasimirReview; @ref:Wilson2010], but they have also been considered in connection with purely quantum mechanical questions, from the Fermi quantum bouncer [@ref:Fermi1949] to several works analyzing a free quantum particle in a box with moving walls [@ref:Doescher; @ref:Pinder; @ref:Schlitt; @ref:Dodonov]. The interest in such a class of problems is not only academic: the relevance of moving boundaries has been discussed in cavity quantum electrodynamics [@ref:Garraway2008] and in the physical scenario of trapped particles, in order to propose new strategies for cooling atoms [@ref:XiChen2009]. More recently, several other works have appeared, dealing with specific box shapes [@ref:Mousavi2012; @ref:Mousavi2013], aimed at giving a proper mathematical treatment of the Schrödinger problem in the presence of moving boundaries [@ref:DiMartino], and exploring the rise of correlations between different particles confined in the same non-static domain [@ref:Mousavi2014]. In this paper, we analyze the dynamics of a quantum particle confined in a box whose walls are moving, and in particular we focus on the possibility of exploiting moving boundaries to resonantly stimulate the system. Following the approach of our previous work [@ref:Anza2014], we recast the original problem into that of a particle in a static box governed by a time-dependent Hamiltonian. The time-dependence of the boundary is then converted into a time-dependence of the relevant effective Hamiltonian acting in the fictitious static domain. Of course, if the walls move periodically, the effective time-dependent Hamiltonian will be periodic too, with the same frequency.
Through the exploitation of a time-dependent perturbative approach, we single out the presence of specific resonant transitions, which are immediately traceable back to the analysis of the Fourier transform of the velocity of the boundary compared with the natural transition frequencies of the physical system. We concentrate on two-dimensional problems, but our results are immediately extendable to the three-dimensional case and are essentially valid also for one-dimensional systems. The paper is organized as follows. In the next section we present the physical problem, the relevant mathematical formulation and the proper unitary transformation which removes the boundary motion and replaces it with a time-dependent term in the Hamiltonian. In the subsequent section we introduce the perturbative treatment, singling out the presence of resonances and discussing the role of the Fourier transform of the boundary motion. In section \[sec:OscillatingCircle\] we specialize the previous results to the case of a two-dimensional circular box whose radius is oscillating at a single frequency. Finally, in section \[sec:Conclusion\] we give some conclusive remarks. The Physical System and Hamiltonian {#sec:PhysicalSystem} =================================== Two-dimensional box ------------------- Let us consider a particle confined in a two-dimensional box whose contour, expressed in polar coordinates, is given by $$\label{eq:OriginalBoundary} r = \lambda(t)\gamma(\theta)\,,\qquad \theta \in [0, 2\pi]\,,$$ where $\lambda(t)$ is a smooth function of time, and $\gamma(\theta)$ is a smooth function of the angle $\theta$ such that $\gamma(2\pi)=\gamma(0)$ (which guarantees that the box is closed and the particle is confined). Because of the presence of $\lambda(t)$ the dimensions of the box change, but since the dependence of the radial coordinate of the box on $\theta$ and $t$ is factorized as $\lambda(t)\gamma(\theta)$, the shape of the box does not change.
This is a kind of boundary modification that we call pantographic, and it corresponds to the fact that each point of the boundary moves only along the relevant radial direction. The Hamiltonian describing the particle is simply given by the kinetic term: $$\begin{aligned} H &=& \frac{p^2}{2\mu} = -\frac{\hbar^2}{2\mu} \nabla^2\,,\\ \nabla^2 &=& \left( \frac{1}{r}\frac{\partial}{\partial r} + \frac{\partial^2}{\partial r^2} + \frac{1}{r^2}\frac{\partial^2}{\partial\theta^2} \right)\,,\end{aligned}$$ with the time-dependent Dirichlet boundary conditions: $$\begin{aligned} \psi(\lambda(t)\gamma(\theta), \theta) = 0\,.\end{aligned}$$ More precisely, the Hamiltonian is given by an operator which coincides with the kinetic term in the domain corresponding to the box, and vanishes elsewhere. For the sake of simplicity we will avoid introducing a new symbol for it. According to our treatment in [@ref:DiMartino] and [@ref:Anza2014], we make a unitary transformation which essentially maps the original domain delimited by the boundary given by [Eq. (\[eq:OriginalBoundary\])]{} into another domain with the same shape but different diameter: $$\label{eq:RescaledBoundary} r = \gamma(\theta)\,,\qquad \theta \in [0, 2\pi]\,.$$ We call the first domain (the time-dependent one) ${\cal D}_\lambda$ and the second one (the static one) ${\cal D}_1$. The relevant unitary operator acts as follows: $$\label{eq:UnitaryMapping} \phi(r, \theta) = (U_\lambda \psi)(r, \theta) = \lambda \psi(r \lambda, \theta)\,.$$ It maps quantum states defined in ${{\cal D}}_\lambda$ into states defined in ${{\cal D}}_1$.
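The unitarity of $U_\lambda$ can be made concrete with a quick numerical check: for a radially symmetric test state (any smooth function vanishing at the moving wall; the parabolic profile below is an arbitrary illustrative choice), the norm computed on ${{\cal D}}_\lambda$ with the polar measure $r\,\mathrm{d}r\,\mathrm{d}\theta$ coincides with the norm of the mapped state $\phi(r)=\lambda\psi(\lambda r)$ on ${{\cal D}}_1$. A minimal numpy sketch, with illustrative values for the radius and the dilation factor:

```python
import numpy as np

R0, lam = 1.0, 1.3   # static radius and dilation factor (illustrative values)

def trap(y, x):
    # simple trapezoidal quadrature
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def psi(s):
    # arbitrary radial test state on the disc of radius lam*R0,
    # vanishing on the moving boundary
    return 1.0 - (s / (lam * R0))**2

# norm on the time-dependent domain D_lambda (2D polar measure: r dr dtheta)
s = np.linspace(0.0, lam * R0, 20001)
norm_psi = 2 * np.pi * trap(np.abs(psi(s))**2 * s, s)

# mapped state phi(r) = lam * psi(lam * r) on the static domain D_1
r = np.linspace(0.0, R0, 20001)
norm_phi = 2 * np.pi * trap(np.abs(lam * psi(lam * r))**2 * r, r)

print(norm_psi, norm_phi)  # the two norms coincide: U_lambda preserves the norm
```

The prefactor $\lambda$ in [Eq. (\[eq:UnitaryMapping\])]{} is exactly what compensates the Jacobian of the radial rescaling in two dimensions; in 1D and 3D it is replaced by $\lambda^{1/2}$ and $\lambda^{3/2}$, respectively.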
In the new picture, the operator which generates the dynamics of the particle is given by $$\begin{aligned} \label{eq:EffectiveHamiltonian} \nonumber H_{\mathrm{eff}} &=& U_\lambda H U_\lambda^\dag + \ii\hbar\dot{U}_\lambda U_\lambda^\dag\\ &=& -\frac{\hbar^2}{2\mu \lambda^2} \nabla^2 + \ii\hbar \frac{\dot{\lambda}}{\lambda} \left(1 + \, r \frac{\partial}{\partial r}\right) \,,\end{aligned}$$ where one has to take into account the fact that, since $\lambda$ depends on $t$, the unitary operator $U_\lambda$ depends on time as well. The Hamiltonian in [Eq. (\[eq:EffectiveHamiltonian\])]{} describes a particle with varying mass (because of the factor $\lambda^{-2}$ in the kinetic energy) in the presence of a time-dependent potential, i.e., the term $\ii\hbar\dot{\lambda}/\lambda(1+r\partial_r)$, which from now on we will call the dilation potential or dilation term. It is worth noting that the dilation potential involves only the radial coordinate. This is a consequence of the pantographic nature of the box movement, which implies radial dilation and hence only radial motion of the walls of the box. Of course, this simple fact alone does not guarantee conservation of the angular momentum. Indeed, generally speaking, the commutator between two operators depends on the domain on which the two operators act. Therefore, commutation between the dilation potential and the angular momentum will also depend on the shape of the box. It is also worth noting that in the more general case of changes with deformation (i.e., non-pantographic) the effective Hamiltonian in the static domain turns out to be much more complicated than the one in [Eq. (\[eq:EffectiveHamiltonian\])]{} (the complete expression is reported in Ref. [@ref:Anza2014]). One-dimensional and three-dimensional cases ------------------------------------------- The one-dimensional and three-dimensional counterparts of our problem are treated in a very similar way.
In the one-dimensional case, the domain is expressed as ${{\cal D}}_\lambda=[-\lambda(t)l/2, \lambda(t)l/2]$ and will be mapped into ${{\cal D}}_1=[-l/2, l/2]$, the boundary conditions are $\psi(\pm\lambda(t)l/2) = 0$ and will be mapped into $\phi(\pm l/2) = 0$, and the generator of the time evolution will change as follows: $$H=-\frac{\hbar^2}{2\mu}\frac{\partial^2}{\partial x^2} \;\;\;\;\Longrightarrow \;\;\;\; H_{\mathrm{eff}} = -\frac{\hbar^2}{2\mu \lambda^2} \frac{\partial^2}{\partial x^2} + \ii\hbar\frac{\dot{\lambda}}{\lambda} \left(\frac{1}{2} + \, x \frac{\partial}{\partial x}\right)\,,$$ where the last differential operator can be put in the form $r\partial_r$, with $r=|x|$. In the three-dimensional case, the domain is expressed as ${{\cal D}}_\lambda=\{(r,\theta,\varphi)| r\le \lambda(t)\gamma(\theta,\varphi), \theta\in [0,2\pi], \varphi\in [0,\pi]\}$ and will be mapped into ${{\cal D}}_1$ obtained for $\lambda=1$, the boundary conditions are $\psi(\lambda(t)\gamma(\theta,\varphi),\theta,\varphi) = 0$ $\forall \theta, \varphi$ and will be mapped into $\phi(\gamma(\theta,\varphi),\theta,\varphi) = 0$, and the generator of the time evolution will change as follows: $$H=-\frac{\hbar^2}{2\mu}\nabla^2 \;\;\;\;\Longrightarrow \;\;\;\; H_{\mathrm{eff}} = -\frac{\hbar^2}{2\mu \lambda^2} \nabla^2 + \ii\hbar\frac{\dot{\lambda}}{\lambda} \left(\frac{3}{2} + \, r \frac{\partial}{\partial r}\right)\,.$$ This clarifies how the results that follow, which will be explicitly derived for the two-dimensional case, are essentially valid in 1D and 3D as well. Pantographic perturbations {#sec:Resonances} ========================== Once the problem of one particle in a 2D box with moving walls is transformed into the problem of a particle in a static box, we get the Hamiltonian in [Eq. (\[eq:EffectiveHamiltonian\])]{}.
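To get a feeling for which transitions the one-dimensional dilation term $\ii\hbar(\dot{\lambda}/\lambda)(1/2 + x\partial_x)$ can drive, one can evaluate its matrix elements between the static-box eigenfunctions $\phi_n(x)=\sqrt{2/l}\,\sin[n\pi(x+l/2)/l]$ by direct quadrature. The minimal sketch below (illustrative units $l=\hbar=1$) exhibits the parity selection rule expected from the symmetry of $1/2 + x\partial_x$: elements between states of opposite parity vanish, while same-parity elements are finite.

```python
import numpy as np

l, hbar = 1.0, 1.0                       # box size and hbar (illustrative units)
x = np.linspace(-l/2, l/2, 20001)
h = x[1] - x[0]

def phi(n):
    # static-box eigenfunctions, vanishing at x = +-l/2
    return np.sqrt(2.0 / l) * np.sin(n * np.pi * (x + l/2) / l)

def dphi(n):
    # analytic derivative of phi_n
    return np.sqrt(2.0 / l) * (n * np.pi / l) * np.cos(n * np.pi * (x + l/2) / l)

def V_elem(m, n):
    """Matrix element of the dilation potential V = i*hbar*(1/2 + x d/dx)."""
    integrand = phi(m) * (0.5 * phi(n) + x * dphi(n))
    integral = ((integrand[0] + integrand[-1]) / 2 + integrand[1:-1].sum()) * h
    return 1j * hbar * integral

# opposite parity -> vanishing coupling; same parity -> finite coupling
print(abs(V_elem(1, 2)), abs(V_elem(1, 3)))
```

In these units the same-parity element $|V_{13}|$ evaluates to $3/4$, while $V_{12}$ vanishes because the integrand is odd.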
Under the assumption that the dilation parameter $\lambda$ is a smooth and close-to-unity function, the quantity $\dot{\lambda}/\lambda$ is small and the dilation potential can be treated as a perturbation. This is in perfect agreement with Kato's considerations on how to treat small modifications of the domain in which a particle can move [@ref:Kato]. Such an approach has already been exploited in Ref. [@ref:Anza2014] to treat changes of the domain with deformation. (It is appropriate to mention here that in [@ref:Anza2014] we assumed knowledge of the dynamics in the pantographic case, the relevant Hamiltonian describing pantographic changes being considered as the unperturbed one, and then treated the terms coming from deformation as a perturbation. Nevertheless, knowledge of the dynamics in the pantographic case is quite limited, and the exact dynamics is known only for the case of a uniformly moving domain. In this paper, instead, we attempt to extend our knowledge of the dynamics in the pantographic case.) Generally speaking, after making the unitary mapping to the domain ${{\cal D}}_1$, in the new picture, which we address as the Schrödinger picture, we have a static domain and a time-dependent Hamiltonian (see [Eq. (\[eq:EffectiveHamiltonian\])]{}) of the following form: $$H_\mathrm{S}(t) = \lambda^{-2}\, H_\mathrm{0} + \dot\lambda\,\lambda^{-1}\, V\,.$$ We express $\lambda$ as follows: $$\lambda = 1+\epsilon f(t)\,,$$ where $\epsilon$ is a dimensionless small parameter, $f(t)$ is a smooth and bounded function, and the operators $H_\mathrm{0}$ and $V$ are time-independent.
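For a concrete feeling of the orders involved, the two prefactors $\lambda^{-2}$ and $\dot\lambda\,\lambda^{-1}$ can be compared numerically with their first-order approximations. In the sketch below, $f(t)=\sin\omega t$ and the values of $\epsilon$ and $\omega$ are illustrative choices; the residuals come out of order $\epsilon^2$, which is what legitimates the perturbative treatment.

```python
import numpy as np

eps, omega = 1e-3, 2.0                     # illustrative small amplitude and frequency
t = np.linspace(0.0, 10.0, 1001)
f, fdot = np.sin(omega * t), omega * np.cos(omega * t)

lam = 1.0 + eps * f                        # lambda(t) = 1 + eps f(t)
lamdot = eps * fdot

# residuals of the first-order expansions lambda^-2 ~ 1 - 2 eps f
# and (dlambda/dt)/lambda ~ eps df/dt
err_kin = np.max(np.abs(lam**-2 - (1.0 - 2.0 * eps * f)))
err_dil = np.max(np.abs(lamdot / lam - eps * fdot))

print(err_kin, err_dil)   # both residuals are O(eps^2)
```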
We also introduce the series expansions of $\lambda^{-2}$ and $\dot\lambda\,\lambda^{-1}$: $$\begin{aligned} \lambda^{-2} &=& \sum_{\mathrm{k}} A_\mathrm{k}\,\epsilon^\mathrm{k} = 1 - 2\epsilon f(t) + ...\,, \\ \dot\lambda\,\lambda^{-1} &=& \sum_{\mathrm{k}} B_\mathrm{k}\,\epsilon^\mathrm{k} = \epsilon \dot{f}(t) + ...\,,\end{aligned}$$ thus obtaining the following Hamiltonian, correct to first order in $\epsilon$: $$H_\mathrm{S}(t) = (1 - 2\epsilon f(t))\, H_\mathrm{0} + \epsilon \dot{f}(t)\, V\,.$$ By performing the passage to the interaction picture through the unitary operator generated by the unperturbed Hamiltonian $\lambda^{-2}\, H_\mathrm{0}$, $U_\mathrm{0}(t) = \e^{-\ii \hbar^{-1} \int_0^t [\lambda(\epsilon, s)]^{-2}\mathrm{d}s\, H_\mathrm{0}}$, we obtain the new generator of the time evolution, $H_\mathrm{I}(t) = \dot\lambda\,\lambda^{-1} \, U_\mathrm{0}^\dag(t) \, V \, U_\mathrm{0}(t)$. After performing the relevant spectral decomposition, $$H_\mathrm{0} = \sum_\alpha E_\alpha {\left\vert E_\alpha\right\rangle}{\left\langle E_\alpha\right\vert}\,,$$ and applying the standard perturbation treatment, we can explicitly write down the evolved state corrected up to the first order in the parameter $\epsilon$: $${\left\vert \psi(t)\right\rangle} = \left[\e^{-\frac{\ii}{\hbar} \int_0^t [1-2\epsilon f(s)]\mathrm{d}s\, H_\mathrm{0}} + \epsilon \, \e^{-\frac{\ii}{\hbar} H_\mathrm{0} t} \sum_{\alpha\beta} V_{\alpha\beta} \, \int_0^t \dot{f}(u) \, e^{\frac{\ii}{\hbar} (E_\mathrm{\alpha} - E_\mathrm{\beta}) u} \mathrm{d}u {\left\vert E_\mathrm{\alpha}\right\rangle}{\left\langle E_\mathrm{\beta}\right\vert}\right] {\left\vert \psi(0)\right\rangle}\,, \label{eq:PerturbativeEvolution}$$ where $V_{\alpha\beta} = {\left\langle E_\alpha\right\vert} V {\left\vert E_\beta\right\rangle}$. On the basis of [Eq.
(\[eq:PerturbativeEvolution\])]{} one can immediately argue that the movement of the walls induces quantum transitions, and two factors determine which transitions can occur: on the one hand, the coupling through the dilation potential, which means that the matrix element $V_{\alpha\beta}$ must be nonzero; on the other hand, in order to have a finite transition probability from the initial state to another at long times, the following condition must be satisfied: $$\label{eq:FourierTransform} \int_0^t \dot{f}(u) \, e^{\frac{\ii}{\hbar} (E_\alpha - E_\beta) u} \mathrm{d}u \not= 0\,,\qquad t\gg \frac{\hbar}{|E_\alpha - E_\beta|}\,.$$ Since the integral in [Eq. (\[eq:FourierTransform\])]{} in the limit $t\rightarrow \infty$ essentially approaches the Fourier transform of $\dot{f}(t)$ at $\omega = -(E_\alpha - E_\beta)/\hbar$, one can assert that in order to have nonzero transitions between two states the Fourier transform of the radial velocity of the walls should be nonzero at the relevant frequency. On this basis, it is natural to talk about resonances, and it turns out to be important to single out the presence of resonant sinusoidal components in the perturbation (i.e., the dilation potential). Of course, as usual, non-resonant components in the Fourier expansion of the perturbation can induce fluctuations determining small transition probabilities at short times, but such transitions disappear after a sufficiently long time. As a relevant physical situation, one can consider the case where $\lambda$ is a periodic function, still smooth and close to unity. This means that $f(t)$ is smooth and periodic, and so is $\dot{f}(t)$, easily leading to the fact that, at first order, only transitions associated with the frequency of $\dot{f}$ or its multiples are allowed. The Breathing Circle {#sec:OscillatingCircle} ==================== In order to better illustrate the previous ideas we will consider a very special case, that we call the breathing circle.
In other words, we consider a particle moving inside a two-dimensional circular box whose radius is time-dependent: $R(t) = R_0 (1+\epsilon\sin\omega t)$, so that $\gamma(\theta)=R_0$ and $f(t)=\sin\omega t$. After mapping the original problem into the problem of a particle confined in a circular box of radius $r_0 \equiv R_0$ and expanding the relevant Hamiltonian with respect to $\epsilon$ up to the first order, one obtains: $$\label{eq:EffCircHamiltonian} H_{\mathrm{eff}} = -\frac{\hbar^2}{2\mu} (1-2\epsilon\sin\omega t) \nabla^2 + \epsilon\, \omega\, \cos\omega t \, \ii\hbar \, \left(1 + \, r \frac{\partial}{\partial r}\right) \,.$$ According to the analysis in Ref. [@ref:Robinett1996], the eigenvalues and eigenfunctions of a free particle (hence governed by $H_\mathrm{0}=-\hbar^2/(2\mu)\nabla^2$) in a 2D circular box are given by the following expressions: $$\begin{aligned} E_{mn} &=& \frac{\hbar^2}{2 \mu r_0^2} a_{mn}^2\,,\\ \chi_{mn} &=& (2\pi)^{-1/2} \aleph_{mn} J_{m}(k_{mn} r ) \times e^{\ii m \theta}\,, \qquad m \in \mathbb{Z}\,,\end{aligned}$$ where $J_m(x)$ is the Bessel function of order $m$, $a_{mn}$ is the $n$-th zero of $J_m$, and $$\begin{aligned} && k_{mn}^2 = \frac{2 \mu E_{mn}}{\hbar^2} = \frac{(a_{mn})^2}{r_0^2} \; \Rightarrow \; k_{mn} = \frac{|a_{mn}|}{r_0} \, ,\\ && \aleph_{mn} = \left(\int_0^{r_0} r J_m (k_{mn}r)^2 \mathrm{d}r\,\right)^{-1/2}\,.\end{aligned}$$ We have already mentioned the fact that the operator $1+r\partial_r$ involves only the radial variable and not the angular one. Moreover, because of the specific shape of the box, in this case the angular momentum of the system commutes with such an operator. This fact, together with the commutation between the kinetic energy and the angular momentum, implies that the angular momentum is conserved throughout the evolution.
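The resonance frequencies of the breathing circle are fixed by the zeros of $J_m$. As a minimal numpy sketch (in the units $\hbar=\mu=r_0=1$ used in the figure captions), one can locate the first two zeros of $J_0$ from its integral representation and obtain the wall frequency resonant with the ${\left\vert \chi_{01}\right\rangle} \rightarrow {\left\vert \chi_{02}\right\rangle}$ transition:

```python
import numpy as np

def J0(x):
    # J_0 via its integral representation: J_0(x) = (1/pi) int_0^pi cos(x sin t) dt
    t = np.linspace(0.0, np.pi, 20001)
    y = np.cos(x * np.sin(t))
    h = t[1] - t[0]
    return ((y[0] + y[-1]) / 2 + y[1:-1].sum()) * h / np.pi

def bisect_zero(lo, hi, steps=80):
    # bracketed bisection for a zero of J_0 (sign change assumed in [lo, hi])
    flo = J0(lo)
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if flo * J0(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, J0(mid)
    return 0.5 * (lo + hi)

hbar = mu = r0 = 1.0                       # units of the figure captions
a01 = bisect_zero(2.0, 3.0)                # first zero of J_0, ~2.4048
a02 = bisect_zero(5.0, 6.0)                # second zero of J_0, ~5.5201
E01, E02 = (hbar**2 / (2 * mu * r0**2)) * np.array([a01, a02])**2
omega_res = (E02 - E01) / hbar             # wall frequency resonant with chi_01 -> chi_02
print(a01, a02, omega_res)
```

With $a_{01}\simeq 2.4048$ and $a_{02}\simeq 5.5201$ this gives $\omega\simeq 12.34$ in these units, which is the resonant driving condition considered in the figures.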
This result is indeed quite intuitive, since the shape of the box is circular at any time and the kicks that the particle can receive from the walls of the box are directed only along the radius of the circle, hence not producing any torque. On the contrary, because of such kicks, the energy of the particle can change, which corresponds to the possibility of having transitions between states with the same angular momentum but different energies. In connection with the perturbation treatment, we can say that the matrix elements of the dilation potential vanish when different values of the angular momentum quantum number are involved: $V_{mnm'n'}={\left\langle \chi_{mn}\right\vert}V{\left\vert \chi_{m'n'}\right\rangle}=0$ if $m\not=m'$. In Fig. \[fig:TransitionProbabilities\], we show the transition probabilities to some of the excited states when the system is prepared in the ground state and the oscillation of the walls resonates with the transition ${\left\vert \chi_{01}\right\rangle} \rightarrow {\left\vert \chi_{02}\right\rangle}$. It is evident that the only significant transitions occur toward the state ${\left\vert \chi_{02}\right\rangle}$, especially at relatively long times. The small and fast oscillation superimposed on the quadratic behavior of the population of ${\left\vert \chi_{02}\right\rangle}$ is due to the fact that the rotating wave approximation has not been performed. In Fig. \[fig:MeanRadius\] we show how the mean value of the distance of the particle from the centre of the box ($\langle r\rangle$) evolves when the particle is prepared in its ground state and the wall movement resonates with the transition ${\left\vert \chi_{01}\right\rangle} \rightarrow {\left\vert \chi_{02}\right\rangle}$.
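The resonant selectivity just described can be reproduced directly from the first-order integral in [Eq. (\[eq:FourierTransform\])]{}: for $f(t)=\sin\omega t$, the modulus of $\int_0^T \dot{f}(u)\, e^{\ii\Delta u}\,\mathrm{d}u$ grows linearly in $T$ (roughly as $\omega T/2$) when the detuning satisfies $\Delta=\pm\omega$, and stays bounded otherwise. A minimal quadrature sketch, with illustrative values $\omega=1$, $T=200$ and a detuned case $\Delta=2.7\,\omega$:

```python
import numpy as np

omega, T = 1.0, 200.0             # wall frequency and integration time (illustrative)
u = np.linspace(0.0, T, 400001)
h = u[1] - u[0]
fdot = omega * np.cos(omega * u)  # velocity profile for f(t) = sin(omega t)

def amplitude(delta):
    # |int_0^T fdot(u) exp(i delta u) du| via trapezoidal quadrature
    y = fdot * np.exp(1j * delta * u)
    return abs(((y[0] + y[-1]) / 2 + y[1:-1].sum()) * h)

on_res  = amplitude(omega)        # grows ~ omega*T/2
off_res = amplitude(2.7 * omega)  # stays of order unity

print(on_res, off_res)
```

The on-resonance amplitude keeps growing with $T$, while the detuned one merely oscillates, which is precisely why only the resonantly driven transition survives at long times in the figures.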
![(Color online) Transition Probabilities to the states ${\left\vert \chi_{02}\right\rangle}$ (blue bolded line), ${\left\vert \chi_{03}\right\rangle}$ (red line) and ${\left\vert \chi_{04}\right\rangle}$ (black thin line) when the system is initially prepared in the ground state ${\left\vert \chi_{01}\right\rangle}$ and the frequency of the wall oscillations ($\omega$) is equal to the frequency of the transition ${\left\vert \chi_{01}\right\rangle} \rightarrow {\left\vert \chi_{02}\right\rangle}$. Here $\hbar=1$, $\mu=1$, $r_0=1$ and $\epsilon=0.03$. The time is expressed in units of $2\pi/\omega$. Figures (a) and (b) show the probabilities in different time intervals. In the short-time scale (b) all the transitions are quite small. In the long-time scale (a) it is evident that the only transition probability which is non negligible is the one toward the state ${\left\vert \chi_{02}\right\rangle}$, which is a clear manifestation of the resonance.[]{data-label="fig:TransitionProbabilities"}](fig1a.eps "fig:"){width="35.00000%"} ![](fig1b.eps "fig:"){width="35.00000%"} ![(Color online) The mean value of the distance of the particle from the centre of the box $\langle r\rangle$ when the system is prepared in its ground state ${\left\vert \chi_{01}\right\rangle}$ and the wall oscillation frequency $\omega$ is equal to the transition frequency from ${\left\vert \chi_{01}\right\rangle}$ to ${\left\vert \chi_{02}\right\rangle}$. Here $\hbar=1$, $\mu=1$, $r_0=1$ and $\epsilon=0.03$. The time is expressed in units of $2\pi/\omega$.[]{data-label="fig:MeanRadius"}](fig2.eps){width="35.00000%"} Discussion {#sec:Conclusion} ========== The analysis developed in previous works, such as Refs. [@ref:DiMartino] and [@ref:Anza2014], shows that when the problem of a system with moving boundaries is mapped into a problem with static boundaries, the Hamiltonian inducing the evolution in the new picture becomes time-dependent. Even the kinetic energy itself contains a time-dependent factor, so that the particle appears as a particle with changing mass. Moreover, a new term, the dilation term, which is time-dependent as well, is added. Such a term can induce transitions between the eigenstates of the unperturbed Hamiltonian, which in our case is simply given by the kinetic energy of a particle with a varying mass. Which transitions are induced depends on the way the boundary moves, and in particular on the Fourier transform of the velocity of the walls. As a very special case, we have considered the situation where the walls oscillate at a precise frequency, so that transitions between eigenstates of the free Hamiltonian can be induced when the boundary oscillations are properly tuned.
We emphasize that our analysis demonstrates, with an appropriate mathematical treatment, the intuitive idea that a boundary oscillating at a given frequency can induce transitions when such oscillations are tuned to a transition frequency of the system. In some sense, then, an oscillating boundary acts on the system like a suitable oscillating field. In this regard, it is worth mentioning that, because of the choice of considering only pantographic changes of the domain, the dilation potential turns out to be a radial potential ($1+r\partial_r$) and hence, for a domain with appropriate geometry, it preserves the angular momentum, as happens in the case of a circular box. As a final remark, we point out that if the particle in the original time-dependent domain is subjected not only to the potential describing the box contour but also to another potential, such a term will of course be kept in the new picture with a suitable scaling (see Ref. [@ref:DiMartino]). In our case, for the sake of simplicity, we have not considered such a situation. Nevertheless, in the presence of such an additional potential the approach is essentially the same as before, just with some mathematical complications. In fact, in the static domain the particle would be described as a particle with time-varying mass subjected to a time-dependent potential (resulting from a time-dependent scaling of the original potential) and to the dilation term (resulting from the change of picture). Therefore, in the limit of small and smooth movements of the walls of the box, the last term can be treated as a perturbation to the time evolution induced by the first two terms. [99]{} Barnes E, Das Sarma S 2012 *Phys. Rev. Lett.* [**109**]{} 060401 Messina A and Nakazato H 2014 *J. Phys. A: Math. Theor.* [**47**]{} 445302 Simeonov L S and Vitanov N V 2014 *Phys. Rev. A* [**89**]{} 043411 Messiah A, *Quantum Mechanics* (Dover) Aniello P 2005 *J. Opt. B* **7** S507 Militello B, Aniello P, Messina A 2007 *J.
Phys. A: Math. Theor.* **40** 4847 Zagury N, Aragao A, Casanova J, Solano E 2010 *Phys. Rev. A* **82** 042110 Rigolin G, Ortiz G, Ponce V H 2008 *Phys. Rev. A* **78** 052508 Traversa F L, Di Ventra M, Bonani F 2013 *Phys. Rev. Lett.* **110** 170602 Moskalets M, Büttiker M 2002 *Phys. Rev. B* **66** 205320 Shirley J H 1965 *Phys. Rev.* **138** B979 Klimchitskaya G L, Mohideen U, and Mostepanenko V M 2009 *Rev. Mod. Phys.* **81** 1827 Wilson C M, Duty T, Sandberg M, Persson F, Shumeiko V and Delsing P 2010 [*Phys. Rev. Lett.*]{} [**105**]{} 233907 Fermi E 1949 *Phys. Rev.* **75** 1169 Doescher S W and Rice M H 1969 *Am. J. Phys.* **37** 1246 Pinder D N 1989 *Am. J. Phys.* **58** 54 Schlitt D W and Stutz C 1970 *Am. J. Phys.* **38** 70 Dodonov V V, Klimov A B and Nikonov D E 1993 *J. Math. Phys.* **34** 3391 Linington I E and Garraway B M 2008 *Phys. Rev. A* **77** 033831 Xi Chen *et al* 2009 *Phys. Rev. A* **80** 063421 Mousavi S V 2013 *Physics Letters A* **377** 1513 Mousavi S V 2012 *EPL* **99** 30002 Di Martino S, Anzà F, Facchi P, Kossakowski A, Marmo G, Messina A, Militello B, Pascazio S 2013 *J. Phys. A* **46** 365301 Mousavi S V 2013 [Bohmian particles in time-dependent traps]{} ArXiv:1309.0993 Anzà F, Di Martino S, Messina A, Militello B 2014 [Dynamics of a particle confined in a two- or three-dimensional moving domain]{} ArXiv:1405.7195 Kato T 1966 *Perturbation Theory for Linear Operators* (Springer-Verlag, Berlin). See Chapter VII, section 6.5, Boundary perturbation. Robinett R W 1996 *Am. J. Phys.* **64** 440
--- abstract: | Given the enormous galaxy databases of modern sky surveys, parametrising galaxy morphologies is a very challenging task due to the huge number and variety of objects. We assess the different problems faced by existing parametrisation schemes (CAS, Gini, $M_{20}$, Sérsic profile, shapelets) in an attempt to understand why parametrisation is so difficult and in order to suggest improvements for future parametrisation schemes. We demonstrate that morphological observables (e.g. steepness of the radial light profile, ellipticity, asymmetry) are intertwined and cannot be measured independently of each other. We present strong arguments in favour of model-based parametrisation schemes, namely reliability assessment, disentanglement of morphological observables, and PSF modelling. Furthermore, we demonstrate that estimates of the concentration and Sérsic index obtained from the Zurich Structure & Morphology catalogue are in excellent agreement with theoretical predictions. We also demonstrate that the incautious use of the concentration index for classification purposes can cause a severe loss of the discriminative information contained in a given data sample. Moreover, we show that, for poorly resolved galaxies, concentration index and $M_{20}$ suffer from strong discontinuities, i.e. similar morphologies are not necessarily mapped to neighbouring points in the parameter space. This limits the reliability of these parameters for classification purposes. Two-dimensional Sérsic profiles accounting for centroid and ellipticity are identified as the currently most reliable parametrisation scheme in the regime of intermediate signal-to-noise ratios and resolutions, where asymmetries and substructures do not play an important role. We argue that basis functions provide good parametrisation schemes in the regimes of high signal-to-noise ratios and resolutions. 
Concerning Sérsic profiles, we show that scale radii cannot be compared directly for profiles of different Sérsic indices. Furthermore, we show that parameter spaces are typically highly nonlinear. This implies that significant caution is required when distance-based classification methods are used. author: - | René Andrae$^{1}$[^1], Knud Jahnke$^{1}$ and Peter Melchior$^{2}$\ $^{1}$Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany\ $^{2}$Institut für Theoretische Astrophysik, Zentrum für Astronomie, Albert-Ueberle-Str. 2, 69120 Heidelberg, Germany bibliography: - 'bibliography.bib' date: 'Accepted 2010 September 10. Received 2010 April 1.' title: 'Parametrising arbitrary galaxy morphologies: potentials and pitfalls' --- \[firstpage\] Galaxies: general – Methods: data analysis, statistical – Techniques: image processing. Introduction ============ In the last ten years the field of galaxy evolution has experienced a boost. With the advent of large ground-based spectroscopic and imaging surveys such as the SDSS [@Abazajian2009] or space-based surveys like COSMOS [@Scoville2007], the database of galaxies has increased enormously. From both very deep and very wide-area surveys substantial amounts of data are available, enabling us to study the dependence of galaxy formation and evolution on e.g. environment, star formation history or stellar/bulge/black hole mass. It is now possible to test multivariate dependencies and, in conjunction with numerical simulations, to describe possible evolutionary tracks of galaxies, and to single out not yet fully understood phenomena like the colour-bimodality of galaxies [e.g. @Strateva2001] or the linear relation between black hole and stellar bulge mass [e.g. @Haering2004; @Woo2006]. Studies of galaxy morphologies are very important in this context, because different morphologies are caused by different physical processes that are likely to also affect other properties, e.g.
star-formation rate, and may also correlate with environment. Despite these efforts, it is still a very challenging task to meaningfully describe (parametrise) the morphologies of galaxies in very large data samples. Although we are well able to parametrise the morphologies of individual galaxies of certain types [e.g. @Simmat2010], finding a parametrisation scheme that is able to account for the huge variety of galaxy morphologies is a completely different task. Strategy -------- In this paper we discuss the concept of parametrisation and summarise commonly used parametrisation schemes, namely CAS [@Abraham1994; @Abraham1996; @Bershady2000], $M_{20}$ [@Lotz2004], Gini [@Lotz2004; @Lotz2008], Sérsic profile [@Sersic1968; @Graham2005], shapelets [@Refregier2003] and sérsiclets [@Ngan2009]. We categorise these schemes and identify important differences. However, the main intention of this article is to determine whether there are any fundamental problems involved in the parametrisation of galaxy morphologies, which may turn out to be subtle or non-obvious. Our investigations are designed to test the current paradigm favouring model-independent schemes. It has already been shown that the diagnostic power of shapelets is limited for elliptical galaxies [@Melchior2009a], whereas the method of sérsiclets has not yet been successfully established. Therefore, we focus our attention on the caveats involved in the usage of the other parametrisation schemes. In the course of this investigation, we demonstrate that morphological observables are intertwined. This new insight implies that all schemes that try to estimate observables separately without addressing their inherent degeneracies are problematic in principle. In the remaining part of this introduction, we define the terms “galaxy morphology” and “parametrisation” and discuss what parametrisation is meant to achieve. In Sect.
2 we introduce two conceptually different approaches to parametrisation, namely model-independent (CAS, $M_{20}$, Gini) and model-based schemes (Sérsic profile, shapelets, sérsiclets). As a first fundamental problem and one of our main results, we illustrate in Sect. 3 that morphological observables are intertwined and cannot be measured independently. Second, we investigate the impact of the point-spread function on the concentration index in Sect. 4. Third, we consider general problems affecting the classification of galaxy morphologies in Sect. 5. Finally, in Sect. 6 we summarise our results and give recommendations for improvements of existing or the design of new parametrisation schemes. Galaxy morphology\[sect:galaxy\_morphology\] -------------------------------------------- The morphology of a galaxy is defined by the characteristics of its two-dimensional light distribution, i.e. by the projected shape of the galaxy. Some morphological observables are: - steepness of radial light profile - ellipticity (i.e. orientation & axis ratio) - asymmetry (e.g. lopsidedness) - substructures (e.g. spiral arm patterns, bars, etc.) - size - centroid The centroid position is an important morphological observable as well, since it is often required to derive other morphological estimators (cf. Table \[tab:para\_schemes\]). For decades galaxy morphologies have been studied in the visual regime, where all these observables are reasonably well defined. However, with increasing observational coverage of the electromagnetic spectrum, it became evident that morphology is a strongly varying function of wavelength. For instance, in the UV we observe mostly star-forming regions but no dust emission, such that galaxies can look patchy and highly irregular. On the other hand, in the far infra-red, there is almost no stellar but only dust emission. As we discuss in Sect. 
\[sect:assumptions\], many parametrisation schemes for galaxy morphologies make rather restrictive assumptions that are too specialised on the visual regime and cannot be generalised to the whole electromagnetic spectrum. As our discussion is set in the context of large surveys where galaxies exhibit a huge variety of different morphologies, we have to look for parametrisation schemes that are flexible enough to describe *arbitrary* morphologies. Observation, parametrisation, inference\[sect:trinity\] ------------------------------------------------------- In this section we want to clarify the role of parametrisation, i.e. what purpose it serves and what its benefits are. Parametrisation is one step in the sequence of observation, parametrisation and inference, which is visualised in Fig. \[fig:trinity\_observ\_para\_inf\]. The process of observation ($\mathcal F_1$) provides a nonlinear mapping of the true intrinsic galaxy morphology to an observed morphology. This mapping $\mathcal F_1$ comprises the projection onto the two-dimensional sky, the binning to pixels, the addition of pixel noise, and the convolution with the pixel-response function (gain of the detector). It also involves the convolution with the point-spread function, taking into account seeing effects, optics and instrument sensitivity. However, analysing galaxy morphologies directly in pixel space is infeasible, since the number of pixels is typically very large. Therefore, it is necessary to parametrise the observed morphology ($\mathcal F_2$ in Fig. \[fig:trinity\_observ\_para\_inf\]), a step that has the two following aims: First, we want to reduce the degrees of freedom, since there is a lot of redundant or uninteresting information in pixel space. Second, we want to move from pixel space to some other description that better suits a given physical question. 
Effectively, this means that parametrisation can act as a method to reduce the dimensionality of the problem, to suppress noise and to extract information. Note that this definition of parametrisation encompasses more than just data modelling. Based on such a parametrisation we can then try to infer the true intrinsic morphology. For instance, inference can be based on the search for multivariate dependencies of morphological descriptors on physical parameters or on classification. The inference step corresponds to the mapping $\mathcal F_3$ in Fig. \[fig:trinity\_observ\_para\_inf\], where obviously $\mathcal F_3=\mathcal F_1^{-1}\circ\mathcal F_2^{-1}$, i.e. both mappings $\mathcal F_1$ and $\mathcal F_2$ need to be invertible – at least in a practical sense. Often inference does not aim at the true intrinsic morphology, but at some abstract type or class that represents a reasonable generalisation. Still, if either $\mathcal F_1$ or $\mathcal F_2$ destroys too much information, this type of inference is impossible as well. For $\mathcal F_1$ to be (approximately) invertible, the observation has to have a high signal-to-noise ratio and a high resolution relative to the features of interest (critical sampling). If this requirement is not met by the data, the observation will not resemble the true morphology and inference will be impossible. @Bamford2009 observe this problem in the Galaxy Zoo project and term it “classification bias”. They noticed that type fractions resulting from visual classifications of 557,681 SDSS galaxies with redshifts $z<0.25$ evolve significantly with $z$. As @Bamford2009 do not expect a pronounced morphological evolution in this redshift regime, they assign this effect to the degradation of image quality with increasing redshift. Whether $\mathcal F_2$ is invertible – i.e. whether or not $\mathcal F_2^{-1}$ and thus $\mathcal F_3=\mathcal F_1^{-1}\circ\mathcal F_2^{-1}$ exists – depends on the parametrisation scheme.
This is the topic of this paper. Consequently, a reliable parametrisation is as important for inference as sufficient data quality. ![Interplay of observation ($\mathcal F_1$), parametrisation ($\mathcal F_2$) and inference ($\mathcal F_3$). This paper is concerned with the existence of the mapping $\mathcal F_2^{-1}$, which is necessary for $\mathcal F_3=\mathcal F_1^{-1}\circ\mathcal F_2^{-1}$ to exist.[]{data-label="fig:trinity_observ_para_inf"}](flowchard){width="50mm"} Parametrisation schemes\[sect:para\_schemes\] ============================================= In order to assess the advantages and deficits of different parametrisation schemes we now briefly summarise the most common approaches. We divide them into model-independent and model-based approaches. The most important difference is that the model-based approaches try to *model* the two-dimensional light distribution of an image and are thus mostly descriptive. Model-independent approaches more directly try to extract physical information, hence mixing description and inference steps. We conclude this section by summarising the assumptions involved in the parametrisation schemes. Model-independent schemes ------------------------- The foremost reason to use a model-independent – or “non-parametric” – approach is that it appears to be very simple at first glance. Most of these parametrisation schemes seem easy to implement, since they do not require fitting a model. Furthermore, parameters in all these schemes have at least a rough physical interpretation. ### CAS system A widely used set of morphological parameters is provided by the CAS system, which is based on the so-called Concentration, Asymmetry and Clumpiness indices [@Abraham1994; @Abraham1996; @Bershady2000].
The concentration index is defined as $$\label{eq:def:concentration} C = 5\,\log_{10}\left( \frac{r_{80}}{r_{20}} \right) \;\textrm{,}$$ where $r_{80}$ and $r_{20}$ are the radii of circular (or elliptical) apertures containing 80% and 20% of the total image flux.[^2] The asymmetry index is defined as $$\label{eq:def:asymmetry} A = \frac{\sum_\textrm{pixels}|I(\vec x) - I^{\textrm{180}^\circ}(\vec x)|}{\sum_\textrm{pixels}I(\vec x)} \;\textrm{,}$$ where $I^{\textrm{180}^\circ}$ denotes the image $I$ rotated by $\textrm{180}^\circ$. Obviously, the asymmetry $A$ is bound in the interval $[0,2]$. Finally, the clumpiness is defined as $$\label{eq:def:clumpiness} S = 10\frac{\sum_\textrm{pixels}|I(\vec x) - I^\sigma(\vec x)|}{\sum_\textrm{pixels}I(\vec x)} \;\textrm{,}$$ where $I^\sigma$ has been convolved by a Gaussian of width $\sigma$. The specific choice of $\sigma$ is somewhat arbitrary within a certain range, being sensitive to clumps of varying spatial extent. As far as we know, there is no systematic investigation of the impact of the choice of $\sigma$ on the parametrisation results. ### $M_{20}$ and Gini Two further morphological parameters are $M_{20}$ and the Gini coefficient. We define the second-order moment of pixel $n$ with value $I_n$ at position $\vec x_n$ as [@Lotz2004] $$\label{eq:def:2nd_moment_of_pixel} M_n = I_n\,\left(\vec x_n - \vec x_c\right)^2 \;\textrm{,}$$ where $\vec x_c$ denotes the reference position. Summation of the $M_n$ over all pixels yields the total second moment $M_\textrm{tot}$ with respect to $\vec x_c$. There is a theoretical preference to choose the reference position $\vec x_c$ to be the centre of light, because this choice minimises $M_\textrm{tot}$. 
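The minimisation property just stated is easy to verify numerically. The following is a minimal sketch of that check (our own construction, using a hypothetical random flux map in place of a real galaxy image):

```python
import numpy as np

rng = np.random.default_rng(0)
I = rng.random((32, 32))           # toy image with strictly positive pixel fluxes
y, x = np.indices(I.shape)         # pixel coordinate grids

def M_tot(xc, yc):
    """Total second moment of the flux about a reference position (xc, yc)."""
    return np.sum(I * ((x - xc) ** 2 + (y - yc) ** 2))

# centre of light (flux-weighted centroid)
xc = np.sum(I * x) / np.sum(I)
yc = np.sum(I * y) / np.sum(I)

# M_tot at the centroid is smaller than at any shifted reference position
assert all(M_tot(xc, yc) < M_tot(xc + dx, yc + dy)
           for dx, dy in [(1, 0), (0, 1), (-2, 3), (0.5, -0.5)])
```

Since $M_\textrm{tot}$ is a convex quadratic function of the reference position for positive fluxes, the flux-weighted centroid is its unique minimum, which is what the assertions confirm.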
$M_{20}$ is defined as $$\label{eq:def:M20} M_{20} = \log_{10}\left(\frac{\sum_i M_i}{M_\textrm{tot}}\right) \;\textrm{,}$$ where the summation $\sum_i M_i$ is over the pixels in descending order $I_1\geq I_2\geq\ldots\geq I_N$ and stops as soon as $\sum_i I_i\geq 0.2\sum_{n=1}^N I_n$, i.e. as soon as 20% of the total flux is reached. $M_{20}$ is supposed to estimate the spatial distribution of the most luminous parts of a galaxy image. The Gini coefficient was defined by @Lotz2004 [@Lotz2008] based on @Glasser1962 as $$\label{eq:def:gini} G = \frac{\sum_{n=1}^N (2n-N-1)|I_n|}{(N-1)\sum_{n=1}^N |I_n|} \;\textrm{,}$$ where $N$ is the number of image pixels and $|I_1|\leq |I_2|\leq\ldots\leq |I_N|$ are the absolute values of the pixel fluxes sorted in ascending order. In contrast to $M_{20}$, Gini does not require an estimate of the centroid position. The Gini coefficient estimates the distribution of the pixel values over the image. As shown by @Lisker2008, it strongly depends on the signal-to-noise distribution within a galaxy’s image and is thus a highly unstable morphological estimator. Model-based schemes I: Sérsic profile\[sect:sersic-index\] ---------------------------------------------------------- ### Definition The radial light profiles of many galaxies are reasonably well described by the Sérsic profile [see @Sersic1968; @Graham2005 for a compilation of relevant formulae], $$\label{eq:def:Sersic_model} I(R) = I_\beta\,\exp\left\{-b_n\left[ \left(\frac{R}{\beta}\right)^{1/{n_S}} -1 \right]\right\} \;\textrm{,}$$ where $n_S$ is the Sérsic index and $\beta$ is the scale radius[^3]. The constant $b_n$ is usually chosen such that the radius $\beta$ encloses half of the total light. $I_\beta$ is the intensity at the half-light radius $\beta$. At fixed $n_S$, $b_n$ is then given by $$\label{eq:definition_b_n} \Gamma(2n_S) = 2\gamma(2n_S,b_n) \;\textrm{,}$$ where $\Gamma$ and $\gamma$ denote the complete and incomplete gamma functions. 
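Since $\gamma(2n_S,b_n)/\Gamma(2n_S)$ is the regularised lower incomplete gamma function, Eq. (\[eq:definition\_b\_n\]) simply states that $b_n$ is the point where this function reaches $1/2$, which can be solved by bisection. A minimal sketch (the function names are ours, not part of any published code):

```python
import math

def gamma_inc_reg(a, x, terms=200):
    """Regularised lower incomplete gamma function P(a, x) via its power series."""
    s, term = 0.0, 1.0 / a
    for k in range(terms):
        s += term
        term *= x / (a + 1 + k)
    return s * math.exp(-x + a * math.log(x) - math.lgamma(a))

def b_n(n_s, lo=1e-6, hi=100.0):
    """Solve Gamma(2 n_S) = 2 gamma(2 n_S, b), i.e. P(2 n_S, b) = 1/2, by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if gamma_inc_reg(2 * n_s, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(b_n(4.0), 3), round(2 * 4.0 - 1 / 3, 3))
```

For $n_S=4$ the exact solution and the $2n_S-\frac{1}{3}$ approximation agree to better than $0.01$.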
For $n_S>0.5$ one can approximate $b_n\approx 2 n_S - \frac{1}{3}$. The Sérsic profile corresponds to a Gaussian profile if $n_S=0.5$, to an exponential disk profile if $n_S=1$, and to a de Vaucouleurs profile if $n_S=4$. Throughout this paper we use a truncated Sérsic profile of the form $$\tilde I(R) = \left\{\begin{array}{lcr} I(R) - I(5\beta) & \Leftrightarrow & R \leq 5\beta \\ 0 & \textrm{otherwise} & \end{array}\right. \;\textrm{,}$$ such that all profiles are 0 for $R>5\beta$ but still continuous. This is necessary, since otherwise the profiles do not vanish quickly enough for large Sérsic indices. ### Redefining $b_n$ and $\beta$ It is important to note that $b_n$ and $\beta$ in Eq. (\[eq:def:Sersic\_model\]) are completely degenerate. We are free to make any choice of $b_n$ that is different from Eq. (\[eq:definition\_b\_n\]), thereby redefining the model and changing the meaning of $\beta$. There are two reasons why Eq. (\[eq:definition\_b\_n\]) potentially is not a good choice for $b_n$: 1. From a theoretical point of view, half-light radii $\beta$ are *not* comparable for different values of $n_S$, i.e. the size of one galaxy relative to a second galaxy can be inferred from their scale radii if *and only if* both Sérsic models use identical $n_S$. However, in practice this is rarely a problem, since studies usually compare only sizes of galaxies of similar Hubble types, e.g. in studies of the size-evolution of disc galaxies. Nonetheless, choosing $b_n$ according to Eq. (\[eq:definition\_b\_n\]), we *must not* demand that $\beta$ is smaller than the image size, since $\beta$ *cannot* be interpreted this way. We actually need to require that the profile drops within the image boundaries. Figure \[fig:interpretation\_scale\_radius\_4\_n\_S\] shows that the radii where the profile reduces to $\frac{1}{2}$, $\frac{1}{4}$, and $\frac{1}{10}$ of its value at $r=0$ vary over several orders of magnitude for different $n_S$.
The scale radius $\beta$ is more intuitively defined such that $$\label{eq:def_1oX} \frac{I(\beta)}{I(0)} = 1/X$$ for some $X>0$ *independent* of $n_S$. This can be achieved by setting $b_n=b=\log X$ for all $n_S$. Panel (b) of Fig. \[fig:interpretation\_scale\_radius\_4\_n\_S\] shows that in this case the radii for different $n_S$ change by less than two orders of magnitude and hence can be compared much better. 2. It is well known that there is a strong correlation of $n_S$ and $\beta$ [e.g. @Trujillo2001], which is problematic for many fit algorithms. This correlation of $n_S$ and $\beta$ is almost completely induced by Eq. (\[eq:definition\_b\_n\]), i.e. it is artificial. We can remove this correlation by setting $b_n=\log X$ for all $n_S$, thereby simplifying the fit problem. We demonstrate this in Fig. \[fig:b\_n\_inducing\_artificial\_correlation\] showing $\chi^2$ manifolds for fitting an artificial light profile once using Eq. (\[eq:definition\_b\_n\]) (panel a) and once using $b_n=\log X$ for all $n_S$ (panel b). The noise level in this simulation is low (the signal-to-noise ratio of the central peak is 100). Higher noise levels will not change the curvatures of the $\chi^2$ “valleys” in Fig. \[fig:b\_n\_inducing\_artificial\_correlation\] but will only broaden them and reduce their depth. These issues are not fundamental and there is no theoretical preference for choosing between these approaches apart from the fact that Eq. (\[eq:def\_1oX\]) is likely to provide more robust parameter estimates. Furthermore, it is possible to convert back and forth between the definitions of Eqs. (\[eq:definition\_b\_n\]) and (\[eq:def\_1oX\]) via $b/\beta^{1/n_S}=\textrm{const}$. ![Radii $r_X$ where the Sérsic profile takes values $I(r_X)/I(0)=1/X$ for $X=2,4,10$ and $b_n$ given by Eq.
(\[eq:definition\_b\_n\]) (panel (a)) and $b_n=\log 4$ (panel (b)).[]{data-label="fig:interpretation_scale_radius_4_n_S"}](interpretation_scale_radius_4_n_S){width="84mm"} ![$\chi^2/\textrm{dof}$ manifolds demonstrating how Eq. (\[eq:definition\_b\_n\]) induces the artificial correlation of $n_S$ and $\beta$. (a) $\chi^2/\textrm{dof}$ manifold for $b_n$ defined by Eq. (\[eq:definition\_b\_n\]). The white diamond indicates the optimum. The dashed white line is given by $b_n/\beta^{1/n_S}=\textrm{const}$ and follows the valley, thereby illustrating that the correlation of $n_S$ and $\beta$ is artificial. (b) Same as in (a) but for $b_n=\log 4$ for all $n_S$. The valley is approximately parallel to the $n_S$-axis, i.e. the correlation is gone. Both panels use the same artificial light profile with low noise level to evaluate the $\chi^2/\textrm{dof}$ manifold. It is much easier to find the optimum in panel (b) than in panel (a). The optimal values of $n_S$ are identical in (a) and (b), whereas the optimal values of $\beta$ are different due to the different choice of $b_n$. $\chi^2/\textrm{dof}$ is not a simple quadratic form, because the Sérsic profile is a nonlinear model.[]{data-label="fig:b_n_inducing_artificial_correlation"}](chi2_grid__bad "fig:"){width="84mm"} ![](chi2_grid_good "fig:"){width="84mm"} Model-based schemes II: Expansion into basis functions\[sect:basis\_functions\] ------------------------------------------------------------------------------- An alternative model-based parametrisation approach is the expansion into basis functions. The most important advantage of this concept is that the parametrisation is more flexible, whereas all previous schemes are highly specialised for certain morphologies. A good set of basis functions should be able to fit almost anything, provided the signal-to-noise ratio of the given data is sufficiently high. Hence, this approach should in principle be favoured when the task at hand is to parametrise arbitrary morphologies. Basis-function expansions are very common in physics and also in cosmology (e.g. decomposing the CMB into spherical harmonics). Usually, the basis functions are chosen based on symmetry arguments or best as eigenfunctions of the differential equations describing the underlying physics. However, we do not know the physics governing galaxy morphologies yet, hence there is no theoretically motivated choice for the set of basis functions. Therefore, basis functions are chosen such that they possess advantageous analytic properties or overcome special problems. In the following, we introduce the concept of basis functions. We briefly comment on the issues of orthonormality and completeness and then discuss example sets of basis functions. ### General concept {#sect:concept_linear_basis_functions} A set of basis functions is usually defined such that it is orthonormal and complete.
However, we want to introduce this concept in a slightly more general fashion. Consider a set of $N$ scalar-valued functions $\{B_1(\vec x;\vec\theta_1),\ldots,B_N(\vec x;\vec\theta_N)\}$, where $\vec x$ denotes the two-dimensional pixel-position vector and $\vec\theta_n$ is the set of parameters of the $n$-th basis function $B_n$. The basis functions may be nonlinear in both $\vec x$ and $\vec\theta_n$. We consider the linear superposition, i.e. the model, $$\label{eq:def:linear-model} f(\vec x) = \sum_{n=1}^N c_n B_n(\vec x;\vec\theta_n) \;\textrm{,}$$ with the $N$ expansion coefficients $c_n$. These coefficients are further model parameters in addition to $\vec\theta_n$. The $c_n$ enter Eq. (\[eq:def:linear-model\]) linearly, hence they form a linear space, i.e. a vector space. Therefore, the set of $N$ coefficients is also referred to as “coefficient vector” $\vec c$. Given an observed galaxy image $I(\vec x)$, we can fit the model $f(\vec x)$ to this image. The details of the fitting process will depend on the choice of the set of basis functions. The fitting process itself is also called the “decomposition of the image into the basis functions”. After fitting the model defined by Eq. (\[eq:def:linear-model\]) to the image, we obtain estimates for the coefficients $c_n$ and the parameters $\vec\theta_n$ for all basis functions. Usually, the $\vec\theta_n$ are used to incorporate several effects. For instance, there is typically a size parameter that scales the spatial extent of the basis functions such that the coefficients $c_n$ do not depend on the size of the object. If this is the case, then the basis functions are called “scale invariant”. The centroid position can also be part of $\vec\theta_n$. The linear coefficients $c_n$ are supposed to capture the morphological information. ### Orthonormality and completeness\[sect:orthogonality-completeness\] As aforementioned, sets of basis functions are often orthonormal and complete.
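To make the general concept concrete, here is a one-dimensional toy decomposition using Gauss-Hermite basis functions (the shapelets discussed below) and plain linear least squares; the setup and all names are our own illustration, not code from the paper:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def basis(n, x, beta=1.0):
    """n-th Gauss-Hermite basis function with scale size beta."""
    norm = (2.0 ** n * math.factorial(n) * math.sqrt(math.pi) * beta) ** -0.5
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                      # select the Hermite polynomial H_n
    return norm * hermval(x / beta, coeffs) * np.exp(-x ** 2 / (2 * beta ** 2))

x = np.linspace(-10, 10, 400)
N = 6
B = np.column_stack([basis(n, x) for n in range(N)])  # design matrix

# toy "image": an exact superposition of two basis functions
signal = 0.5 * B[:, 0] + 0.2 * B[:, 2]

# decomposition = linear least-squares fit for the coefficient vector c
c, *_ = np.linalg.lstsq(B, signal, rcond=None)
print(np.round(c, 6))
```

Because the toy signal lies exactly in the span of the basis, the least-squares fit recovers $c_0=0.5$ and $c_2=0.2$ (and zeros elsewhere) to machine precision, independently of whether the sampled basis vectors are exactly orthogonal.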
The orthogonality would ensure that all coefficients were completely independent of each other. The completeness would allow us to decompose an [*arbitrary*]{} image. In practice, however, the completeness is lost due to pixel noise and pixellation, which sets an upper limit to the number of basis functions that can be used to decompose a given image. This can lead to characteristic modelling failures. We discuss this in slightly more detail in the next section. The strict orthogonality is also lost, due to pixellation [@Melchior2007]. This means that the resulting coefficients may exhibit minor correlations, but if the galaxy image and all basis functions are critically sampled, these correlations will be negligible. ### Shapelets Shapelets were introduced by @Refregier2003. They are a scaled version of Gauss-Hermite polynomials, i.e. $$\label{eq:def:shapelets} B_n(x; \beta) = \left( 2^n n! \sqrt{\pi}\beta \right)^{-1/2} H_n\left(\frac{x}{\beta}\right)\exp\left[-\frac{x^2}{2\beta^2}\right] \;\textrm{,}$$ where $H_n$ denotes the Hermite polynomial of order $n$ and $\beta$ is the shapelet scale size. A centroid can be introduced via $x\rightarrow x-x_0$. In this case, all basis functions take identical parameters $\vec\theta_n=\vec\theta=(x_0,\beta)$ in order to allow for orthogonality. From this definition, we can build two-dimensional basis functions, namely Cartesian shapelets and polar shapelets. The Gaussian weight function of shapelets leads to very nice analytical properties. For instance, shapelets are nearly invariant under Fourier transformation, which makes any convolution or deconvolution a closed and analytic operation in shapelet space, as described in @Melchior2009. However, the limitation of basis functions due to pixel noise has a severe consequence: Shapelets employ a Gaussian weight function (cf. Eq. (\[eq:def:shapelets\])), but real galaxies have typically much steeper profiles. 
This gives rise to characteristic modelling failures that typically manifest themselves in ring-like artifacts in the shapelet reconstructions of galaxies with exponential or steeper light profiles. This severely limits the diagnostic power of shapelets [cf. @Melchior2009a] and we therefore exclude them from our subsequent simulations. Despite these fundamental problems, shapelets demonstrate a very important aspect of basis-function expansions: For highly resolved galaxies of high signal-to-noise ratios, Sérsic profiles are incapable of providing excellent models as they are not flexible enough to account for substructures such as spiral arm patterns, i.e. their residuals do not always reach noise level. In case of shapelets – as an example of basis functions – this is fundamentally different. They are highly flexible and reach noise level even for galaxies that are very large, highly resolved and bright [e.g. @Andrae2010a]. ### Sérsiclets Given the problematic impact of the Gaussian profile on shapelets, a set of basis functions based on the Sérsic profile is an obvious means to overcome the limitations of shapelets. The resulting basis functions are called sérsiclets. @Ngan2009 were the first to realise the potential of this approach, which is capable of accounting for all morphological observables listed in Sect. \[sect:galaxy\_morphology\]. However, for technical reasons their implementation of sérsiclets was flawed, as we illustrate in an upcoming paper [@Andrae2010c]. We therefore also exclude sérsiclets from our simulations. ### Outlook: Template libraries We already argued that no basis set – apart from the pixel grid itself – is actually complete due to the limitations induced by pixel noise. Now, we want to briefly touch – without going into details – on a set of basis functions that is finite and thus incomplete from the beginning. The motivation is very simple: For both shapelets and sérsiclets the basis functions lack a physical interpretation.
Why not use basis functions that directly correspond to spiral arms, galactic bars or rings? We can use a set of such *templates* – a template library – to form linear models and decompose the image, resulting in a set of coefficients that form a vector space. The individual templates do not even need to be orthogonal, but just as linearly independent as possible in order to avoid heavy degeneracies during the fitting procedure. Unfortunately, the direct physical motivation is also the major drawback of this approach, since we are strongly prejudiced and lack flexibility in this case. For instance, template libraries are likely to have severe problems in decomposing irregular galaxies, i.e. they are inappropriate for parametrising arbitrary morphologies. Moreover, the set of morphological features is very large, hence such a library has to contain numerous templates. Assumptions\[sect:assumptions\] ------------------------------- It is crucial to be aware of all assumptions made by a certain method when using it, since if a method fails, it usually fails because one or more of its assumptions break down. In case of model-based approaches, the assumptions are usually rather obvious and therefore can be easily challenged. In contrast to this, the assumptions of model-independent approaches are implicit and often hidden. This may lead to the misapprehension that model-independent schemes were superior since they required fewer or even no assumptions. In Table \[tab:para\_schemes\] we summarise our categorisation of parametrisation schemes. Based on this table and the definitions given in the previous sections, we now work out the assumptions of all schemes from a *theoretical* point of view. In practice, it is virtually impossible to satisfy all assumptions. 
Whether the violation of some assumption leads to a breakdown of a certain method depends on the specific question under consideration, the desired precision, the details of the method’s implementation, and the quality of the data. In detail, the assumptions are: - Concentration index: There are no azimuthal structures such as spiral-arm patterns or galactic bars.[^4] The pixel noise is negligible and the object is not grossly asymmetric such that a centroid is well defined (cf. Sect. \[sect:asymmetry\_vs\_centroid\]). The scheme can be enhanced using elliptical apertures. - Asymmetry index: A centre of rotation is well defined. The pixel noise is negligible. Both issues have been addressed by @Conselice2000b. The asymmetry of interest is visible under rotations of $180^\circ$. - Clumpiness index: The functional type of the kernel matches the galaxy profile. The width of the kernel is chosen such that the information of interest is extracted. The ellipticity of the kernel matches the ellipticity of the object. - $M_{20}$: The pixel noise has negligible impact on the estimates of centroid and second moments. The centre of light and the object’s centre coincide, i.e. there is no substantial asymmetry. The structures dominating $M_{20}$ are of circular shape with the centroid at their centres.[^5] - Gini coefficient: The pixel noise is negligible [see @Lisker2008]. - Sérsic profile: The Sérsic profile is a good match of the object’s light profile. In particular, this means that the object’s light profile is symmetric, monotonically decreasing and the steepness is correctly described by the model, and there are no azimuthal structures such as spiral arm patterns, galactic bars or rings. - (Spherical) shapelets: The Gaussian weight function is a good match to galaxy light profiles. Using spherical basis functions that have no intrinsic ellipticity does not lead to problems. We now clearly see that model-independent schemes implicitly make assumptions, too.
This list suggests that non-parametric approaches *tend* to invoke fewer assumptions than model-based schemes[^6] at the loss of reliability, as we are going to demonstrate in the following sections. We also want to emphasise that shapelets – as an example of basis functions – can describe asymmetries.

  Characteristic                           $C$           $A$           $S$           $M_{20}$   $G$   Sérsic profile   shapelets       sérsiclets
  ---------------------------------------- ------------- ------------- ------------- ---------- ----- ---------------- --------------- ------------
  model-based                              n             n             n             n          n     y                y               y
  centroid estimate necessary              y             y             n             y          n     y                y               y
  account for steepness of light profile   n             n             n             n          n     y                n               y
  account for ellipticity                  y${}^{(1)}$   y${}^{(2)}$   y${}^{(3)}$   n          n     y                y/n${}^{(4)}$   y
  account for substructures                n             y             y             n          n     n                y               y

Intertwinement of morphological observables\[sect:entanglement\] ================================================================ The basic idea of model-independent schemes is to estimate the different morphological observables listed in Sect. \[sect:galaxy\_morphology\] independently of each other, thereby simplifying the problem. However, in this section we present as one of our main results the fact that these morphological observables are intertwined, which means that it is impossible to measure them independently of each other. Even if we try to measure only a single observable using a method unaware of the other observables, the mere presence of these observable features will influence the results. The notion of intertwinement should not be confused with redundancy, e.g. Sérsic index and concentration index are perfectly redundant (Sect. \[sect:ZEST\_bias\]) but asymmetry and concentration index are not (Sect. \[sect:asymmetry\_vs\_centroid\]). Of course, for some observables the intertwinement is stronger than for others.
This intertwinement is not of physical origin but stems from the fact that usually all morphological observables are present simultaneously, such that the assumptions listed in Sect. \[sect:assumptions\] are *never* truly satisfied. We carry out noise-free simulations of the different parametrisation schemes and by doing so we reveal several systematic misestimations – in particular of the concentration index. All simulations invoke Sérsic profiles and we want to explicitly emphasise that it is *not* necessary for real galaxies to actually follow Sérsic profiles.[^7] However, as we demonstrate in Sect. \[sect:ZEST\_bias\], Sérsic profiles provide parametrisations that are in excellent agreement with estimates of light concentration. This would not be the case if Sérsic profiles were a bad description. Pixel noise in real data may hide these biases to some extent, but they will still be present. Example I: Sérsic profile vs. concentration index\[sect:ZEST\_bias\] -------------------------------------------------------------------- We begin with comparing Sérsic profiles and the concentration index, establishing a relation between both schemes that allows us to assess systematic effects on the concentration. The Sérsic index estimates how steeply the radial light profile falls off. Consequently, Sérsic index and concentration index are essentially two estimators for the same morphological feature, namely the steepness of the light profile. This is also evident from the fact that both schemes have almost identical assumptions (cf. Sect. \[sect:assumptions\]). In fact, we can compute the concentration of a two-dimensional Sérsic profile using numerical integration, i.e., Sérsic index and concentration index are perfectly redundant [see also @Trujillo2001]. Integrating the flux out to infinite radius, Eq.
(\[eq:def:concentration\]) yields the power law $$C \approx 2.770\cdot n_S^{0.466} \;\textrm{,}$$ which provides a good approximation to the exact numerical solution for $0.5\leq n_S\leq 7$. The resulting values for $n_S=0.5$, 1 and 4 are identical to those given by @Bershady2000. Integrating the flux out to one Petrosian radius instead of infinity, the approximate solution is $$\label{eq:C_as_f_of_n_S_approx} C \approx 2.586\cdot n_S^{0.305} \;\textrm{.}$$ Obviously, any declining radial profile can be mapped onto the concentration index this way, irrespective of whether or not it is a good description of a galaxy. Therefore, Fig. \[fig:C\_vs\_n\_S\_for\_ZEST\] also compares this theoretical expectation with the measured concentration indices and Sérsic indices of 31,288 COSMOS galaxies from the Zurich Structure & Morphology catalogue [@Scarlata2007; @Sargent2007].[^8] Evidently, the *independent* estimates of concentration indices conducted by @Scarlata2007 and of Sérsic indices conducted by @Sargent2007 are in excellent agreement with the theoretical prediction of Eq. (\[eq:C\_as\_f\_of\_n\_S\_approx\]). This clearly demonstrates that concentration and Sérsic indices are equivalent parametrisations in the case of COSMOS galaxies, providing largely unbiased estimates. Nevertheless, this single example does *not* supersede a detailed study of potential biases that may occur in practice. In particular, the COSMOS data shown in Fig. \[fig:C\_vs\_n\_S\_for\_ZEST\] exhibit a large scatter that may hide biases. ![Comparing concentration and Sérsic indices of 31,288 COSMOS galaxies from the Zurich Structure & Morphology catalogue [@Sargent2007] (blue points) with the numerical solution (red solid curve) and the power-law fit of Eq. (\[eq:C\_as\_f\_of\_n\_S\_approx\]) (orange dashed curve). Shown are COSMOS galaxies with $I<22.5$, valid axis ratios ($0<q\leq 1$), and flags “stellarity”, “junkflag” and “flagpetro” of 0.
Concentration indices were predicted from analytic Sérsic profiles using numerical integration out to one Petrosian radius. There was *no* pixellation.[]{data-label="fig:C_vs_n_S_for_ZEST"}](C_vs_n_S_for_ZEST){width="84mm"}

Example II: Steepness of light profile vs. ellipticity
------------------------------------------------------

Our second example is the intertwinement of the steepness of the radial light profile and the ellipticity. These two are certainly the most important morphological observables listed in Sect. \[sect:galaxy\_morphology\], having the largest impact on parametrisation results. It is obvious that estimates of the steepness of the radial light profile must take ellipticity into account. Therefore, it is necessary to use elliptical isophotes in case of the concentration index or to fit a two-dimensional Sérsic profile that is enhanced by an ellipticity parameter. Unfortunately, in case of the SDSS, the aperture radii containing 50% and 90% of the total image flux given in the SDSS database are chosen as circular apertures [@Strauss2002]. This implies that estimates of the concentration index drawn from these values may be biased. In fact, this bias was already discussed by @Bershady2000. They investigated how the concentration index changes with axis ratio for samples of real galaxies of similar morphological types. @Bershady2000 claim that using circular apertures causes an overestimation of concentration indices of at most 3% and is therefore negligible. We investigate this effect in Fig. \[fig:impact\_eps\_on\_C\] for a realistic range of axis ratios; the observed distribution of axis ratios is shown in panel (a). Panel (b) shows how the concentration index is influenced by the axis ratio for Sérsic profiles with fixed Sérsic indices, corresponding to galaxy samples of similar morphology as in @Bershady2000.[^9] Evidently, for $q\gtrsim 0.5$ – which is the majority of galaxies in the given set – the bias is negligible.
There are galaxies with $q<0.5$, which are typically disc-like galaxies with shallow light profiles. For those objects, concentration estimates based on circular isophotes are substantially overestimated ($\approx 30\%$ for $n_S=1$). This bias is *not* negligible. @Bershady2000 based their investigation on estimated concentration indices of *real* galaxies. Hence, the most likely explanation for this discrepancy with our results is that the intrinsic scatter in the real data used by @Bershady2000 hid this bias. Considering ellipticity and concentration index together – instead of using an elliptical concentration index – is *not* likely to solve this problem. The reason is that incorporating an ellipticity estimate may add information about the cause of the bias of the concentration index, but it does not provide information about the effect of this bias. Finally, we want to emphasise that Fig. \[fig:impact\_eps\_on\_C\] must not be used to calibrate the biased concentration estimates resulting from circular apertures. The reason is that this would require Sérsic profiles to be a realistic description of galaxy morphologies. Moreover, the study of @Bershady2000 likewise cannot be used for such a purpose, because the bias clearly depends on the intrinsic concentration. This means that such a correction would require prior knowledge about the object’s true concentration. ![Impact of ellipticity on concentration estimates. Panel (a) shows the distribution of axis ratios $q=b/a$ for 2,272 SDSS galaxies from the data sample of @Fukugita2007.
Panel (b) shows concentration estimates using circular isophotes for elliptical Sérsic profiles with $n_S=0.5$ (solid orange line), $n_S=1$ (dashed red line), $n_S=2$ (dotted-dashed blue line), and $n_S=4$ (dotted black line).[]{data-label="fig:impact_eps_on_C"}](impact_eps_on_C){width="84mm"} Conversely, @Melchior2009a showed in the context of weak gravitational lensing that ellipticity measurements using shapelets are strongly biased in case of steep profiles. In other words, shapelets fail to provide reliable ellipticity estimates because they do not properly account for the steepness of the radial light profile. This strikingly demonstrates that these two observables may be closely intertwined.

Example III: Impact of lopsidedness on centroid estimation\[sect:asymmetry\_vs\_centroid\]
------------------------------------------------------------------------------------------

As a third example of the intertwinement of morphological observables, we consider the impact of asymmetry on centroid estimates and the resulting parameter estimation using two-dimensional Sérsic profiles. We simulate a certain type of asymmetry, namely lopsidedness. In order to introduce lopsidedness analytically, we apply the flexion transformation from weak gravitational lensing [@Goldberg2005] to the Sérsic profiles as explained in Appendix \[app:shear\_flexion\_trafo\]. The strength of the flexion transformation is parametrised by $F_1$, $F_2$, $G_1$, and $G_2$. There is no pixel noise in this simulation. Figure \[fig:flexed\_gaussians\] shows Gaussian profiles resulting from this transformation.[^10] The resulting distortions are not unrealistically strong. ![Gaussian profiles of different lopsidedness. The applied flexions are $F_1=0.0$ (top left), $F_1=0.0325$ (top right), and $F_1=0.065$ (bottom). The resulting profiles exhibit realistic lopsidedness. All profiles are evaluated on a 1000$\times$1000 pixel grid using a scale radius of $\beta=50$.
White diamonds indicate the maximum position.[]{data-label="fig:flexed_gaussians"}](flexed_Gaussian_1 "fig:"){width="35mm"} ![](flexed_Gaussian_2 "fig:"){width="35mm"} ![](flexed_Gaussian_3 "fig:"){width="35mm"} In Fig. \[fig:lopsidedness\_on\_x0\_A\_C\] we investigate the impact of this type of asymmetry on the centroid, the asymmetry index and the concentration index. The first and foremost consequence is that in the presence of asymmetry the maximum position and the centre of light as given by $$\label{eq:def:centroid_estimation} \hat{\vec x}_0 = \langle\vec x\rangle = \frac{\sum_n f_n \vec x_n}{\sum_n f_n} \,\textrm{,}$$ where $\vec x_n$ and $f_n$ denote the position vector and value of pixel $n$, do not coincide anymore. Hence, we call this special type of asymmetry “lopsidedness”. The centre of light $\vec x_\textrm{col}=\langle\vec x\rangle$ and the maximum position $\vec x_\textrm{max}$ coincide if and only if the light distribution is symmetric. As is evident from Fig. \[fig:lopsidedness\_on\_x0\_A\_C\], the lopsidedness is stronger for steeper profiles, where the maximum lopsidedness is $|\vec x_\textrm{col}-\vec x_\textrm{max}|/\beta\approx 0.25$. Moreover, Fig.
\[fig:lopsidedness\_on\_x0\_A\_C\] demonstrates that, especially for steep profiles, estimates of asymmetry and concentration strongly depend on the choice of centroid. Asymmetry indices estimated with respect to the maximum and the centre of light may differ substantially in the presence of lopsidedness, considering the allowed parameter range.[^11] Moreover, Fig. \[fig:lopsidedness\_on\_x0\_A\_C\] reveals that the concentration estimated with respect to the maximum position is almost insensitive to lopsidedness, whereas the concentration estimated with respect to the centre of light can be biased low by up to 15%. This also explains to some extent why the observed and predicted concentration indices differ in Fig. \[fig:C\_vs\_n\_S\_for\_ZEST\], because the observed concentration indices were estimated with respect to the centre of light rather than the maximum position [cf. @Scarlata2007]. ![Impact of lopsidedness on centroid (a), asymmetry with respect to maximum (b), absolute difference of asymmetries with respect to centre of light and maximum (c), concentration with respect to maximum (d), and relative difference of concentrations with respect to centre of light and maximum (e). Lopsidedness leads to a difference in maximum position and centre of light. Furthermore, lopsidedness creates asymmetry. Asymmetries evaluated with respect to the maximum or centre of light can differ substantially given that $A\in[0,2]$. The concentration evaluated at the maximum position is almost insensitive to lopsidedness. However, the concentration with respect to centre of light is strongly underestimated. All Sérsic profiles are evaluated on a 1000$\times$1000 pixel grid using $\beta=50$.
See footnote for explanation of the steps in panels (c) and (e).[]{data-label="fig:lopsidedness_on_x0_A_C"}](lopsidedness_on_x0_A_C){width="84mm"} We have demonstrated that the parametrisation results differ significantly depending on whether we use the centre of light or the maximum position as centroid. How do we resolve this ambiguity? And how do we obtain the maximum position in practice, when we suffer from pixel noise? If the parametrisation scheme were model-based, the model would define the centroid during the fit procedure – even in the presence of pixel noise. For instance, the Sérsic profile should use the maximum position as centroid, whereas shapelets can use either the maximum position or the centre of light. However, since $C$, $A$ and $M_{20}$ are not model-based, we have to resort to convention or ad-hoc solutions. In case of the asymmetry index, @Conselice2000b solved this problem by searching for the position that minimises the value of the asymmetry index, also considering resampling the image on a refined pixel grid. They were able to show that there are usually no local minima of asymmetry indices and hence that their method is stable. In case of the concentration index, using the maximum position appears to be more plausible than the centre of light, since $C_{max}$ is robust against lopsidedness. Unfortunately, the concentration does not provide us with a model and residuals, hence we cannot estimate the most likely maximum position in the presence of noise. However, we can apply the same ad-hoc solution that @Conselice2000b introduced for the asymmetry index, by searching for the position that *maximises* the concentration estimate. Nevertheless, this method increases the computational effort tremendously, such that the required computation time is approximately of the same order as, e.g., fitting a shapelet model.
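Such a search is easy to sketch in numpy. The following illustration is our own (the two-component “lopsided” test image and the search window are arbitrary stand-ins for the flexed Sérsic profiles of the text, not taken from the paper); it evaluates the concentration on a grid of candidate centroids and keeps the maximum:

```python
import numpy as np

def concentration(img, cx, cy):
    """C = 5 log10(r80 / r20) with circular apertures centred on (cx, cy)."""
    y, x = np.indices(img.shape)
    r = np.hypot(x - cx, y - cy).ravel()
    order = np.argsort(r)
    cum = np.cumsum(img.ravel()[order]) / img.sum()
    r20 = r[order][np.searchsorted(cum, 0.2)]
    r80 = r[order][np.searchsorted(cum, 0.8)]
    return 5.0 * np.log10(r80 / r20)

# hypothetical lopsided object: a narrow peak plus an offset broad component
n = 301
y, x = np.indices((n, n))
img = (np.exp(-((x - 135)**2 + (y - 150)**2) / (2 * 12.0**2))
       + 0.4 * np.exp(-((x - 165)**2 + (y - 150)**2) / (2 * 30.0**2)))

# centre of light
col_x = (img * x).sum() / img.sum()
col_y = (img * y).sum() / img.sum()
C_col = concentration(img, col_x, col_y)

# ad-hoc centroid search: keep the candidate centroid that maximises C
C_max = max(concentration(img, cx, cy)
            for cx in range(125, 176, 5) for cy in range(140, 161, 5))
print(round(C_col, 2), round(C_max, 2))
```

For this test image the maximising centroid yields a larger concentration than the centre of light, in line with the robustness of $C_{max}$ noted above; the nested search over candidate positions is also what makes the procedure computationally expensive.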
We conclude that concentration and asymmetry estimates are neither easy to implement nor computationally faster than model-based approaches. In case of $M_{20}$, there is a theoretical preference to use the centre of light, since it minimises the total second moments. Example IV: Impact of lopsidedness on ellipticity estimators\[sect:impact\_A\_on\_eps\] --------------------------------------------------------------------------------------- As our last example, we discuss the impact of asymmetry on estimators of ellipticity. Again, we simulate asymmetry as lopsidedness as in the previous section. We apply flexion transformations to two-dimensional Sérsic profiles without noise. However, we do *not* apply shear transformations, i.e. all profiles have no intrinsic ellipticity. From the pixellised images we then estimate the second moments of the light distribution, $$\label{eq:moments_Q} Q_{ij} = \frac{\sum_n I_n (x_{n,i}-x_{0,i})(x_{n,j}-x_{0,j})}{\sum_n I_n} \;\textrm{,}$$ where $\vec x_0$ is the point of reference, e.g. centre of light or maximum position. Using the second moments, we compute the estimator [e.g. @Bartelmann2001] $$\hat\chi = \frac{Q_{11}-Q_{22}+2i Q_{12}}{Q_{11}+Q_{22}} \;\textrm{.}$$ This estimator is related to the axis ratio via $q=\frac{b}{a} =\sqrt{\frac{1-|\hat\chi|}{1+|\hat\chi|}}\leq 1$ and to the orientation angle $\theta$ via $\tan(2\theta)=\frac{\Im(\hat\chi)}{\Re(\hat\chi)}$. If this estimator detects any ellipticity, it will be completely artificial, i.e. it will be a bias. ![Impact of lopsidedness on real (a) and imaginary (b) part of $\hat\chi_\textrm{col}$. Considering $0\leq|\hat\chi_\textrm{col}|<1$, the real part is strongly biased by the lopsidedness. The imaginary part is unbiased due to the geometry of $F_1$ (cf. Fig. \[fig:flexed\_gaussians\]). 
All Sérsic profiles are evaluated on a 1000$\times$1000 pixel grid using $\beta=50$.[]{data-label="fig:lopsidedness_on_eps"}](lopsidedness_on_eps){width="84mm"} Figure \[fig:lopsidedness\_on\_eps\] shows the results of this simulation. For perfectly symmetric profiles ($F_1=0$) the estimator indeed does not detect any ellipticity. However, if $F_1$ increases, the ellipticity estimator will be biased. The bias is stronger for steeper profiles. The maximum bias is $\Re(\hat\chi_\textrm{col})\approx 0.13$ ($b/a\approx 0.877$), which is substantial. We conclude from this simulation that asymmetries have a potentially strong impact on ellipticity estimates, i.e. asymmetry and ellipticity are intertwined. For instance, this is relevant when elliptical isophotes are used for estimating the concentration index.

Reliability assessment\[sect:reliability\_assessment\]
------------------------------------------------------

In the previous sections we have demonstrated that some important morphological observables cannot be measured independently of one another. Given this, it cannot be guaranteed that estimates of an individual observable will result in a parametrisation which is unbiased by the other observables. As all the parametrisation schemes mentioned in Sect. 2 are based on rather restrictive assumptions (cf. Sect. \[sect:assumptions\]), their flexibility in describing arbitrary galaxy morphologies is limited. Consequently, it cannot be expected that these schemes provide accurate descriptions of *all* individual objects in a given data sample. Can we assess the quality or reliability of the parametrisation results for *individual* objects, i.e., can we detect objects where the parametrisation failed in order to sort them out?[^12] If we are using a model-based parametrisation scheme (e.g. shapelets or Sérsic profiles), the residuals of the resulting best fit will provide us with an estimate of the goodness of fit.
For instance, a very large value of $\chi^2$ compared to the number of degrees of freedom indicates a poor fit, i.e. we should not rely on the parametrisation of this individual object. However, if the parametrisation scheme is not model-based – as in case of CAS, $M_{20}$ and Gini – we have no residuals and hence we have no way of assessing the reliability for individual objects.[^13] How to disentangle observables ------------------------------ As we showed above, morphological observables are intertwined and cannot be measured independently. Is there a way to get independent estimates? Let us consider two morphological observables $A$ and $B$ (e.g. Sérsic index and ellipticity). Intertwinement means that the joint probability of $A$ and $B$ does not factorise, i.e. $$\textrm{prob}(A,B|\textrm{data}) \neq \textrm{prob}(A|\textrm{data})\,\textrm{prob}(B|\textrm{data}) \;\textrm{.}$$ Using Bayes’ theorem, we can rewrite the joint probability of $A$ and $B$ as $$\textrm{prob}(A,B|\textrm{data}) = \frac{\textrm{prob}(A,B)\,\textrm{prob}(\textrm{data}|A,B)}{\textrm{prob}(\textrm{data})} \;\textrm{,}$$ where $\textrm{prob}(A,B)$ denotes the prior probability of $A$ and $B$, $\textrm{prob}(\textrm{data}|A,B)$ is the likelihood function and $\textrm{prob}(\textrm{data})$ a normalisation factor. A model that simultaneously measures $A$ and $B$ will provide us with the likelihood function, which in case of Gaussian residuals is $$\textrm{prob}(\textrm{data}|A,B) \propto e^{-\chi^2/2} \;\textrm{.}$$ We then get independent estimates of $A$ and $B$ via marginalisation $$\label{eq:marginalisation_A} \textrm{prob}(A|\textrm{data}) = \int dB\,\textrm{prob}(A,B|\textrm{data}) \;\textrm{,}$$ $$\label{eq:marginalisation_B} \textrm{prob}(B|\textrm{data}) = \int dA\,\textrm{prob}(A,B|\textrm{data}) \;\textrm{.}$$ Obviously, this only works for model-based parametrisation schemes, since otherwise we do not have residuals and cannot evaluate the likelihood function. 
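As a toy numpy illustration of this recipe (our own example with a hypothetical two-parameter model $f(x)=Ax+Bx^2$ and Gaussian noise; it sketches grid-based marginalisation in general, not any scheme from this paper):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
A_true, B_true, sigma = 1.5, -0.8, 0.05
data = A_true * x + B_true * x**2 + sigma * rng.normal(size=x.size)

# evaluate chi^2 on an (A, B) grid
A = np.linspace(0.5, 2.5, 201)
B = np.linspace(-2.0, 0.5, 201)
AA, BB = np.meshgrid(A, B, indexing="ij")
model = AA[..., None] * x + BB[..., None] * x**2      # shape (201, 201, 50)
chi2 = (((data - model) / sigma) ** 2).sum(axis=-1)

# posterior with flat prior, normalised over the grid
post = np.exp(-0.5 * (chi2 - chi2.min()))
post /= post.sum()

# marginal distributions: sum out the other parameter
pA = post.sum(axis=1)    # prob(A | data)
pB = post.sum(axis=0)    # prob(B | data)
print(A[pA.argmax()], B[pB.argmax()])
```

The two sums are the discrete analogues of the marginalisation integrals above; in higher dimensions this brute-force grid becomes infeasible, which is why MCMC sampling is the practical route.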
In other words, even if we found a model-independent parametrisation scheme that accounts for all observables simultaneously, we would not know how to disentangle the estimates. In addition to reliability assessment, this is another strong argument in favour of model-based approaches. The marginalisation integrals of Eqs. (\[eq:marginalisation\_A\]) and (\[eq:marginalisation\_B\]) are usually very hard to evaluate, unless we use Markov-Chain Monte-Carlo [MCMC, e.g. @MacKay2008] methods. In case of MCMC methods, we get those marginalisations for free, without any further effort.

Impact of PSF on the concentration index\[sect:impact\_PSF\]
============================================================

In Sect. 3, we introduced the notion of intertwinement that may systematically influence morphological parameters. Another important origin of systematic effects is the point-spread function (PSF), as we illustrate in this section. The fact that parameters such as the concentration index may be influenced by the PSF is not new but has long been known. For instance, @Scarlata2007 find that the PSF has a significant effect for objects with half-light radii smaller than two FWHM of the HST ACS PSF and with high Sérsic index, while the effect is negligible for larger objects. In an attempt to overcome this bias, @Ferreras2009 applied a correction to the measured concentration parameter, based on the half-light radius. The aim of this section is to reassess the impact of the PSF on estimates of the concentration index.

Forward vs. backward PSF modelling
----------------------------------

In case of model-based parametrisation schemes it is standard practice to account for the PSF by forward modelling, i.e. to fit a convolved model to the convolved data. In case of parametrisation schemes that are not model-based this is impossible, and we have to resort to backward PSF modelling, i.e. we deconvolve the data before the actual parametrisation is done.
However, deconvolution in the presence of pixel noise is numerically unstable, so forward PSF modelling is to be favoured whenever possible. This is another practical disadvantage of parametrisation schemes that are not model-based: they need to perform either an unstable backward modelling, or they need to invoke yet another ad-hoc correction calibrated in simulations. Such simulation-based calibrations introduce a further assumption into the parametrisation process. Model-based schemes are much more rigorous in this respect, since they allow for a mathematically well-defined PSF treatment that does not introduce any further assumption.

Impact on concentration\[sect:impact\_PSF\_on\_C\]
--------------------------------------------------

In case of the ZEST, @Sargent2007 accounted for the PSF by forward modelling when estimating the Sérsic index, while @Scarlata2007 neglected the PSF when estimating the concentration index. The fact that the results shown in Fig. \[fig:C\_vs\_n\_S\_for\_ZEST\] are in agreement with theoretical predictions suggests that in the case of the COSMOS data the PSF can indeed be neglected for the concentration index. Therefore, the theoretical prediction supports the claim by @Scarlata2007. Nevertheless, this single example should not mislead us to generalise this conclusion. It is *not* guaranteed that the PSF will have no impact on the concentration index for data sets other than COSMOS that exhibit different signal-to-noise, PSF, and resolution. In order to test the impact of the PSF on the concentration index, we generate two-dimensional Sérsic profiles with $n_S=0.5,1,2,4$ and convolve these profiles with a Gaussian kernel of increasing FWHM.[^14] We expect that the concentration indices of very steep Sérsic profiles are severely underestimated, since the PSF washes out the sharp peak. For lower Sérsic indices this effect becomes smaller.
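This experiment is easy to reproduce in miniature (our own numpy sketch with arbitrary grid size, scale radius and PSF width; the paper's own simulation uses 1000$\times$1000 grids and $\beta=50$):

```python
import numpy as np

def concentration(img):
    """C = 5 log10(r80 / r20) with circular apertures around the image centre."""
    c = img.shape[0] // 2
    y, x = np.indices(img.shape)
    r = np.hypot(x - c, y - c).ravel()
    order = np.argsort(r)
    cum = np.cumsum(img.ravel()[order]) / img.sum()
    return 5.0 * np.log10(r[order][np.searchsorted(cum, 0.8)]
                          / r[order][np.searchsorted(cum, 0.2)])

def sersic(n_pix, n_s, r_e):
    b = 2.0 * n_s - 1.0 / 3.0            # b_n approximation used in the text
    c = n_pix // 2
    y, x = np.indices((n_pix, n_pix))
    r = np.hypot(x - c, y - c)
    return np.exp(-b * (r / r_e) ** (1.0 / n_s))

def convolve_gauss(img, fwhm):
    sigma = fwhm / 2.3548                # FWHM -> Gaussian sigma
    f = np.fft.fftfreq(img.shape[0])
    fx, fy = np.meshgrid(f, f, indexing="ij")
    otf = np.exp(-2.0 * np.pi**2 * sigma**2 * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(img) * otf).real

results = {}
for n_s in (0.5, 4.0):
    img = sersic(401, n_s, 15.0)
    results[n_s] = (concentration(img),
                    concentration(convolve_gauss(img, fwhm=12.0)))
print(results)
```

With these settings the $n_S=4$ concentration drops noticeably after convolution, whereas the $n_S=0.5$ profile is left essentially unchanged.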
For $n_S=0.5$ the concentration should not be affected at all, since the convolution of a Gaussian with a Gaussian yields a Gaussian, i.e. the steepness of the profile does not change. Figure \[fig:impactPSFonC\] confirms our expectation. If we ignore the PSF, we can significantly underestimate the concentration index. ![Impact of PSF on the misestimation $\hat C-C$ of the concentration index for different PSF sizes and Sérsic profiles. All Sérsic profiles are evaluated on a 1000$\times$1000 pixel grid using $\beta=50$ and $b_n=2 n_S - 1/3$. With increasing PSF size relative to the object size, the concentration index estimated from the convolved image is increasingly underestimated.[]{data-label="fig:impactPSFonC"}](impactPSFonC_rel){width="84mm"} We conclude from this test that although the PSF is indeed negligible in case of the ZEST, this cannot be generalised to other data sets. Consequently, a PSF treatment is always necessary, at least when using the concentration index. For ground-based telescopes in particular, the PSF is usually *not* small compared to the peak exhibited by highly concentrated objects.

Parametrisation & classification\[sect:para\_&\_classification\]
================================================================

We now discuss the parametrisation of galaxy morphologies in the context of classification. First, we show that if we do not account for all morphological observables simultaneously, the effects discussed in the previous sections can dilute discriminative information. Second, we show that all parametrisation schemes discussed here form nonlinear or even discontinuous parameter spaces. Third, we comment on the problem of high-dimensional parameter spaces.
Loss of discriminative information\[sect:loss\_discri\_info\]
-------------------------------------------------------------

The conclusion from our investigation of the intertwinement was: if a parametrisation scheme does not account for all morphological observables simultaneously, the results will be systematically altered, i.e. biased. How does this influence classification results? For a large sample of objects, the origins of these systematic effects have random strength. Consequently, we have to expect an increase in the scatter of the resulting parameters. The sample distributions of the parameters will be broadened due to the additional scatter, i.e. peaks in the distributions are reduced and troughs between different peaks are washed out. In other words, we are losing discriminative information. We now demonstrate this broadening of parameter distributions: we generate samples of two-dimensional Sérsic profiles with fixed Sérsic indices of $n_S=1,2,3,4$. We then add a random ellipticity and a random lopsidedness via the flexion transformation of Eq. (\[eq:flexion\_trafo\]). The flexion parameter $F_1$ is drawn from a uniform distribution on the interval $[-0.065,0.065]$. The ellipticity is drawn from the joint distribution of Sérsic indices and axis ratios of 2,000 COSMOS galaxies randomly drawn from the Zurich Structure & Morphology catalogue. We then sample the Sérsic profiles on a 1,000$\times$1,000 pixel grid using a scale radius of $\beta=50$. We convolve the resulting image with a Gaussian PSF of FWHM$=37.5$, chosen such that the effects of Fig. \[fig:impactPSFonC\] are present but moderate. There is no pixel noise in this simulation. From the pixellised image we then estimate the concentration with respect to the maximum position and the centre of light, since Sérsic index and concentration are two different estimators for the same morphological feature.
Concentration estimates also take elliptical isophotes into account, where the ellipticity is estimated via Eq. (\[eq:moments\_Q\]) with respect to the maximum position and the centre of light, respectively. Figure \[fig:broadened\_distribution\_C\] shows the results of this simulation. The distributions of concentration indices have a finite width, in contrast to the distributions of the Sérsic indices, which are infinitely thin $\delta$-peaks. Consequently, we are indeed losing discriminative information. In reality this loss may be even more severe, since the distribution of Sérsic indices has itself a finite width. Moreover, Fig. \[fig:broadened\_distribution\_C\] reveals that the loss of discriminative information is stronger for the concentration index evaluated at the centre of light. Especially for large Sérsic indices the peaks are lowered and broadened. This is a strong argument for evaluating the concentration at the maximum position (if it were accessible), since we conserve more discriminative information. In the presence of an untreated PSF, the parameter space is substantially biased. This has the advantage of reducing the width of the distributions, but it also shifts the different modes closer together. If the distribution of Sérsic indices had a finite width, this would wash out the troughs separating the peaks. ![Normalised sample distributions of concentration indices estimated with respect to (a) the maximum position of the unconvolved image, (b) the centre of light of the unconvolved image, and (c) the centre of light of the convolved images. The modes in the distributions correspond to samples of 10,000 profiles each with fixed Sérsic indices of exactly $n_S=1,2,3,4$ (from left to right). The finite widths of all modes in all distributions indicate the loss of discriminative information. This is particularly evident in panel (b), where the modes of very compact objects are substantially broadened.
All Sérsic profiles were evaluated on a 1000$\times$1000 pixel grid using a scale radius of $\beta=50$. The Gaussian convolution kernel for panel (c) was evaluated on the same pixel grid with FWHM$=37.5$.[]{data-label="fig:broadened_distribution_C"}](demo_loss_discri_info_C){width="84mm"} This simulation demonstrates that an incautious use of the concentration index (ignoring asymmetries and the PSF) can lead to a substantial loss of discriminative information. In practice, this loss causes sample distributions of the concentration index to be of low modality, despite the diversity of the galaxy population – a problem already mentioned by @Faber2007. Consequently, the concentration index can only provide a lower bound on the number of classes in a given data sample. If the sample distribution of the concentration is unimodal, this does *not* imply that all objects are of the same type. The loss of discriminative information implies that the mapping $\mathcal F_2^{-1}$ from Sect. \[sect:trinity\] does not always exist for the concentration index, i.e. drawing inference is a very difficult task.

Nonlinear & discontinuous parameter spaces
------------------------------------------

This section highlights an additional problem, which is independent of the previous considerations. It is based on the fact that all parametrisation schemes discussed here are nonlinear in the data. As a direct consequence, the resulting parameter spaces form nonlinear spaces, too. If the parameter space is nonlinear, the distance metric will be nonlinear, too. Although this fact may be known, it is typically ignored in practice. Usually, the Euclidean metric is employed whenever a distance-based algorithm is used, e.g., a principal components analysis [@Scarlata2007] or classification algorithms [e.g. @Gauci2010].
The crucial question is: does ignoring the nonlinearity and employing the Euclidean distance lead us to misestimate the true distances between galaxy morphologies in the parameter space? If so, galaxies will seem more similar or less similar than they actually are, and hence distance-based classification algorithms may face serious problems. There are only a few classification algorithms that do not rely on distances [e.g. @Fraix-Burnet2009].

### Nonlinearity\[sect:nonlinearity\_of\_schemes\]

Let us consider a parametrisation $P(I)$ of an image $I$. This parametrisation is said to be *linear* in the image data if $$P(\alpha\,I_A + \beta\,I_B) = \alpha\,P(I_A) + \beta\,P(I_B)$$ for any two images $I_A$ and $I_B$ and any real-valued $\alpha$ and $\beta$. Otherwise $P$ is nonlinear. We begin by considering CAS (Eqs. (\[eq:def:concentration\])–(\[eq:def:clumpiness\])). Apart from the obvious nonlinearities in $C$ due to the logarithm and the ratio of radii, the computation of the radii containing 20% and 80% of the total flux is itself highly nonlinear. The nonlinearities in $A$ and $S$ are caused by the fractions and absolute values in the numerators. Gini (Eq. (\[eq:def:gini\])) and $M_{20}$ (Eq. (\[eq:def:M20\])) are both nonlinear in the data, too. For both of them the major nonlinearity is hidden in the sorting of the pixel values. The Sérsic model given by Eq. (\[eq:def:Sersic\_model\]) contains the Sérsic index and the scale radius as nonlinear parameters. The nonlinearity of (spherically symmetric) shapelets is due to the scale radius $\beta$ and the centroid $\vec x_0$. Both enter the basis functions nonlinearly, as is evident from Eq. (\[eq:def:shapelets\]). The nonlinearity of shapelets has been investigated in detail by @Melchior2007, so we do not need to elaborate on it here. In case of sérsiclets, the Sérsic index is another nonlinear model parameter in addition to the scale radius.
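For the Gini coefficient, the nonlinearity is easy to verify numerically: $G$ is invariant under rescaling of the image, $G(\alpha I)=G(I)$, which no non-trivial linear functional can be, and mixtures violate additivity as well. A minimal numpy check (our own illustration; we use a common sorted-pixel form of the Gini coefficient, which we assume matches Eq. (\[eq:def:gini\])):

```python
import numpy as np

def gini(img):
    """Gini coefficient of the pixel values (common sorted-pixel form)."""
    f = np.sort(np.abs(img).ravel())
    n = f.size
    i = np.arange(1, n + 1)
    return ((2 * i - n - 1) * f).sum() / (f.mean() * n * (n - 1))

rng = np.random.default_rng(0)
I_a = rng.random((64, 64))                     # flat-ish random image
I_b = np.zeros((64, 64)); I_b[32, 32] = 1.0    # all flux in a single pixel

# scale invariance: G(2 I) = G(I), although linearity would demand 2 G(I)
print(gini(I_a), gini(2 * I_a))

# mixture: G(0.5 I_a + 0.5 I_b) differs from 0.5 G(I_a) + 0.5 G(I_b)
mix = 0.5 * I_a + 0.5 * I_b
print(gini(mix), 0.5 * gini(I_a) + 0.5 * gini(I_b))
```

The single-pixel image has $G=1$ by construction, yet adding it to the flat image barely raises the mixture's Gini coefficient, so the linear prediction fails badly; the same kind of check can be run for $C$, $A$ and $M_{20}$.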
### Demonstration of nonlinearity of $C$, $A$ & Gini

As emphasised above, CAS, Gini, $M_{20}$ and the Sérsic index are nonlinear in the data. The crucial question is: is the nonlinearity severe, or can we assume local flatness in the parameter space and use the Euclidean metric as an approximation? In order to answer this question, we now show a demonstration using three Sérsic profiles with different Sérsic indices and different flexion values, as shown in Fig. \[fig:galaxies\_ABC\]. There is no pixel noise in this simulation. We perform a linear transformation in the image space such that two images $I_A$ and $I_B$ linearly transform into each other, i.e. $$\label{eq:linear_transformation} I(\alpha) = (1-\alpha)I_A + \alpha I_B \;\textrm{,}$$ where $\alpha\in[0,1]$ parametrises this linear transformation. In reality, the superpositions of this linear transformation may not represent viable galaxy morphologies, e.g. $\alpha=0.5$ for $I_1\leftrightarrow I_3$. A proper trajectory should be a geodesic on the submanifold of viable morphologies. If this submanifold is linear, the trajectory defined by Eq. (\[eq:linear\_transformation\]) will pass through viable morphologies only. If it is nonlinear, it will add additional nonlinearity to this test. This means that even though Eq. (\[eq:linear\_transformation\]) passes through unrealistic morphologies in this setup, it provides a lower limit to the nonlinearity. For 100 equidistant values of $\alpha\in[0,1]$ we evaluate the mixed image $I(\alpha)$ in pixel space and then estimate the concentration and asymmetry with respect to the maximum position. We also estimate the Gini coefficient. ![Two-dimensional profiles with different asymmetries used for demonstration of nonlinearity. All objects are evaluated on a 1,000$\times$1,000 pixel grid with scale radius $\beta=50$. No intrinsic ellipticity was applied. All maximum positions are identical. Profile $I_1$ (top left) has flexion $G_1=0.1$ and $n_S=0.5$.
Profile $I_2$ (top right) has flexion $F_1=0.05$ and $n_S=1$. Profile $I_3$ (bottom) has flexion $G_1=-0.1$ and $n_S=4$.[]{data-label="fig:galaxies_ABC"}](nonlinearity_image_A "fig:"){width="3.5cm"} ![Two-dimensional profiles with different asymmetries used for demonstration of nonlinearity. All objects are evaluated on a 1,000$\times$1,000 pixel grid with scale radius $\beta=50$. No intrinsic ellipticity was applied. All maximum positions are identical. Profile $I_1$ (top left) has flexion $G_1=0.1$ and $n_S=0.5$. Profile $I_2$ (top right) has flexion $F_1=0.05$ and $n_S=1$. Profile $I_3$ (bottom) has flexion $G_1=-0.1$ and $n_S=4$.[]{data-label="fig:galaxies_ABC"}](nonlinearity_image_B "fig:"){width="3.5cm"} ![Two-dimensional profiles with different asymmetries used for demonstration of nonlinearity. All objects are evaluated on a 1,000$\times$1,000 pixel grid with scale radius $\beta=50$. No intrinsic ellipticity was applied. All maximum positions are identical. Profile $I_1$ (top left) has flexion $G_1=0.1$ and $n_S=0.5$. Profile $I_2$ (top right) has flexion $F_1=0.05$ and $n_S=1$. Profile $I_3$ (bottom) has flexion $G_1=-0.1$ and $n_S=4$.[]{data-label="fig:galaxies_ABC"}](nonlinearity_image_C "fig:"){width="3.5cm"} ![image](nonlinearity_C_vs_A){width="5.5cm"} ![image](nonlinearity_C_vs_G){width="5.5cm"} ![image](nonlinearity_A_vs_G){width="5.5cm"} Figure \[fig:trajectories\_CAG\] shows the trajectories in the subspaces of $C$, $A$ and Gini. Example objects $I_1$ and $I_2$ have very similar Sérsic indices and flexion parameters, hence their transition produces trajectories that are only moderately nonlinear. However, example object $I_3$ is very different from $I_1$ and $I_2$ and thus its transitions produce trajectories that exhibit substantial nonlinearities. Note that the nonlinearities in Fig. 
\[fig:trajectories\_CAG\] are primarily induced by the lopsidedness via the asymmetry parameter, as is evident from the centre panel where $A$ is not shown and virtually all nonlinearity is gone. We conclude from this simulation that for galaxy morphologies exhibiting realistic asymmetries the Euclidean distance is a very poor approximation to distances in parameter space. Consequently, any algorithm based on Euclidean distances would severely underestimate the true distances, i.e. objects would appear more similar than they actually are. This may be an explanation why the drop in the spectrum of eigenvalues of the principal components analysis of @Scarlata2007 – which justifies the reduction of dimensionality – is not very decisive. It may also partially account for the difficulty of recovering visual classifications using automated algorithms [see e.g. @Gauci2010]. This is no particular drawback of $C$, $A$ and Gini, but applies to all other parametrisation schemes discussed here. It is highly questionable whether a “calibration” of the Euclidean distance in order to account for the nonlinearity is possible. The reason for this is that, due to nonlinearity, the distance is an unknown function of the positions of both objects in parameter space, i.e. the distance depends on the morphology. One possible solution is to try to estimate the true distance via a linear transformation as given by Eq. (\[eq:linear\_transformation\]), although that is computationally very expensive. Another option is to employ a method called “diffusion distance” [@Richards2009] in order to estimate the true nonlinear distances. ### Discontinuity of spaces formed by $C$ and $M_{20}$\[sect:discontinuity\_C\_M20\] In Fig. \[fig:discontinuity\_C\_M20\] we investigate the behaviour of concentration and $M_{20}$ under a linear transformation between two Sérsic profiles. $C$ and $M_{20}$ exhibit substantial discontinuities due to pixellation effects. 
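This behaviour can be reproduced in a few lines. The sketch below is a simplified stand-in for the paper's setup (a circular Gaussian and an exponential profile instead of flexed Sérsic profiles, and circular apertures around the image centre): it traces $C = 5\log_{10}(R_{80}/R_{20})$ along the trajectory of Eq. (\[eq:linear\_transformation\]), and on a deliberately coarse grid the fact that $R_{20}$ and $R_{80}$ can only change in discrete steps shows up directly as plateaus and jumps in $C(\alpha)$:

```python
import numpy as np

def concentration(image):
    """C = 5 log10(R80/R20), with R20/R80 the radii of circular apertures
    (around the image centre) containing 20%/80% of the total flux.
    On a pixel grid these radii can only change in discrete steps."""
    y, x = np.indices(image.shape)
    cy, cx = image.shape[0] // 2, image.shape[1] // 2
    r = np.hypot(x - cx, y - cy).ravel()
    order = np.argsort(r)
    growth = np.cumsum(image.ravel()[order]) / image.sum()  # curve of growth
    r20 = r[order][np.searchsorted(growth, 0.2)]
    r80 = r[order][np.searchsorted(growth, 0.8)]
    return 5 * np.log10(r80 / r20)

y, x = np.indices((41, 41), dtype=float)   # deliberately coarse sampling
r2 = (x - 20)**2 + (y - 20)**2
I_A = np.exp(-r2 / (2 * 2.0**2))           # compact Gaussian profile
I_B = np.exp(-np.sqrt(r2) / 4.0)           # extended exponential profile

alphas = np.linspace(0.0, 1.0, 100)
C = np.array([concentration((1 - a) * I_A + a * I_B) for a in alphas])
# C(alpha) is piecewise constant: it can only jump when R20 or R80
# moves to the next available discrete radius value
print(np.unique(np.round(C, 3)))
```

The profile shapes and grid size are placeholder choices; refining the grid (or interpolating the image) shrinks the jumps, which is exactly the resolution dependence shown in Fig. \[fig:discontinuity\_C\_M20\].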
These effects increase for decreasing resolution (i.e. decreasing $\beta$ in Fig. \[fig:discontinuity\_C\_M20\]). ![Discontinuity of concentration (a) and $M_{20}$ (b). For poor sampling (small $\beta$), concentration index and $M_{20}$ exhibit substantial discontinuities. For better sampling (larger $\beta$) the discontinuities decrease. The transition was between two Sérsic profiles with $n_S=0.5$ and $n_S=2.0$ and no intrinsic ellipticity or lopsidedness. The scale radii were $\beta=5$ (blue lines) and $\beta=15$ (red lines), respectively. The profiles were evaluated on a 300$\times$300 pixel grid.[]{data-label="fig:discontinuity_C_M20"}](discontinuity_concentration_and_M20){width="84mm"} In the case of $C$, the discontinuities occur because the radii containing 20% and 80% of the total image flux can only change in discrete steps. With increasing resolution, the pixel size decreases and the discontinuities of $R_{20}$ and $R_{80}$ become smaller (cf. panels (a) and (b) in Fig. \[fig:discontinuity\_C\_M20\]). Hence, this is not a problem for well resolved galaxies as in Fig. \[fig:trajectories\_CAG\]. However, it is a problem for poorly sampled galaxies. In this case, we can overcome the problem by interpolating the pixellised image and integrating numerically. Unfortunately, this would drastically increase the computational effort. In fact, the discontinuity of the concentration index has already been observed by @Lotz2006. In the case of $M_{20}$, the origin of the discontinuity is the sum over the second-order moments in the numerator of Eq. (\[eq:def:M20\]), which stops as soon as 20% of the total flux is reached. This threshold is the problem, as it causes the set of pixels fulfilling this criterion to change abruptly during the linear transformation. Again, the discontinuities of $M_{20}$ decrease with increasing resolution.
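The threshold behaviour of $M_{20}$ can be reproduced in the same kind of toy setup (again our own illustration, using the standard definition: the second-order moments of the brightest pixels containing 20% of the flux, normalised by the total second-order moment, on a log scale). The hard 20% cut is visible directly in the code path:

```python
import numpy as np

def m20(image):
    """M20: log10 of the second-order moments of the brightest pixels that
    contain 20% of the total flux, normalised by the total moment. The hard
    20% threshold makes the contributing pixel set change abruptly."""
    y, x = np.indices(image.shape)
    f = image.ravel().astype(float)
    ftot = f.sum()
    cy = (y.ravel() * f).sum() / ftot          # flux-weighted centroid
    cx = (x.ravel() * f).sum() / ftot
    m = f * ((x.ravel() - cx)**2 + (y.ravel() - cy)**2)
    order = np.argsort(f)[::-1]                # brightest pixels first
    k = np.searchsorted(np.cumsum(f[order]), 0.2 * ftot) + 1
    return np.log10(m[order][:k].sum() / m.sum())

y, x = np.indices((41, 41), dtype=float)
r2 = (x - 20)**2 + (y - 20)**2
I_A = np.exp(-r2 / (2 * 2.0**2))               # compact Gaussian profile
I_B = np.exp(-np.sqrt(r2) / 4.0)               # extended exponential profile

alphas = np.linspace(0.0, 1.0, 100)
M = np.array([m20((1 - a) * I_A + a * I_B) for a in alphas])
# between set changes M varies smoothly with alpha; whenever a pixel enters
# or leaves the brightest-20% set, M can jump
print(M.min(), M.max())
```

As stated above, interpolation cannot cure this one: the brightest-pixel set is an inherently pixellised construction.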
However, for poorly sampled galaxies we cannot overcome these discontinuities by interpolation, since the definition of $M_{20}$ only makes sense for pixellised images. A parametrisation scheme forming discontinuous parameter spaces is problematic, because it is not guaranteed that objects with similar morphologies end up in neighbouring regions of the parameter space. This implies that distances in the space formed e.g. by $M_{20}$ do not necessarily correlate with the similarity of galaxy morphologies. *We need similar morphologies to have smaller distances than dissimilar morphologies*, but this is not guaranteed for $C$ and $M_{20}$ if the resolution is poor. Figure \[fig:discontinuity\_C\_M20\] suggests that such discontinuities become important when galaxies are smaller than 10 pixels in radius, maybe even earlier depending on the precise morphology. In this case, we cannot even rely on hard-cut classifications, and it is questionable whether meaningful classification based on distances is possible at all. High-dimensional parameter spaces --------------------------------- Concerning classification, the current paradigm appears to favour low-dimensional parameter spaces [e.g. @Scarlata2007] that simplify the analysis or even allow a visual representation. However, we have to keep in mind that a high-dimensional parameter space may be necessary in order to differentiate between different groups of galaxy morphologies. There is no physical reason to expect that a two- or even three-dimensional parameter space should be able to host such groups without washing out their differences. This solely depends on the complexity of the physics governing galaxy morphologies. In particular, basis-function expansions typically form parameter spaces of high dimensionality. For instance, the morphological parameter space used by @Kelly2005 had 455 dimensions.
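The practical difficulty with such high-dimensional spaces is the exponential thinning of data, which is easy to demonstrate numerically (a generic illustration, independent of any particular parametrisation scheme): the fraction of uniformly drawn points that lie within a fixed Euclidean distance of a reference point collapses as the number of dimensions grows.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000                      # sample size per dimensionality

fractions = []
for d in (2, 5, 10, 20, 50):
    pts = rng.random((n, d))   # uniform points in the unit hypercube [0,1]^d
    centre = np.full(d, 0.5)
    # fraction of points within Euclidean distance 0.5 of the centre,
    # i.e. inside the largest ball that fits into the hypercube
    frac = np.mean(np.linalg.norm(pts - centre, axis=1) <= 0.5)
    fractions.append(frac)
    print(f"d = {d:2d}: fraction within r = 0.5 of the centre: {frac:.4f}")
# the expected fraction is the ball-to-cube volume ratio,
# pi^(d/2) * 0.5^d / Gamma(d/2 + 1), which vanishes rapidly with d
```

At $d=2$ roughly 79% of the points fall inside the ball; by $d=10$ the fraction is already below one percent, so any fixed-radius neighbourhood in a several-hundred-dimensional space is essentially empty.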
Apart from problems with visualisation, we suffer from what is commonly called the [*curse of dimensionality*]{} [@Bellman1961]: The hypervolume of a (parameter) space grows exponentially with its number of dimensions.[^15] Consequently, the density of data points in this parameter space is suppressed exponentially. Therefore, it is impossible to reliably model a data distribution in a parameter space of several hundred dimensions, no matter how much data is available. Nevertheless, it is preferable to employ a parametrisation scheme that produces a high-dimensional parameter space. Loosely speaking, it is better to start with too much information than with too little. We can overcome the curse of dimensionality, if we compress the parameter space, i.e. if we reduce its number of dimensions by identifying and discarding unimportant or redundant information. For instance, @Kelly2004 [@Kelly2005] applied a principal component analysis in order to reduce the dimensionality of their parameter space.[^16] An alternative approach to overcome the curse of dimensionality is to employ a kernel approach by describing the data using a similarity measure. We demonstrated in @Andrae2010a that this yields excellent results, e.g., allowing us to classify 84 galaxies populating a 153-dimensional parameter space into three classes. Summary & conclusions\[sect:summary\] ===================================== In this paper we have described and compared two different approaches to the parametrisation of galaxy morphologies: First, model-independent schemes – CAS, Gini and $M_{20}$. Second, model-based schemes – Sérsic profiles and basis functions. Our most important result is that morphological features (steepness of light profile, ellipticity, asymmetry, substructures, etc.) are intertwined and (at least some) cannot be estimated independently without introducing potentially serious biases. 
This intertwinement stems from the violation of one or more assumptions invoked by the parametrisation schemes. We emphasise that combining separate estimates of individual observables does *not* overcome the intertwinement. For instance, combining an ellipticity estimate and the fit of a circular Sérsic profile does not give the same result as fitting an elliptical Sérsic profile. No parametrisation scheme discussed in this article accounts for all these observables simultaneously, i.e., their usage will inevitably cause problems when trying to parametrise large samples of galaxies that exhibit a huge variety of morphologies. In the context of classification of galaxy morphologies, which is an important application, we have the following results: - The intertwinement can wash out discriminative information in the context of classification. - All parametrisation schemes form nonlinear parameter spaces with a potentially highly nonlinear and unknown metric. Distance-based classification algorithms that employ the Euclidean distance measure therefore suffer from a loss of discriminative information. - For poorly resolved galaxies (object radius smaller than $\approx$10 pixels), concentration and $M_{20}$ form discontinuous parameter spaces that do not conserve neighbourhood relations of morphologies and may therefore fool classification algorithms. Due to the complexity of a nonlinear metric, it appears unlikely that calibrating results obtained from Euclidean distance is possible. As we cannot expect to find a parametrisation scheme that is linear in the data, a more promising approach is to estimate the nonlinear metric, e.g. via diffusion distances [@Richards2009], or to use a classification algorithm that is not distance-based. An example of such an algorithm can be found in @Fraix-Burnet2009.
Arguments in favour of model-based approaches --------------------------------------------- In this paper we also collected arguments in favour of model-based approaches: - A (compact) model defines the term “centroid”, i.e. whether we have to use the centre of light or the maximum position. - A model allows us to disentangle observables by marginalising the joint posterior distribution of all observables. - A model allows us to assess reliability by providing residuals. - A model allows forward PSF modelling, which is more stable than backward modelling in the presence of pixel noise. Each of these arguments by itself disfavours model-independent approaches. Therefore, we conclude that schemes such as CAS, Gini and $M_{20}$ are problematic for three reasons: 1. They try to measure morphological features independently ignoring their intertwinement (e.g. concentration does not account for asymmetry and vice versa). 2. They do not provide residuals, i.e. we can neither assess reliability (to sort out failures for *individual* objects) nor marginalise. 3. They do not allow forward PSF modelling, i.e. we may suffer from the instability of backward modelling, or, we need to introduce further assumptions via calibrations. Moreover, we have seen that robust implementations of CAS and $M_{20}$ are neither easy nor computationally fast, since we have to consider centroid misestimations and – in the case of the concentration index – interpolation. We conclude that model-based parametrisation schemes are clearly superior. They provide reliable parametrisation schemes in *all* regimes of signal-to-noise ratios and resolutions. For low signal-to-noise ratios and low resolution the Sérsic profile allows excellent parametrisations [e.g. @Sargent2007]. In the limit of high signal-to-noise ratios and high resolutions the method of shapelets is flexible enough to provide excellent model reconstructions [e.g. @Andrae2010a]. 
With the advent of sérsiclets there will be another set of basis functions that is designed to provide even better parametrisations than shapelets [@Andrae2010c]. Trade-offs ---------- Throughout this work we faced two important trade-offs when comparing different parametrisation schemes for arbitrary galaxy morphologies, namely 1. simplicity vs. reliability and 2. interpretation vs. flexibility. The first trade-off – simplicity vs. reliability – is obvious. When dealing with large data samples, we have to find a parametrisation scheme that is not too expensive from a computational point of view. Apart from computational aspects, we also favour simple solutions in general (Occam’s razor). However, we have to beware of *over*simplification, which inevitably leads to unreliable results. The borderline between reasonable simplification and oversimplification should be defined by the data only and *not* by the researcher. The second trade-off – interpretation vs. flexibility – is at the heart of this article. We have seen that parametrisation schemes that easily offer interpretation often lack flexibility (e.g. CAS), whereas other schemes (e.g. shapelets) excel in flexibility but lack interpretation. This is still an open issue and more work is needed on the interpretation of basis-function expansions. We should also add that there is actually *no* trade-off concerning computational feasibility. The parametrisation of samples of galaxies is trivial to parallelise, i.e. it can be done on numerous computers simultaneously. Recommendations and outlook --------------------------- We do *not* conclude that CAS, Gini and $M_{20}$ should not be used anymore. According to their assumptions as given in Sect. \[sect:assumptions\], these parametrisation schemes are highly specialised to certain morphologies and their usage should be safe if it is ensured that the sample of interest only contains galaxies of this special type.
However, this obvious lack of flexibility renders these approaches inappropriate for general samples. Our most important recommendations for using CAS, Gini and $M_{20}$ are as follows: - A PSF treatment is necessary at least in the case of the concentration index. - Beware of undersampling effects in the case of the concentration index and $M_{20}$. Discontinuities can appear for objects of up to 10 pixels in radius. - Beware of the centroid ambiguity: Even for galaxies with realistic asymmetries the centre of light and maximum position do not coincide. In the case of the concentration index, we recommend fitting for the centroid by maximising $C$, similar to the method of @Conselice2000b. Concerning the concentration index, we also recommend using it only in the regime of intermediate signal-to-noise ratios and resolutions. The reason is that its assumptions (Sect. \[sect:assumptions\]) are almost identical to the assumptions of a Sérsic profile. As a rule of thumb we can say that the concentration index is reliable whenever the Sérsic profile is a good description, and vice versa. Currently the most reliable parametrisation scheme is the two-dimensional Sérsic profile enhanced by ellipticity, since it accounts for the steepness of the light profile and for ellipticity. These are definitely the two most important morphological observables. In the presence of asymmetries we recommend defining the centroid by fitting for the maximum position of the profile rather than fixing it to the centre of light. However, the Sérsic profile does not account for asymmetry or substructures and is thus of limited usefulness for samples containing highly irregular galaxies and in the regime of high signal-to-noise ratios and high resolutions. Moreover, we have shown that the scale radius of the Sérsic profile is difficult to interpret. In particular we have argued that the scale radii of profiles of different Sérsic indices *cannot* be compared directly.
We also demonstrated that a redefinition of the Sérsic model may simplify the fitting procedure and provide more robust parameter estimates. Our main conclusion is: None of the existing parametrisation schemes is applicable to the task of parametrising arbitrary galaxy morphologies that occur in large samples, since they all have their drawbacks. Therefore, we need a new parametrisation scheme. Our recommendations for its design are as follows: 1. It should be model-based. 2. It should estimate all relevant morphological features simultaneously. 3. It should provide excellent model reconstructions of galaxies in the regime of high signal-to-noise ratios and high resolutions. 4. It should form a metric parameter space such that it is possible to estimate distances for classification purposes. One possible solution is to modify e.g. the Sérsic profile in order to account for asymmetries and substructures [Galfit 3, @Peng2010]. In our opinion basis functions are also promising candidates to describe arbitrary morphologies, since they are highly flexible. However, current sets of basis functions still lack direct physical interpretation. We are currently reinvestigating the method of sérsiclets, which appears to be the most promising approach given the considerations of this paper. acknowledgements {#acknowledgements .unnumbered} ================ RA thanks Eric Bell for discussions that initiated this work. RA also thanks Matthias Bartelmann, Thorsten Lisker, Aday Robaina Rapisarda, Mark Sargent, Paraskevi “Vivi” Tsalmantza, Glenn van de Ven, and Katherine Inskip for helpful comments on the content of this paper. Furthermore, we thank Claudia Scarlata for pointing out a mistake in an earlier version of this manuscript. RA is funded by a Klaus-Tschira scholarship. KJ is supported by the Emmy-Noether-programme of the DFG. PM is supported by the DFG Priority Programme 1177.
Shear and flexion transformation {#app:shear_flexion_trafo} ================================ We now briefly summarise the shear and flexion transformation we are using to simulate ellipticity and lopsidedness – the latter being a special kind of asymmetry. Given the complex ellipticity, $\epsilon = \epsilon_1 + i\,\epsilon_2$, with axis ratio $q=\frac{b}{a}=\frac{1-|\epsilon|}{1+|\epsilon|}$ and orientation angle $\theta=\frac{1}{2}\arctan(\frac{\epsilon_2}{\epsilon_1})$, the “sheared” coordinates, $(x_1^\prime,x_2^\prime)$, are given by $$\left(\begin{array}{c} x_1^\prime \\ x_2^\prime \end{array}\right) = \left(\begin{array}{cc} 1-\epsilon_1 & -\epsilon_2 \\ -\epsilon_2 & 1+\epsilon_1 \end{array}\right) \cdot \left(\begin{array}{c} x_1 \\ x_2 \end{array}\right) \,\textrm{.}$$ For given pixel coordinates $(x_1,x_2)$, we then evaluate the model at $(x_1^\prime,x_2^\prime)$. The flexion transformation [@Goldberg2005] is parametrised by the first flexion $$F = F_1 + i F_2$$ and the second flexion $$G = G_1 + i G_2 \;\textrm{.}$$ Given these parameters, we compute the derivatives of the gravitational shear $\boldsymbol\gamma=(\gamma_1,\gamma_2)$, $$\gamma_{1,1} = \frac{1}{2}(F_1 + G_1)$$ $$\gamma_{2,2} = \frac{1}{2}(F_1 - G_1)$$ $$\gamma_{1,2} = \frac{1}{2}(G_2 - F_2)$$ $$\gamma_{2,1} = \frac{1}{2}(G_2 + F_2) \;\textrm{.}$$ Based on these derivatives, we compute the two matrices $$D_{ij1} = \left(\begin{array}{cc} -2\gamma_{1,1} - \gamma_{2,2} & -\gamma_{2,1} \\ -\gamma_{2,1} & -\gamma_{2,2} \end{array}\right)$$ and $$D_{ij2} = \left(\begin{array}{cc} -\gamma_{2,1} & -\gamma_{2,2} \\ -\gamma_{2,2} & 2\gamma_{1,2} - \gamma_{2,1} \end{array}\right) \;\textrm{.}$$ Using these matrices, we do not evaluate a flexed Sérsic profile at position $\vec x=(x_1,x_2)$, but rather at position $$\label{eq:flexion_trafo} x_i^\prime = x_i + \frac{1}{2}D_{ijk}x_j x_k \;\textrm{.}$$ The scaling of the coordinates by the scale radius $\beta$ of the Sérsic profile is applied *prior* to this
flexion transformation. \[lastpage\] [^1]: E-mail: [email protected] [^2]: There are several variations of the concentration index: Sometimes it is based on the ratio of $r_{90}$ and $r_{50}$. Some authors [e.g. @Bershady2000] consider the whole image for estimating $C$, others [e.g. @Scarlata2007] estimate $C$ within a region given by one Petrosian radius. [^3]: The scale radius $\beta$ is expressed in units of pixels, i.e. $\beta^{-1}$ is the pixel size relative to the object size. [^4]: This is a mathematical and deeply implicit assumption that is generally not realised when working with actual galaxy data: The “radii” used to compute the concentration index are estimated from a curve of growth. This curve of growth is actually a two-dimensional *integral* over the galaxy’s light profile (though in practice it is usually reduced to a summation due to pixellation). Nevertheless, one inevitably has to parametrise this integration in some way in order to be *capable* of evaluating it (analytically or numerically). In simple words, one has to *define* what “radius” means (e.g. spherical or elliptical radius) and this definition is the assumption. For instance, assuming spherical integration contours, the curve-of-growth integral of an image $f(r,\varphi)$ reads $p(R)=\int_0^R dr\,r\int_0^{2\pi}d\varphi\,f(r,\varphi)$, where the integral has been parametrised in polar coordinates. In fact, Figure \[fig:impact\_eps\_on\_C\] can be understood as investigating what happens if the curve of growth indeed takes this spherical form but the image data is not spherically symmetric but elliptical. More physically, though somewhat beside the point: In the case of an image that has perfectly circular or elliptical symmetry, the azimuthal integration in $p(R)$ is well defined and so are the radii and the concentration index.
However, if there is more complicated azimuthal structure than ellipticity, there is no simple way to generally define the curve of growth. Either the integration is along true isophotes. In this case, the shape of the integration regions will vary from object to object and potentially also with radius. Then the resulting concentration indices would not be comparable. The other option is to integrate along given circular or elliptical isophotes to enforce comparability. This approach explicitly assumes that there is no azimuthal structure, or else the radius in $p(R)$ has no strict relation to the galaxy and the estimated curve of growth will be biased. The justification to use this in practice is to assume that in reality objects of similar type will incur similar biases, such that concentration indices still have discriminative power in a differential sense, though their absolute values may be biased. Furthermore, the mere presence of such a bias does not automatically imply that the resulting estimates of the curve of growth and the concentration index, respectively, are not meaningful anymore. [^5]: This assumption stems from the term $(\vec x_n-\vec x_0)^2$ in Eq. (\[eq:def:2nd\_moment\_of\_pixel\]). [^6]: However, it is *not* true in general that model-independent schemes invoke fewer assumptions than model-based approaches. As an exception to this “rule”, compare concentration index and shapelets. [^7]: To be more precise, it is perfectly valid to use such idealised simulations to discover these biases, but in order to correct for them more realistic simulations are necessary. [^8]: http://irsa.ipac.caltech.edu/data/COSMOS/datasets.html [^9]: Obviously, Sérsic profiles are rather idealised and by far not as realistic as the sample used by @Bershady2000. However, this does *not* hamper the validity of this test, but rather serves the purpose of isolating this bias. Apart from that, there is no difference between the two studies.
[^10]: The flexion transformation of Eq. (\[eq:flexion\_trafo\]) will produce a second solution of $\vec x^\prime=0$, which corresponds to multiple images in weak lensing. We only consider cutouts with just one image, where the other image resulting from the second solution to $\vec x^\prime=0$ is far away. [^11]: The steps in panel (c) are due to the computation of $A_\textrm{col}$, since $\vec x_\textrm{col}$ is changing as $F_1$ increases. Whenever $\vec x_\textrm{col}$ enters a new pixel, the set of pixels used for computing $A_\textrm{col}$ changes. There are also steps in $C_\textrm{col}$, but they are very small. [^12]: Note that this task is completely different from testing the reliability using simulations. Such simulations allow one to assess and calibrate a parametrisation scheme in general, but they do *not* help in detecting parametrisation failures for *individual* objects. [^13]: Note that reliability assessment and error estimation are two different things. Error estimation is possible for model-independent approaches, e.g. via bootstrapping. [^14]: We are aware that the COSMOS PSF is not a Gaussian. This test is meant to demonstrate the principle of this effect. [^15]: Consider a hypercube of edge length $L$ in $d$ dimensions. Its hypervolume $L^d$ grows exponentially with $d$. [^16]: Unfortunately, a principal component analysis (PCA) is a risky and often inappropriate tool in the context of classification. The reason is that PCA diagonalises the sample covariance matrix, i.e., it assumes that the whole data sample comes from a *single* Gaussian distribution. This assumption obviously jars with the goal of assigning objects to *different* classes.
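The coordinate mappings of Appendix \[app:shear\_flexion\_trafo\] translate directly into code. The sketch below is our own transcription of the appendix equations (with an exponential, i.e. $n_S=1$, profile and placeholder ellipticity and flexion values); the model is evaluated at the transformed coordinates, and a small first flexion $F_1$ is enough to break the left-right symmetry of the image:

```python
import numpy as np

def shear_coords(x1, x2, eps1, eps2):
    """Apply the shear matrix for complex ellipticity eps = eps1 + i*eps2."""
    return (1 - eps1) * x1 - eps2 * x2, -eps2 * x1 + (1 + eps1) * x2

def flexion_coords(x1, x2, F1, F2, G1, G2):
    """Flexed coordinates x'_i = x_i + 0.5 * D_ijk x_j x_k (Appendix A)."""
    g11 = 0.5 * (F1 + G1)      # derivatives of the gravitational shear
    g22 = 0.5 * (F1 - G1)
    g12 = 0.5 * (G2 - F2)
    g21 = 0.5 * (G2 + F2)
    D1 = np.array([[-2 * g11 - g22, -g21],
                   [-g21, -g22]])
    D2 = np.array([[-g21, -g22],
                   [-g22, 2 * g12 - g21]])
    # D1 and D2 are symmetric, so D_ijk x_j x_k expands to three terms each
    x1p = x1 + 0.5 * (D1[0, 0] * x1**2 + 2 * D1[0, 1] * x1 * x2 + D1[1, 1] * x2**2)
    x2p = x2 + 0.5 * (D2[0, 0] * x1**2 + 2 * D2[0, 1] * x1 * x2 + D2[1, 1] * x2**2)
    return x1p, x2p

beta = 50.0                                  # scale radius in pixels
yy, xx = np.indices((101, 101), dtype=float)
# scaling by beta is applied *prior* to the flexion transformation
x1, x2 = (xx - 50) / beta, (yy - 50) / beta
x1, x2 = shear_coords(x1, x2, eps1=0.2, eps2=0.0)
x1, x2 = flexion_coords(x1, x2, F1=0.05, F2=0.0, G1=0.0, G2=0.0)
img = np.exp(-np.hypot(x1, x2))              # n_S = 1 Sersic (exponential) profile
# lopsidedness: pixels mirrored about the centre no longer have equal flux
print(img[50, 60], img[50, 40])
```

With these small flexion values the second solution of $\vec x^\prime = 0$ (footnote 10) lies far outside the cutout, so only one image appears on the grid.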
--- author: - 'N. G. Kantharia, S. Ananthakrishnan, R. Nityananda, Ananda Hota[^1]' date: 'Received 27 October 2004 / Accepted 21 January 2005' title: 'GMRT Observations of the Group Holmberg 124: Evolution by Tidal Forces and Ram Pressure ?' --- Introduction ============ In an ongoing project of studying the radio emission from disk galaxies using the GMRT, the poor group of galaxies known as Ho 124 has been observed. Ho 124 consists of four galaxies: an inclined SBc galaxy (NGC 2820), a Markarian galaxy (Mrk 108), an I0 galaxy (NGC 2814), and an almost face-on Sc galaxy (NGC 2805). Since the first three galaxies lie within a few arc minutes of each other, we refer to them as the triplet in this paper. NGC 2805 lies about $8'$ to the south-west of the triplet. This group is an interesting multiple interacting system. It was the first system in which a radio continuum bridge was detected (van der Hulst & Hummel [@hulst]). Although HI and optical bridges had been detected long before, no radio continuum emission had been detected, leading to the belief that magnetic fields play little role in confining the bridges. Since then many other interacting galaxies have shown the presence of radio bridges, e.g. the Taffy galaxies (Condon et al. [@condon]). Bridges, tails and arcs have been detected from many other interacting systems. Following Toomre & Toomre ([@toomre]), two long tails are expected if two galaxies of comparable masses interact. A bridge extending from one galaxy to the other is expected if one galaxy is massive and the other has only a fraction of the mass of its massive partner. Many of the interacting systems occur in groups of a few galaxies, also known as poor groups. Moreover, although the gravitational interaction and ram pressure stripping of gas in members of clusters has been fairly well studied, less is known about these processes in groups. The intragroup medium (IGrM) densities are at least an order of magnitude lower than those of the intracluster medium (ICM).
Hence, processes like ram pressure stripping and galaxy harassment, which play an important role in cluster evolution, are not expected to be important in groups (Mulchaey [@mulchaey4]). The first X-ray detection of the IGrM was made only a decade ago by Mulchaey et al. ([@mulchaey1]). A more extensive X-ray survey of groups using ROSAT data was carried out by Mulchaey et al. ([@mulchaey2]), from which emerged the result that groups with at least one early-type galaxy have higher X-ray luminosities than groups with only late-type galaxies. Mulchaey et al. ([@mulchaey2]) gave some possible reasons, including that the IGrM of groups with only late-type members has either lower temperatures or lower densities. In this paper, we present radio continuum observations at 240, 325, 610 and 1280 MHz and HI 21 cm observations using the GMRT of one such group, Ho 124, which consists of only late-type galaxies. No X-ray emission was detected from this group by Mulchaey et al. ([@mulchaey2], [@mulchaey]). We show that IGrM densities in Ho 124 that are consistent with this upper limit could still be sufficient to determine the evolution of the members via ram pressure stripping. Additionally, we report the detection of a tidal bridge connecting the triplet in radio continuum at 325 MHz and a marginal detection at 240 MHz and 610 MHz. We also detect HI 21 cm emission from the bridge and a large one-sided HI loop to the north of NGC 2820. Bosma et al. ([@bosma]) have studied this group in radio continuum, 21cm HI and in the optical band, whereas Artamonov et al. ([@artamonov]) have studied the group using UBV photometry. Optical properties of the group members can be found in Table 1 of Bosma et al. ([@bosma]) and in Table 1 of Artamonov et al. ([@artamonov]). The plan of the paper is as follows. In section 2, we discuss the observations, data analysis and results.
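The competition invoked here between ram pressure and the disk's gravitational restoring force is usually quantified by the classical Gunn & Gott (1972) condition, $\rho_{\rm IGM} v^2 > 2\pi G \Sigma_\star \Sigma_{\rm gas}$. The criterion itself is standard, but the numbers in the sketch below are placeholders of plausible order for a spiral in a poor group, not values taken from this paper:

```python
import math

# Gunn & Gott (1972) condition: ram pressure rho_igm * v^2 versus the
# gravitational restoring force per unit area, 2*pi*G * Sigma_star * Sigma_gas.
G = 6.674e-8                 # gravitational constant, cm^3 g^-1 s^-2 (cgs)
M_P = 1.673e-24              # proton mass in g

def ram_pressure(n_igm_cm3, v_kms):
    """Ram pressure in dyn cm^-2 for IGrM particle density n and velocity v."""
    return n_igm_cm3 * M_P * (v_kms * 1e5)**2

def restoring_pressure(sigma_star, sigma_gas):
    """Restoring force per unit area (dyn cm^-2); surface densities in g cm^-2."""
    return 2 * math.pi * G * sigma_star * sigma_gas

# placeholder numbers: 1 M_sun/pc^2 converted to g cm^-2, then typical
# inner-disk surface densities and a modest group IGrM
msun_pc2 = 1.989e33 / (3.086e18)**2
p_ram = ram_pressure(n_igm_cm3=1e-4, v_kms=200.0)
p_rest = restoring_pressure(50 * msun_pc2, 5 * msun_pc2)
print(f"ram pressure: {p_ram:.2e}  restoring: {p_rest:.2e}  "
      f"stripping possible: {p_ram > p_rest}")
```

With these illustrative numbers the inner disk resists stripping, while the much lower surface densities in the outskirts of a galaxy would tip the balance; the paper's own argument is that even densities below the X-ray upper limit can satisfy the condition there.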
In section 3, we discuss the various morphological features in the group which we believe are due to the tidal interaction and in section 4, we discuss various possible scenarios for the origin of the HI loop in NGC 2820. In section 5, we present a discussion of our results and in section 6 we present a summary. Bosma et al. ([@bosma]) have used a distance of 24 Mpc to the group based on the mean heliocentric radial velocity of 1670 kms$^{-1}$ and a Hubble constant of 75 kms$^{-1}$ Mpc$^{-1}$. At this distance, $1'$ corresponds to 7 kpc. We use the Bosma et al. ([@bosma]) values in the paper. Observations, Data analysis and Results =======================================

| Parameter | 1280 MHz | 610 MHz | 330 MHz | 240 MHz | HI |
|---|---|---|---|---|---|
| Date of observation | 16/7/2002 | 6/9/2002 | 19/8/2002 | 6/9/2002 | 28/10/2003 |
| On-source telescope time | 4 hrs | 2.5 hrs | 3.3 hrs | 2.5 hrs | 5 hrs |
| Effective bandwidth | 9.3 MHz | 4 MHz | 9.3 MHz | 4 MHz | 64 kHz |
| Phase calibrator | 0834+555 | 0834+555 | 0834+555 | 0834+555 | 0834+555 |
| Flux density of ph cal | $8.4\pm 0.13$ Jy | $8.05 \pm 0.15$ Jy | $9.36\pm 0.25$ Jy | $9.1 \pm 0.3$ Jy | $7.01 \pm 0.25$ Jy |
| Synthesized beam$^1$ | $19.9''\times 14.1''$ & $6.5''\times 4.7''$ | $21.9''\times 13''$ | $19.9''\times 14.1''$ | $32.9''\times 14''$ | $16.2''\times 15.2''$ |
| PA | $49^\circ.1$ & $46^\circ.2$ | $69^\circ.9$ | $49^\circ.1$ | $82^\circ.1$ | $25^\circ.8$ |
| Continuum/line rms | 0.09 & 0.08 mJy/beam | 0.4 mJy/beam | 1 mJy/beam | 1.9 mJy/beam | 0.2 mJy/beam |

\[tab1\] $^1$ this is the effective beamwidth of the images used in the paper and in most cases is larger than the best achievable. Radio Continuum --------------- The multi-frequency radio observations at 240, 330, 610 and 1280 MHz were conducted using the GMRT (Swarup et al. [@swarup], Ananthakrishnan & Rao [@ananth]), which consists of 30 antennas of 45 m diameter each, scattered over a 25 km region.
The observational details are listed in the first four columns of Table \[tab1\]. These observations followed the sequence of interspersing 20-minute on-source scans with 5-minute scans on the phase calibrator. The bandpass-cum-amplitude calibrator (3C 147) was observed at the beginning and at the end for half an hour each. The data were imported as FITS files into the NRAO AIPS software for further analysis. The general procedure followed at all bands included editing out corrupted data, gain calibration using one spectral channel, bandpass calibration and channel averaging to obtain the continuum database. These were then imaged and CLEANed to obtain the final image. Wide-field imaging was used at 610, 330 and 240 MHz. We divided the primary beam into 9 facets at 610 MHz, and into 25 facets at 330 MHz and 240 MHz. The data were also self-calibrated in three dimensions. We used a uv taper of $12 k\lambda$ and a uv range of $15 k\lambda$ with robust weighting (ROBUST=0). Natural weighting did not seem to improve the image quality at these three frequencies, but degraded the beam and hence was not used. The 330 MHz and 240 MHz images are dynamic-range limited. We have obtained a dynamic range of 2500 at 330 MHz. At 1280 MHz, a maximum uv baseline of 60 $k\lambda$ was used with natural weighting. We expect the flux density errors at all frequencies to be less than 10%. All the images have been corrected for the gain of the primary beam. The low-resolution image at 330 MHz, clearly showing the bridge, and the high-resolution image at 1280 MHz, showing fine structure in the galaxies, are shown in Fig \[fig1\]. The images at 240 and 610 MHz look fairly similar to the 330 MHz map and hence are not presented. We have used maps of similar resolution at 610 MHz and 330 MHz for estimating the spectral index. We have detected the triplet in radio continuum at all the observed frequencies. Additionally, a bridge connecting the triplet is also detected at 330 MHz.
This bridge was first reported by van der Hulst and Hummel ([@hulst]) at 1465 MHz. We have marginal detections of the bridge at 610 and 240 MHz. We have verified that, although some short spacings are missing at 1280 MHz, this does not completely resolve out the bridge. However, along with the low brightness sensitivity, this made it difficult to detect the bridge with the present data. Faint radio emission is detected at 330 MHz from NGC 2805 (see Fig \[fig5\] (b)). This emission bears little resemblance to the optical emission (see Fig \[fig5\] (b)). The radio centre of NGC 2820, an almost edge-on galaxy with inclination $\sim 84^{\circ}$ (Hummel & van der Hulst [@hummel]), coincides with the optical centre within $5''$ (see Fig \[fig1\] (a)). The flux densities of the galaxies at different frequencies and the galaxy-integrated spectral index between 330 and 610 MHz are listed in Table \[tab2\].

  --- ------------ ------------ --------- --------- ----------------------
      Galaxy       1280$^1$     610       330       $\alpha^{330}_{610}$
                   (mJy)        (mJy)     (mJy)
  --- ------------ ------------ --------- --------- ----------------------
  1   NGC 2820+    35           $116$     $227$     $-1.06$
      Mrk 108                   (4)       (4)
  2   NGC 2820                  19.9      27        $-0.5$
      peak                      (0.7)     (1)
  3   NGC 2814     6.7          $19$      $42$      $-1.25$
                   (1.6)        $(4)$     (6)
  4   NGC 2814                  5.2       7.5       $-0.6$
      peak                      (0.7)     (1)
  --- ------------ ------------ --------- --------- ----------------------

  : Radio flux densities of the triplet

\[tab2\]

$^1$ the flux density of NGC 2820 at 1280 MHz that we find is lower than the values quoted by others. Hummel & van der Hulst ([@hummel]) estimate a flux density of $60\pm5$ mJy at 1.465 GHz with the VLA, whereas Bosma et al. ([@bosma]) have estimated a flux density of $48\pm5$ mJy at 21cm using the WSRT. Condon et al. ([@condon90]) find a flux density of 64.2 mJy at 1.49 GHz. Since the flux density we estimate is lower, we do not quote the image errors, which are comparatively insignificant.\
Halo emission is detected around NGC 2820 at all the observed frequencies and is prominent at 330 MHz (see Fig \[fig1\] (a)), our most sensitive low-frequency band.
In their study of radio emission from six edge-on galaxies, Hummel & van der Hulst ([@hummel]) found that NGC 2820 had the radio halo with the largest z-extent (about 3.4 kpc at 10% of the peak height), which they attributed to gravitational interaction with its companions. We find that the z-extent of the radio halo at the 10% peak flux level is 4.2 kpc. The half-power thickness of the halo is 2.2 kpc, which is double the typical value. We estimated the spectral index between 330 and 610 MHz (from maps of similar angular resolution) at a few positions in the halo and find it to be $-1.5$. NGC 2814 shows halo emission which is tilted with respect to the stellar disk traced by the DSS optical image (see Fig \[fig1\](a)). We find that the global spectral index of the galaxy is $-1.25$, whereas that of the radio peak is $-0.6$ (see Table \[tab2\]). It is difficult to separate the halo emission from the disk emission for this relatively small galaxy. The radio power of NGC 2820 is $1.2\times10^{22}$ W Hz$^{-1}$ and that of NGC 2814 is $2.6\times 10^{21}$ W Hz$^{-1}$ (estimated at 330 MHz). We also estimated the q factor, which gives the ratio of FIR flux density to the radio continuum flux density at 1.4 GHz, for NGC 2820 following Helou et al. ([@helou]). We find that q $= 2.02$ for NGC 2820. This value is consistent with the quoted value for spiral galaxies of q $= 2.3$ with an rms scatter of 0.2 (Condon [@condon1]). Thus, NGC 2820 follows the FIR-radio correlation. 21 cm HI -------- The details of the 21cm HI observations are listed in Table \[tab1\]. These data were initially analysed in a way similar to the continuum data, except that no self-calibration was applied. The continuum emission was subtracted from the line data and the spectral channels were imaged to generate a cube. We used a uv taper of $12 k\lambda$ and uv range of $15 k\lambda$ with robust weighting (ROBUST = 0) to obtain the final cube.
The beamwidth is $16.2''\times 15.2''$ with a PA$=25.7^{\circ}$, which greatly improves on the arcmin resolution of Bosma et al. ([@bosma]). We detected HI from all members of the group. The channel maps showing HI line emission detected at different velocities for the triplet are shown in Fig \[fig2\]. The column density map of the group (estimated assuming HI is optically thin along the line of sight) is shown in Fig \[fig3\]. A zoomed-in HI column density map for the triplet is shown in Fig \[fig4\] (a). The first and second moment maps of HI for the line emission are shown in Figs \[fig4\] (b) and (c). The channel maps (see Fig \[fig2\]) clearly show the rotation in the disk of NGC 2820. The HI disk extends more to the northeast than to the southwest. The velocity of the gas in NGC 2820 varies from $\sim 1710$ kms$^{-1}$ in the northeast to $\sim 1445$ kms$^{-1}$ in the southwest (see Fig \[fig4\] (b)). Since the systemic velocity of the galaxy is 1577 kms$^{-1}$, this corresponds to velocity differences of 133 kms$^{-1}$ in the northeast and 134 kms$^{-1}$ in the southwest. The rotation speed of the gas is fairly symmetric about the centre, unlike what Bosma et al. ([@bosma]) found. This is probably because the velocity of the HI gas in Mrk 108, which we can clearly distinguish in our maps (see Fig \[fig2\], last two panels) due to our higher angular resolution, is around 1410 kms$^{-1}$ and was presumably included within the Bosma et al. ([@bosma]) beam. It is of interest to note the nature of the isovelocity contours. In Fig \[fig4\] (b), it is seen that the contours in the northeast are different from those in the southwest and appear to be kinematically disturbed. It appears that the HI gas in the northeast has been affected by the interaction more than that in the southwest. A ring of HI surrounds the optical center (see Fig \[fig4\] (a)).
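The column densities quoted in this paper follow from the standard optically thin 21 cm relation, $N_{HI} = 1.823\times10^{18} \int T_B\, dv$. A minimal sketch (the brightness temperature and line width below are illustrative choices, not measured values from our maps):

```python
# Optically thin HI column density: N_HI = 1.823e18 * sum(T_B * dv)
# with N_HI in cm^-2, T_B in K and dv in km/s.

def hi_column_density(t_b_kelvin, delta_v_kms):
    """Column density (cm^-2) for a line of mean brightness temperature
    t_b_kelvin over a velocity width delta_v_kms, assuming the gas is
    optically thin along the line of sight."""
    return 1.823e18 * t_b_kelvin * delta_v_kms

# Illustrative numbers: a 2 K line spread over 12 km/s gives
# ~4.4e19 cm^-2, comparable to the mean bridge column density.
n_hi = hi_column_density(2.0, 12.0)
```

In practice the integral is evaluated channel by channel over the moment-zero map; the single-number form above is the per-pixel version of that sum.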
HI condensation is also seen near Mrk 108 and it is likely associated with a star forming region seen in the H$\alpha$ image (see Fig \[fig7\] (c)), possibly triggered by the interaction. HI is detected from the bridge except for a small region. The mean HI column density in the bridge is $\le 4.4 \times 10^{19}$ atoms cm$^{-2}$. HI emission shows interesting extraplanar features in NGC 2820. A symmetric HI loop is observed to the north of the galaxy - the channel maps (Fig \[fig2\]) between 1633 kms$^{-1}$ and 1525 kms$^{-1}$ clearly show the presence of extraplanar features. The loop opens at the top giving it an appearance of an outflow and then seems to turn back as if the HI gas is falling back towards the disk. The loop has enormous dimensions with a width parallel to the galactic disk of about 17.5 kpc ($\sim2.5'$) and a height of about 4.9 kpc ($\sim 0.7'$). No detectable radio continuum emission is associated with the HI loop. Moreover there are a couple of other protrusions visible to the east of the large HI loop. Interestingly, no HI filaments or protrusions are observed arising from the southern side of NGC 2820 and the HI shows a smooth boundary. Similarly, we find that the HI disk and radio continuum disk of NGC 2814 are sharply truncated in the north of the galaxy whereas the optical emission extends beyond. The HI disk of NGC 2814, like the radio continuum disk, is inclined to the optical disk. A high velocity streamer is seen emerging, almost perpendicularly, from the south of NGC 2814. This streamer is fairly long, extending to the northeast. The heliocentric velocity of NGC 2814 is 1707 kms$^{-1}$ whereas the streamer has a line of sight velocity range of $\sim 1452$ kms$^{-1}$ to $1510$ kms$^{-1}$. The velocity difference between the streamer and NGC 2814 is more than 200 kms$^{-1}$. The streamer velocity is more in tune with the velocity field seen in the southern parts of NGC 2820. 
  Parameter                                   NGC 2820      Mrk 108   NGC 2814   NGC 2805   streamer   HI 'blobs'   HI Loop
  ------------------------------------------- ------------- --------- ---------- ---------- ---------- ------------ ---------
  Heliocentric velocity ($km\,s^{-1}$)        1577          1417      1707       1745       1493       1725         1566
  Half-power width ($km\,s^{-1}$)             350           52        136        99         72         42           137
  Rotation velocity ($km\,s^{-1}$)            175           -         -          -          -          -            -
  Inclination ($^{\circ}$)                    74            -         -          20$^1$     -          -            -
  Position angle ($^{\circ}$)                 66            -         -          -          -          -            -
  Dynamical centre (J2000)                    09h21m45.6s   -         -          -          -          -            -
                                              64d15m31s     -         -          -          -          -            -
  Linear size ($kpc$)                         47.6          3         10.4       60         12.7       3.5          17.5
  HI mass $M_{HI}$ ($10^9 M_\odot$)           6.6           0.061     0.34       $5.3^2$    0.13       0.11         0.6
  Dynamical mass $M_{dyn}$ ($10^9 M_\odot$)   170           0.94      22         584        -          1.4          -
  $M_{HI}/M_{dyn}$ (%)                        3.9           6.5       1.5        -          -          7.9          -

$^1$ From Bosma et al. ([@bosma]). This value is used in estimating the HI mass and dynamical mass.\
$^2$ This estimate is much less than the value of $12\times10^9$ $M_\odot$ of Bosma et al. ([@bosma]), likely because the galaxy is close to the half-power points of the GMRT primary beam.\
\[tab3\] The HI distribution of NGC 2805 (see Fig \[fig5\] (a)) is asymmetric, with larger column densities and higher radial velocities in the northern regions as compared to the southern parts. Since this galaxy was close to the half-power point of the GMRT primary beam in our 21cm HI image, we cross-checked the observed morphology with the Bosma et al. ([@bosma]) images. We find that the two maps correlate well and that the paucity of HI in the southern parts is real and not an artifact of the primary beam cutoff. However, we are insensitive to the large-scale HI emission seen from the face-on disk by Bosma et al. ([@bosma]). Global HI line profiles were obtained for all the galaxies and HI features. A Gaussian function was fitted to the observed profiles (except for NGC 2820, which shows a classical double-humped HI profile with a sharp fall-off) and the resultant parameters were used to derive various physical quantities (Table \[tab3\]).
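The masses in Table \[tab3\] follow from standard relations: $M_{HI} = 2.356\times10^5\, D^2 \int S\, dv$ (with $D$ in Mpc and the flux integral in Jy kms$^{-1}$) and $M_{dyn} = r v^2/G$. A sketch; the flux integral below is an illustrative value chosen to reproduce the tabulated NGC 2820 HI mass, not a measurement quoted in the text:

```python
def hi_mass_msun(d_mpc, flux_integral_jy_kms):
    """M_HI = 2.356e5 * D^2 * integral(S dv), in solar masses
    (D in Mpc, flux integral in Jy km/s)."""
    return 2.356e5 * d_mpc**2 * flux_integral_jy_kms

def dynamical_mass_msun(radius_kpc, v_rot_kms):
    """M_dyn = r * v^2 / G, converted from SI to solar masses."""
    G = 6.674e-11           # m^3 kg^-1 s^-2
    KPC = 3.086e19          # m
    MSUN = 1.989e30         # kg
    return radius_kpc * KPC * (v_rot_kms * 1e3)**2 / G / MSUN

# NGC 2820: radius = half the 47.6 kpc linear size, v_rot = 175 km/s,
# giving ~1.7e11 Msun, i.e. the 170 x 10^9 Msun entry in Table 3.
m_dyn = dynamical_mass_msun(47.6 / 2, 175.0)

# An illustrative flux integral of ~49 Jy km/s at D = 24 Mpc
# reproduces the tabulated ~6.6e9 Msun for NGC 2820.
m_hi = hi_mass_msun(24.0, 48.6)
```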
The systemic velocity, rotation velocity, inclination and dynamical centre of NGC 2820 are results from running the task GAL in AIPS on the velocity field of the galaxy. We model the observed data with a Brandt curve purely as a fit; it reproduces the solid-body rotation in the central regions fairly well. We obtain a rotation velocity of 175 kms$^{-1}$ for NGC 2820. The rotation curve is shown in Fig \[fig8\]. The HI mass was calculated from the velocity-integrated line strength, whereas the dynamical mass was estimated using $rv^2/G$. We find that the rotation curve fit gives an inclination of $74^{\circ}$ and position angle of $66^{\circ}$ for NGC 2820. The optical heliocentric velocity of NGC 2820 is 1577 kms$^{-1}$, which is in good agreement with the value of $1574\pm10$ kms$^{-1}$ quoted by Bosma et al. ([@bosma]). About 4% of the total mass of NGC 2820 is in HI, whereas about 6.5% of the mass of Mrk 108 appears to be in the form of HI (Table \[tab3\]). For the HI blobs, we find that about 8% of their dynamical mass is seen in the form of HI. We find that NGC 2805 is massive, with a total dynamical mass of $5.8\times10^{11}$ M$_\odot$. The position-velocity (PV) curves plotted along and parallel to the major axis of NGC 2820 are shown in Fig \[fig9\]. Fig \[fig9\](a) shows that the gas in the central 5.8 kpc of NGC 2820 exhibits solid-body rotation. Some asymmetry is visible between 1600 and 1650 kms$^{-1}$. The HI blob, NGC 2814 and the streamer are shown in the PV diagram (see Fig \[fig9\] (b)) of a slice parallel to the major axis of the galaxy. The streamer is seen to be kinematically independent of NGC 2814. Tidal Effects {#tidal} ============= NGC 2820 appears to have had a retrograde interaction with NGC 2814, and the two galaxies presently have a relative radial velocity of 130 kms$^{-1}$. Various morphological signatures which are likely due to the tidal interaction between NGC 2820, Mrk 108 and NGC 2814 are seen in our HI moment zero and radio continuum maps.
The dominant signatures in HI are the streamer apparently emerging from NGC 2814 (but showing a different velocity field), the inclined disk of NGC 2814, the bridge between NGC 2820 and NGC 2814 and the detection of HI blobs to the north-east of NGC 2820. Star formation seems to have been triggered by the tidal interaction in the disk of NGC 2820 close to Mrk 108, in Mrk 108 itself, and in a small tail of NGC 2814, as can be identified on the H$\alpha$ image of the triplet (see Fig \[fig7\] (c)). Moreover, Artamonov et al. ([@artamonov]), from their UBV photometric observations, report enhanced star formation in Ho 124 due to tidal interaction. The tidal features which are readily discernible in the 330 MHz map are the steep-spectrum radio bridge, the tilted radio disk of NGC 2814 and a radio tail issuing from NGC 2814 and extending southwards. However, the tidal origin of HI features like the HI loop arising from the northern side of NGC 2820 and the small HI protrusions is not clear. We discuss the origin of the loop in the next section. In this section, we briefly elaborate on some of the clear signatures of tidal interaction discernible in our images. Tidal Bridge ------------- van der Hulst & Hummel ([@hulst]) were the first to detect a radio continuum bridge connecting the triplet in Ho 124, at 1465 MHz. Since then, radio bridges have been detected in many other systems; a famous example is the Taffy galaxies (Condon et al. [@condon]), in which half of the total radio synchrotron emission of the system arises in the bridge. As shown in Fig \[fig1\](a), we have detected the bridge connecting the triplet at 330 MHz. Using the 1465 MHz result of van der Hulst and Hummel ([@hulst]) along with our data at 330 MHz, we estimate a spectral index of $-1.8^{+0.3}_{-0.2}$ for the bridge. This spectral index is much steeper than the value of $-0.8$ quoted by van der Hulst and Hummel ([@hulst]), which might possibly have been contaminated by disk emission.
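The two-point spectral index quoted above follows the convention $S \propto \nu^{\alpha}$. A minimal sketch; the bridge flux densities below are hypothetical values chosen only to illustrate the method (they are not the measured bridge fluxes), but with these numbers the slope comes out near the $-1.8$ quoted in the text:

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, with S proportional to nu**alpha.
    Flux densities s1, s2 in the same units; frequencies nu1, nu2 likewise."""
    return math.log(s1 / s2) / math.log(nu1 / nu2)

# Hypothetical bridge flux densities (mJy) at 330 and 1465 MHz:
alpha = spectral_index(29.0, 330.0, 2.0, 1465.0)   # ~ -1.8
```

The quoted asymmetric errors ($^{+0.3}_{-0.2}$) then follow from propagating the flux density uncertainties through this logarithmic ratio.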
At 610 MHz and 240 MHz, we report marginal detections of the bridge, and the brightness of the bridge is consistent with the estimated spectral index. We estimated the size of the bridge from our 330 MHz image. The projected linear extent of the bridge is 5.4 kpc. Our estimate of the projected length of the bridge is less than what van der Hulst & Hummel ([@hulst]) estimated. This is likely because we have not included the source west of Mrk 108, which we believe is part of the disk of NGC 2820 and not the bridge. The width of the bridge in the sky plane is 2.1 kpc. We assumed a similar extent for the bridge along the line of sight. Using equipartition and minimum energy arguments, we estimated a magnetic field of $\sim 3.4 \mu$G and a minimum energy density of $1.1 \times 10^{-12}$ ergs cm$^{-3}$. The minimum energy in the bridge is $7.7\times 10^{53}$ ergs. The magnetic pressure ($P/k$) of the bridge is about 2600 K cm$^{-3}$. We have assumed that there is 100 times more energy in protons than in electrons for the above calculations. A few possible scenarios for confining the bridge were described in Ananthakrishnan et al. ([@ananth1]). We detected HI in the bridge except for a small region (see Fig \[fig4\] (a)). This HI is moving at a line-of-sight velocity of about 1545 kms$^{-1}$, which is different from the velocity of 1445 kms$^{-1}$ of the HI in the disk of NGC 2820 nearest to the bridge (see Fig \[fig4\] (b)). However, the velocity of HI in the bridge is closer to the systemic velocity of NGC 2820. If the gas in the bridge is moving with a velocity of 100 kms$^{-1}$ (i.e. $1545-1445$ kms$^{-1}$) with respect to the gas in NGC 2820, then the kinematic age of the bridge is 46 million years. If the gas is moving with a velocity larger than 100 kms$^{-1}$, the kinematic age will be lower. The mean column density in the bridge is $\le 4.4 \times 10^{19}$ cm$^{-2}$. Using a line-of-sight depth of 2.1 kpc for the bridge, we find that the atomic density in the bridge is $< 0.006$ cm$^{-3}$.
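The bridge density quoted above is a simple column-density-over-depth conversion, and the thermal pressure discussed next is just $P/k = nT$. A sketch (numbers as in the text; the exact rounding of the density in the text differs at the ten-percent level):

```python
KPC_CM = 3.086e21  # cm per kpc

def atomic_density(n_h_cm2, depth_kpc):
    """Mean volume density (cm^-3) from a column density (cm^-2)
    and an assumed line-of-sight depth (kpc)."""
    return n_h_cm2 / (depth_kpc * KPC_CM)

def thermal_pressure(n_cm3, t_kelvin):
    """Thermal pressure expressed as P/k = n * T, in K cm^-3."""
    return n_cm3 * t_kelvin

# Column density upper limit 4.4e19 cm^-2 over a 2.1 kpc depth:
n = atomic_density(4.4e19, 2.1)        # ~0.007 cm^-3, of order the quoted limit
p_th = thermal_pressure(0.006, 5000.0) # 30 K cm^-3 for T = 5000 K
```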
If we assume a kinetic temperature of 5000 K, the thermal pressure of the gas in the bridge would be only 30 K cm$^{-3}$, which is much less than the magnetic pressure of the bridge. We obtain a result similar to van der Hulst & Hummel ([@hulst]) in that the magnetic field, and the relativistic particles moving in it, seem to dominate the bridge energetics. It appears likely that the bridge is confined by an ordered magnetic field. No H$\alpha$ emission is detected from the bridge (Gil de Paz et al. [@gil]) (see Fig \[fig7\] (c)), indicating that star formation has not been triggered in the bridge. This is not surprising, since the column density of neutral atomic matter in the bridge is fairly low. Little molecular gas is therefore likely to be present in the bridge. Tidal Effects on NGC 2814 ------------------------- The disk of NGC 2814 has obviously been affected by tidal forces in its encounter with NGC 2820. The optical disk is aligned almost north-south, whereas the radio continuum and HI disks are inclined towards the bridge, clearly showing that they have been affected by tidal forces (see Fig \[fig1\](a) and Fig \[fig7\](a)). Also note the 'comma'-shaped HI disk and H$\alpha$ emission of NGC 2814. The H$\alpha$ image of NGC 2814 (see Fig \[fig7\] (c)) shows enhanced star formation in a small tail, which is likely triggered by the tidal interaction. Star formation triggered by the tidal interaction is also observed in and close to Mrk 108. A tail is observed in the radio continuum issuing from NGC 2814 and extending to its south (see Fig \[fig1\] (a)). The spectral index of this tail is $\sim -1.6$, and the tail is likely a result of the tidal interaction. Tidal streamer --------------- An HI streamer is observed to arise from the southern end of NGC 2814 and extend towards the north-east (see Fig \[fig4\] (a)), but it is kinematically distinct from the galaxy. The different velocities (a difference of about 200 kms$^{-1}$) support a projection effect.
A velocity gradient of about 5.3 kms$^{-1}$ kpc$^{-1}$ is observed along the streamer. We note that the velocity field seen in the streamer matches the velocities seen on the closer side of NGC 2820, and, intriguingly, the shape of the tail matches the outer edge of NGC 2820. We believe that the streamer is HI gas which has been stripped off NGC 2820 during the tidal interaction. Since we do not see an extra radial velocity that the streamer might have picked up during the tidal encounter, it might be in motion in the sky plane. If we assume that the streamer was dislocated from NGC 2820 and picked up a velocity of $100$ kms$^{-1}$ due to the tidal interaction, then it would have taken about 40 million years for the streamer to reach its current position. The average column density in the streamer is $4.4 \times 10^{19}$ cm$^{-2}$ and the mass is $\sim 9.1 \times 10^7 M_{\odot}$. The length of the streamer in the sky plane is about 12.6 kpc. A tidal dwarf galaxy? ---------------------- The HI blobs (see Fig \[fig6\] (a)) detected to the north-east of NGC 2820 contain about $10^8 M_{\odot}$ of HI. One possibility to explain these blobs is that they form a tidal dwarf galaxy. It would be interesting to obtain a deep H$\alpha$ image of this region and check this possibility. The spectrum integrated over the blobs is shown in Fig \[fig6\](b). No rotation is discernible in the blobs. We do not find any optical counterparts to the blobs in the DSS images. The HI velocity field of the blobs is a continuation of the velocity field seen in the north-east tip of NGC 2820, probably indicating their origin. The blobs are located about 11.5 kpc away from the north-east tip of NGC 2820. The HI Loop =========== A large one-sided HI loop is detected to the north of NGC 2820. The loop extends out to about 4.9 kpc along the rotation axis of the galaxy and has a lateral dimension of about 17.5 kpc. No counterpart is detected to the south of the galaxy.
Moreover, we do not detect any radio continuum from the loop, and no H$\alpha$ emission is seen to be associated with the loop. In this section, we examine three possible scenarios for the origin of the HI loop, namely a) starburst-driven superwind, b) ram-pressure-stripped HI and c) tidally stripped HI. We look for an origin which can explain the observed constraints: 1) the absence of H$\alpha$ and radio continuum in the loop, 2) the one-sided nature of the loop, 3) the symmetry of the loop about the rotation axis. Study of the velocity field of the HI loop gives the following inputs to the above scenarios: 1) The HI loop is trailing the disk rotation. The radial velocity at the top of the loop is close to the systemic velocity of the galaxy (see Fig \[fig4\] (b)). 2) The line width increases along the loop and is largest at the top of the loop (see Fig \[fig4\] (c)). 3) The global HI profile and the HI profile of the loop are centred on the systemic velocity of the galaxy and show no tail-like feature, indicating that no gas is moving along the line of sight. Starburst driven superwind -------------------------- Gas driven out of the disk by an underlying starburst, extending to tens of kpc along the minor axis, is known as a superwind. Superwind cones are commonly observed in X-rays, radio continuum and H$\alpha$, but infrequently in HI. In the case of NGC 253, HI has been observed to be confined within the optical disk and to outline the superwind cone of ionized gas in one half of the galaxy (Boomsma et al. [@boomsma]). Significant amounts of HI have also been observed in the halos of spiral galaxies with active star formation (Fraternali et al. [@fraternali]). In the case of NGC 2820, the HI halo is not observed to be significantly larger than the optical disk (see Fig \[fig7\](a)). Nevertheless, we examine here the possibility that the large HI loop we observe in NGC 2820 is a superwind, and estimate its energetics.
The mass of the HI loop is $5.5 \times 10^{8} M_\odot$ and the full width at zero intensity of the HI line is 160 kms$^{-1}$. Assuming the HI to be flowing along the surface of a superwind bi-cone of half angle $45^{\circ}$, the deprojected outflow velocity is 113 kms$^{-1}$. The kinetic energy contained in the outflowing HI is $7 \times 10^{55}$ ergs and the dynamical age of the outflow is 34.7 million years. We compare this with the energy contained in the supernovae in NGC 2820 to examine the feasibility of this scenario. We follow Condon ([@condon1]) and use the non-thermal radio continuum emission from the central parts of NGC 2820 to estimate a supernova rate of $\sim 0.007$ per year. This rate is fairly low compared to typical superwind galaxies, e.g. 0.1 supernovae per year in NGC 1482 and M82, and 0.1 to 0.3 in NGC 253. If the kinetic energy imparted to the interstellar medium by a single supernova explosion is $10^{51}$ ergs, then the kinetic energy available during the dynamical age of the outflow is $3.4 \times 10^{56}$ ergs. However, note that the starburst would also drive other phases of the interstellar medium into the halo, and hence only about $1-10\%$ of the estimated kinetic energy will be seen in HI. In this case, the energy available in the supernovae is at best comparable to the kinetic energy of the HI loop. The available energy would be a factor of a few higher than the above if we include the energy due to the stellar winds from massive young stars; this was estimated from the total FIR luminosity of NGC 2820 following the method of Heckman et al. ([@heckman]). Even if the energetics just about satisfy the starburst-driven origin of the loop, we note several reasons below why we do not favour this scenario: (a) No H$\alpha$-emitting gas is found along the HI loop, unlike in superwind galaxies. (b) NGC 2820 is not classified as a starburst galaxy and does not show any obvious signatures of being one.
(c) No HI loop of such large dimensions appears to have been observed in any superwind galaxy. (d) The HI loop is one-sided. A starburst energetic enough to drive such a large loop in one direction should be energetic enough to drive one in the other direction also. Ram pressure stripping ---------------------- In this subsection, we examine the scenario in which the observed HI loop is HI stripped from NGC 2820 by the ram pressure (Gunn & Gott [@gunn]) exerted by the IGrM. The reason we examine this possibility is the presence of a few interesting features in the group members. Firstly, the southern edge of NGC 2820 shows a sharp cutoff in HI, whereas the radio continuum extends beyond this cutoff (see Fig \[fig7\] (b)). Secondly, the northern part of NGC 2814 shows a sharp cutoff both in radio continuum and HI, whereas the optical disk does not show any such effect. Since ram pressure is expected to distort the various disk components differently (Davis et al. [@davis]) and the sharp edges can be caused by material swept back by ram pressure, we are inclined to interpret the above features as a signature of the ram pressure of the IGrM. Thus, it appears that ram pressure could have played an important role in the evolution of the group. Hence, we examine the case of the HI loop seen to the north of NGC 2820 being a result of ram pressure stripping. We compare the pressure due to the IGrM as the galaxy moves through it with the gravitational restoring pressure of the disk of NGC 2820. The relevant condition given by Gunn & Gott ([@gunn]) is $\rho\, v^2\, \ge\, 2\pi\, G\, \Sigma_*\, \Sigma_{gas}$, where $\rho$ is the IGrM mass density, $v$ is the velocity dispersion of the group, $G$ is the gravitational constant, $\Sigma_*$ is the surface mass density of stars and $\Sigma_{gas}$ is the surface mass density of gas. The line-of-sight velocity dispersion (Osmond & Ponman [@osmond]) of this group is $162\pm73$ kms$^{-1}$.
No X-ray emission has been detected from this group, and the upper limit on the bolometric X-ray luminosity is $2.88 \times 10^{40}$ erg s$^{-1}$ (Mulchaey et al. [@mulchaey]). If we assume that the temperature of the IGrM in this late-type group is $2\times10^6$ K (Mulchaey et al. [@mulchaey3]) and that the medium is distributed over a sphere of radius 50 kpc, then we arrive at upper limits on the electron density of $8.8\times10^{-4}$ cm$^{-3}$ and on the mass of $1.15\times10^{10}$ M$_\odot$. We assume an IGrM density of $4\times10^{-4}$ cm$^{-3}$ (well within the upper limit) and, using $v^2=3\sigma^2$ (Sarazin [@sarazin]), we calculate a ram pressure of $\sim 6\times10^{-14}$ kg m$^{-1}$ s$^{-2}$. We find an average stellar mass density of 133 $M_{\odot}$pc$^{-2}$ in the central 10 kpc region of NGC 2820, using the inclination-corrected total B-band magnitude and the average $\gamma_B$ (mass-to-light ratio) factor given by Binney and Merrifield ([@binney]). For $N_H = 4.4 \times 10^{19}$ cm$^{-2}$, the surface mass density of HI is 0.32 $M_{\odot}$pc$^{-2}$. These give a gravitational pressure of the disk of $\sim 8\times10^{-14}$ kg m$^{-1}$ s$^{-2}$. This is comparable to the ram pressure acting on the system. If we instead estimate the gravitational pressure of the disk in the outer regions using the stellar mass density at the 25 mag arcsec$^{-2}$ diameter (15.25 M$_\odot$pc$^{-2}$), which is close to where the loop is seen to emerge from the disk, then we find that the ram pressure exceeds the gravitational pressure of the disk by a factor of a few. However, this does not take into account the influence of dark matter in the outer regions. Mulchaey et al. ([@mulchaey2], [@mulchaey]) have concluded from their X-ray observations of many poor groups that the X-ray luminosity of the IGrM of groups with only late-type galaxies is lower than when the group has at least one early-type member. They infer that the late-type-only groups might have a lower temperature or a lower density.
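The two sides of the Gunn & Gott criterion above can be sketched as follows. This is a back-of-the-envelope version: the mean particle mass is taken as the proton mass, whereas the text's exact assumptions (e.g. the mean molecular weight) may differ slightly, so the ram pressure lands near rather than exactly on the quoted $6\times10^{-14}$ kg m$^{-1}$ s$^{-2}$:

```python
import math

G = 6.674e-11                         # m^3 kg^-1 s^-2
M_P = 1.67e-27                        # kg; taken here as the mean particle mass
MSUN_PC2 = 1.989e30 / 3.086e16**2     # kg m^-2 per (Msun pc^-2)

def ram_pressure(n_cm3, sigma_kms):
    """rho * v^2 with rho = n * m_p and v^2 = 3 * sigma^2 (SI units)."""
    rho = n_cm3 * 1e6 * M_P              # kg m^-3
    v2 = 3.0 * (sigma_kms * 1e3) ** 2    # m^2 s^-2
    return rho * v2

def restoring_pressure(sigma_star, sigma_gas):
    """2 pi G Sigma_* Sigma_gas; inputs in Msun pc^-2, output in SI."""
    return 2 * math.pi * G * (sigma_star * MSUN_PC2) * (sigma_gas * MSUN_PC2)

# n = 4e-4 cm^-3, sigma = 162 km/s  ->  ~5-6e-14 kg m^-1 s^-2
p_ram = ram_pressure(4e-4, 162.0)
# Sigma_* = 133 Msun/pc^2, Sigma_gas = 0.32 Msun/pc^2  ->  ~8e-14 kg m^-1 s^-2
p_grav = restoring_pressure(133.0, 0.32)
```

Repeating the second call with $\Sigma_* = 15.25$ M$_\odot$pc$^{-2}$ (the value at the 25 mag arcsec$^{-2}$ diameter) reproduces the statement that the ram pressure exceeds the restoring pressure by a factor of a few in the outer disk.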
Our results seem to indicate that the IGrM should have densities that are almost sufficient to morphologically influence NGC 2820. Thus, the density is unlikely to be so low ($\ll 4\times10^{-4}$ cm$^{-3}$) as to be dynamically inconsequential. Assuming ram pressure has played a role in giving rise to the loop, we outline two possible ways in which the loop could have formed. #### Model 1: NGC 2820 has been classified as a barred galaxy (Bosma et al. [@bosma]). Many barred galaxies show a concentration of HI at the edge of the bar, and the loop-like structure could be due to the neutral gas from this region being stripped off in its interaction with the IGrM. In this scenario, the stripped HI originates well within the disk, just below the loop. The rotation of the disk causes a twist in the flow. Since HI in this region is likely to be strongly bound to the disk, this model requires extensive help from tidal effects in reducing the surface density of the neutral gas. The kinematic features can be explained as follows: the trailing velocity field could be due to a combination of the vertical motion of gas stripped from different regions in the disk and twisting due to galactic rotation. The wider lines at the top of the loop could be due to the gas acquiring higher random motions. The central hole in the loop is expected, since ram pressure cannot strip the high-density neutral gas in the central regions. #### Model 2: The alternative scenario is that HI has been stripped from the edges of the outermost spiral arms - one towards us and the other away from us. Since NGC 2820 is an almost edge-on galaxy, this model is not distinguishable from the above. This model can also explain the observed kinematic features of the loop. The trailing velocity field would be due to the rotation velocity of the gas in the outer spiral arms, which might not lie at the tangent points for those lines of sight. The large line widths would again result from increased random motion as the stripped gas meets the IGrM.
However, given sufficient time, the central hole in HI should get filled up in this model. With the present data, we cannot distinguish between the two models. We estimate a velocity width of about 80 kms$^{-1}$ along the loop. This radial velocity is possibly dominated by rotation, since NGC 2820 is a highly inclined galaxy. However, if we assume this to be the outflow velocity, then for a loop height of 4.9 kpc this translates to an age of 60 million years. Notice that the HI, after reaching a certain height, seems to be falling back towards the disk (see Fig \[fig4\] (a)). Vollmer et al. ([@vollmer]), in their simulations investigating the role of ram pressure stripping in the Virgo cluster, found that ram pressure can cause a temporary increase in the central gas surface density and, in some cases, even lead to a significant fraction of the stripped-off atomic gas falling back onto the disk. Such a process could be active in the case of NGC 2820. Tidal stripping --------------- Interaction usually creates irregular morphology and tidal tails that can stretch the spiral arms. The parent galaxy and the tidal tails can assume a variety of shapes after the interaction (for example, see simulations by Toomre & Toomre [@toomre], Barnes [@barnes], Howard et al. [@howard]). The members of Ho 124, especially the triplet, have undergone close encounters, as is evident from the numerous tidal features seen in the system and described in section \[tidal\]. However, it is not clear how tidal interaction alone can give rise to the HI loop as observed in NGC 2820. If one assumes that all the HI features seen to the north of the disk of NGC 2820, i.e. the warped extension, the small protrusions, the loop and the streamer, are parts of a tidal tail, it is difficult to explain the origin of such a large distortion (stripped HI mass $>7\times10^8$ M$_\odot$) through a retrograde interaction with a galaxy (NGC 2814) whose HI content is less than that in the HI loop.
Moreover, no tail is visible in the opposite direction, which such a strong tidal interaction should have produced. Thus, it appears unlikely that the loop is a result of tidal interaction alone. On the other hand, the evolution of the system has obviously been affected by tidal interactions: e.g. Artamonov et al. [@artamonov] note enhanced star formation due to the tidal interaction. Hence any model that explains the HI loop should include tidal interaction. However, it is beyond the scope of this paper to make quantitative estimates of the tidal interaction, which would require detailed simulations. [*We suggest that the loop could have been created by the combined effect of ram pressure and tidal forces acting on NGC 2820.*]{} Discussion ========== In this paper, we have described four main results arising from our observations: (i) the steep radio continuum bridge between the triplet, (ii) the sharp cutoffs in different galactic constituents observed in three members of the group, (iii) the one-sided HI loop in NGC 2820 and (iv) the various signatures left behind by the tidal interaction. HI is detected from the bridge with a mean column density of $4.4\times10^{19}$ cm$^{-2}$. The bridge has a steep synchrotron spectrum with a spectral index of $-1.8^{+0.3}_{-0.2}$ and hence has large energy losses caused by synchrotron and/or inverse Compton processes or a steep electron spectrum. From equipartition arguments, we find that relativistic particles and the magnetic field dominate the bridge evolution. It contains a small fraction of the total synchrotron emission in the system, and it is interesting to contrast it with the Taffy galaxies, in which the bridge emission constitutes half of the total synchrotron emission in the system (Condon et al. [@condon]). A sharp cutoff in HI (see Fig \[fig3\] & Fig \[fig5\] (a)), radio continuum (see Fig \[fig1\] (a)) or optical blue band (see Fig \[fig5\]) is clearly evident in three members of Ho 124. 
The fourth member (Mrk 108) is small and too tidally disrupted to show such a cutoff. The above is schematically summarized in Fig \[fig10\]. We suggest that the sharp boundaries are caused by the motion of the galaxies in the IGrM. In the case of NGC 2814, the optical disk appears to be viewed edge-on with zero position angle, and the compression in radio continuum and HI is seen perpendicular to the major axis of the disk, in the north in the sky plane. Moreover, the radio continuum and HI are confined to well within the optical disk. In NGC 2820, the HI on the southern side is sharply truncated. The interaction between the triplet appears to have left behind a trail of tidal debris such as the streamer, the HI blobs and the radio continuum tail of NGC 2814. The HI loop could be a result of both tidal effects and ram pressure. If the tidal effects have reduced the surface density of HI in the disk of NGC 2820, then the ram pressure of the IGrM should have been able to strip the outlying HI, giving rise to the HI loop. The solid arrows near NGC 2820 and NGC 2814 in Fig \[fig10\] indicate the direction of motion of those galaxies as deduced in the sky plane from the sharp truncations. However, the sharp cutoff in HI in the north and the enhanced star formation in the south of NGC 2805 give conflicting indications of its direction of motion in the IGrM. If we interpret the star formation ridge to be due to the interaction with the IGrM, then NGC 2805 appears to be moving towards the south-west. If this is true then the three galaxies seem to be moving in different directions. However, if we interpret the sharp HI boundary in the north of NGC 2805 as due to ram pressure, then the galaxy is moving towards the north. In short, we cannot comment on the direction of motion of NGC 2805 from the existing observations. 
We compared the ram pressure and the equivalent pressure in the different phases of the interstellar medium in NGC 2820 to examine its effect on the different constituents of the medium, which show distinct morphologies, and hence understand the observational picture sketched in Fig \[fig10\]. Assuming a typical particle density of $4\times10^{-4}$ cm$^{-3}$ for the IGrM, we estimated a ram pressure of 4170 cm$^{-3}$K for the IGrM. We estimate a magnetic pressure of $\sim 5000$ cm$^{-3}$K in the radio continuum halo. The two pressures are comparable. For the H$\alpha$ seen in the extended-diffuse ionized gas (e-DIG), Miller & Veilleux ([@miller]) estimate an emission measure of about 8 pc cm$^{-6}$ and a size of 2 kpc. If we assume a filling factor of 0.1 (see Fig 10 in Miller & Veilleux ([@miller])) for the e-DIG, then for a temperature of $10^4$ K the pressure in the gas is 2000 cm$^{-3}$K, and the factor-of-two difference from the ram pressure could easily be due to the filling factor we assumed. The pressure in the e-DIG is thus also comparable to the ram pressure. Lastly, we estimate the pressure in the HI gas. Since the sharp edge in HI is unresolved by our beam, we used a column density of $3.2\times10^{20}$ cm$^{-2}$ (the second contour) to estimate an atomic density of $0.02$ cm$^{-3}$ for a volume filling factor of 0.4 (taken from our Galaxy for the warm neutral medium). If we use a temperature of 5000 K, which is typical of the warm neutral medium in our Galaxy, then the pressure is 100 cm$^{-3}$K. The HI pressure is significantly lower than the ram pressure due to the IGrM. Thus we can understand the observed picture, where HI has a swept-back appearance in the south of NGC 2820, which could be due to ram pressure, whereas the radio continuum and H$\alpha$ seem to extend out beyond the HI and are either expanding or in pressure equilibrium with the IGrM. For this, the galaxy should be moving towards the south-west, as shown by the solid arrow in Fig \[fig10\]. 
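These order-of-magnitude estimates are simple to reproduce. The sketch below (Python, cgs units) uses the IGrM density and HI density/temperature quoted above; the galaxy velocity ($\sim$300 kms$^{-1}$) and halo magnetic field ($\sim$4 $\mu$G) are our own illustrative assumptions, chosen only to roughly reproduce the quoted ram and magnetic pressures, and the last function checks the $\sim$60 Myr loop age quoted earlier.

```python
# Pressure comparison for NGC 2820 (cgs units; pressures quoted as P/k_B in cm^-3 K).
# The IGrM density and the HI density/temperature are from the text; the
# velocity (~300 km/s) and magnetic field (~4 muG) are illustrative assumptions.
import math

K_B = 1.3807e-16   # Boltzmann constant [erg/K]
M_H = 1.6726e-24   # hydrogen atom mass [g]
KPC = 3.0857e21    # kiloparsec [cm]
YR = 3.156e7       # year [s]

def ram_pressure(n_cm3, v_kms):
    """Ram pressure rho * v^2, returned as P/k_B in cm^-3 K."""
    return n_cm3 * M_H * (v_kms * 1e5) ** 2 / K_B

def thermal_pressure(n_cm3, T_K):
    """Thermal pressure n * T in cm^-3 K."""
    return n_cm3 * T_K

def magnetic_pressure(B_gauss):
    """Magnetic pressure B^2 / (8 pi), returned as P/k_B in cm^-3 K."""
    return B_gauss ** 2 / (8.0 * math.pi * K_B)

def outflow_age_yr(height_kpc, v_kms):
    """Time to reach a given height at a constant outflow speed, in years."""
    return height_kpc * KPC / (v_kms * 1e5) / YR

p_ram = ram_pressure(4e-4, 300.0)      # ~4.4e3, close to the quoted 4170 cm^-3 K
p_hi = thermal_pressure(0.02, 5000.0)  # 100 cm^-3 K, as quoted
p_mag = magnetic_pressure(4.2e-6)      # ~5e3 cm^-3 K for the assumed ~4 muG field
age = outflow_age_yr(4.9, 80.0)        # ~6e7 yr, matching the quoted 60 Myr
```

The HI thermal pressure indeed falls short of the ram pressure by a factor of $\sim$40, while the magnetic and e-DIG pressures are within a factor of about two of it, consistent with the picture sketched above.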
The HI distribution in NGC 2805 is asymmetric with enhanced column densities in clumps towards the north (see Fig \[fig5\] (a)). The HI is extended east-west along the highly-disturbed northern optical spiral arm and is confined to the optical disk. The radio continuum emission in NGC 2805 is fairly weak (see Fig \[fig5\] (b)), with localised peaks in the south and northeast, and bearing little resemblance either to the optical or to the HI distribution. The central region of the galaxy is bright in the JHK near-IR bands and in the optical band but faint in radio continuum (Fig \[fig5\] (b)) which is intriguing. In short, this massive galaxy appears to be highly disturbed, the reason for which is not clear from existing observations. Thus, we end with a picture (somewhat speculative) of this group as derived from the present and other existing observations as shown in Fig \[fig10\]. NGC 2820 has probably undergone a retrograde tidal encounter with NGC 2814 which has left behind a trail of tidal debris. A HI streamer probably detached from NGC 2820 is seen projected onto NGC 2814 (see Fig \[fig10\]) but is kinematically distinct from NGC 2814. The interaction has also probably given rise to a bridge connecting the two galaxies and a tail of radio continuum emission in the south of NGC 2814. Star formation has been triggered in south-west parts of the disk of NGC 2820, in Mrk 108 and in the southern parts of NGC 2814 (see Fig \[fig7\] (c)). Using about half the upper limit on the electron density estimated from the upper limit on the X-ray emission (Mulchaey et al. [@mulchaey]), we estimate the ram pressure force of the IGrM to be comparable to the gravitational pull of the disk of NGC 2820. Since tidal interaction has obviously influenced the group, we suggest that the loop could have formed by ram pressure stripping if tidal effects had reduced the surface density of HI in NGC 2820. 
We suggest that the HI loop, which is several kpc high and across, could have been produced by the combined effects of ram pressure and tidal forces. Moreover, we find sharp truncations in the HI in some of the group members, which we believe supports the ram pressure explanation. If we assume that this is true, then NGC 2814 is probably moving towards the north and NGC 2820 towards the south-east (solid arrows in Fig \[fig10\]). From the existing observations there is ambiguity in the direction of motion of NGC 2805. Considering all this, we suggest that the group evolution is being influenced by both tidal forces due to the mutual interactions and ram pressure due to the motion of the galaxies in an IGrM. Since the IGrM is not detected in X-rays (Mulchaey et al. [@mulchaey]) but, we believe, shows a detectable effect on the galaxies in Ho 124, we suggest that the IGrM densities in this group should not be too low. Mulchaey et al. ([@mulchaey2],[@mulchaey]) have suggested that the non-detection of X-rays in late-type groups could be due to lower temperatures or densities. The detection of a 0.2 keV IGrM (Wang & McCray [@wang]) in the Local Group, which is a late-type group, indicates that such groups do have an IGrM. It is more difficult to detect lower temperature gas in groups for several reasons, such as the enhanced absorption of such soft X-rays by Galactic HI and the increased strength of the X-ray background. One clearly needs to explore other avenues for detecting this gas. Summary ======= - We detect the faint radio continuum bridge at 330 MHz connecting NGC 2820+Mrk 108 with NGC 2814, which was first detected by van der Hulst & Hummel ([@hulst]) at 1465 MHz. The bridge has a spectral index of $-1.8^{+0.3}_{-0.2}$, which is steeper than the $-0.8$ quoted by van der Hulst & Hummel ([@hulst]). 
HI is detected from most of the bridge at a velocity close to the systemic velocity of NGC 2820 and has a mean column density of $4.4 \times 10^{19}$ cm$^{-2}$. No H$\alpha$ emission is associated with the bridge. - We detect radio continuum from all the members of the group. A radio halo is clearly detected around NGC 2820 in the radio continuum, with a $10\%$ peak flux density extent of 4.2 kpc at 330 MHz and a spectral index of $-1.5$. A radio halo is also detected around NGC 2814. The radio continuum at 330 MHz from NGC 2805 is fairly weak, bearing little resemblance to either the HI distribution or the optical emission. The centre of the galaxy is intriguingly faint. - HI is detected from all the galaxies in the group. The heliocentric systemic velocity of NGC 2820 is 1577 kms$^{-1}$ and its rotation velocity is 175 kms$^{-1}$. The linear extent of the HI disk of NGC 2820 is about 48 kpc and its HI mass is $6.6 \times 10^{9}$ M$_\odot$. The HI emission associated with Mrk 108 is clearly detected at 1417 kms$^{-1}$ and it encloses a HI mass of $6.1\times10^7$ M$_\odot$. - We detect various tidal features close to NGC 2814. The radio continuum disk and HI disk of NGC 2814 are tilted with respect to the optical disk. A HI streamer is seen to emerge from the south of NGC 2814 but the two are kinematically distinct. The velocity field of the streamer is similar to that of the parts of NGC 2820 close to it. The streamer has a sky plane extent of 12.6 kpc and encompasses an HI mass of $1.3\times10^8$ M$_\odot$. A tail emerging from the south of NGC 2814 and extending westwards is detected in the radio continuum. The tail has a spectral index of $-1.6$. - We detect HI gas located about 11.5 kpc to the north-east of NGC 2820 whose dynamical mass is $1.4\times10^9$ M$_\odot$ and which might possibly be a tidal dwarf galaxy. However, deep H$\alpha$ observations are required to confirm this. 
The velocity of this gas is similar to the velocity field of the part of NGC 2820 closest to it. - We observe a sharp cutoff in HI on the southern rim of NGC 2820 and a sharp truncation in HI and radio continuum to the north of NGC 2814. We suggest that these features could be a result of ram pressure due to the motion of the galaxies in the IGrM along the solid arrows shown in Fig \[fig10\], since simple estimates of the pressure in different components of the interstellar medium in NGC 2820 suggest that the ram pressure exceeds the pressure in HI by a large factor. However, this needs to be verified. - We report the detection of a gigantic HI loop arising to the north of NGC 2820. The loop is $\sim 17.5$ kpc across and rises up to $\sim 4.9$ kpc. It encompasses an HI mass of $6\times10^8$ M$_\odot$. No radio continuum or H$\alpha$ emission is associated with this loop. We present possible origin scenarios, which include a central starburst, ram pressure stripping and tidal stripping. We do not favour a central starburst, mainly because of the absence of detectable ionized gas in the loop. We tend to favour the ram pressure scenario. Using the upper limit on the X-ray luminosity from Ho 124 (Mulchaey et al. [@mulchaey]), we estimate an upper limit on the electron density of $8.8\times10^{-4}$ cm$^{-3}$. Our calculations using half this electron density show that the ram pressure force of the IGrM is comparable to the gravitational pull of the disk. Hence we suggest that this loop could have been formed by ram pressure stripping if tidal forces had reduced the surface density of HI in NGC 2820. - The group under study exhibits multiple signatures of tidal interaction and possibly ram pressure. Thus, we suggest that the evolution of Ho 124 may be governed by both tidal interaction and ram pressure due to the motion of the galaxies in the IGrM. We thank the staff of the GMRT that made these observations possible. 
GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The Digitized Sky Survey was produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope. The plates were processed into the present compressed digital form with the permission of these institutions. NGK acknowledges a discussion with Amitesh Omar. We thank D. J. Saikia for going through the manuscript and providing useful inputs. We thank the anonymous referee for comments which have helped improve the paper. Ananthakrishnan, S., Kantharia, N. G., Nityananda, R., 2003, BASI, 31, 421 Ananthakrishnan, S., & Rao, A. P. 2002, Multicolour Universe. In International conference on Multi Colour Universe, ed. R. Manchanda & B. Paul, 233 Artamonov, B. P., Bruevich, V. V., Popravko, G. V. 1994, A Rep, 38, 597 Barnes, J. E. 1988, ApJ, 331, 699 Binney, J. & Merrifield, M. 1998, Galactic Astronomy, Princeton University Press Bland, J. & Tully, R. B. 1988, Nature, 334, 43 Boomsma, R., Oosterloo, T. A., Fraternali, F., van der Hulst, J. M., Sancisi, R. 2004, astro-ph/0410055 Bosma, A., Casini, C., Heidmann, J., van der Hulst, J. M., van Woerden, H. 1980, A&A, 89, 345 Braine, J., Davoust, E., Zhu, M., et al. 2003, A&A, 408, L13 Clemens, M. S., Alexander, P., Green, D. A. 2000, MNRAS, 312, 236 Condon, J. J. 1992, ARA&A, 30, 575 Condon, J. J., Helou, G., Sanders, D. B., Soifer, B. T. 1990, ApJS, 73, 359 Condon, J. J., Helou, G., Sanders, D. B., Soifer, B. T. 1993, AJ, 105, 1730 Davis, D. S., Mulchaey, J. S., Henning, P. 
A., 1997, AJ, 114, 613 Fraternali, F., Oosterloo, T. A., Sancisi, R., van Moorsel, G. 2001, ApJ, 562, L47 Gil de Paz, A., Madore, B. F., Pevunova, O., 2003, ApJS, 147, 29 Gunn, J. E., & Gott III, J. R. 1972, ApJ, 176, 1 Heckman, T. M., Armus, L., Miley, G. K., 1990, ApJS, 74, 833 Helou, G., Soifer, B. T., Rowan-Robinson, M. 1985, ApJ, 298, L7 Howard, S., Keel, W. C., Byrd, G., Burkey, J. 1993, ApJ, 417, 502 van der Hulst, J. M., & Hummel, E. 1985, A&A, 150, L7 Hummel, E., & van der Hulst, J. M. 1989, A&AS, 81, 51 Miller, S. T. & Veilleux, S. 2003, ApJS, 148, 383 Mulchaey, J. S., Davis, D. S., Mushotzky, R. F., Burstein, D. 1993, ApJ, 404, L9 Mulchaey, J. S., Davis, D. S., Mushotzky, R. F., Burstein, D. 1996, ApJ, 456, 80 Mulchaey, J. S., Mushotzky, R. F., Burstein, D., Davis, D. S. 1996b, ApJ, 456, L5 Mulchaey, J. S., Davis, D. S., Mushotzky, R. F., Burstein, D. 2003, ApJS, 145, 39 Mulchaey, J. S. 2004, in Clusters of Galaxies: Probes of Cosmological Structure and Galaxy Evolution from the Carnegie Observatories Centennial Symposia, Carnegie Observatories Astrophysics Series, Vol 3, ed. J. S. Mulchaey, A. Dressler, A. Oemler, Cambridge Univ. Press, 354 Osmond, J. P. F., & Ponman, T., 2004, MNRAS, 350, 1511 Sarazin, C. L. 1986, Rev. Mod. Phys., 58, 96 Swarup, G., Ananthakrishnan, S., Kapahi, V., et al. 1991, Current Science, 60, 95 Toomre, A., & Toomre, J. 1972, ApJ, 178, 623 Vollmer, B., Cayatte, V., Balkowski, C. 2001, ApJ, 561, 708 Wang, Q. D. & McCray, R. 1993, ApJ, 409, L37 [^1]: Also Joint Astronomy Programme, Dept. of Physics, Indian Institute of Science, Bangalore - 560012, India
--- author: - | Ömer Can Gürdoğan\ İstanbul Teknik Üniversitesi Fizik Bölümü, 34469, Maslak İstanbul, Turkey [^1] bibliography: - '0303.bib' title: 'Walking solutions in the string background dual to $\mathcal{N}=1$ SQCD-like theories' --- Introduction ============ The AdS/CFT correspondence [@Maldacena:1997re] conjectures an equivalence between string theory backgrounds and gauge field theories. The original duality proposes a correspondence between the $3+1$ dimensional $\mathcal{N}=4$ supersymmetric Yang – Mills (SYM) theory and type IIB string theory on $AdS_{5}\times S^{5}$. The conjecture has been extended by further dualities between different string backgrounds and field theories with properties different from those of $\mathcal{N}=4$ SYM, such as the spacetime dimensionality, a smaller number of supersymmetries, etc. The string background studied in this work is a generalisation of the one proposed in [@Maldacena:2000yy] and utilised in [@Casero:2007jj] for the construction of a string dual to $\mathcal{N}=1$ supersymmetric QCD-like theories. The string duals of minimally supersymmetric QCD-like field theories employ D5-branes with world-volumes extended along the Minkowski directions and wrapped on a compact two-sphere. Matter is added by flavour D5-branes separated from the colour branes along a radial direction [@Casero:2006pt]. The theory then includes matter fields in the fundamental representation, which is more suitable for phenomenological studies. Even though exact solutions of this background are so far unknown in general, solutions for specific values of $N_{f}$ and $N_{c}$, possible asymptotics and numerical solutions are already available, and their properties have been studied to some extent. For some papers where the flavoured MN background has been studied see [@Gaillard:2008wt; @Caceres:2007mu; @Casero:2007pz; @Cotrone:2007qa; @Bertoldi:2007sf]. 
For other papers dealing with flavours added by backreacting flavour branes see [@Benini:2006hh; @Benini:2007gx; @Ramallo:2008ew; @Bigazzi:2008ie; @Arean:2008az; @Burrington:2007qd; @Bigazzi:2008qq; @Apreda:2006bu; @Bigazzi:2005md; @Bigazzi:2008zt; @Bigazzi:2009gu]. One of the known solutions has the property that the coupling constant of its dual field theory has the so-called walking feature [@Nunez:2008wi]. This is a required feature for the coupling between the techniquarks of technicolor models, which are used as a natural model for electroweak symmetry breaking. Similarly to the case of QCD, the coupling constant diverges at low energies. In this work a new numerical solution to the background with a walking gauge coupling is presented. The major difference from the one in [@Nunez:2008wi] is that the coupling constant remains bounded everywhere. This is achieved by adjusting some parameters so that the background functions have a certain asymptotic behaviour. These asymptotics have already been classified in [@Casero:2006pt; @Casero:2007jj; @HoyosBadajoz:2008fw]. The second section is a review of the background and its known solutions. The properties of the known solutions are presented, as well as the classification of the asymptotics. In the third section, the bounded walking solutions are constructed by requiring the solutions to have certain classes of asymptotics. It turns out that there is a constraint on the parameters of the system to produce the desired kind of solutions. In the fourth section, the Wilson loops are calculated for the presented solution. The heavy quark potential exhibits a phase transition. Similar phase transitions of the heavy quark potential have been related to the phase transitions of van der Waals gases in [@Bigazzi:2008gd] and [@Bigazzi:2008qq]. Finally, in the fifth section, which is independent of the bounded walking solutions, the effect of flavours is studied by finding flavoured solutions in a perturbative way. 
Review of the string dual to $\mathcal{N}=1$ SQCD-like theories =============================================================== Description of the background ----------------------------- In this section the background that is proposed to be dual to SQCD-like theories is reviewed very briefly. It is built on the Maldacena – Nuñez background, which is a type IIB supergravity solution introduced and proposed to be dual to an $\mathcal{N}=1$ SYM-like theory in [@Maldacena:2000yy] (see also [@Chamseddine:1997nm], where a 4d solution was found). This theory includes $N_{c}$ colour D5-branes extending along the Minkowski directions $\vec{x}_{1,3}$ and the compact directions ($\theta$, $\varphi$) that provide the colour symmetry of the theory. Then, $N_{f}$ flavour D5-branes smeared along four compact coordinates ($\theta$, $\varphi$, $\tilde{\theta}$, $\tilde{\varphi}$) are added to introduce an open string sector. Their worldvolume extends along the Minkowski directions, the radial coordinate, which will be denoted $\rho$, and a compact direction $\psi$. The open string sector gives rise to fundamental fields in the field theory. This system is governed by the action $S=S_{IIB}+S_{flavour}$, which consists of the type IIB supergravity action and the Dirac – Born – Infeld plus Wess – Zumino (DBI+WZ) action for the flavour branes. They read: $$\begin{split} S_{IIB}=&\frac{1}{2\kappa^{2}_{(10)}}\int d^{10}x\sqrt{-g}\left[R-\frac{1}{2}(\partial_{\mu}\phi)(\partial^{\mu}\phi) -\frac{1}{12}e^{\phi}F^{2}_{(3)}\right]\\ S_{flavour}=&T_{5}\displaystyle\sum^{N_{f}}\left(-\int_{\mathcal{M}_{6}}d^{6}xe^{\frac{\phi}{2}}\sqrt{-\hat{g}_{(6)}}+\int_{\mathcal{M}_{6}}P[C_{6}] \right) \end{split}$$ where $\hat{g}_{(6)}$ is the pullback of the metric and $P[C_{6}]$ is the pullback on the worldvolume of the RR six-form of the background. Having added the flavour branes, their effect on the background cannot be neglected; thus they are said to be backreacting. 
The action of the D5 flavour branes is six dimensional and they live at constant values of the two spheres’ coordinates $(\theta, \varphi)$ and $(\tilde{\theta}, \tilde{\varphi})$. The smearing mentioned in the above paragraph avoids the need for delta functions for these six dimensional objects in the ten dimensional theory; this technique was first used in [@Bigazzi:2005md]. The ansatz for the metric of the string background is a generalisation of the background in [@Maldacena:2000yy]. The metric describes a space of the topology $\mathbb{R}^{1,3}\times\mathbb{R}\times S^{2}\times S^{3}$ and it reads, in the notation of [@HoyosBadajoz:2008fw]: $$\begin{split} ds^{2}=&e^{\frac{\phi}{2}}\left[ dx_{1,3}^{2}+Y(\rho)\left(4 d\rho^{2} +(\omega_{3}+\tilde{\omega}_{3})^{2}\right)+\frac{1}{2}P(\rho)\sinh\left(\tau(\rho)\right)(\omega_{1}\tilde{\omega}_{1}-\omega_{2}\tilde{\omega}_{2})\right.\\ &+\left.\frac{1}{4}\left(P(\rho)\cosh\left(\tau(\rho)\right)+Q(\rho)\right)(\omega_{1}^{2}+\omega_{2}^{2})+\frac{1}{4} \left(P(\rho)\cosh\left(\tau(\rho)\right)-Q(\rho)\right)(\tilde{\omega}_{1}^{2}+\tilde{\omega}_{2}^{2})\right] \end{split}$$ The dilaton $\phi(\rho)$ is a function of the radial coordinate and the RR 3-form is given by $$\begin{split} F_{(3)}=&\frac{N_{c}}{4} \left[ - \left(\tilde{\omega}_{1}+b(\rho)d\theta\right)\wedge\left(\tilde{\omega}_{2}-b(\rho)\sin{(\theta)d\varphi} \right)\wedge\left(\tilde{\omega}_{3} + \cos{(\theta)}d\varphi\right) \right.\\ &\left.+b'(\rho)d\rho\wedge\left(-d\theta\wedge 
\tilde{\omega}_{1}+\sin{(\theta)}d\varphi\wedge\tilde{\omega}_{2}\right)+\left(1-b^{2}(\rho) \right)\sin{(\theta)d\theta}\wedge d\varphi\wedge\tilde{\omega}_{3} \right]\\ &-\frac{N_{f}}{4}\sin{(\theta)}d\theta\wedge d\varphi\wedge\left(d\psi+\cos{(\tilde{\theta})d\tilde{\varphi}}\right) \end{split}$$ where $$\begin{aligned} \begin{split} \tilde{\omega}_{1}&=\cos{(\psi)}d\tilde{\theta}+\sin{(\psi)}\sin{(\tilde{\theta})}d\tilde{\varphi},\\ \tilde{\omega}_{2}&=-\sin{(\psi)}d\tilde{\theta}+\cos{(\psi)}\sin{(\tilde{\theta})}d\tilde{\varphi},\\ \tilde{\omega}_{3}&=d\psi+\cos{(\tilde{\theta})}d\tilde{\varphi}. \end{split}\end{aligned}$$ The background functions $P$, $Q$, $Y$, $\phi$, $a$ and $b$ are determined by a set of equations. These are BPS equations: first order differential equations obtained from the supersymmetry transformations of the fermions, which ensure that the background satisfies the equations of motion of the gravity plus flavour branes action. After making the redefinitions $a=\frac{P\sinh(\tau)}{P\cosh(\tau)-Q}$, $b=\frac{\sigma}{N_{c}}$ and introducing the function $$\omega:=\sigma-\tanh(\tau)\left(Q+\frac{2N_{c}-N_{f}}{2}\right)$$ the BPS equations are written as follows [@HoyosBadajoz:2008fw]: $$\begin{aligned} \label{BPS-P} \begin{split} P'& =8Y-N_{f},\\ \partial_{\rho}\left(\frac{Q}{\cosh(\tau)}\right)&=\frac{(2N_{c}-N_{f})}{\cosh^{2}(\tau)}-\frac{2\omega}{P^{2}}(P^{2}-Q^{2})\tanh(\tau)\\ \partial_{\rho}\left(\frac{\Phi}{\sqrt{P^2-Q^2}}\right)&=2\cosh(\tau)\\ \partial_{\rho}\left(\frac{\Phi}{\sqrt{Y}}\right)&=\frac{16Y}{P^{2}-Q^{2}}\\ \tau'(\rho)+2\sinh(\tau)&=-\frac{2Q\cosh(\tau)}{P^{2}}\omega\\ \omega'&=\frac{2\omega}{P^{2}\cosh(\tau)}\left(P^{2}\sinh^{2}(\tau)+Q\left(Q+\frac{2N_{c}-N_{f}}{2}\right)\right)\\ \end{split}\end{aligned}$$ with an algebraic constraint $\omega=0$ and $\Phi\equiv \left(P^{2}-Q^{2}\right)Y^{\frac{1}{2}}e^{2\phi} $. 
One then calculates $\tau$ and $Q$ using (\[BPS-P\]) as: $$\begin{split} \label{Q} \cosh (\tau)&=\coth (2\rho)\\ Q&=\left(Q_{0}+\frac{2N_{c}-N_{f}}{2}\right)\cosh(\tau)+\frac{2N_{c}-N_{f}}{2}(2\rho\cosh(\tau)-1) \end{split}$$ These equations can be cast into a single differential equation for $P$ that reads [@HoyosBadajoz:2008fw]: $$P''+(P'+N_{f})\left(\frac{P'+Q'+2N_{f}}{P-Q}+\frac{P'-Q'+N_{f}}{P+Q}-4\cosh(\tau)\right)=0. \label{masterP}$$ Once $P$ has been obtained, the other functions can be calculated in terms of it using the equations (\[BPS-P\]). The equation (\[masterP\]) will be referred to as the master equation. Its detailed derivation can be found in [@Nunez:2008wi; @HoyosBadajoz:2008fw]. Type A backgrounds ------------------ The backgrounds described above represent a relatively general case and are called type N backgrounds in the notation of [@Casero:2007jj]. There is a special case in which the backgrounds are called type A. In this case, the fibration of the $S^{2}$ and the $S^{3}$ becomes simpler. On the field theory side, as opposed to type N backgrounds, the gaugino condensate of the field theory vanishes. It is possible to reproduce the type A backgrounds by setting $\sigma=\tau=0$. This unifying approach was presented in [@HoyosBadajoz:2008fw]. 
When dealing with type A backgrounds, it is useful to redefine the background functions: $$\label{PQHGtrans} \begin{split} H=\frac{P+Q}{4}\\ G=\frac{P-Q}{4} \end{split}$$ In this notation the metric and the 3-form of the type A backgrounds are given by: $$\begin{split} ds^{2}=e^{\phi(\rho)}\left[ dx^{2}_{1,3}+4Y(\rho)d\rho^{2}+H(\rho)(d\theta^{2}+\sin^{2}(\theta) d\varphi^{2})+G(\rho)(d\tilde{\theta}^{2}+\sin^{2}(\tilde{\theta})d\tilde{\varphi}^{2})\right.\\ \left.+Y(\rho)(d\psi+\cos(\tilde{\theta})d\tilde{\varphi}+\cos(\theta) d\varphi)^{2}\right] \end{split}$$ $$F_{(3)}=-d\left[ \sigma(\rho)(\omega_{1}\wedge\tilde{\omega}_{1}-\omega_{2}\wedge\tilde{\omega}_{2}) \right]-\left(\frac{N_{f}-N_{c}}{4}\omega_{1}\wedge\omega_{2}+\frac{N_{c}}{4}\tilde{\omega}_{1}\wedge\tilde{\omega}_{2}\right)\wedge(\omega_{3}+\tilde{\omega}_{3})$$ with $\omega_{1}=d\theta$, $\omega_{2}=\sin(\theta)d\varphi$, $\omega_{3}=\cos(\theta)d\varphi$. The master equation in type A backgrounds reduces to $$\label{Heq} H''-\left(\frac{1}{2}\partial_{\rho}H+\frac{1}{4}(N_{f}-N_{c})\right)\left[-2\frac{\partial_{\rho}H+N_{f}-N_{c}}{H}-\frac{N_{f}+2\partial_{\rho}H}{H+\frac{N_{f}-2N_{c}}{2}\rho-C} \right]=0.$$ As before, the other type A background functions can be calculated in terms of $H(\rho)$ and integration constants [@Casero:2007jj]: $$\begin{aligned} \label{BPS-H} \begin{split} G&=H+\frac{N_{f}-2N_{c}}{2}\rho-C\\ Y&=\frac{1}{2}\partial_{\rho}H+\frac{1}{4}(N_{f}-N_{c})\\ \phi&=\phi_{0}+\int d\rho\,(\cdots) \end{split}\end{aligned}$$ Some known solutions -------------------- Although the master equation is a highly non-linear second order differential equation, it permits simplifications when its parameters are set to certain values. For these specific cases there are some known analytic solutions. The ones that are considered in this work are listed below. While only $P(\rho)$ or $H(\rho)$ is given here, it is possible to calculate the remaining functions from the equations and the definitions of the previous section. For type N backgrounds see the equations (\[BPS-P\] - \[Q\]). Equations (\[BPS-H\]) are valid for the type A case. 
It is always possible to translate between the “$P$-$Q$” and “$H$-$G$” notations using equations (\[PQHGtrans\]). ### Unflavoured solutions $N_{f}=0$ {#unflavoured-solutions-n_f0 .unnumbered} - A type N solution from [@Maldacena:2000yy] is: $$P=2N_{c}(\rho-\rho_{0}),\ Q_{0}=-N_{c}-2N_{c}\rho_{0},\ \rho \geq \rho_{0} > -\infty \label{turo}$$ - The type A limit ($\tau, \sigma \rightarrow 0$) of the type N solution in eq. (\[turo\]) above is: $$H=N_{c}(\rho-\rho_{*}),\ \rho_{*}:=-\frac{1}{2N_{c}}\left(Q_{0}+\frac{N_{c}}{2}\right),\ \rho \geq \rho_{*}$$ ### Solutions for $N_{f}=2N_{c}$ {#solutions-for-n_f2n_c .unnumbered} $N_{f}=2N_{c}$ is a special case of the background where the functions have some different properties. In this case, unlike the other cases, $Q(\rho)$ does not diverge but goes to a constant. - A type A solution called the “conformal” solution in [@Casero:2007jj] reads $$H=\frac{N_{c}}{\xi}$$ where $\xi$ is a real number between $0$ and $4$. - A deformation of the above solution $$H=\frac{N_{c}}{16}(9\pm 3)+\frac{c_{+}}{4}e^{\frac{4\rho}{3}}$$ also solves the master equation and it is of type A. Further solutions for this case have been presented in [@Caceres:2007mu]. ### An arbitrarily flavoured type A solution {#an-arbitrarily-flavoured-type-a-solution .unnumbered} The following type A solution is an obvious solution of equation (\[Heq\]). However, it is not physical, since the background function $Y(\rho)$ turns out to be zero (see eq. (\[BPS-H\])). $$\label{halfro} H(\rho)=\frac{N_{f}-N_{c}}{2}(c_{1}-\rho)$$ IR and UV asymptotics and their classification ---------------------------------------------- Besides the exact solutions that are valid for specific cases, there are asymptotic expansions written for the generic cases. There are two kinds of asymptotics for the UV and three for the IR limit. The UV asymptotics are referred to as “Class I and II” while the IR ones are “Type I, II and III”. 
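As a cross-check on the solutions listed above (not part of the original analysis), one can verify numerically that the unflavoured type N solution of eq. (\[turo\]), together with eq. (\[Q\]), satisfies the master equation (\[masterP\]). A minimal sketch using finite differences, with arbitrary illustrative values for $N_{c}$ and $\rho_{0}$:

```python
# Numerical spot-check: the unflavoured (N_f = 0) type N solution
# P = 2 N_c (rho - rho0), with cosh(tau) = coth(2 rho) and Q as in eq. (Q),
# satisfies the master equation. N_c and rho0 are arbitrary test values.
import math

NC, RHO0, NF = 3.0, 0.2, 0.0

def cosh_tau(rho):
    return 1.0 / math.tanh(2.0 * rho)   # coth(2 rho)

def P(rho):
    return 2.0 * NC * (rho - RHO0)

def Q(rho):
    q0 = -NC - 2.0 * NC * RHO0          # Q_0 of eq. (turo)
    return (q0 + (2*NC - NF) / 2) * cosh_tau(rho) \
        + (2*NC - NF) / 2 * (2*rho*cosh_tau(rho) - 1)

def d(f, rho, h=1e-6):                  # central first derivative
    return (f(rho + h) - f(rho - h)) / (2*h)

def d2(f, rho, h=1e-4):                 # central second derivative
    return (f(rho + h) - 2*f(rho) + f(rho - h)) / h**2

def master_residual(rho):
    """Left-hand side of the master equation at a given rho."""
    Pp, Qp = d(P, rho), d(Q, rho)
    return d2(P, rho) + (Pp + NF) * (
        (Pp + Qp + 2*NF) / (P(rho) - Q(rho))
        + (Pp - Qp + NF) / (P(rho) + Q(rho))
        - 4 * cosh_tau(rho))

residuals = [abs(master_residual(r)) for r in (0.8, 1.5, 2.5)]
print(max(residuals))  # tiny, limited only by finite-difference error
```

The same check, run with other values of $N_{c}$ and $\rho_{0}$, can serve as a quick sanity test for any candidate $P(\rho)$ before attempting the numerical integrations discussed below.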
### UV asymptotics {#uv-asymptotics .unnumbered} As discussed in [@HoyosBadajoz:2008fw] (see their equation (3.17)), $P$ must diverge as $\rho \rightarrow \infty$ unless $N_{f}=2N_{c}$. There are two possibilities: $P$ either asymptotes to $\propto \rho$ or to $\propto e^{\frac{4\rho}{3}}$. These are called Class I and II UV behaviours, respectively. - **Class I In cases where $N_{f}\neq 2N_{c}$, the Class I asymptotics are of two kinds depending on whether $N_{f}$ is larger or smaller than $2N_{c}$. For $N_{f}>2N_{c}$, $H(\rho)$ asymptotes to $\frac{1}{4}(N_{f}-N_{c})$, while for the other case it asymptotes to $\frac{1}{2}(2N_{c}-N_{f})\rho$. This class of asymptotics contains terms up to first order in $\rho$. The leading term of $G$ and $Y$ as $\rho \rightarrow \infty$ is a constant, namely $\frac{N_{c}}{4}$. The dilaton behaves as $e^{4\phi}\propto e^{4\rho}/\rho$.** - **Class II In Class II solutions, $H(\rho)$ and $G(\rho)$ asymptote to $c_{+}e^{\frac{4\rho}{3}}$ regardless of the parameters. $G$, $Y$ are also proportional to $e^{\frac{4\rho}{3}}$ and the exponential of the dilaton $e^{4\phi}$ approaches a constant.** Since $\tau$ vanishes at large values of $\rho$, there is no distinction between type N and type A backgrounds in the UV region. ### IR asymptotics {#ir-asymptotics .unnumbered} The three types of IR behaviours are classified by the vanishing of some of the background functions in the IR region. The range of definition of $\rho$, and thus its smallest value, called the IR value, varies from case to case. The asymptotics of the type A backgrounds are reviewed here, since this is the relevant case for this work. Depending on the type, the IR limit is either $\rho=0$ or $\rho=-\infty$. - **Type I: In such backgrounds the IR limit is defined as $\rho \rightarrow -\infty$. 
The series expansion for $H$ has the following form: $$\begin{aligned} H(\rho)=\frac{N_{f}-N_{c}}{2}(c_{1}-\rho)+\displaystyle\sum_{k\geq 1} \mathcal{P}_{k}(\rho)e^{4k\rho}\end{aligned}$$ where $\mathcal{P}_{k}$ are polynomials of order $k+1$ in $\rho$. From this, $Y$ is found to be proportional to $e^{\frac{4\rho}{3}}$. As stated above, the solution given in eq. (\[halfro\]) is a bad solution because $Y(\rho)$ vanishes for all $\rho$. The type I solutions do not suffer from this; however, they approach eq. (\[halfro\]) as $\rho\rightarrow -\infty$.**

- **Type II: In backgrounds with type II IR behaviour, the IR limit is defined as $\rho \rightarrow 0$. The series expansion for the background functions $H$, $G$, $Y$, $\phi$ takes one of the two forms $$\begin{aligned} \begin{split} H&=h_{1}\rho^{\frac{1}{2}}+\left(\frac{h_{1}^{2}}{3C}-\frac{N_{f}}{2}\right)\rho+\dotsb\\ G&=-C+h_{1}\rho^{\frac{1}{2}}+\left(\frac{h_{1}^{2}}{3C}-\frac{N_{f}}{2}\right)\rho+\dotsb\\ Y&=\frac{h_{1}}{4\rho^{\frac{1}{2}}}+\frac{1}{12}\left(\frac{2h_{1}^{2}}{C}+3N_{c}-3N_{f}\right)+\frac{3}{4}h_{1}\frac{72C^{2}+20h_{1}^{2}+3C(10N_{c}-7N_{f})}{72C^{2}}\rho^{\frac{1}{2}}+\dotsb\\ \phi&=\phi_{0}+\frac{N_{f}-N_{c}}{2h_{1}}\rho^{\frac{1}{2}}+\frac{3C(N_{f}-N_{c})^{2}-h_{1}^{2}(2N_{c}+N_{f})}{12Ch_{1}^{2}}\rho+\dotsb\ \ \ \ \ \rm{for}\ C<0\\ \rm{or}\\ H&=C+h_{1}\rho^{\frac{1}{2}}-\left(\frac{h_{1}^{2}}{3C}+\frac{N_{f}}{2}\right)\rho+\dotsb\\ G&=h_{1}\rho^{\frac{1}{2}}-\left(\frac{h_{1}^{2}}{3C}+\frac{N_{f}}{2}\right)\rho+\dotsb\ \\ Y&=\frac{h_{1}}{4\rho^{\frac{1}{2}}}-\frac{1}{2}\left(\frac{h_{1}^{2}}{C}+\frac{N_{c}}{2}\right)+h_{1}\frac{72C^{2}+20h_{1}^{2}+3C(10N_{c}-3N_{f})}{96C^{2}}\rho^{\frac{1}{2}}+\dotsb\\ \phi&=\phi_{0}+\frac{N_{c}}{2h_{1}}\rho^{\frac{1}{2}}+\frac{3C-h_{1}^{2}(3N_{f}-2N_{c})}{12Ch_{1}^{2}}\rho+\dotsb\ \ \ \ \ \rm{for}\ C>0\\ \end{split}\end{aligned}$$ depending on the sign of $C$.
Note that one of the functions vanishes at $\rho=0$ while the other approaches a non-zero constant set by $C$. For $C=0$, both functions vanish, and such backgrounds are called type III. Their leading behaviour is $\rho^{\frac{1}{3}}$, as described below.**

- **Type III: Finally, there is a third kind of IR asymptotics, which reads: $$\begin{aligned} \begin{split} H&=h_{1}\rho^{\frac{1}{3}}+\frac{1}{10}(5N_{c}-7N_{f})\rho+\dotsb\\ G&=h_{1}\rho^{\frac{1}{3}}-\frac{1}{10}(5N_{c}+2N_{f})\rho+\dotsb \end{split}\end{aligned}$$**

Type III asymptotics are valid instead of type II when $C=0$. Both $H$ and $G$ vanish in the IR limit. The above review summarises the known possible asymptotic IR and UV behaviours of the solutions of the type A BPS equations for the background under study. In the following section, a new numerical solution with class II UV and type II IR behaviour will be presented, together with the conditions for its existence and some of its field theory implications, such as the behaviour of the gauge coupling.

Bounded walking solutions of the background
===========================================

Walking is a desired property of the coupling constants of the so-called techniquarks in technicolor models, which provide a natural mechanism for electroweak symmetry breaking. For phenomenological reasons, their running coupling constant must differ from the QCD coupling constant: the techniquarks need to remain strongly coupled up to some higher energy scale. In other words, the coupling constant needs to “walk” at a roughly constant value before it decays to zero in the far UV. This kind of walking has already been observed in the background studied in this paper [@Nunez:2008wi].
For type A backgrounds, the field theory coupling constant can be expressed in terms of the background functions as (see [@Casero:2007jj]): $$g^{2} \propto \frac{1}{H+G}.$$ While the coupling constant in [@Nunez:2008wi] diverges in the IR like the QCD coupling, it can be useful to find backgrounds with bounded field theory coupling constants. In backgrounds with type II IR and class II UV asymptotics, $g^{2}$ is expected to be constant over a certain interval and then drop quickly at a certain point in the UV. That is indeed what happens, at least for some values of the parameters $h_{1}$ and $C$ in the IR expansion. To investigate the parameter constraints required for the desired kind of solutions, a set of sample parameters $$(C,h_{1})\in \{1, 10, 50, 500, 5000\}\times\{0.01, 0.7, 0.78545, 0.78552, 0.78657, 0.85, 10, 15\}$$ has been considered, where $N_{f}$ has been taken to be zero in the first step. Such solutions are obtained for some of the parameter combinations in this set. Figure \[sampleplots\] contains the plots of the background functions and the coupling constant for such a solution. The functions $H(\rho)$ and $G(\rho)$ have a square-root behaviour in the IR, while they diverge exponentially in the UV region. The dilaton has no singularities and asymptotes to a constant in the UV. The coupling constant has the desired shape.
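As a rough illustration of how the IR plateau arises, one can evaluate the truncated type II IR series (the branch in which $H$ approaches the constant $C$ and $G$ vanishes at $\rho=0$) at the sample point $C=5000$, $h_{1}=0.7852$, and inspect $g^{2}\propto 1/(H+G)$. This sketch uses only the leading series terms, so it is indicative near the IR only and is no substitute for numerically integrating the master equation:

```python
import math

C, h1, Nf = 5000.0, 0.7852, 0.0   # sample parameters from the scan above

def H_ir(rho):
    # truncated type II IR series, branch with H(0) = C
    return C + h1*math.sqrt(rho) - (h1**2/(3*C) + Nf/2)*rho

def G_ir(rho):
    # same branch: G vanishes at rho = 0
    return h1*math.sqrt(rho) - (h1**2/(3*C) + Nf/2)*rho

def g2(rho):
    # field theory coupling, up to an arbitrary overall normalisation
    return 1.0/(H_ir(rho) + G_ir(rho))

# near the IR the coupling is almost flat: the plateau of the walking region
vals = [g2(r) for r in (0.01, 0.1, 0.5, 1.0)]
```

For large $C$, $H+G$ is dominated by the constant $C$ near the IR, which is why $g^{2}$ stays nearly flat there before the UV asymptotics take over.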
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![A sample walking solution of the background with type II IR and Class II UV asymptotics. The plots are for $C=5000$, $h_{1}=0.7852$. There is no significant change in the coupling constant until the far UV values of $\rho$.[]{data-label="sampleplots"}](./C5000HIR.eps "fig:"){width="1.60in"} ![A sample walking solution of the background with type II IR and Class II UV asymptotics. The plots are for $C=5000$, $h_{1}=0.7852$. There is no significant change in the coupling constant until the far UV values of $\rho$.[]{data-label="sampleplots"}](./C5000GIR.eps "fig:"){width="1.60in"} ![A sample walking solution of the background with type II IR and Class II UV asymptotics. The plots are for $C=5000$, $h_{1}=0.7852$. There is no significant change in the coupling constant until the far UV values of $\rho$.[]{data-label="sampleplots"}](./C5000dil.eps "fig:"){width="1.60in"} ![A sample walking solution of the background with type II IR and Class II UV asymptotics. The plots are for $C=5000$, $h_{1}=0.7852$. 
There is no significant change in the coupling constant until the far UV values of $\rho$.[]{data-label="sampleplots"}](./C5000HUV.eps "fig:"){width="1.60in"} ![A sample walking solution of the background with type II IR and Class II UV asymptotics. The plots are for $C=5000$, $h_{1}=0.7852$. There is no significant change in the coupling constant until the far UV values of $\rho$.[]{data-label="sampleplots"}](./C5000GUV.eps "fig:"){width="1.60in"} ![A sample walking solution of the background with type II IR and Class II UV asymptotics. The plots are for $C=5000$, $h_{1}=0.7852$. There is no significant change in the coupling constant until the far UV values of $\rho$.[]{data-label="sampleplots"}](./C5000cc.eps "fig:"){width="1.60in"} ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- The table in figure \[cconstants\] contains plots of the coupling constant for different parameters $C$ and $h_{1}$. As it can be seen from that table, the existence of the desired solutions depends on both of the integration constants. Moreover, the bound of one parameter is related to the value of the other. 
Parameter constraints for the desired kind of solutions
-------------------------------------------------------

A more precise list of observations that can be drawn from figure \[cconstants\] is the following:

- For solutions with a common $h_{1}$ whose coupling constants blow up at a certain point in the UV, the divergence turns into a smooth and rapidly decaying behaviour as $C$ increases. At even larger $C$, they turn into Class II solutions ($\propto e^{\frac{4\rho}{3}}$). See the change of the yellow ($h_{1}=0.78552$) solution for different $C$.

- For fixed $C$, a smaller $h_{1}$ is required to obtain a longer plateau. However, there is a minimum for $h_{1}$, which decreases as $C$ increases.

- While for the $h_{1}=0.85$ and $h_{1}=0.78657$ lines $C=5000$ results in a longer plateau than $C=500$, for the $h_{1}=0.78552$ solutions the opposite holds.

![The plots of the field theory coupling constant $g^{2}$ for different values of $C$ and $h_{1}$. Each cell contains plots with a common $C$. The colours represent the value of $h_{1}$. \[cconstants\] ](./C1.eps "fig:"){height="1.5in"} ![The plots of the field theory coupling constant $g^{2}$ for different values of $C$ and $h_{1}$. Each cell contains plots with a common $C$. The colours represent the value of $h_{1}$. \[cconstants\] ](./C10.eps "fig:"){height="1.5in"}

C=1 C=10

![The plots of the field theory coupling constant $g^{2}$ for different values of $C$ and $h_{1}$. Each cell contains plots with a common $C$.
The colours represent the value of $h_{1}$. \[cconstants\] ](./C50.eps "fig:"){height="1.5in"} ![The plots of the field theory coupling constant $g^{2}$ for different values of $C$ and $h_{1}$. Each cell contains plots with a common $C$. The colours represent the value of $h_{1}$. \[cconstants\] ](./C500.eps "fig:"){height="1.5in"}

C=50 C=500

![The plots of the field theory coupling constant $g^{2}$ for different values of $C$ and $h_{1}$. Each cell contains plots with a common $C$. The colours represent the value of $h_{1}$. \[cconstants\] ](./C5000.eps "fig:"){height="1.5in"}

C=5000

### Adding $N_{f}$ {#adding-n_f .unnumbered}

To gain intuition about the effect of flavours, a similar investigation has been carried out for flavoured solutions. Figure \[fig:ccflavour\] contains plots of solutions with two different values of $C$ and constant $h_{1}=0.78659$. Different lines correspond to different $x\equiv\frac{N_{f}}{N_{c}} \in \{ 0,0.5,1\}$.

![On the left-hand side are the plots for $C=500$; the flavoured solutions do not look as desired. The right-hand side plot shows the solutions with $C=5000$; one sees that all of them are good and exactly coincide.[]{data-label="fig:ccflavour"}](./nfC500.eps "fig:"){height="1.8in"}![On the left-hand side are the plots for $C=500$; the flavoured solutions do not look as desired.
The right-hand side plot shows the solutions with $C=5000$; one sees that all of them are good and exactly coincide.[]{data-label="fig:ccflavour"}](./nfC5000.eps "fig:"){height="1.8in"}

Cases with $N_{f} > 0$ yield the desired behaviour only for larger $C$ than the cases with $N_{f}=0$. However, once $C$ is large enough, the value of $x$ has no effect on the solutions.

Wilson loops
============

Wilson loops are gauge invariant operators that allow one to compute the energy of a quark–anti-quark pair. Their relation to the gauge/gravity correspondence was established in [@Maldacena:1998im], and the calculation is reviewed in detail in [@Sonnenschein:1999if]. The Wilson loop operator is constructed over a closed space-time loop $\mathcal{C}$ representing the world-line of a quark–anti-quark pair. The world-line of the pair we consider forms a rectangle, where the pair is created on one side of it and annihilated on the other. When $T$, the time leg of the rectangle, is very large, the energy of the pair as a function of the pair separation can be estimated as follows: $$\label{energy} \langle W(\mathcal{C})\rangle=A(L)e^{-TE(L)}$$ On the other hand, it is proposed in [@Maldacena:1998im] that the expectation value of the Wilson loop operator can be calculated from the Nambu–Goto action for a string stretching along the radial coordinate $\rho$. The endpoints of the string trace the quarks’ world-lines, which live in the UV region (large $\rho$). $$\label{wilsonev} \langle W(\mathcal{C})\rangle\propto e^{-S},$$ $$S=\frac{1}{2\pi \alpha'}\int d\tau d\sigma \sqrt{|det(h_{\alpha\beta})|},$$ where $h_{\alpha \beta}=g_{\mu\nu}\partial_{\alpha}X^{\mu}\partial_{\beta}X^{\nu}$ is the induced metric on the string world-sheet.
After fixing the world-sheet parametrisation to $\tau = t$ and $\sigma = x$, the determinant of the induced metric for the static string stretched along the $\rho$–$x$ plane is: $$\begin{aligned} det(h_{\alpha \beta})& =& g_{tt}g_{xx}+g_{tt}g_{\rho\rho} \left(\partial_{x}\rho\right)^{2}\\ &=&\alpha '^{2} e^{2\phi}\left(-1-4Y\rho'^{2}\right) \nonumber\end{aligned}$$ where $\rho '=\frac{d\rho}{dx}$. Since there is no $t$ dependence in the integrand, the $t$ integration can be performed straight away to obtain: $$\label{action} S=\frac{T}{2\pi}\int dx\, e^{\phi}\sqrt{1+4Y\rho '^{2}}=\frac{T}{2\pi}\int dx\, \mathcal{L}$$ The Lagrangian does not depend explicitly on $x$; therefore the $x$-Hamiltonian $$\mathcal{H}=\frac{\partial \mathcal{L}}{\partial \rho'}\rho'-\mathcal{L}$$ is a constant of motion. In fact, it is equal to $-e^{\phi(\rho_{0})}$, where $\rho_{0}$ is the point at which $\rho'$, and consequently $\frac{\partial \mathcal{L}}{\partial{\rho'}}$, vanishes. Using this it is possible to write $$\frac{d\rho}{dx}=\frac{1}{2\sqrt{Y(\rho)}}\frac{\sqrt{e^{2\phi(\rho)}-e^{2\phi(\rho_{0})}}}{e^{\phi(\rho_{0})}}$$ and to express the quark separation in terms of $\rho_{0}$, the deepest coordinate that the string reaches: $$L=\int dx=2\int_{\rho_{0}}^{\rho_{1}} 2\sqrt{Y(\rho)}\frac{e^{\phi(\rho_{0})}}{\sqrt{e^{2\phi(\rho)}-e^{2\phi(\rho_{0})}}}d\rho.$$ Similarly, writing the energy of the pair from the equations above as a function of $\rho_{0}$, $$V=\int_{\rho_{0}}^{\rho_{1}} \mathcal{L} \frac{dx}{d\rho} d\rho - 2\times\int_{0}^{\rho_{1}}2\sqrt{Y(\rho)}e^{\phi(\rho)}d\rho, \label{VQQ}$$ one obtains a parametric relation between the quark–anti-quark separation and energy. In equation (\[VQQ\]), two quark masses, calculated as the length of a string stretching from the bottom of the space to the quarks, have been subtracted from the energy so as to retain only the potential energy of the pair.
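The parametric pair $(L(\rho_{0}),V(\rho_{0}))$ can be evaluated by straightforward numerical quadrature once $\phi(\rho)$ and $Y(\rho)$ are known. The sketch below uses hypothetical stand-in profiles (a linear dilaton and a constant $Y$) purely to illustrate the procedure; in the actual calculation the profiles come from the numerical background:

```python
import numpy as np

# Toy stand-ins for the numerically obtained background (assumptions for
# illustration only, not the profiles of the solution in the text):
phi = lambda rho: rho                 # linear dilaton
Yfun = lambda rho: 0.25 + 0.0*rho     # constant Y(rho)

def trap(f, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5*(f[1:] + f[:-1])*np.diff(x)))

def separation_and_energy(rho0, rho1=10.0, n=20000):
    """Quark separation L(rho0) and potential V(rho0) for a string dipping
    down to rho0, with the quarks (UV endpoints) at rho1."""
    # substitute rho = rho0 + u^2 to regularise the integrable
    # inverse-square-root singularity at the turning point rho0
    u = np.linspace(1e-8, np.sqrt(rho1 - rho0), n)
    rho = rho0 + u**2
    e0 = np.exp(phi(rho0))
    root = np.sqrt(np.exp(2*phi(rho)) - e0**2 + 1e-300)
    # L = 2 * int 2 sqrt(Y) e^{phi0} / sqrt(e^{2phi} - e^{2phi0}) drho
    L = trap(4*np.sqrt(Yfun(rho))*e0/root * 2*u, u)
    # energy integrand: Lagrangian times dx/drho = 2 sqrt(Y) e^{2phi} / root
    V = trap(2*np.sqrt(Yfun(rho))*np.exp(2*phi(rho))/root * 2*u, u)
    # subtract twice the quark mass (string from the bottom of the space)
    rq = np.linspace(0.0, rho1, n)
    V -= 2*trap(2*np.sqrt(Yfun(rq))*np.exp(phi(rq)), rq)
    return L, V
```

Sweeping $\rho_{0}$ from the IR up to just below $\rho_{1}$ then traces out the $V(L)$ curves shown in the figures below; with these toy profiles a deeper turning point gives a larger separation.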
Here we calculate the Wilson loops on the background with bounded walking coupling constant described previously. We take the loop to lie in the $x$–$t$ plane at a very large radial coordinate $\rho=\rho_{1}$.

![The plot on the left shows the energy–quark separation relation calculated using the Wilson loop for quarks of masses $m_{Q}\approx 500$. The one on the right-hand side is the same for $m_{Q}\approx 3700$.[]{data-label="fig:VL"}](./wilsonUV5.eps "fig:"){width="2.3in"} ![The plot on the left shows the energy–quark separation relation calculated using the Wilson loop for quarks of masses $m_{Q}\approx 500$. The one on the right-hand side is the same for $m_{Q}\approx 3700$.[]{data-label="fig:VL"}](./wilsonUV7.eps "fig:"){width="2.3in"}

Figure \[fig:VL\] shows the plots of the energy–separation relation for configurations with quark masses $m_{Q}\approx500$ and $m_{Q}\approx3700$. One sees two branches, and the lower one exhibits linear confinement. The upper, non-linear branch is less stable due to its larger energy. The branches coincide at a certain point at a very high $\rho_{0}\lesssim \rho_{1}$, after which the string cannot stretch any further. This is interpreted as string breaking. Even though there are no flavour branes in this background to allow the formation of dynamical quarks where the string breaks, the derivative of the background function $H(\rho)$ is singular at the origin. This affects the energy and separation integrals, and is thus said to allow phenomena as if flavour branes were present. Such behaviours have been observed and discussed in similar solutions of this background; see for example [@Casero:2007jj; @Bigazzi:2008gd; @Bigazzi:2008cc]. In [@Bigazzi:2008gd], [@Bigazzi:2008cc] and [@Bigazzi:2008qq], flavoured systems with massive dynamical quarks are found to have an energy–separation relation that qualitatively resembles the phase transition in the Gibbs free energy–pressure relation of van der Waals gases, involving “quasi-static phases”.
Therefore, it has been argued [@Bigazzi:2008qq] that it can be interesting to study quark potentials in gauge theories with string duals by relating them to the phase transitions of van der Waals gases. In the present background, another interesting phase transition occurs if one places the brane on which the heavy quarks live at smaller values of $\rho$; in other words, if one reduces the mass of the quarks. Then a phase transition occurs for configurations with $\rho \approx 0.1\rho_{1}$. Figure \[wilsonsacma\] contains the plot of the entire picture and a zoom of the transition region.

\[wilsonsacma\] ![Decreasing the heavy quark masses, the phase transition at $\rho \lesssim \rho_{1}$ is replaced by another transition at a lower value of $\rho$. The plot to the left shows the general picture, while a zoom of the transition region is presented on the right-hand side.](./wilsonsacmamacro.eps "fig:"){height="1.5in"} ![Decreasing the heavy quark masses, the phase transition at $\rho \lesssim \rho_{1}$ is replaced by another transition at a lower value of $\rho$. The plot to the left shows the general picture, while a zoom of the transition region is presented on the right-hand side.](./wilsonsacma.eps "fig:"){height="1.5in"}

Figure \[wilsonVrho0\] shows the relation between the quark potential and the deepest coordinate that the string reaches for the three cases mentioned previously in this chapter. The phase transitions occur at the points where the potential reaches a maximum or a minimum.

\[wilsonVrho0\] ![ The relation between the quark potential energy $V$ and the deepest coordinate that the string reaches $\rho_{0}$.
The plot to the left belongs to the case with $M_{Q}\approx 3700$, the plot in the centre belongs to the case with $M_{Q}\approx 500$, while the one to the right belongs to the case with $M_{Q}\approx 225$.](./wilsonV4.2.eps "fig:"){height="1in"} ![ The relation between the quark potential energy $V$ and the deepest coordinate that the string reaches $\rho_{0}$. The plot to the left belongs to the case with $M_{Q}\approx 3700$, the plot in the centre belongs to the case with $M_{Q}\approx 500$, while the one to the right belongs to the case with $M_{Q}\approx 225$.](./wilsonV5.eps "fig:"){height="1in"} ![ The relation between the quark potential energy $V$ and the deepest coordinate that the string reaches $\rho_{0}$. The plot to the left belongs to the case with $M_{Q}\approx 3700$, the plot in the centre belongs to the case with $M_{Q}\approx 500$, while the one to the right belongs to the case with $M_{Q}\approx 225$.](./wilsonV7.eps "fig:"){height="1in"}

Concluding this chapter, it is possible to say that one observes phase transitions of the heavy quark potential energy also in the bounded walking solutions of the background presented in this work. Where the transitions occur and what their qualitative properties are depends on where one places the heavy quarks. None of the phase transitions here is exactly of the same type as that of the van der Waals gases. However, a relation is still possible, since the pattern partially matches.

Perturbative addition of flavours
=================================

For the non-flavoured cases, exact solutions for the background functions are known. The idea of this section is to look for small-$x=\frac{N_{f}}{N_{c}}$ expansions of some flavoured solutions, whose $x \rightarrow 0$ limits are the known non-flavoured solutions.
In other words, we perturb the non-flavoured solutions around $x=0$, plug the expression $$H(\rho)=H_{0}(\rho)+h_{1}(\rho)x+h_{2}(\rho)x^{2}+\dotsb$$ into the $H$-equation, and solve the resulting differential equations for the functions $h_{n}(\rho)$. The observation is that the properties of the solutions change with the addition of flavours.

Backgrounds with $H(\rho)=\frac{\rho}{2}$ {#backgrounds-with-hrhofracrho2 .unnumbered}
-----------------------------------------

As mentioned around equation (\[halfro\]), these solutions are known to be unphysical [@Casero:2007jj], since the background function $Y$ vanishes for all $\rho$, and therefore they cannot be considered as 10-dimensional backgrounds. However, as soon as one adds flavour corrections up to the $k$th order in $x$, one obtains correction terms of the form $$\mathcal{P}_{k}(\rho)\times e^{4k\rho},$$ where $\mathcal{P}_{k}$ are polynomials of order $k+1$. These solutions, as reviewed before, are known as type I, and they are well behaved. Thus a solution that is not even a background turns into a physical one after the addition of flavours. The fact that the solution $H=\rho/2$ is the $\rho \rightarrow -\infty$ limit of type I solutions has already been discussed in [@Casero:2007jj].

Backgrounds with $H(\rho)=\rho$ {#backgrounds-with-hrhorho .unnumbered}
-------------------------------

Another solution to the $H$-equation is $H(\rho)=\rho$. From this solution one calculates a dilaton that diverges at $\rho=0$. Diverging dilatons in the IR are considered bad singularities causing unphysical effects (see section 5 of [@Maldacena:2000mw]); this is because $e^{\phi}$ has to remain finite in the IR limit. Following the perturbative flavouring procedure described above, one considers correction terms that come in powers of $x$ and solves the differential equations for the coefficients, which are functions of $\rho$. One then obtains solutions of the form known as type II.
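The order-by-order mechanics of this procedure can be sketched with a computer algebra system. The equation below is a hypothetical linear toy equation, chosen only to demonstrate the steps (substitute the expansion, collect powers of $x$, solve each order); it is not the actual $H$-equation of the background, which is nonlinear:

```python
import sympy as sp

rho, x = sp.symbols("rho x")
H0, h1 = sp.Function("H0"), sp.Function("h1")

# Hypothetical toy stand-in for the H-equation:
# unflavoured operator plus an O(x) flavour source.
def toy_eq(H):
    return sp.diff(H, rho, 2) - 4*H + x*sp.exp(4*rho)

# Plug in the expansion H = H0 + x*h1 + O(x^2) and collect powers of x.
expanded = sp.expand(toy_eq(H0(rho) + x*h1(rho)))

# Order x^0 reproduces the unflavoured equation; H0 = exp(2*rho) solves it.
order0 = expanded.coeff(x, 0).subs(H0(rho), sp.exp(2*rho)).doit()
assert sp.simplify(order0) == 0

# Order x^1 is a linear ODE for the first flavour correction h1(rho).
order1 = expanded.coeff(x, 1)
sol = sp.dsolve(sp.Eq(order1, 0), h1(rho))
```

The same pattern repeats at each order: the $x^{k}$ coefficient yields a linear equation for $h_{k}(\rho)$ sourced by the lower-order solutions.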
In type II solutions the dilaton does not have a bad singularity [@HoyosBadajoz:2008fw]. An example of a type II dilaton is depicted in figure \[sampleplots\].

Backgrounds with $P=2\rho$ {#backgrounds-with-p2rho .unnumbered}
--------------------------

This unflavoured type N solution is known to be a smooth solution with no bad singularities. Perturbing this solution with flavours brings it to the following form (up to first order in $x$ and second order in $\rho$): $$P(\rho)=2\rho+x\left(\frac{c_{2}}{\rho^{2}}+c_{1}\rho-\frac{1}{2} +\mathcal{O}(\rho^{2})\right)+\mathcal{O}(x^{2})$$ As usual, the dilaton is calculated from the equations (\[BPS-P\]) as: $$e^{4\phi}\propto\frac{1}{(P^{2}-Q^{2})Y\sinh^{2}(\tau)}$$ One sees that the dilaton diverges to $-\infty$ in the IR after adding the flavours. This divergence is not considered bad, since $e^{\phi}$ remains bounded. Figure \[hpq\] contains the plots of the background function $P$ and the dilaton $\phi$ before and after the perturbative addition of flavours.

![The change of the background $P(\rho)=2\rho$ with flavours: The plot to the left shows $P$ and the one to the right shows $\phi$. The solid line is the unflavoured solution, while the dashed line is the solution perturbed with flavours.[]{data-label="hpq"}](./IIIP.eps "fig:"){height="1.5in"} ![The change of the background $P(\rho)=2\rho$ with flavours: The plot to the left shows $P$ and the one to the right shows $\phi$. The solid line is the unflavoured solution, while the dashed line is the solution perturbed with flavours.[]{data-label="hpq"}](./IIIdilatonoriginal.eps "fig:"){height="1.5in"}

Conclusions
===========

In this work a solution to the string background has been presented which is dual to an SQCD-like field theory with a gauge coupling that exhibits walking.
It has been observed that the integration constants coming from the BPS equations must be constrained by a relation between each other in order to obtain such backgrounds with walking gauge couplings. After obtaining the exact numerical solutions, it is possible to calculate field theory quantities such as the quark–anti-quark potential using Wilson loops. The Wilson loop calculations showed that the heavy quark potential of the field theory experiences phase transitions. The qualitative properties of those phase transitions depend on the quark masses, which in turn depend on the radial coordinate of the D-brane on which the quarks live. Possibly these phase transitions can be linked to the phase transitions of van der Waals gases by analogy. Finally, in addition to the calculations related to the presented solution, the effect of the presence of flavours has been investigated. Some already known unflavoured exact solutions have been flavoured by small $x=\frac{N_{f}}{N_{c}}$ perturbations. This procedure has cured certain problems of the unflavoured backgrounds and generated previously known well-behaved solutions.

Acknowledgements {#acknowledgements .unnumbered}
================

It is a pleasure to thank the physics department of the University of Wales, Swansea, where I was kindly hosted for the realisation of the present work. I am grateful to Dr. Carlos Núñez for his invaluable guidance during this work. I would also like to thank Dr. Ioannis Papadimitriou and Dr. Maurizio Piai for related discussions.

[^1]: Visiting Department of Physics, Swansea University, Swansea, SA2 8PP, UK
--- abstract: 'We consider a half-duplex wireless relay network with hybrid-automatic retransmission request (HARQ) and Rayleigh fading channels. In this paper, we analyze the outage probability of the multi-relay delay-limited HARQ system with an opportunistic relaying scheme in decode-and-forward mode, in which the *best* relay is selected to transmit the source’s regenerated signal. A simple and distributed relay selection strategy is proposed for multi-relay HARQ channels. Then, we utilize non-orthogonal cooperative transmission between the source and the selected relay to retransmit the source data toward the destination if needed, using space-time codes or beamforming techniques. To analyze the performance of the system, we first derive the cumulative distribution function (CDF) and probability density function (PDF) of the selected relay HARQ channels. Then, the CDF and PDF are used to determine the outage probability in the $l$-th round of HARQ. The outage probability is required to compute the throughput-delay performance of this half-duplex opportunistic relaying protocol. The packet delay constraint is represented by $L$, the maximum number of HARQ rounds. An outage is declared if the packet is unsuccessful after $L$ HARQ rounds. Furthermore, closed-form upper bounds on the outage probability are derived and subsequently used to investigate the diversity order of the system. Based on the derived upper-bound expressions, it is shown that the proposed schemes achieve the full spatial diversity order of $N+1$, where $N$ is the number of potential relays. Our analytical results are confirmed by simulations.' author: - | \ [^1] bibliography: - 'references.bib' title: 'Efficient Relay Selection Scheme for Delay-Limited Non-Orthogonal Hybrid-ARQ Relay Channels' ---

Introduction
============

Cooperation among devices has been considered to provide diversity in wireless networks where fading may significantly affect single links [@nos04].
Initial works emphasized relaying, where a cooperator node amplifies (or decodes) and forwards, possibly in a quantized fashion [@cov79], the information from the source node in order to help decoding at the destination node [@lan04; @mah09twc; @mah08nov]. The achieved throughput can be increased with the integration of cooperation and coding, i.e., by letting the cooperator send incremental redundancy to the destination [@hun06]. In particular, it has been shown in [@hun06] that coded cooperation achieves a diversity order of two, while decode-and-forward reaches only a diversity order of one, when the transmissions of source and cooperator are orthogonal. The capacity of cooperative networks using both decode-and-forward and coded cooperation has been extensively studied [@hun06; @kra05] for simple networks with simple medium access control (MAC) protocols. In [@lin06], a system with two transmission phases that makes use of convolutional codes is analyzed and characterized by means of partner choice and performance regions. Resource allocation for space-time coded cooperative networks has been studied in [@mah09eur; @mah11tc], where analyses of the bit error rate and outage probability are also derived. Unfortunately, in cooperative relaying the diversity gain is increased at the expense of throughput loss due to the half-duplex constraint at relay nodes. Different methods have been proposed to recover this loss. In [@fan07], successive relaying using repetition coding has been introduced for a two-relay wireless network with flat fading. In [@tan08], relay selection methods have been proposed for cooperative communication with decode-and-forward (DF) relaying. A prominent alternative for reducing the throughput loss in relay-aided transmission mechanisms is the combination of both ARQ and relaying. This approach would significantly reduce the half-duplex multiplexing loss by activating ARQ only for the rare erroneously decoded data packets, when they occur.
Approaches targeting the joint design of ARQ and relaying in one common protocol have recently received more interest (see for instance [@nar08; @qi09]). Motivated by the above, we investigate and analyze throughput-efficient cooperative transmission techniques where both ARQ and relaying are jointly designed. To this end, a diversity effect can be introduced to a relay network by simply allowing the nodes to maintain previously received information concerning each active message. Each time a message is retransmitted, either from a new node or from the same node, every node in the relay network will increase the amount of resolution information it has about the message. Once a node has accumulated sufficient information it will be able to decode the message and can act as a relay and forward the message (as in decode-and-forward [@lan04; @mah09iet]). This diversity effect can be viewed as a space-time generalization of the time-diversity effect of hybrid-automatic repeat request (HARQ) as described in [@cai01]. Thus, the HARQ scheme used in this paper is a practical approach to designing wireless ad hoc networks that exploit the spatial diversity achievable with relaying. The retransmitted packets could originate from any node that has overheard and successfully decoded the message. Current and future wireless networks based on packet switching use HARQ protocols at the link layer. Hence, the performance of HARQ protocols in relay channels has attracted recent research interest [@tab05; @zha05jsac; @nar08]. In this paper, we propose an efficient HARQ multi-relay protocol which leads to full spatial diversity. We assume a delay-limited network with a maximum number of HARQ rounds $L$, which represents the delay constraint.
The protocol uses a form of incremental-redundancy HARQ transmission with assistance from the selected relay via non-orthogonal transmission in the second transmission phase, if the relay decodes the message before the destination. Note that by non-orthogonal transmission we mean that the source node and the selected relay simultaneously retransmit the source data using space-time codes or beamforming techniques. We introduce a distributed relay selection scheme for HARQ multi-relay networks based on the acknowledgment (ACK) or non-acknowledgment (NACK) signals transmitted by the destination. Closed-form expressions are derived for the outage probability, defined as the probability of packet failure after $L$ HARQ rounds, under the half-duplex constraint. For sufficiently high SNR, we derive a simple closed-form average outage probability expression for a HARQ system with multiple cooperating branches, and it is shown that full diversity is achievable in the proposed HARQ relay networks. Simulations show that the throughput of the relay channel is significantly larger than that of direct transmission for a wide range of signal-to-noise ratios (SNRs), target outage probabilities, delay constraints, and numbers of relays. The remainder of this paper is organized as follows: In Section II, the system model and protocol description are given. The performance analysis, including closed-form expressions for the outage probability and an asymptotic analysis of the system, is presented in Section III; these results are then utilized for optimizing the system. In Section IV, the overall system performance is presented for different numbers of relays and channel conditions, and the correctness of the analytical formulas is confirmed by simulation results. Conclusions are presented in Section V. *Notations*: The superscripts $(\cdot)^t$, $(\cdot)^H$, and $(\cdot)^*$ stand for transposition, conjugate transposition, and element-wise conjugation, respectively. 
The expectation operation is denoted by $\mathbb{E}\{\cdot\}$. The union and intersection of a collection of sets are denoted by $\bigcup$ and $\bigcap$, respectively. The symbol $|x|$ is the absolute value of the scalar $x$, while $[x]^+$ denotes $\max\{x,0\}$. The logarithms $\log_2$ and $\log$ denote the base-two logarithm and the natural logarithm, respectively. System Model and Protocol Description ===================================== ![Wireless relay network consisting of a source, a destination, and *N* relays.[]{data-label="fa"}](relayHARQ.eps "fig:"){width="\columnwidth"}\ ![Example of the HARQ protocol for the relay selection system. The selected relay decodes the message after HARQ round $k$. The source and selected relay simultaneously transmit $s_l$ and $\hat{s}_l$, respectively, for all HARQ rounds $l > k$. In this figure, the destination decodes the message after HARQ round $M$, where $k<M\leq L$.[]{data-label="f0"}](HARQ2.eps "fig:"){width="\columnwidth"}\ Consider a network consisting of a source, one or more relays denoted $i=1, 2, \ldots, N$, and one destination. The wireless relay network model is illustrated in Fig. \[fa\]. It is assumed that each node is equipped with a single antenna. We consider symmetric channels and denote the source-to-destination, source-to-*i*th relay, and *i*th relay-to-destination links by $f_0$, $f_i$, and $g_i$, respectively. Each link experiences Rayleigh fading, independent of the others. Therefore, $f_0$, $f_i$, and $g_i$ are independent complex Gaussian random variables with zero mean and variances $\sigma_0^2$, $\sigma_{f_i}^2$, and $\sigma_{g_i}^2$, respectively. As in [@nar08], all links are assumed to be long-term quasi-static, wherein all HARQ rounds of a single packet experience a single channel realization. Subsequent packets experience independent channel realizations. 
Note that such an assumption, applicable in low-mobility environments such as indoor wireless local area networks (WLANs), clearly reveals the gains due to HARQ, since temporal diversity is not present. Relay Selection Strategy ------------------------ In this paper, we use selection relaying, a.k.a. opportunistic relaying [@ble06b], which selects the best relay among the $N$ available relays. Inspired by the distributed algorithm proposed in [@ble06b], which uses request-to-send (RTS) and clear-to-send (CTS) signals to select the best relay, we propose the following selection procedure for HARQ systems: - In the first step, the source node broadcasts its packet to the relays and the destination. Thus, the relays can estimate their source-to-relay channels. - If the destination decodes the packet correctly, the relays do not cooperate. Otherwise, the relays exploit the NACK signal transmitted by the destination to estimate their corresponding relay-to-destination channels. - The $i$th relay, $i=1,\ldots,N$, has a timer $T_i$ whose value is proportional to the inverse of $\min\left\{|f_{i}|^2,|g_{i}|^2\right\}$. - The relay with the maximum value of $\min\left\{|f_{i}|^2,|g_{i}|^2\right\}$ thus has the smallest $T_i$. When the first timer expires, the corresponding relay broadcasts a flag packet to the other relays to silence them and to announce itself as the selected relay. Note that the process of selecting the best relay could also be carried out in a centralized manner by the destination. This is feasible since the destination node must be aware of both the backward and forward channels for coherent decoding. Thus, the same channel information can be exploited for the purpose of relay selection. After selecting the best relay, a feedback packet containing the index of the best relay is sent from the destination to the source and relay nodes. 
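The timer-based selection rule above can be sketched in a few lines of code; the snippet below is an illustrative model only (the timer constant `C` and the channel draws are hypothetical, not part of the protocol specification):

```python
# Sketch of the distributed timer-based relay selection described above:
# each relay starts a timer inversely proportional to min(|f_i|^2, |g_i|^2),
# so the relay with the best bottleneck link expires (and wins) first.
import random

def select_relay(f_gains, g_gains, C=1.0):
    """Return the index of the relay whose timer T_i = C / min(f_i, g_i)
    expires first, i.e., the relay with the largest bottleneck gain."""
    timers = [C / min(f, g) for f, g in zip(f_gains, g_gains)]
    return min(range(len(timers)), key=timers.__getitem__)

random.seed(1)
N = 4
f = [random.expovariate(1.0) for _ in range(N)]   # |f_i|^2, unit mean
g = [random.expovariate(1.0) for _ in range(N)]   # |g_i|^2, unit mean
r = select_relay(f, g)
# The timer rule is equivalent to r = argmax_i min(|f_i|^2, |g_i|^2):
assert r == max(range(N), key=lambda i: min(f[i], g[i]))
```

The equivalence asserted at the end is exactly why the distributed contention implements the max-min selection rule of the following sections.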
Transmission Strategy --------------------- Let $s$ and $\hat{s}$ denote the transmitted signals from the source and the selected relay, respectively. As shown in Fig. \[f0\], during the first HARQ round, the relays and the destination listen to the source transmitting block $s$. At the end of the transmission, the destination sends to both the source and the relays a one-bit ACK or NACK indicating, respectively, the success or failure of the transmission. The NACK/ACK is assumed to be received error-free and with negligible delay. Then, with the procedure given above, the best relay is selected. As long as a NACK is received after each HARQ round and the maximum number of HARQ rounds is not reached, the source successively transmits subsequent HARQ blocks of the same packet. As illustrated in Fig. \[f0\], suppose the selected relay decodes the message after HARQ round $k$, while the destination has not yet decoded the message correctly. For all HARQ rounds $l > k$, the source and the selected relay simultaneously transmit $s$ and $\hat{s}$, respectively. For this non-orthogonal transmission, the destination can benefit from spatial diversity using the following methods: ### Space-Time Code Transmission The Alamouti code can be used to transmit the coded packets; hence, no interference occurs due to the simultaneous transmissions of the source and relay. The effective coding rate after $l$ HARQ rounds is $R/l$ bps/Hz, where $R$ is the spectral efficiency (in bps/Hz) of the first HARQ round. Let $x$ and $\hat{x}$ denote the Alamouti-coded transmitted signals from the source and the selected relay, respectively. The received signal $y$ at the destination can be written as follows: $$\label{1} y=\left\{\begin{array}{cc} f_0 x + g_r \hat{x}+n, & \text{if } l > k, \\ f_0 x +n, & \text{if } l \leq k, \end{array}\right.$$ where the index $r$ refers to the index of the selected relay and $n$ is a complex white Gaussian noise sample with variance $N_0$. 
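As a minimal sketch, the piecewise received-signal model above can be written out directly; the scalar symbols and channel coefficients below are hypothetical toy values, not drawn from the analysis:

```python
# Toy model of the received signal: before the relay decodes (l <= k)
# only the source transmits; afterwards source and relay transmit together.
def received(f0, g_r, x, x_hat, n, l, k):
    if l > k:                       # relay has decoded: joint transmission
        return f0 * x + g_r * x_hat + n
    return f0 * x + n               # source-only rounds

f0, g_r = 0.8 - 0.2j, 0.5 + 0.4j    # hypothetical channel coefficients
x, x_hat, n = 1 + 0j, 1 + 0j, 0j    # noiseless toy symbols
assert received(f0, g_r, x, x_hat, n, l=3, k=1) == f0 + g_r
assert received(f0, g_r, x, x_hat, n, l=1, k=1) == f0
```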
### Beamforming Transmission An alternative way of simultaneously transmitting the coded packets is beamforming. Assuming knowledge of the channel phases of $f_0$ and $g_r$ at the source and the selected relay $r$, respectively, the transmitted message can be recovered at the destination. Moreover, using beamforming, we can achieve an array gain of two compared with space-time coding, at the expense of a higher data exchange overhead for phase estimation at the transmitters. However, the ACK/NACK transmissions from the destination can be exploited for the phase estimation of the channels. Thus, no signaling overhead is added when the beamforming technique is used. In this case, the received signal at the destination is given by $$\label{1e} y=\left\{\begin{array}{cc} |f_0| s + |g_r| \hat{s}+n, & \text{if } l > k, \\ |f_0| s +n, & \text{if } l \leq k. \end{array}\right.$$ Average Throughput ------------------ Two definitions of throughput are considered. A frequently used metric for throughput analysis is the long-term (LT) average throughput, given by [@elg06] $$\label{b3} \bar{G}_{LT}=\frac{R}{\mathbb{E}\{l\}}=\frac{R}{\sum_{l=0}^{L-1}P_{\text{out}}(l)},$$ where $\mathbb{E}\{l\}$ is the average number of HARQ rounds spent transmitting an arbitrary message and $P_{\text{out}}(l)$ denotes the probability that the packet is incorrectly decoded at the destination after $l$ HARQ rounds, with $P_{\text{out}}(0)=1$. In the next section, we calculate closed-form solutions for the outage probability terms $P_{\text{out}}(l)$ used in . The definition in relies on the steady-state behavior of several message transmissions. During this time, the probabilities $P_{\text{out}}(l)$ are assumed to be constant. 
This assumption is removed by considering the delay-limited (DL) throughput, which is the throughput of a single packet, defined by [@nar08] $$\label{b2} \bar{G}_{DL}=\sum_{l=1}^{L}\frac{R}{l}\left[P_{\text{out}}(l-1)-P_{\text{out}}(l)\right].$$ An advantage of definition , which does not resort to long-term behavior, is the ability to track slow time variations in the channels. In [@zor03a], an automatic repeat request (ARQ) is served by the relay closest to the destination, among those that have decoded the message. However, distance-dependent relay selection does not account for the fading effects of wireless networks and leads to a maximum diversity of two. Therefore, in this work, an ARQ is served by the relay with the best instantaneous channel conditions. Similar to [@ble06b], we choose the relay with the maximum of $\min\left\{\gamma_{f_i},\gamma_{g_i}\right\}$, $i=1,\ldots,N$, as the best relay, where $\gamma_{f_i}=|f_{i}|^2$ and $\gamma_{g_i}=|g_{i}|^2$. We define $$\begin{aligned} \label{2} \gamma_{\max}&\triangleq\min\left\{\gamma_{f_r},\gamma_{g_r}\right\} \nonumber\\ &=\max\left\{\min\left\{\gamma_{f_1},\gamma_{g_1}\right\},\ldots,\min\left\{\gamma_{f_N},\gamma_{g_N}\right\}\right\}\end{aligned}$$ where $$\label{3} r=\text{arg}\max_{i=1,\ldots,N}\left\{\min\left\{\gamma_{f_i},\gamma_{g_i}\right\}\right\}.$$ Performance Analysis ==================== In this section, we calculate the outage probability of the HARQ relay selection system proposed in the previous section. Besides being a performance metric in its own right, the outage probability expression is needed in both throughput definitions and . Let $\chi$ denote the earliest HARQ round after which the relay stops listening to the current message. 
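The two throughput definitions above can be illustrated with a short numeric sketch; the outage profile used below is a made-up example (with $P_{\text{out}}(0)=1$ by definition), not a result of this paper:

```python
# Numeric sketch of the LT and DL throughput definitions.

def lt_throughput(R, p_out):
    """G_LT = R / sum_{l=0}^{L-1} P_out(l), with p_out[l] = P_out(l)."""
    L = len(p_out) - 1                 # p_out holds entries for l = 0..L
    return R / sum(p_out[l] for l in range(L))

def dl_throughput(R, p_out):
    """G_DL = sum_{l=1}^{L} (R/l) * [P_out(l-1) - P_out(l)]."""
    L = len(p_out) - 1
    return sum(R / l * (p_out[l - 1] - p_out[l]) for l in range(1, L + 1))

R = 1.0                                # spectral efficiency, bps/Hz
p_out = [1.0, 0.3, 0.05, 0.01]         # hypothetical P_out(0..3)
g_lt = lt_throughput(R, p_out)         # = 1 / (1 + 0.3 + 0.05)
g_dl = dl_throughput(R, p_out)         # = 0.7 + 0.125 + 0.04/3
```

Consistent with the discussion in Section IV, the DL value exceeds the LT value here because the residual outage probabilities are small.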
The outage probability for the relay channel after $l$ HARQ rounds is given by [@tab05] $$\begin{aligned} \label{4} P_{\text{out}}(l)=&\sum_{k=1}^{l-1}P_{\text{out}}(l\,|l>k)\,\text{Pr}[\chi=k] \nonumber\\ &+\sum_{k=l}^{L}P_{\text{out}}(l\,|l\leq k)\,\text{Pr}[\chi=k].\end{aligned}$$ To compute $\text{Pr}[\chi=k]$, note that the mutual information between the source and the relay in each HARQ round is given by $$\label{5} I_{f_r}=\log_2\left(1+\frac{P}{N_0}\gamma_{f_r}\right),$$ where $P$ is the average transmit power of the source and $\gamma_{f_r}$ is an exponentially distributed random variable with mean $\sigma_{f_r}^2$. For $k = 1, \ldots , l - 1$, $\chi=k$ if the message is successfully decoded by the relay at the $k$th HARQ round, and we have $$\begin{aligned} \label{6} \text{Pr}[\chi=k]&=\text{Pr}[(k-1)I_{f_r}<R,\,k \, I_{f_r}>R] \nonumber\\ &=\text{Pr}[(k-1)I_{f_r}<R]-\text{Pr}[k \,I_{f_r}<R] \nonumber\\ &=\text{Pr}[\gamma_{f_r}<\mu_{k-1}]-\text{Pr}[\gamma_{f_r}<\mu_{k}],\end{aligned}$$ where $$\label{7} \mu_{k}=\frac{N_0}{P}\left(2^{R/k}-1\right).$$ For $k = l, \ldots , L$, $\chi=k$ if the relay has not decoded the message successfully after $(l-1)$ HARQ rounds, and thus we have $$\begin{aligned} \label{8} \text{Pr}[\chi=k]&=\text{Pr}[(l-1)I_{f_r}<R] =\text{Pr}[\gamma_{f_r}<\mu_{l-1}].\end{aligned}$$ From and , $\text{Pr}[\chi=k]$ can be calculated as $$\begin{aligned} \label{9} \text{Pr}[\chi=k]&=\left\{\begin{array}{cc} \text{Pr}[\gamma_{f_r}<\mu_{k-1}]-\text{Pr}[\gamma_{f_r}<\mu_{k}], & \text{if } k<l, \\ \text{Pr}[\gamma_{f_r}<\mu_{l-1}], & \text{if } k\geq l. \end{array}\right.\end{aligned}$$ Exact Outage Probability ------------------------ Since the index $r$ given in depends on the channels, $\gamma_{f_r}$ and $\gamma_{g_r}$ are *not independent* for $N>1$. Thus, obtaining a closed form for the PDF is not straightforward. As seen from , computing $\text{Pr}[\chi=k]$ requires the CDF of the random variable $\gamma_{f_r}$. 
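For the single-relay case ($N=1$), $\gamma_{f_r}$ is simply exponential, so the quantities above can be evaluated directly; the following sketch (unit channel variance, arbitrary rate and SNR) also checks the telescoping structure of the $k<l$ terms, with $\mu_0=\infty$ since zero rounds convey zero information:

```python
import math

def mu(k, R, snr):
    """mu_k = (N0/P)(2^{R/k} - 1); mu_0 = +inf (zero rounds always fail)."""
    if k == 0:
        return math.inf
    return (2 ** (R / k) - 1) / snr

def F_exp(gamma, sigma2=1.0):
    """CDF of gamma_{f_r} for a single relay (exponential, mean sigma2)."""
    return 1.0 - math.exp(-gamma / sigma2)

def prob_chi(k, l, R, snr, sigma2=1.0):
    """Probability that the relay decodes exactly at round k (N = 1)."""
    if k < l:
        return F_exp(mu(k - 1, R, snr), sigma2) - F_exp(mu(k, R, snr), sigma2)
    return F_exp(mu(l - 1, R, snr), sigma2)

R, snr, l = 1.0, 10.0, 3
mass_before_l = sum(prob_chi(k, l, R, snr) for k in range(1, l))
# Telescoping: the total mass of chi < l equals 1 - F(mu_{l-1}).
assert abs(mass_before_l - (1.0 - F_exp(mu(l - 1, R, snr)))) < 1e-12
```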
In the following, the CDF of the random variable $\gamma_{f_r}$ is derived. \[a\]Let $\gamma_{f_{i}}$ and $\gamma_{g_{i}}$, $i=1,\ldots,N$, be independent exponential random variables with means $\sigma^{2}_{f_{i}}$ and $\sigma^{2}_{g_{i}}$, respectively. The CDF and PDF of $\gamma_{f_r}$, where $r$ is defined in , are given by and , respectively. $$\begin{aligned} \label{P1} \textnormal{F}_{\gamma_{f_{r}}}(\gamma)&=\prod_{i=1}^{N}\Big(1-e^{-(\frac{1}{\sigma^{2}_{f_{i}}}+\frac{1}{\sigma^{2}_{g_{i}}})\gamma}\Big)-\sum_{j=1}^{N}\frac{1}{\sigma^{2}_{g_{j}}}e^{-\frac{\gamma}{\sigma^{2}_{f_{j}}}}\int_{0}^{\gamma}e^{-\frac{\beta}{\sigma^{2}_{g_{j}}}}\prod_{\underset{i\neq j}{i=1}}^{N}\big(1-e^{-(\frac{1}{\sigma^{2}_{f_{i}}}+\frac{1}{\sigma^{2}_{g_{i}}})\beta}\big)d\beta\hspace{2mm},\hspace{2mm}\gamma \geqslant 0 ,\end{aligned}$$ $$\begin{aligned} \label{P2} \textnormal{f}_{\gamma_{f_{r}}}(\gamma)&=\sum_{j=1}^{N}\Big(\frac{1}{\sigma^{2}_{f_{j}}} e^{-(\frac{1}{\sigma^{2}_{f_{j}}}+\frac{1}{\sigma^{2}_{g_{j}}})\gamma}\prod_{\underset{i\neq j}{i=1}}^{N}\big(1-e^{-(\frac{1}{\sigma^{2}_{f_{i}}}+\frac{1}{\sigma^{2}_{g_{i}}})\gamma}\big) \nonumber\\ &\,\,\,+\frac{1}{\sigma^{2}_{f_{j}}\sigma^{2}_{g_{j}}}e^{-\frac{\gamma}{\sigma^{2}_{f_{j}}}}\int_{0}^{\gamma}e^{-\frac{\beta}{\sigma^{2}_{g_{j}}}}\prod_{\underset{i\neq j}{i=1}}^{N}\big(1-e^{-(\frac{1}{\sigma^{2}_{f_{i}}}+\frac{1}{\sigma^{2}_{g_{i}}})\beta}\big)d\beta\Big)\hspace{2mm},\hspace{2mm}\gamma \geqslant 0.\end{aligned}$$ The proof is given in Appendix I. It is noteworthy that the integrals in and can be easily calculated, since all terms in the expansion of the integrands are of exponential form. 
When $\sigma^{2}_{f_{i}}=\sigma^{2}_{f}$ and $\sigma^{2}_{g_{i}}=\sigma^{2}_{g}$ for all $i=1,2,...,N$, the CDF and PDF of $\gamma_{f_{r}}$ simplify, respectively, as follows: $$\begin{aligned} \label{P3} \textnormal{F}_{\gamma_{f_{r}}}(\gamma)&=\big(1-e^{-(\frac{1}{\sigma^{2}_{f}}+\frac{1}{\sigma^{2}_{g}})\gamma}\big)^{N}\nonumber \\ &-\frac{N\sigma^{2}_{f}}{\sigma^{2}_{f}+{\sigma^{2}_{g}}}e^{-\frac{\gamma}{\sigma^{2}_{f}}}\,\mathcal{B}\!\left(1-e^{-(\frac{1}{\sigma^{2}_{f}}+\frac{1}{\sigma^{2}_{g}})\gamma};N,\frac{\sigma^{2}_{f}}{\sigma^{2}_{f}+{\sigma^{2}_{g}}}\right) \\ \textnormal{f}_{\gamma_{f_{r}}}(\gamma)&=\frac{N}{\sigma^{2}_{f}}e^{-(\frac{1}{\sigma^{2}_{f}}+\frac{1}{\sigma^{2}_{g}})\gamma}\big(1-e^{-(\frac{1}{\sigma^{2}_{f}}+\frac{1}{\sigma^{2}_{g}})\gamma}\big)^{N-1}\nonumber \\ &+\frac{N}{\sigma^{2}_{f}+{\sigma^{2}_{g}}}e^{-\frac{\gamma}{\sigma^{2}_{f}}}\,\mathcal{B}\!\left(1-e^{-(\frac{1}{\sigma^{2}_{f}}+\frac{1}{\sigma^{2}_{g}})\gamma};N,\frac{\sigma^{2}_{f}}{\sigma^{2}_{f}+{\sigma^{2}_{g}}}\right)\end{aligned}$$ where $\mathcal{B}(x;a,b)=\int_{0}^{x}t^{a-1}(1-t)^{b-1}dt$ is the incomplete beta function [@gra96]. For high SNR values, i.e., when $\gamma=\mu_{k}\rightarrow 0$, the closed-form solution for can be obtained as $$\textnormal{F}_{\gamma_{f_{r}}}(\gamma)\cong \gamma^{N}\big(\prod_{i=1}^{N}(\frac{1}{\sigma^{2}_{f_{i}}}+\frac{1}{\sigma^{2}_{g_{i}}})\big)\big(\frac{1}{N}\sum_{i=1}^{N}\frac{\sigma^{2}_{g_{i}}}{\sigma^{2}_{f_{i}}+\sigma^{2}_{g_{i}}}\big) .$$ From , $\text{Pr}[\chi=k]$ in can be written as $$\begin{aligned} \label{9f} \text{Pr}[\chi=k]&=\left\{\begin{array}{cc} \textnormal{F}_{\gamma_{f_{r}}}(\mu_{k-1})-\textnormal{F}_{\gamma_{f_{r}}}(\mu_{k}), & \text{if } k<l, \\ \textnormal{F}_{\gamma_{f_{r}}}(\mu_{l-1}), & \text{if } k\geq l. \end{array}\right.\end{aligned}$$ Next, the conditional probabilities $P_{\text{out}}(l\,|l> k)$ and $P_{\text{out}}(l\,|l\leq k)$ in will be calculated. 
After correct decoding of the source packet at the relay, the relay assists the source by transmitting simultaneously according to the Alamouti code. Hence, assuming the relay transmits with the same power $P$ as the source, the mutual information of the effective channel is given by $$\label{17} I_{s,r,d}=\log_2\left(1+\frac{P}{N_0}\gamma_{f_0}+\frac{P}{N_0}\gamma_{g_r}\right).$$ Let $I_{\text{tot},k,l}$ denote the total mutual information accumulated at the destination after $l$ HARQ rounds when $\chi=k$. For $k < l$, the relay listens for $k$ HARQ rounds and transmits the message simultaneously with the source using the Alamouti code for the remaining $(l - k)$ HARQ rounds. For $k \geq l$, the relay does not help the source during the $l$ HARQ rounds. Hence, $$\begin{aligned} \label{18} I_{\text{tot},k,l}&=\left\{\begin{array}{cc} k\,I_{f_0}+(l-k)\,I_{s,r,d}, & \text{if } k=1, \ldots , l - 1, \\ l I_{f_0}, & \text{if } k = l, \ldots , L, \end{array}\right.\end{aligned}$$ where $I_{f_0}$ is the mutual information between the source and destination in each HARQ round and can be written as $I_{f_0}=\log_2\left(1+\frac{P}{N_0}\gamma_{f_0}\right)$. 
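The accumulation of mutual information in the case distinction above can be sketched numerically; the channel gains below are hypothetical values chosen to illustrate how relay assistance tips the decoding threshold:

```python
import math

def I(gain, snr):
    """Per-round mutual information log2(1 + snr * gain)."""
    return math.log2(1.0 + snr * gain)

def I_tot(k, l, snr, g_f0, g_gr):
    """Accumulated mutual information after l rounds when the relay
    decodes after round k (k >= l means the relay never helps)."""
    if k < l:
        return k * I(g_f0, snr) + (l - k) * math.log2(1.0 + snr * (g_f0 + g_gr))
    return l * I(g_f0, snr)

snr, R = 10.0, 1.0
g_f0, g_gr = 0.02, 0.5          # hypothetical gains |f_0|^2 and |g_r|^2
# Without relay help, two rounds are not enough: 2*log2(1.2) < 1 ...
assert I_tot(2, 2, snr, g_f0, g_gr) < R
# ... but if the relay joins after round 1, the packet gets through:
assert I_tot(1, 2, snr, g_f0, g_gr) > R
```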
Therefore, for $k \geq l$, we have $$\begin{aligned} \label{20} P_{\text{out}}(l\,|l\leq k)&=\text{Pr}[l I_{f_0}<R] = 1-\exp\left(\frac{-\mu_{l}}{\sigma_{f_0}^2}\right).\end{aligned}$$ From , the conditional probability $P_{\text{out}}(l\,|l> k)$ can be calculated as $$\begin{aligned} \label{21} &P_{\text{out}}(l\,|l> k)=\text{Pr}[I_{\text{tot},k,l}<R] \nonumber\\ &=\text{Pr}\!\left\{\log_2\!\left[\!\left(1\!+\!\frac{P}{N_0}\gamma_{f_0}\right)^{\!k} \!\left(1\!+\!\frac{P}{N_0}\gamma_{f_0}\!+\!\frac{P}{N_0}\gamma_{g_r}\right)^{\!l-k}\!\right]\!<R\!\right\} \nonumber\\ &=\text{Pr}\left\{\gamma_{g_r}<\frac{2^{R/(l-k)}}{\frac{P}{N_0}\,\left(1+\frac{P}{N_0}\gamma_{f_0}\right) ^{k/(l-k)}}-\gamma_{f_0} -\frac{N_0}{P}\right\} \nonumber\\ &=\int_{\gamma_{f_0}=0}^{\mu_l}\int_{\gamma_{g_r}=0}^{\beta(\gamma_{f_0})} \frac{e^{-\frac{\gamma_{f_0}}{\sigma^2_{f_0}}}}{\sigma^2_{f_0}}\, \textnormal{f}_{\gamma_{g_r}}(\gamma_{g_r})\, d\gamma_{f_0}\,d\gamma_{g_r}\triangleq \Upsilon(l,k),\end{aligned}$$ where $\beta(\gamma_{f_0})=\frac{2^{R/(l-k)}N_0}{P\left(1+\frac{P}{N_0}\gamma_{f_0}\right)^{k/(l-k)}}-\gamma_{f_0}-\frac{N_0}{P}$. Due to symmetry, the PDF of the random variable $\gamma_{g_r}$, i.e., $\textnormal{f}_{\gamma_{g_r}}(\gamma)$, has the same form as the PDF of the random variable $\gamma_{f_r}$, with possibly a different mean. 
Thus, the PDF of $\gamma_{g_r}$ can be found by taking the derivative of $\text{Pr}\{\gamma_{g_r}<\gamma\}$ in as $$\begin{aligned} \label{22q} &\textnormal{f}_{\gamma_{g_r}}(\gamma)=\sum_{j=1}^{N}\Big(\frac{1}{\sigma^{2}_{g_{j}}} e^{-(\frac{1}{\sigma^{2}_{f_{j}}}+\frac{1}{\sigma^{2}_{g_{j}}})\gamma}\prod_{\underset{i\neq j}{i=1}}^{N}\big(1-e^{-(\frac{1}{\sigma^{2}_{f_{i}}}+\frac{1}{\sigma^{2}_{g_{i}}})\gamma}\big) \nonumber\\ &\,\,\,+\frac{1}{\sigma^{2}_{f_{j}}\sigma^{2}_{g_{j}}}e^{-\frac{\gamma}{\sigma^{2}_{g_{j}}}} \int_{0}^{\gamma}e^{-\frac{\beta}{\sigma^{2}_{f_{j}}}}\prod_{\underset{i\neq j}{i=1}}^{N}\big(1-e^{-(\frac{1}{\sigma^{2}_{f_{i}}}+\frac{1}{\sigma^{2}_{g_{i}}})\beta}\big)d\beta\Big)\hspace{2mm},\hspace{2mm}\gamma \geqslant 0.\end{aligned}$$ By substituting $\textnormal{f}_{\gamma_{g_r}}(\gamma)$ from into , $P_{\text{out}}(l\,|l> k)$ is obtained. Therefore, using , , and , the outage probability in the $l$th stage of the HARQ process can be obtained as $$\begin{aligned} \label{21o} &P_{\text{out}}(l)= \sum_{k=1}^{l-1}(\textnormal{F}_{\gamma_{f_{r}}}(\mu_{k-1})-\textnormal{F}_{\gamma_{f_{r}}}(\mu_{k}))\Upsilon(l,k) +\sum_{k=l}^{L}\textnormal{F}_{\gamma_{f_{r}}}(\mu_{l-1})\left(1-e^{\frac{-\mu_l}{\sigma_{f_0}^2}}\right),\end{aligned}$$ where $\Upsilon(l,k)$ is defined in . Approximate Outage Probability ------------------------------ In the previous subsection, we derived the exact outage probability in the $l$th round of HARQ. However, a triple integral must be solved to obtain $\Upsilon(l,k)$, and $\textnormal{F}_{\gamma_{f_{r}}}(\mu_{k-1})$ is also in integral form. To gain insight into the diversity order and to find ways of optimizing the system, in this subsection we seek a simpler expression for the outage probability. In the following, an approximation of the CDF of the random variable $\gamma_{f_r}$ is derived. 
\[a\]Let $\gamma_{f_i}$ and $\gamma_{g_i}$, $i=1,\ldots,N$, be independent exponential random variables with means $\sigma^2_{f_i}=\sigma^2_{g_i}=\sigma^2_{i}$. The cumulative distribution function of $\gamma_{f_r}$, where $r$ is defined as , can be approximated as $$\begin{aligned} \label{14q} &\textnormal{F}_{\gamma_{f_{r}}}(\gamma) \approx 1-\sqrt{1-\prod_{i=1}^N\left(1-e^{-\frac{2\gamma}{\sigma^2_{i}}}\right)}.\end{aligned}$$ The proof is given in Appendix II. In Fig. \[f1\], we compare the approximate PDF of $\gamma_{f_r}$, obtained by differentiating the CDF in , with the simulated PDF of $\gamma_{f_r}$. As can be seen from Fig. \[f1\], for the case of a single-relay network ($N=1$), the analytical and simulated results coincide. This is because the independence assumption for $\gamma_{f_i}$ and $\gamma_{g_i}$ is valid for $N=1$, and the approximation in becomes an equality. For the opportunistic relaying case, i.e., $N>1$, the analytical curves closely approximate the simulation results. ![Comparison of the PDF of the received SNR at the selected relay $r$ in a network with $N$ relays.[]{data-label="f1"}](PDF.eps "fig:"){width="\columnwidth"}\ From , $\text{Pr}[\chi=k]$ in can be approximated as $$\begin{aligned} \label{15q} \text{Pr}[\chi=k]&\approx \sqrt{1-\prod_{i=1}^N\left(1-e^{-\frac{2\mu_{k}}{\sigma^2_{i}}}\right)} \nonumber\\ &-\sqrt{1-\prod_{i=1}^N \left(1-e^{-\frac{2\mu_{k-1}}{\sigma^2_{i}}}\right)}\triangleq \Omega_1(k),\end{aligned}$$ for $k<l$, and $$\begin{aligned} \label{16q} &\text{Pr}[\chi=k]\approx 1-\sqrt{1-\prod_{i=1}^N\left(1-e^{-\frac{2\mu_{l-1}}{\sigma^2_{i}}}\right)}\triangleq \Omega_2(l),\end{aligned}$$ for $k\geq l$. Due to symmetry, the PDF of the random variable $\gamma_{g_r}$, i.e., $\textnormal{f}_{\gamma_{g_{r}}}(\gamma)$, has the same form as the PDF of the random variable $\gamma_{f_r}$, with possibly a different mean. 
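The approximation can also be probed numerically. The sketch below first confirms that for $N=1$ it collapses to the exact exponential CDF, and then compares it against a Monte Carlo estimate for $N=3$ (unit variances; the tolerance is chosen loosely to cover both sampling noise and the approximation error):

```python
import math, random

def F_approx(gamma, sigmas):
    """Approximate CDF of gamma_{f_r} (sigma_f^2 = sigma_g^2 = sigma_i^2)."""
    prod = 1.0
    for s2 in sigmas:
        prod *= 1.0 - math.exp(-2.0 * gamma / s2)
    return 1.0 - math.sqrt(1.0 - prod)

# For N = 1 the approximation is exact: 1 - sqrt(e^{-2g}) = 1 - e^{-g}.
g = 0.7
assert abs(F_approx(g, [1.0]) - (1.0 - math.exp(-g))) < 1e-12

# For N > 1, compare with a Monte Carlo estimate of F_{gamma_{f_r}}.
random.seed(7)
N, trials, hits = 3, 200_000, 0
for _ in range(trials):
    f = [random.expovariate(1.0) for _ in range(N)]
    gg = [random.expovariate(1.0) for _ in range(N)]
    r = max(range(N), key=lambda i: min(f[i], gg[i]))  # selection rule
    hits += f[r] < g
err = abs(hits / trials - F_approx(g, [1.0] * N))      # approximation error
```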
Thus, the closed-form approximation for the PDF of $\gamma_{g_r}$ can be found by differentiating $\textnormal{F}_{\gamma_{f_{r}}}(\gamma)$ in as $$\begin{aligned} \label{22q} &\textnormal{f}_{\gamma_{g_r}}(\gamma)\approx\frac{1}{\sqrt{1-\displaystyle\prod_{i=1}^N\left(1-e^{-\frac{2\gamma}{\sigma^2_{i}}}\right)}} \sum_{i=1}^{N}\frac{ e^{-\frac{2\gamma}{\sigma^2_{i}}} }{\sigma^2_{i}}\prod_{\underset{j\neq i}{j=1}}^N\!\!\left(1\!-e^{\!-\frac{2\gamma}{\sigma^2_{j}}}\right).\end{aligned}$$ By substituting $\textnormal{f}_{\gamma_{g_r}}(\gamma)$ from into , $P_{\text{out}}(l\,|l> k)$ can be approximated as $$\begin{aligned} \label{21q} &P_{\text{out}}(l\,|l> k)\approx\int_{\gamma_{f_0}=0}^{\mu_l}\int_{\gamma_{g_r}=0}^{\beta(\gamma_{f_0})} \frac{e^{-\frac{\gamma_{f_0}}{\sigma^2_{f_0}}}}{\sigma^2_{f_0}}\, \textnormal{f}_{\gamma_{g_r}}(\gamma_{g_r})\, d\gamma_{f_0}\,d\gamma_{g_r}\triangleq \Upsilon_2(l,k).\end{aligned}$$ Therefore, using , , and , the outage probability in the $l$th stage of the HARQ process can be obtained as $$\begin{aligned} \label{21oq} &P_{\text{out}}(l)\approx \sum_{k=1}^{l-1}\Omega_1(k)\Upsilon_2(l,k)+\sum_{k=l}^{L}\Omega_2(l)\left(1-e^{\frac{-\mu_l}{\sigma_{f_0}^2}}\right).\end{aligned}$$ Upper-Bound on Outage Probability --------------------------------- To determine the minimum diversity gain of HARQ wireless relay networks when the selection strategy in is used, it suffices to derive an upper bound on the outage probability $P_{\text{out}}(l)$. 
The random variable $\gamma_{f_r}$, which corresponds to the source-relay channel of the selected relay, can be bounded as $$\label{4c} \gamma_{\max}\leq \gamma_{f_r}\leq \gamma^s_{\max},$$ where $\gamma_{\max}$ is given in and $\gamma^s_{\max}$ is defined as $$\label{4a} \gamma^s_{\max}=\max_{i=1,\ldots,N}\left\{\gamma_{f_i}\right\}.$$ The CDF of $\gamma^s_{\max}$ can be written as $$\begin{aligned} \label{6ab} \text{Pr}\{\gamma^s_{\max}<\gamma\}&=\text{Pr}\{\gamma_{f_1}<\gamma,\gamma_{f_2}<\gamma,\ldots, \gamma_{f_N}<\gamma\} \nonumber\\ &=\prod_{i=1}^N\left(1-e^{-\frac{\gamma}{\sigma^2_{f_i}}}\right).\end{aligned}$$ Thus, it is easy to show that the CDF of $\gamma_{f_r}$ is bounded as $$\begin{aligned} \label{18d} \text{Pr}\{\gamma^s_{\max}<\gamma\}&\leq\text{Pr}\{\gamma_{f_r}<\gamma\}\leq\text{Pr}\{\gamma_{\max}<\gamma\}.\end{aligned}$$ Therefore, combining and , an upper bound on $\text{Pr}[\chi=k]$ is obtained as follows: $$\begin{aligned} \label{10c} \text{Pr}[\chi=k]\leq\text{Pr}[\gamma_{\max}<\mu_{k-1}]-\text{Pr}[\gamma^s_{\max}<\mu_{k}],\end{aligned}$$ for $k<l$. 
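The sandwich bound on the CDF of $\gamma_{f_r}$ can be verified by simulation; in the sketch below all variances are set to one (an illustrative choice), so the two bounding CDFs have simple closed forms:

```python
import math, random

# Monte Carlo estimate of Pr{gamma_{f_r} < gamma_t} under max-min selection,
# sandwiched between the CDFs of gamma^s_max (upper RV) and gamma_max (lower RV).
random.seed(3)
N, gamma_t, trials = 3, 0.8, 100_000
hits_fr = 0
for _ in range(trials):
    f = [random.expovariate(1.0) for _ in range(N)]
    g = [random.expovariate(1.0) for _ in range(N)]
    r = max(range(N), key=lambda i: min(f[i], g[i]))
    hits_fr += f[r] < gamma_t

cdf_fr = hits_fr / trials
cdf_smax = (1.0 - math.exp(-gamma_t)) ** N         # CDF of max_i gamma_{f_i}
cdf_max = (1.0 - math.exp(-2.0 * gamma_t)) ** N    # CDF of max_i min(., .)
assert cdf_smax <= cdf_fr <= cdf_max               # the sandwich bound holds
```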
From , , and , $\text{Pr}[\chi=k]$ for $k<l$ can be calculated as $$\begin{aligned} \label{10cc} \text{Pr}[\chi=k]&\leq \prod_{i=1}^N\!\left(1-e^{-\mu_{k-1}\left(\frac{1}{\sigma^2_{f_i}}+\frac{1}{\sigma^2_{g_i}}\right)}\right) \!-\!\prod_{i=1}^N\!\left(1-e^{-\frac{\mu_{k}}{\sigma^2_{f_i}}}\right) \nonumber\\ & \triangleq \Lambda_1(k).\end{aligned}$$ For $k\geq l$, by combining , , and , we have $$\begin{aligned} \label{11cc} &\text{Pr}[\chi=k]\leq\text{Pr}[\gamma_{\max}<\mu_{l-1}] \nonumber\\ &=\prod_{i=1}^N\left(1-e^{-\mu_{l-1}\left(\frac{1}{\sigma^2_{f_i}}+\frac{1}{\sigma^2_{g_i}}\right)}\right)\triangleq \Lambda_2(l).\end{aligned}$$ Next, $P_{\text{out}}(l\,|l> k)$ in can be upper-bounded as $$\begin{aligned} \label{21j} &P_{\text{out}}(l\,|l> k) \nonumber\\ &\leq\text{Pr}\left\{\gamma_{\max}<\frac{2^{R/(l-k)}}{\frac{P}{N_0}\,\left(1+\frac{P}{N_0}\gamma_{f_0}\right) ^{k/(l-k)}}-\gamma_{f_0} -\frac{N_0}{P}\right\} \nonumber\\ &=\int_{\gamma_{f_0}=0}^{\mu_l}\int_{\gamma_{\max}=0}^{\beta(\gamma_{f_0})} \frac{e^{-\frac{\gamma_{f_0}}{\sigma^2_{f_0}}}}{\sigma^2_{f_0}}\, \textnormal{f}_{\gamma_{\max}}(\gamma_{\max})\, d\gamma_{f_0}\,d\gamma_{\max}.\end{aligned}$$ The PDF of the random variable $\gamma_{\max}$, i.e., $\textnormal{f}_{\gamma_{\max}}(\gamma)$, can be found by taking the derivative of $\text{Pr}\{\gamma_{\max}<\gamma\}$ in . Thus, we have $$\begin{aligned} \label{14hh} \textnormal{f}_{\gamma_{\max}}(\gamma)&= \sum_{i=1}^{N} \left(\frac{1}{\sigma^2_{f_i}}+\frac{1}{\sigma^2_{g_i}}\right) e^{-\gamma \left(\frac{1}{\sigma^2_{f_i}}+\frac{1}{\sigma^2_{g_i}}\right)} \nonumber\\ &\times \prod_{\underset{j\neq i}{j=1}}^N \left(1-e^{-\gamma\left(\frac{1}{\sigma^2_{f_j}}+\frac{1}{\sigma^2_{g_j}}\right)}\right).\end{aligned}$$ Therefore, by substituting $\text{Pr}[\chi=k]$ from and , and $P_{\text{out}}(l\,|l\leq k)$ and $P_{\text{out}}(l\,|l> k)$ from and , respectively, into , an upper bound on the outage probability at the $l$th stage of the HARQ process, i.e., on $P_{\text{out}}(l)$, is obtained. 
A tractable definition of the diversity gain is [@jaf05 Eq. (1.19)] $$\begin{aligned} \label{14h} G_d=-\lim_{\rho\rightarrow\infty}\frac{\log \left(P_{\text{out}}\right)}{\log \left(\rho\right)},\end{aligned}$$ where $\rho=\frac{P}{N_0}$. Thus, in the following, we investigate the asymptotic behavior and diversity order of $P_{\text{out}}(l)$ in . From , an upper bound for $\textnormal{f}_{\gamma_{\max}}(\gamma)$ can be found as $$\begin{aligned} \label{14k} \textnormal{f}_{\gamma_{\max}}(\gamma)&\leq N \gamma^{N-1}\prod_{i=1}^N \left(\frac{1}{\sigma^2_{f_i}}+\frac{1}{\sigma^2_{g_i}}\right),\end{aligned}$$ which is a tight bound when $\gamma\rightarrow 0$. Note that in the high-SNR regime, the behavior of the fading distribution around zero is what matters (see, e.g., [@rib05]). Using and the fact that the exponential density is a decreasing function of $\gamma_{f_0}$, $P_{\text{out}}(l\,|l> k)$ in can be further upper-bounded as $$\begin{aligned} \label{kp} &P_{\text{out}}(l\,|l> k) \nonumber\\ &\leq\int_{\gamma_{f_0}=0}^{\mu_l}\int_{\gamma_{\max}=0}^{\beta(\gamma_{f_0})} \frac{1}{\sigma^2_{f_0}}\, N\,\gamma_{\max}^{N-1}\prod_{i=1}^N \left(\frac{1}{\sigma^2_{f_i}}+\frac{1}{\sigma^2_{g_i}}\right)\, d\gamma_{f_0}\,d\gamma_{\max} \nonumber\\ & \leq\frac{\mu_l}{\sigma^2_{f_0}}\, \mu_{l-k}^{N}\prod_{i=1}^N \left(\frac{1}{\sigma^2_{f_i}}+\frac{1}{\sigma^2_{g_i}}\right)\triangleq \Psi(l,k).\end{aligned}$$ Combining , , , , and , a closed-form upper bound for the outage probability after $l$ HARQ rounds can be obtained as $$\begin{aligned} \label{21p} &P_{\text{out}}(l)\leq \sum_{k=1}^{l-1}\Lambda_1(k)\Psi(l,k)+\sum_{k=l}^{L}\Lambda_2(l)\left(1-e^{\frac{-\mu_l}{\sigma_{f_0}^2}}\right).\end{aligned}$$ Furthermore, using , and , another closed-form approximation for $P_{\text{out}}(l)$ can be obtained as $$\begin{aligned} \label{21w} &P_{\text{out}}(l)\approx \sum_{k=1}^{l-1}\Omega_1(k)\Psi(l,k)+\sum_{k=l}^{L}\Omega_2(l)\left(1-e^{\frac{-\mu_l}{\sigma_{f_0}^2}}\right).\end{aligned}$$ 
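The closed-form upper bound can be evaluated directly. The following sketch (all variances set to one, $R=1$, and SNR points chosen arbitrarily at 30 and 40 dB) estimates the slope of the bound on a log-log scale, which comes out close to the full diversity order $N+1$:

```python
import math

def mu(k, R, rho):
    """mu_k = (2^{R/k} - 1) / rho, with mu_0 = +inf (zero rounds)."""
    return math.inf if k == 0 else (2 ** (R / k) - 1) / rho

def p_out_ub(l, L, N, R, rho):
    """Closed-form upper bound on P_out(l), all channel variances = 1,
    so prod_i (1/sigma_f^2 + 1/sigma_g^2) = 2^N."""
    c = 2.0 ** N
    total = 0.0
    for k in range(1, l):          # relay decodes before round l
        lam1 = ((1 - math.exp(-2 * mu(k - 1, R, rho))) ** N
                - (1 - math.exp(-mu(k, R, rho))) ** N)
        psi = mu(l, R, rho) * mu(l - k, R, rho) ** N * c
        total += lam1 * psi
    lam2 = (1 - math.exp(-2 * mu(l - 1, R, rho))) ** N
    total += (L - l + 1) * lam2 * (1 - math.exp(-mu(l, R, rho)))
    return total

N, l, L, R = 2, 2, 5, 1.0
p1 = p_out_ub(l, L, N, R, 1e3)     # 30 dB
p2 = p_out_ub(l, L, N, R, 1e4)     # 40 dB
slope = -(math.log10(p2) - math.log10(p1))   # decay per decade of SNR
assert abs(slope - (N + 1)) < 0.1            # ~ diversity order N + 1 = 3
```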
\[a\]For a HARQ system with $N$ potential relay nodes, the relay selection strategy based on achieves the full diversity order of $N+1$. The proof is given in Appendix III. Numerical Analysis ================== In this section, the performance of the proposed relay-selection HARQ system is studied through numerical results. We use equal power allocation between the source and the selected relay. The relays and the destination are assumed to have the same noise power, and all links undergo unit-variance Rayleigh flat fading, i.e., $\sigma^2_{f_i}=\sigma^2_{g_i}=\sigma^2_{f_0}=1$. It is also assumed that the rate $R$ is normalized to 1. We evaluate the outage probability performance versus the transmit SNR $\frac{P}{N_0}$. The block fading model is used, in which the channel coefficients change randomly in time, to isolate the benefits of spatial diversity. The simulation results are averaged over 3,000,000 transmitted symbols (channel realization trials). Fig. \[f3\] confirms that the analytical results obtained in Section III for the outage probability accurately match the simulation results. We set the maximum number of HARQ rounds to $L=5$. The outage probability at the 2nd HARQ round, i.e., $P_{\text{out}}(l=2)$, is compared for two different numbers of relays, $N=2,4$. One can see that the approximate outage probability derived in closely matches the simulated curve for all values of SNR. In addition, the closed-form outage probability expression in well approximates the simulated results, especially in medium and high SNR conditions. Furthermore, Fig. \[f3\] shows that the upper-bound expression in is tight. The asymptotic outage probability derived in is also depicted in Fig. \[f3\], which confirms the full-diversity order of the proposed scheme. 
![The outage probability $P_{\text{out}}(l)$ curves of delay-limited HARQ networks employing opportunistic relaying with 2 and 4 relays, when $R=1$ bps/Hz, $L=5$ is the maximum number of HARQ rounds, and HARQ round $l=2$ is considered. []{data-label="f3"}](outage1.eps "fig:"){width="\columnwidth"}\ It is straightforward to show that the outage probability for direct transmission after $l$ HARQ rounds is [@nar08 Eq. (7)] $$\begin{aligned} \label{61} &P_{\text{out},d}(l)= 1-\exp\left(-\frac{2^{R/l}-1}{\rho \, \sigma^2_{f_0}}\right).\end{aligned}$$ In Fig. \[g2\], the outage probability at the $l=L=5$th HARQ round is considered for systems with different numbers of relays. After selecting the best relay, the Alamouti code is employed in the second transmission phase. Compared to the single-relay HARQ system proposed in [@nar08], the proposed HARQ opportunistic relaying system with $N=2,3,4$ relays performs considerably better under all SNR conditions. For example, it can be seen that at an outage probability of $10^{-3}$, the system with two relays saves around $8$ dB in SNR compared to the single-relay HARQ system. Furthermore, it can be checked that the system with $N$ relays achieves a diversity order of $N+1$. The delay-limited throughput defined in explicitly accounts for finite delay constraints and the associated non-zero packet outage probabilities. It can be shown that for small outage probabilities, this delay-limited throughput is greater than the conventional long-term average throughput defined in . In addition to the finite delay constraint, represented by the maximum number $L$ of HARQ rounds, higher-layer applications usually require that $P_{\text{out}}\leq \rho_{\max}$, where $\rho_{\max}$ is a target outage probability. The total LT and DL throughputs are studied in Fig. \[g3\] subject to user QoS constraints, represented by the outage probability target $\rho_{\max}$ and the delay constraint $L$. In Fig. 
\[g3\], the total LT and DL throughputs of the opportunistic relaying HARQ system with $N=2,4$ relays are plotted as a function of SNR and compared with the direct transmission HARQ system, with $L=3$, $\rho_{\max}=10^{-3}$, and the following linear relay geometry: $\sigma^2_{f_r}=\sigma^2_{g_r}=\sigma^2_{0}=1$. As expected, the presence of the relays significantly increases the throughput. Furthermore, in agreement with [@nar08 Eq. (5)], it can be seen that the delay-limited throughput is greater than the long-term average throughput. An interesting observation is that, in addition to the diversity gain achieved by the opportunistic relaying HARQ system, shown previously in Fig. \[g2\], higher throughputs are also attainable. This behavior underscores the importance of the proposed system. ![The outage probability performance of the proposed relay selection HARQ system versus transmit SNR in a network with different numbers of relays, $R=1$ bits/sec, $L=l=5$ HARQ rounds, and $\sigma^2_{f_i}=\sigma^2_{g_i}=\sigma^2_{f_0}=1$. []{data-label="g2"}](outage2.eps "fig:"){width="\columnwidth"}\ ![ The delay-limited (DL) and long-term (LT) throughputs of direct transmission and relay selection HARQ system versus transmit SNR for target outage probability $\rho_{\max}=10^{-3}$, $L=3$ HARQ rounds, physical layer rate $R=1$ bits/sec, and $\sigma^2_{f_r}=\sigma^2_{g_r}=\sigma^2_{0}=1$, $\sigma^2_{f_i}=\sigma^2_{g_i}=\sigma^2_{f_0}=1$. []{data-label="g3"}](rate_SNR.eps "fig:"){width="\columnwidth"}\ Conclusion ========== In this paper, we proposed a throughput-efficient relay selection HARQ system over Rayleigh fading. The throughput-delay performance of a half-duplex multi-branch relay system with HARQ was analyzed. A distributed relay selection scheme was introduced for HARQ multi-relay networks, using the ACK/NACK signals transmitted by the destination. 
We evaluated the average throughput and outage error probability performance and showed that the proposed technique significantly reduces the multiplexing loss due to the half-duplex constraint while providing attractive outage error probability performance. Closed-form expressions were derived for the outage probability, defined as the probability of packet failure after $L$ HARQ rounds, in the half-duplex setting. For sufficiently high SNR, we derived a simple closed-form average outage probability expression for a HARQ system with multiple cooperating branches. Based on the derived upper-bound expressions, it was shown that the proposed scheme achieves the full spatial diversity order of $N+1$ in a non-orthogonal relay network with $N$ parallel relays. The analysis presented here allows quantitative evaluation of the throughput-delay performance gain of the relay selection channel compared to direct transmission. The numerical results confirmed that the proposed schemes can bring diversity and multiplexing gains in wireless relay networks. Proof of Proposition 1 ====================== First, we define the auxiliary random variables $m_{i}\triangleq\min\{\gamma_{f_{i}},\gamma_{g_{i}}\}$ for $i=1,2,...,N$. 
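Since each $m_i$ is the minimum of two independent exponential random variables, it is again exponentially distributed with the sum of the two rates; a quick Monte Carlo sanity check of this standard fact (the rates play the role of $1/\sigma^{2}_{f_{i}}$ and $1/\sigma^{2}_{g_{i}}$; the values, trial count and seed are illustrative):

```python
import math
import random

def min_exp_cdf_check(rate_f=1.0, rate_g=1.0, x=0.7, trials=200_000, seed=2):
    """Empirical CDF of m = min(gamma_f, gamma_g) at x versus the
    closed form 1 - exp(-(rate_f + rate_g) * x)."""
    rng = random.Random(seed)
    hits = sum(
        min(rng.expovariate(rate_f), rng.expovariate(rate_g)) <= x
        for _ in range(trials)
    )
    empirical = hits / trials
    exact = 1.0 - math.exp(-(rate_f + rate_g) * x)
    return empirical, exact
```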
Since $\{\gamma_{f_{i}},\gamma_{g_{i}}\}_{i=1}^{N}$ are independent exponential random variables, the $m_{i}$s are also independent exponential random variables with the following CDF: $$\label{PE1} \textnormal{F}_{m_{i}}(x)=1-e^{-(\frac{1}{\sigma^{2}_{f_{i}}}+\frac{1}{\sigma^{2}_{g_{i}}})x}\hspace{2mm},\hspace{2mm}x\geqslant 0$$ Also, using the partition theorem (law of total probability), we have $$\label{PE3} {\ensuremath{\operatorname{Pr}}}\{\gamma_{f_{r}}\leqslant \gamma\}=\sum_{j=1}^{N}{\ensuremath{\operatorname{Pr}}}\{(\gamma_{f_{r}}\leqslant \gamma)\cap(r=j)\} .$$ For $j=1,2,...,N$, the summands of can be obtained as follows $$\begin{aligned} \label{PE4} {\ensuremath{\operatorname{Pr}}}\{(&\gamma_{f_{r}}\leqslant \gamma)\cap(r=j)\} \nonumber \\ &={\ensuremath{\operatorname{Pr}}}\{(m_{j}\leqslant \gamma)\cap\big(\bigcap_{\underset{i\neq j}{i=1}}^{N}(m_{i}<m_{j})\big)\cap(\gamma_{f_{j}}\leqslant \gamma)\} \nonumber \\ &={\ensuremath{\operatorname{Pr}}}\{(m_{j}\leqslant \gamma)\cap\big(\bigcap_{\underset{i\neq j}{i=1}}^{N}(m_{i}<m_{j})\big)\} \nonumber \\ &\hspace{3.3mm}-{\ensuremath{\operatorname{Pr}}}\{(m_{j}\leqslant \gamma)\cap\big(\bigcap_{\underset{i\neq j}{i=1}}^{N}(m_{i}<m_{j})\big)\cap(\gamma_{f_{j}}> \gamma)\} .\end{aligned}$$ By substituting in , we obtain $$\begin{aligned} \label{PE4.1} {\ensuremath{\operatorname{Pr}}}\{\gamma_{f_{r}}&\leqslant \gamma\}=\sum_{j=1}^{N}{\ensuremath{\operatorname{Pr}}}\{(m_{j}\leqslant \gamma)\cap\big(\bigcap_{\underset{i\neq j}{i=1}}^{N}(m_{i}<m_{j})\big)\} \nonumber \\ &-\sum_{j=1}^{N} {\ensuremath{\operatorname{Pr}}}\{(m_{j}\leqslant \gamma)\cap\big(\bigcap_{\underset{i\neq j}{i=1}}^{N}(m_{i}<m_{j})\big)\cap(\gamma_{f_{j}}> \gamma)\} \nonumber \\ &={\ensuremath{\operatorname{Pr}}}\{\max(m_{1},m_{2},...,m_{N})\leqslant \gamma\} \nonumber \\ &-\sum_{j=1}^{N} {\ensuremath{\operatorname{Pr}}}\{(m_{j}\leqslant \gamma)\cap\big(\bigcap_{\underset{i\neq j}{i=1}}^{N}(m_{i}<m_{j})\big)\cap(\gamma_{f_{j}}> \gamma)\} .\end{aligned}$$ Since the $m_{i}$s are 
independent, the first term on the right side of is given by $$\begin{aligned} \label{PE4.2} {\ensuremath{\operatorname{Pr}}}\{&\max(m_{1},m_{2},...,m_{N})\leqslant \gamma\}={\ensuremath{\operatorname{Pr}}}\{\bigcap_{i=1}^{N}(m_{i}\leqslant\gamma)\} \nonumber \\ &=\prod_{i=1}^{N}{\ensuremath{\operatorname{Pr}}}\{m_{i}\leqslant\gamma\}=\prod_{i=1}^{N}\textnormal{F}_{m_{i}}(\gamma) .\end{aligned}$$ Also, the summand of the summation on the right side of is obtained as follows $$\begin{aligned} \label{PE6} {\ensuremath{\operatorname{Pr}}}\{(&m_{j}\leqslant \gamma)\cap\big(\bigcap_{\underset{i\neq j}{i=1}}^{N}(m_{i}<m_{j})\big)\cap(\gamma_{f_{j}}> \gamma)\} \nonumber \\ &={\ensuremath{\operatorname{Pr}}}\{\gamma_{f_{j}}>\gamma\} {\ensuremath{\operatorname{Pr}}}\{(m_{j}\leqslant \gamma)\cap\big(\bigcap_{\underset{i\neq j}{i=1}}^{N}(m_{i}<m_{j})\big)|\gamma_{f_{j}}>\gamma\} \nonumber \\ &={\ensuremath{\operatorname{Pr}}}\{\gamma_{f_{j}}>\gamma\} {\ensuremath{\operatorname{Pr}}}\{(\gamma_{g_{j}}\leqslant \gamma)\cap\big(\bigcap_{\underset{i\neq j}{i=1}}^{N}(m_{i}<\gamma_{g_{j}})\big)\}\nonumber \\ &=e^{-\frac{\gamma}{\sigma^{2}_{f_{j}}}}\int_{0}^{\gamma}\frac{1}{\sigma^{2}_{g_{j}}}e^{-\frac{\beta}{\sigma^{2}_{g_{j}}}}{\ensuremath{\operatorname{Pr}}}\{\bigcap_{\underset{i\neq j}{i=1}}^{N}(m_{i}<\beta)\}d\beta \nonumber \\ &=\frac{1}{\sigma^{2}_{g_{j}}}e^{-\frac{\gamma}{\sigma^{2}_{f_{j}}}}\int_{0}^{\gamma}e^{-\frac{\beta}{\sigma^{2}_{g_{j}}}}\prod_{\underset{i\neq j}{i=1}}^{N}\textnormal{F}_{m_{i}}(\beta)d\beta .\end{aligned}$$ Substituting from into and , one can obtain the CDF in using to . Also, taking the derivative of with respect to $\gamma$ results in the PDF of $\gamma_{f_{r}}$, given by . 
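The final expression above can be cross-checked numerically. The sketch below is our own specialization to $N=2$ relays with all unit variances, in which the remaining integral is carried out in closed form, compared against a direct Monte Carlo simulation of the selection rule $r=\arg\max_j m_j$; names and sample sizes are illustrative.

```python
import math
import random

def cdf_selected_source_link(gamma):
    """CDF of gamma_{f_r} for N = 2 relays, all unit variances,
    from the derivation above with the integral done analytically."""
    # prod_i F_{m_i}(gamma) with F_m(x) = 1 - exp(-2x)
    first = (1.0 - math.exp(-2.0 * gamma)) ** 2
    # integral_0^gamma e^{-b} (1 - e^{-2b}) db in closed form
    integral = (1.0 - math.exp(-gamma)) - (1.0 - math.exp(-3.0 * gamma)) / 3.0
    return first - 2.0 * math.exp(-gamma) * integral

def cdf_mc(gamma, trials=200_000, seed=3):
    """Monte Carlo: select the relay maximizing min(gamma_f, gamma_g),
    then test whether its source-relay SNR falls below gamma."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        links = [(rng.expovariate(1.0), rng.expovariate(1.0)) for _ in range(2)]
        f_r, _ = max(links, key=lambda fg: min(fg))
        hits += f_r <= gamma
    return hits / trials
```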
Proof of Proposition 2 ====================== To derive the CDF of $\gamma_{f_r}$, we should first find the CDF of $\gamma_{\max}$, which can be written as $$\begin{aligned} \label{10} \text{Pr}\{\gamma_{\max}<\gamma\}&=\text{Pr}\{\gamma_1<\gamma,\gamma_2<\gamma,\ldots,\gamma_N<\gamma\} ,\end{aligned}$$ where $\gamma_i=\min\left\{\gamma_{f_i},\gamma_{g_i}\right\}$ is again an exponential random variable (RV) with parameter equal to the sum of the parameters of the exponential RVs $\gamma_{f_i}$ and $\gamma_{g_i}$, i.e., $1/\sigma^2_{f_i}$ and $1/\sigma^2_{g_i}$, respectively. Thus, assuming that all channel coefficients are independent of each other, we can rewrite as $$\begin{aligned} \label{11} \text{Pr}\{\gamma_{\max}<\gamma\}&=\prod_{i=1}^N\left(1-e^{-\gamma\left(\frac{1}{\sigma^2_{f_i}} +\frac{1}{\sigma^2_{g_i}}\right)}\right).\end{aligned}$$ On the other hand, we have $$\begin{aligned} \label{12} &\text{Pr}\{\gamma_{\max}<\gamma\}=1-\text{Pr}\{\min\left\{\gamma_{f_r},\gamma_{g_r}\right\}>\gamma\} \nonumber\\ &=1-\text{Pr}\{\gamma_{f_r}>\gamma,\gamma_{g_r}>\gamma\}\approx 1-\text{Pr}\{\gamma_{f_r}\!>\!\gamma\}\text{Pr}\{\gamma_{g_r}\!>\!\gamma\},\end{aligned}$$ where the last step is an approximation that treats $\gamma_{f_r}$ and $\gamma_{g_r}$ as independent. For simplicity, we assume equidistant source-relay and relay-destination links, i.e., $\sigma^2_{f_i}=\sigma^2_{g_i}=\sigma^2_{i}$. Since we have assumed that $\gamma_{f_i}$ and $\gamma_{g_i}$ have the same statistics, using and , we have $$\begin{aligned} \label{13} &\text{Pr}\{\gamma_{f_r}<\gamma\}=\text{Pr}\{\gamma_{g_r}<\gamma\} \approx 1-\sqrt{1-\prod_{i=1}^N\left(1-e^{-\frac{2\gamma}{\sigma^2_{i}}}\right)}.\end{aligned}$$ Proof of Proposition 3 ====================== From a Taylor series expansion, it can be shown that the first term in is $O(1/\rho^{2N+1})$. 
From , and by representing the factor $\mu_{k}$ in terms of the SNR ratio $\rho$, the outage probability at high SNR can be written as $$\begin{aligned} \label{60} &P_{\text{out}}(l)\leq \frac{\Delta(l)}{\rho^{N+1}},\end{aligned}$$ where $$\Delta(l)=\left(2^{\frac{R}{l}}-1\right)\left(2^{\frac{R}{l-1}}-1\right)^{\!N} \frac{L-l+1}{\sigma_{f_0}^2} \prod_{i=1}^N \! \left(\frac{1}{\sigma^2_{f_i}}+\frac{1}{\sigma^2_{g_i}}\right).$$ Hence, observing , the diversity order defined in is equal to $N+1$, which is the full spatial diversity for $N+1$ transmitting nodes. [^1]: Behrouz Maham and Aydin Behnad are with the School of ECE, College of Engineering, University of Tehran, North Karegar, Tehran 14395-515, Iran. Mérouane Debbah is with the Alcatel-Lucent Chair on Flexible Radio, SUP[É]{}LEC, Gif-sur-Yvette, France. A preliminary version of a portion of this work appeared in *Proc. IEEE Vehicular Technology Conference (VTC 2010-Fall)*. Emails: [[email protected], [email protected], [email protected]]([email protected], [email protected], [email protected]).
--- abstract: 'We analyze the fate of the unbroken SU(2) color gauge interactions for color superconductivity with 2 light flavors at non zero temperature. Using a simple glueball Lagrangian model we compute the deconfining/confining critical temperature and show that it is smaller than the critical temperature for the onset of the superconductive state itself. The breaking of Lorentz invariance, induced already at zero temperature by the quark chemical potential, is shown to heavily affect the value of the critical temperature and all of the relevant features related to the deconfining transition. Modifying the Polyakov loop model to describe the SU(2) immersed in the diquark medium we argue that the deconfinement transition is second order. Having constructed part of the equation of state for the 2 color superconducting phase at low temperatures, our results may be relevant for the physics of compact objects featuring a two flavor color superconductive state.' address: | $^1$ NORDITA, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark\ $^2$ LAPTH, F-74941 Annecy-le-Vieux Cedex, France author: - 'Francesco [Sannino]{}$^1$ Nils [Marchal]{}$^{1,2}$ Wolfgang [Schäfer]{}$^1$' date: February 2001 title: Partial Deconfinement in Color Superconductivity --- Introduction {#uno} ============ Quark matter at very high density is expected to behave as a color superconductor [@REV]. Possible physical applications are related to the physics of compact objects [@REV], supernovae cooling [@Carter:2000xf] and explosions [@HHS] as well as to the Gamma Ray Bursts puzzle [@OS]. Here we concentrate on some features related to color superconductivity with 2 light flavors (2SC). The low-energy effective Lagrangian describing the in-medium fermions and the broken sector of the $SU_c(3)$ color group for the 2 flavor color superconductor (2SC) has been constructed in Ref. [@CDS; @OS2]. The 3 flavor case (CFL) has been developed in . 
The effective theories describing the electroweak interactions for the low-energy excitations in the 2SC and CFL case can be found in [@CDS2001]. The global anomaly matching conditions and constraints are discussed in [@S]. An interesting property of the $2SC$ state is that the three color gauge group breaks spontaneously to a leftover $SU(2)$ subgroup, which can play a role in the physics of compact objects [@OS]. In Reference [@rischke2k] it has been shown that the confining scale of the unbroken $SU(2)$ color subgroup is smaller than the superconductive gap $\Delta$. The confined degrees of freedom, glueball-like particles, are expected to be light with respect to $\Delta$, and the effective theory based on the anomalous variation of the dilatation current has been constructed in [@OS2]. Clearly for the physics of compact objects, and more generally for a complete understanding of the QCD phase diagram, it is relevant to know at what temperature the $SU(2)$ color gauge group confines/deconfines, the order of the phase transition and the equation of state. Investigating the deconfinement phase transition is, in general, a complex problem. At zero density importance sampling lattice simulations are able to provide vital information about the nature of the temperature driven phase transition for 2 and 3 colors Yang-Mills theories with and without matter fields (see [@Karsch:2001jb] for a review). Different models are used in the literature to tackle/study the features of this phase transition from a theoretical standpoint. Some models compute non zero temperature corrections for the glueball Lagrangian with or without elementary gluon degrees of freedom (the latter added to describe the deconfined side of the phase transition). Others rely on mean field theories encoding the symmetries of the Polyakov loops [@Pisarski:2001pe]. 
Hence we consider a simple, but predictive, model for the deconfinement temperature which makes use of the glueball Lagrangian valid at non zero quark density [@OS2]. We investigate the one loop thermal effective potential corrections for the dilatonic Lagrangian and observe that as we increase the temperature a new local minimum sets in at a lower value of the gluon condensate with respect to the zero temperature one. The critical temperature is defined as the value for which the two local minima have the same free energy. Above this critical temperature the model is no longer applicable since new degrees of freedom, like the unconfined gluons, are expected to appear (see e.g. [@Carter:1998ti]); we will briefly comment on their effects. An amusing feature of the model is that the critical temperature can be determined analytically. This is so since the new minimum appears at a zero vacuum expectation value of the gluon condensate [^1] and at this point, in the one loop approximation, one can exactly compute the effective thermal potential, yielding the following estimate for the critical temperature: $$T_{c}=\sqrt[4]{\frac{90\,v^3}{2e\pi^2}}\hat{\Lambda} \ . \label{ApproxTc}$$ Here $e$ is the Euler number, $\hat{\Lambda}$ is the confining scale of the $SU(2)$ gluon-dynamics in 2SC and $v$ is the gluon [@rischke2k] as well as the light glueball [@OS2] velocity. Equation (\[ApproxTc\]) is a good approximation also for the in-vacuum theory: when Eq. (\[ApproxTc\]) is adjusted to take into account the gluonic degrees of freedom, the higher order (in loops) contributions are shown to be less than 10% (see [@Carter:1998ti]). We find that the deconfining/confining critical temperature is smaller than the critical temperature $T_{2SC}$ for the superconductive state itself, which is estimated to be $T_{2SC} \approx 0.57~\Delta$ with $\Delta$ the 2SC gap [@PR]. 
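The chain of estimates entering this comparison can be sketched numerically: the dielectric constant $\epsilon$ and the confining scale $\hat{\Lambda}$ of the in-medium $SU(2)$ theory (given explicitly in Section \[Glueball\]), the velocity $v=1/\sqrt{\epsilon\lambda}$, and finally $T_c$ from Eq. (\[ApproxTc\]). The representative coupling $g_s(\mu)\approx 4$ for $\mu\approx 500$ MeV is our own assumption, not fixed in the text; with it the sketch reproduces the quoted $\hat{\Lambda}\simeq 1$ MeV at $\Delta=30$ MeV and $T_c\approx 10$ MeV for $\Delta\approx 70$ MeV.

```python
import math

def critical_temperature(mu, delta, g_s=4.0):
    """Estimate T_c = (90 v^3 / (2 e pi^2))**0.25 * Lambda_hat (MeV).
    g_s is a representative coupling at scale mu (an assumption here)."""
    epsilon = 1.0 + (g_s * mu) ** 2 / (18.0 * math.pi ** 2 * delta ** 2)
    v = 1.0 / math.sqrt(epsilon)            # lambda = 1 in the 2SC medium
    lam_hat = delta * math.exp(-2.0 * math.sqrt(2.0) * math.pi * mu
                               / (11.0 * g_s * delta))
    t_c = (90.0 * v ** 3 / (2.0 * math.e * math.pi ** 2)) ** 0.25 * lam_hat
    return lam_hat, t_c

if __name__ == "__main__":
    for delta in (30.0, 70.0):
        lam_hat, t_c = critical_temperature(mu=500.0, delta=delta)
        print(delta, lam_hat, t_c, 0.57 * delta)   # T_c stays below T_2SC
```

In both cases $T_c$ comes out well below $T_{2SC}\approx 0.57\,\Delta$, in line with the ordering of the two transitions claimed above.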
Actually the breaking of Lorentz invariance, due to the quark chemical potential and encoded in the glueball velocity, further reduces the critical temperature by a factor $v^{3/4}$ relative to the in vacuum case. This is a general feature, independent of the model Lagrangian, also observed in [@Casalbuoni:1999zi]. The temperatures in play are much less than the value of the quark chemical potential. The situation is more involved if rotational invariance also breaks spontaneously due, for example, to the appearance of a spin one condensate [@Sannino:2001fd]. We study the glueball mass as a function of temperature, chemical potential and $\Delta$. In the confined phase the mass is, to a very good approximation, constant with respect to the temperature. By computing the glueball thermal effective potential we provide part of the equation of state for the 2SC phase at low temperatures. In particular we can compute the pressure, the energy density and the entropy of the system below the critical deconfining temperature. It is important to stress that in this paper we are considering an ideal 2SC state where the up and down quarks are massless and the strange quark is infinitely massive. When computing properties related to the physics of compact stars it is very important to introduce in the model the effects of the quark masses as well as the ones induced by a not too heavy strange quark. These effects may affect the $SU(2)$ gluon properties and can be investigated using for example the effective theories near the Fermi surface [@REV]. It would also be very interesting to see how the non perturbative $SU(2)$ dynamics might affect the recent results in [@Alford:2002kj]. However even within the present restrictive framework our estimate for the $SU(2)$ confining temperature may be a useful guide for astrophysical models of compact stars like the one in Ref. [@OS] featuring a 2SC state. 
Since the gluon condensate is not a true order parameter for the deconfining transition, the glueball Lagrangian cannot be used to infer the order of the transition itself. To settle this issue we modify the Polyakov-loop inspired model [@Pisarski:2001pe] to fit the present case and finally predict a second order phase transition. Finally we suggest how ordinary lattice importance sampling techniques can be used to check our results and constitute, at the same time, the first simulations testing the high quark chemical potential but low temperature region of the QCD phase diagram. The glueball Lagrangian, if extended to describe the transition point, predicts a first order transition, which seems to disagree with the prediction based on the order parameter (Polyakov’s loop). Actually the disagreement is only an apparent one. Indeed only the order parameter is obliged to know about the order of the phase transition. Any other gauge invariant quantity does not need to display the same behavior while still bearing information about the phase transition itself (see for example [@Pisarski:2001pe] page 3, equations (9) and (10) and the subsequent discussion (first reference)). In practice the predicted critical temperature also represents the limit of applicability of our simple model. At the critical point the Ginzburg-Landau theory for the order parameter is the proper way to describe the transition itself and can be used to infer the order of the phase transition. Unfortunately though the Ginzburg-Landau theory cannot predict the critical temperature. The previous discussion does not imply that the glueball and the order parameter (the Polyakov loop) at the transition are not related [@Sannino:2002wb]. It is important to stress that in general we have a tower of scalar, pseudoscalar and other excited glueball states in the confined regime, together with the other physical states involving quarks of the 2SC state. 
We have made the standard assumption that the low energy $SU(2)$ dynamics is dominated by the associated lightest mode in the theory: the scalar glueball. This state does not couple to the light ungapped up and down quarks in the direction 3 of color (for a review of the complete low energy effective theory of the 2SC state see the 7th reference in [@REV]). Besides, according to [@Litim:2001je], the quark temperature effects are exponentially suppressed ($\sim \exp({-\Delta/T})$) so for $T<T_c$ and $\hat{\Lambda}<\Delta$, for an initial investigation, we can neglect these corrections. For temperatures in the range $T_c < T < T_{2SC}$ the gapped quark dynamics is no longer negligible and some of their effects have been computed using transport theory in [@Litim:2001je]. Our model must be considered only as a first step toward a more complete theory of the 2SC state where the $SU(2)$ non perturbative dynamics is included. In Section \[Glueball\] we provide the light glueball Lagrangian and construct the one loop thermal effective action. Here we suggest a way to relate the results obtained by employing different parameterizations for the glueball field. In Section \[Features\] we study the relevant features connected with the deconfining transition. We provide an economical criterion to estimate the critical temperature similar to the one extensively used in literature for the in vacuum Yang-Mills theories [@Carter:1998ti]. Finally using the Polyakov loop model adapted to the present case we show the phase transition to be likely second order. We conclude in Section \[Conclusions\]. 
Glueball Effective Lagrangian at finite Temperature {#Glueball} =================================================== The light glueball action for the in-medium Yang-Mills theory is [@OS2]: $$\begin{aligned} S_{Glueball}=\int &d^4x&\left\{\frac{c}{2}\,H^{-\frac{3}{2}}\left[\partial^{0} H \partial^{0}H - v^2 \partial^iH \partial^iH\right] -\frac{1}{2}H\log\left[\frac{H}{\hat{\Lambda}^4}\right] \right\} \ . \label{G-ball}\end{aligned}$$ $H$ is the composite field describing, upon quantization, the scalar glueball in medium, and has mass dimension four. Here $c$ is a positive constant [^2] which fixes the tree-level glueball mass. Our results do not depend on the specific value assumed by this constant. It is also important to stress that the glueballs move with the same velocity as the underlying gluons in the 2SC color superconductor [@OS2]. The velocity depends on the gluon dielectric constant ($\epsilon$) and magnetic permeability ($\lambda$) via $v=1/\sqrt{\epsilon \lambda}$. The dielectric constant $\epsilon$ is different from unity (in fact $\epsilon \gg 1$ in the 2SC case [@rischke2k]), leading to an effectively reduced gauge coupling constant. By studying the polarization tensor for the $SU(2)$ gluons at asymptotically high quark densities, it was found in [@rischke2k] that: $$\epsilon =1 + \frac{g_s^2 \mu^2}{18 \pi^2 \Delta^2}\ , \qquad \lambda =1 \ , \label{el}$$ with $g_s$ the underlying $SU(3)$ coupling constant and $\mu$ the quark chemical potential. This result has also been derived via effective theories valid close to the Fermi surface [@Casalbuoni2001]. In the effective Lagrangian $\hat{\Lambda}$ is a physical constant related to the confining scale of the in-medium 2 color Yang-Mills theory. 
Following [@rischke2k] we have the one loop relation: $$\begin{aligned} \hat{\Lambda}=\Delta \exp \left[-\frac{8\pi^2}{b g_s^2(\mu)}{ \sqrt{\frac{\epsilon(\mu/\Delta)}{\lambda(\mu/\Delta)}}}\right]\simeq \Delta \exp \left[-\frac{2\sqrt{2}\pi}{11} \frac{\mu}{g_s(\mu)\Delta} \right]\ , \label{lambda}\end{aligned}$$ with $b=22/3$ (at one loop) for $SU(2)$ and in the last step we considered the asymptotic solution of Ref. [@rischke2k], for convenience reported in Eq. (\[el\]). By using $\Lambda_{QCD}\simeq 300$ MeV, $\mu \simeq 500$ MeV and a gap value of about $30$ MeV one gets $\hat{\Lambda} \simeq 1$ MeV. It is hence reasonable to expect that the glueballs are light (with respect to the gap) with a mass typically somewhat larger or of the order of the confining scale. They are stable with respect to the strong interactions unlike ordinary glueballs while still decaying into two photons [@OS2]. The potential in Eq. (\[G-ball\]) can be considered a zeroth order model for a Yang-Mills theory in medium [@OS2] in which the glueballs are the associated hadronic particles. The minimum of the potential $V$ (see [@OS2] for details) is taken for $$\langle H \rangle =\frac{\hat{\Lambda}^4}{e} \ , \quad {\rm at~which~point} \quad \langle{V}\rangle=-\frac{\hat{\Lambda}^4}{2\,e} \ .$$ For the zero density case a number of phenomenological questions have been discussed using this type of toy model Lagrangian Eq. (\[G-ball\]) [@SST]. In order to extract dynamical information we define a canonically normalized (with canonical mass dimension one) glueball field $h$ via: $$H=f(h)=f_{(0)}^4+ f_{(1)} h+f_{(2)}\frac{h^2}{2!} +\cdots + f_{(n)}\frac{h^n}{n!}+\cdots \ ,$$ where we require $f(h)$ to be a well behaved function of the glueball field $h$ with non vanishing $f_{(0)}$ and $f_{(1)}$. 
The normalization condition of the kinetic term, at the tree level, yields the constraint: $$c^{\frac{1}{2}}\, f_{(1)}= f_{(0)}^{3} \ .$$ It is reasonable to expect that any interpolating function $f(h)$ should lead to the same physical results. This is indeed the case at the tree level since all of the possible choices to define a canonically normalized field are equivalent. However, when considering thermal/quantum corrections, it is hard to demonstrate that different choices lead to the same physical results. We remind the reader of the time-honored sigma model example where the linear version is a renormalizable theory while the non linear sigma model is [*not*]{} a renormalizable theory in the usual sense. In order to keep our results as independent as possible from the specific function $f(h)$ we here define thermal averages directly in terms of $H$. More specifically, following Dolan and Jackiw [@Dolan:qd], we formally introduce the temperature effective action $\Gamma(\overline{H})$ (the generating functional for one-particle irreducible Green’s functions) via: $$\begin{aligned} W[J]&=&-i\log\left[\frac{{\rm Tr}e^{-\frac{\cal H}{T}}\exp\left[i \int d^4x \, H(x)J(x) \right]}{{\rm Tr}e^{-\frac{\cal H}{T}} }\right] \ , \\ \overline{H}(x)&=& \frac{\delta W[J]}{\delta J(x)}\ , \label{classyfield} \\ \Gamma[\overline{H}]&=& W[J]-\int d^4x\, {\overline H}(x) J(x) \ . \label{classyaction}\end{aligned}$$ ${\cal H}$ is the Hamiltonian and $J(x)$ the external source for the gluon condensate. In the last equation $J(x)$ is eliminated in favor of $\overline{H}(x)$ by the definition in (\[classyfield\]). We also have that ${\delta \Gamma[\overline{H}]}/{\delta \overline{H}(x)}=-J(x)$, and $\overline{H}(x)$, evaluated at $J=0$, is the thermodynamic average of the gluon condensate field $H(x)$. The present definition of the effective action is independent of the choice of the interpolating field function. 
For the present purposes it is sufficient to study $\Gamma[\overline{H}]$ for constant ${\overline H}(x)$ and consider the effective potential: $$V[\overline{H}]=-\frac{\Gamma[\overline{H}]}{\rm{space-time~volume}} \ .$$ In practice the $J$ dependent tree generating functional for the trace anomaly is (with $V\left[J\right] = - W\left[J\right]/(\rm{space-time~volume})$ for constant fields) $$V_{Tree}[J]=\frac{1}{2} H\log\left[\frac{H}{\hat{\Lambda}^4}\right] -J\, H \ ,$$ and the one loop thermal effective potential as a function of $J$ is: $$\begin{aligned} V[J]&=&{2} f_{(0)}^4\log\left[\frac{f_{(0)}}{\hat{\Lambda}}\right] - J\, f_{(0)}^4 + \frac{T}{v^3 \, 2\pi^2} \int_0^\infty dk\,k^2 \log \left[1 - \exp \left({-\frac{\epsilon_J}{T}}\right)\right] \ ,\end{aligned}$$ where $\displaystyle{\epsilon_J= \sqrt{k^2+ M^2_J \left( f_{(0)},f_{(2)} \right)}}$, and $M^2_J$ is defined via the curvature of the potential as $$M^2_{J}= \left. \frac{\partial^2 V}{\partial h^2} \right|_{h=0}= \frac{f_{(0)}^2}{2c} +f_{(2)}\left[2\log\frac{\sqrt[4]{e}\, f_{0}}{\hat{\Lambda} }-J\right] \ . \label{curvatura}$$ With the help of $$\overline{H} \equiv \overline{h}^4=-\frac{\delta V\left[J\right]}{ \delta J} = f_{(0)}^4 + \frac{f_{(2)}}{v^3 4\pi^2} \, \int_0^\infty \frac{dk \, k^2}{\left[ \exp\left(\displaystyle{\frac{\epsilon_J}{T}}\right) -1\right]\, \epsilon_J} \ ,$$ we deduce the effective potential $$V[\overline{h}]=\left[1 - J\,\frac{\partial}{\partial J}\right]V[J] \ , \label{finalV}$$ where the functional derivative with respect to $J$ is replaced with a partial derivative since we are now dealing with constant fields. We now need to solve for $J[\overline{H}]$ as a function of $\overline{H}$ and then extremize the action. We identify $\bar{H}$ with $\bar{h}^4$ only after $J$ has been eliminated. For a general choice of the function $f(h)$ one cannot find an analytical expression for $J[\overline{H}]$. 
However we immediately notice that for $f_{(2)}=0$ (at the one-loop level) there is no dependence on $J$ and we have $\overline{h}=f_{(0)}$ as well as a positive definite curvature $M^2={f_{(0)}}^2/{2c}$. To be more specific, our glueball field function is now $f(h)=f_{(0)}^4 + f_{(1)}\,h$, truncated before the quadratic term since higher order terms do not affect the one loop result. Actually any function $f(h)$ with vanishing $f_{(2)}$ leads to the same source independent effective thermal potential: $$\begin{aligned} V\left[\bar{h}\right]= {\hat{\Lambda}^4 \over 2 e} + 2\bar{h}^4\log\left[\frac{\bar{h}}{\hat{\Lambda}}\right] + \frac{T^4}{v^3 \, 2\pi^2} \int_0^\infty dx\,x^2 \log \left[1 - \exp \left(-\sqrt{x^2 + \frac{\bar{h}^2}{2cT^2}} \right) \right] \ , \label{thermalP}\end{aligned}$$ where for convenience we subtracted the constant value of the potential evaluated on the vacuum at zero temperature. This expression is well defined for any value of $\bar{h}$. $V\left[\bar{h}\right]$ is shown in Fig. \[figure1\] for different values of the temperature and a given value of $c$, which fixes the zero temperature tree-level glueball mass (i.e. $M^2=\hat{\Lambda}^2/2\sqrt{e}c$). The plot is provided only for illustration and the general features of the potential do not change for different choices of the chemical potential and reasonable values of the gap parameter. Note that our results for the critical temperature (presented in the next section) are evaluated at different values of the quark chemical potential and the gap $\Delta$. As we increase the temperature we observe a new minimum setting in at $\bar{h}=0$. We also note that the position of the old minimum is not much affected by temperature corrections over a large range of temperatures (see Fig. \[figure1\]). 
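A minimal numerical sketch of the potential in Eq. (\[thermalP\]) can make the two-minima structure explicit. We work in units $\hat{\Lambda}=1$ and, purely for simplicity, set $v=1$ (our own simplification: a different velocity only rescales the temperature by $v^{3/4}$), with $c=1/(50\sqrt{e})$ as in Fig. \[figure1\]; the quadrature settings are illustrative.

```python
import math

C = 1.0 / (50.0 * math.sqrt(math.e))   # fixes the tree-level glueball mass

def thermal_integral(m2_over_T2, steps=4000, x_max=40.0):
    """Simpson rule for int_0^inf x^2 log(1 - exp(-sqrt(x^2 + a))) dx."""
    h = x_max / steps
    def f(x):
        if x == 0.0 and m2_over_T2 == 0.0:
            return 0.0                  # integrand -> 0 as x -> 0
        return x * x * math.log1p(-math.exp(-math.sqrt(x * x + m2_over_T2)))
    total = f(0.0) + f(x_max)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(i * h)
    return total * h / 3.0

def potential(h_bar, T, v=1.0):
    """One-loop potential of Eq. (thermalP) in units Lambda_hat = 1."""
    tree = 1.0 / (2.0 * math.e) + (2.0 * h_bar ** 4 * math.log(h_bar)
                                   if h_bar > 0 else 0.0)
    if T == 0.0:
        return tree
    m2 = h_bar ** 2 / (2.0 * C * T * T)
    return tree + T ** 4 / (v ** 3 * 2.0 * math.pi ** 2) * thermal_integral(m2)

T_C = (90.0 / (2.0 * math.e * math.pi ** 2)) ** 0.25   # v = 1 units
```

With these conventions the zero-temperature minimum sits at $\bar{h}=e^{-1/4}$ with $V=0$; at the analytic $T_c$ the potential at $\bar{h}=0$ also vanishes, while the confined minimum is only weakly shifted because its thermal correction is Boltzmann suppressed by the large glueball mass.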
Close to the new minimum at zero it is possible to perform the high temperature expansion, leading to: $$\begin{aligned} \lim_{\bar{h}\rightarrow 0}V\left[\bar{h}\right]={\hat{\Lambda}^4 \over 2 e} - \frac{\pi^2}{90}\frac{T^4}{v^3} + \frac{\bar{h}^2}{2c}\frac{T^2}{24 v^3} + {\cal O}\left(\bar{h}^4\right) \ . \label{exactTP}\end{aligned}$$ Before describing in some detail the features of the phase transition we now briefly comment on another possible choice of the glueball field widely used in the literature. This is the exponential representation: $$\begin{aligned} H=f_{(0)}^4 \exp\left[\frac{h}{f_{(0)}\sqrt{c}}\right] \ .\end{aligned}$$ This function recovers the previous one for small field fluctuations. However, since $f_{(2)}=f_{(0)}^2/c$ is not vanishing, we cannot deduce an analytical expression of $J$ as a function of $\bar{h}$. Note that if we naively set $J$ to zero from the beginning, the second derivative of the potential defined in Eq. (\[curvatura\]) is not positive definite for all values of $\bar{h}$ and the integral in Eq. (\[thermalP\]) is ill defined. Often in the literature the thermal corrections are computed without including the source $J$. We have shown that, at the one loop level, the linearly realized representation is not affected by the introduction of the source term, while the non linear realizations used for example in [@Drago:2001gd] are very much affected and should be handled with care. We expect the partial derivative term $J\partial V[J]/\partial J$ in Eq. (\[finalV\]) to help compensate for the possible different choices of $f(h)$. In the rest of this work we shall use the linear realizations. Clearly, after having defined the extremum of the effective potential it is a simple matter to derive all of the relevant thermodynamical quantities. 
For the reader’s convenience we summarize the standard relations between the thermodynamical quantities and the free energy (per unit volume): $F=V$ (with $V$ evaluated at the minimum), the pressure $P=- F$, and the entropy per unit volume $S=-\partial F/\partial T$. ![Potential function $V\left[\bar{h}\right]/\hat{\Lambda}^4$ for $\mu=500$ MeV and $\Delta=30$ MeV as a function of the condensate $\bar{h}/\hat{\Lambda}$ for different values of the temperature. The solid line corresponds to $T=0$; the dotted line to $T=0.85~T_c$; the short-dashed to $T=T_c$; the long dashed to $T=1.1~T_c$. Finally the dot-dashed line corresponds to the high temperature expansion near $\bar{h}=0$ for $T=1.1~T_c$. We have chosen for definiteness $c=1/(50 \sqrt{e})$, corresponding to a zero temperature glueball mass of $5\hat{\Lambda}$.[]{data-label="figure1"}](fig1.eps) Relevant features of the deconfining transition {#Features} =============================================== Studying the one loop thermal effective potential in Eq. (\[finalV\]) one observes that when increasing the temperature a new local minimum sets in at $\bar{h}=0$ and, for a certain range of temperatures, the potential has two local minima. The temperature for which the two minima have the same free energy is: $$T_{c}=\sqrt[4]{\frac{90\,v^3}{2e\pi^2}}\hat{\Lambda} \ .$$ This value is obtained by comparing the jump of the potential at $\bar{h}=0$ due to the temperature corrections with respect to the zero temperature minimum, and it does not depend on the specific value assumed by the constant $c$ in the effective Lagrangian. The latter can be fixed once the glueball mass is known. Assuming that the drop in the gluon condensate, together with the drastic change in the glueball mass, is related to the deconfinement phase transition, as supported by lattice simulations [@Bacilieri], we interpret Eq. (\[ApproxTc\]) as an estimate for the critical temperature. 
In figure \[figure2\] we plot the critical temperature as a function of the superconductive gap for different values of the quark chemical potential. In models where the contribution of the elementary gluons is added one finds a smaller critical temperature (see for example [@Carter:1998ti]). The reduction is due to the extra contribution of the [*light*]{} gluons appearing at $T_c$. This effect can be estimated by assuming that the main contribution of the gluons at $T_c$ is the free energy of unconfined gluons propagating with velocity $v$. By simply adding to the effective thermal potential the term $-2\pi^2 \left[2(N^2-1)\right] T^4\Theta (T - T_c)/(90v^3)$ for a general number of colors $N$, the temperature for which the two minima have the same free energy value is lowered to $${T}_{c} \rightarrow \frac{T_c}{\sqrt[4]{2\left(N^2-1\right)+1}} \ . \label{secondTc}$$ The reduction is perhaps too drastic since, in many investigations at zero density, it has been argued that a good fit to the lattice data, even at temperatures as high as three times the critical temperature, requires an effective number of gluon degrees of freedom lower than the one predicted by a free gas approximation. It is then quite likely that the true critical temperature lies in between the one presented in Eq. (\[ApproxTc\]), computed without gluons, and the one estimated in Eq. (\[secondTc\]). When reducing the temperature from the quark gluon plasma phase we see that color superconductivity first sets in along the temperature axis, with the $SU(2)$ of color still unconfined, and finally the $SU(2)$ confines at a lower value of the temperature (see Fig. \[figure2\]). Higher order corrections to our critical temperature are shown to be smaller than 10% (see [@Carter:1998ti]). ![Plots of the $SU(2)$ critical temperature for 2 values of the quark chemical potential ($\mu=400$ MeV long–dashed line; $\mu=500$ MeV short–dashed line) as a function of the superconductive gap $\Delta$.
The solid line corresponds to the critical temperature for the superconductive state, $0.57 \Delta$. The left panel corresponds to $\Lambda_{QCD}=300$ MeV while the right one corresponds to $\Lambda_{QCD}=200$ MeV.[]{data-label="figure2"}](fig22.eps){width="16cm" height="5cm"} Comparing the left and right panels of Fig. \[figure2\] we also see that the critical temperature decreases if $\Lambda_{QCD}$ decreases. Our model applies directly only to the ideal 2SC state, and for physical applications we need to consider in some detail the corrections induced, for example, by the quark masses. Nevertheless it might be instructive to show how the explicit dependence of the $SU(2)$ confining temperature on $\mu$ and $\Delta$ may be helpful for astrophysical applications. Indeed, in a model for Gamma Ray Bursts (GRBs) [@OS] it was suggested that some compact stars might feature a hot 2SC surface layer. The GRB model used the glueballs as an active degree of freedom. So we need to know when, along the temperature axis, the 2SC layer enters the $SU(2)$ confining regime. Within our model calculations we indicate in the $T-\mu$ phase diagram where the glueball degrees of freedom start playing a role. For example, if $\mu=400-500$ MeV we deduce from Fig. \[figure2\] that the $SU(2)$ confines at $T_c\approx 10$ MeV provided $\Delta \geq 60-70$ MeV. Our work might also be useful when investigating the cooling process in compact stars. The derived free energy for the $SU(2)$ glue at very low temperatures represents an initial step towards computing part of the complete equation of state, which is needed when considering the thermodynamics of compact objects featuring a 2SC state. In our model we have assumed the glueball velocity not to depend on the temperature. This is reasonable since the temperature corrections to $v$ are exponentially suppressed, more specifically by the factor $e^{-\Delta/T}$.
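Schematically, the reduction factor in Eq. (\[secondTc\]) just counts bosonic degrees of freedom: the glueball contributes one thermal term $-(2\pi^2/90)\,T^4/v^3$ (cf. Eq. (\[exactTP\])), and the $2(N^2-1)$ transverse gluon polarizations contribute identical terms. The condition that the thermal drop at $\bar{h}=0$ matches the fixed zero temperature gap then becomes
$$\left[1+2(N^2-1)\right]\frac{2\pi^2}{90}\frac{{T'_c}^{4}}{v^3}=\frac{2\pi^2}{90}\frac{T_c^4}{v^3}
\;\Longrightarrow\;
T'_c=\frac{T_c}{\sqrt[4]{2(N^2-1)+1}}\ ,$$
which for $N=2$ gives $T'_c\simeq T_c/7^{1/4}\simeq 0.61\,T_c$.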
Since we find the critical temperature to lie well below the critical temperature for color superconductivity ($T_{2SC}\approx 0.57 \Delta$) our results provide a consistent picture. It is important to stress that for temperatures $T_c < T < T_{2SC}$ the gapped quark contributions are no longer negligible. In Ref. [@Litim:2001je] some relevant temperature effects have been analyzed using transport theory. It is useful to study the dependence of the glueball mass on the temperature. Defining the square of the mass as the potential curvature evaluated at the (global) minimum we observe (see Fig. \[figure1\]) that for $T<T_c$ the curvature is practically constant and the squared mass is well approximated by $M^2(T<T_c)=\hat{\Lambda}^2/(2\sqrt{e}\,c)$. Although the glueball treatment alone cannot be used above the deconfining phase transition it is nevertheless interesting to consider such a temperature region. For $T\geq T_c$ the new global minimum is at zero and we can use Eq. (\[exactTP\]) to deduce $$\begin{aligned} \frac{M^2(T\geq T_c)}{M^2(T=0)}= \frac{\sqrt{5}}{4\pi \sqrt{v^3}}\left[\frac{T}{T_c}\right]^2 \ .\label{squaremass}\end{aligned}$$ Due to the velocity factor in Eq. (\[squaremass\]) there is a relative enhancement with respect to the in vacuum (but hot) Yang-Mills theory. For illustration we plot our results in Fig. \[figure3\] for the in vacuum (i.e. $v=1$ and $\mu=0$) theory and the in medium theory for $\mu=500$ MeV, considering different values of $\Delta$. ![Illustrative plot of the glueball mass as a function of the temperature, $M^2(T)/M^2(T=0)$, for different values of $\Delta$ and fixed chemical potential $\mu=500$ MeV. The curve labelled by $\mu=0$ corresponds to the in vacuum (but hot) case.
It is straightforward to consider another value of $\mu$; the qualitative picture remains unchanged.[]{data-label="figure3"}](fig3.eps){height="6cm" width="10cm"} Since the glueball mass increases in the deconfined phase, the new light degrees of freedom (namely the elementary gluons themselves) now dominate the free energy. Interestingly, we find that due to the large dielectric constant of the 2SC medium the light-glueball squared mass in the deconfined region gains a factor $1/\sqrt{v^3}$ relative to the in vacuum case. Hence, for all of the relevant thermodynamical properties/quantities of the 2SC state above the deconfining $SU(2)$ phase transition the glueballs are not expected to play an important role. Since we plot the ratio of masses the result does not depend on the positive constant $c$. From the figure it is also clear that there is a strong dependence on the specific value of $\Delta$. We now comment briefly on the fate of the old minimum as the temperature is increased above the critical temperature, in the absence of elementary gluons. The value of the glueball condensate corresponding to the old minimum starts decreasing just above the critical temperature, and the minimum disappears at a temperature of order $2T_c$. This behavior is summarized in Fig. \[figure4\] and would be a classic example of a first order phase transition if we were to consider the glueball Lagrangian as the correct description at and above the transition point. ![Zoom of the potential of Eq. (\[thermalP\]) close to the old minimum for $\mu=500$ MeV, $\Delta=30$ MeV as a function of the condensate $\bar{h}/\hat{\Lambda}$ for different values of the temperature above $T_c$. The solid line corresponds to $T=T_c$, the dotted line to $T=1.8~T_c$, the short-dashed to $T=1.9~T_c$ and the long-dashed to $T=2~T_c$. We have, as in Fig.
\[figure1\], chosen $c=1/(50 \sqrt{e})$.[]{data-label="figure4"}](fig4.eps) When restricting to the in vacuum theory our results are in reasonable agreement with the results and expectations of various investigations (see for example [@Carter:1998ti] and references therein) using similar models. The glueball Lagrangian based model cannot be used to establish the order of the phase transition, since the gluon condensate is not an order parameter for a Yang-Mills theory, although it does encode information on the underlying conformal anomaly. The breakdown point signals the presence of new lighter degrees of freedom which need to be taken into account. In the absence of quarks a reasonable order parameter for the $SU(N)$ Yang-Mills theory is the Polyakov loop [@Svetitsky:1982gs]: $$\begin{aligned} {\ell}\left(x\right)=\frac{1}{N}{\rm Tr}({\bf L})\equiv\frac{1}{N}{\rm Tr} \left[{\cal P}\exp\left[i\,g\int_{0}^{1/T}A_{0}(x,\tau)d\tau\right]\right] \ ,\end{aligned}$$ where ${\cal P}$ denotes path ordering, $g$ is the $SU(N)$ coupling constant, $x$ is the coordinate for the three spatial dimensions and $\tau$ is Euclidean time. The $\ell$ field is real for $N=2$ and complex otherwise. This object is charged with respect to the center $Z_N$ of the $SU(N)$ gauge group [@Svetitsky:1982gs], under which it transforms as $\ell \rightarrow z \ell$ with $z\in Z_N$. A relevant feature of the Polyakov loop is that its expectation value vanishes in the low temperature regime and is non zero in the high temperature phase. This behavior has recently led Pisarski [@Pisarski:2001pe] to model the Yang-Mills (non supersymmetric) phase transition as a mean field theory of Polyakov loops. One can show that for $SU(2)$ one expects a second order phase transition (as a function of the temperature) and a weakly first order one for $SU(3)$. We can use Pisarski’s model to predict the order of the transition in the present case.
Assuming that a local $SU(2)$ Yang-Mills action at low energies does exist, we construct the simplest Polyakov loop using the rescaled space time coordinates and fields: $$\begin{aligned} {\hat{\ell}}\left(x\right)=\frac{1}{2}{\rm Tr} \left[{\cal P}\exp\left[i\,\hat{g}\int_{0}^{1/\hat{T}}\hat{A}_{0}({x},\hat{\tau})d\hat{\tau}\right]\right] \ ,\end{aligned}$$ with $\hat{A}_0=\hat{A}_0^a \tau^a/2$ and $\tau^a$ the $SU(2)$ Pauli matrices; the connection with the underlying fields is $\hat{A}_0^a=\lambda^{\frac{1}{4}}\epsilon^{\frac{3}{4}}A_0^a$ while $\hat{g}=g_{s}(\lambda/\epsilon)^{\frac{1}{4}}$. The rescaled Euclidean time $\hat{\tau}=\tau/\sqrt{\lambda \epsilon}$ leads to $\hat{T}=T/v$, while $\lambda$ is a possible magnetic permeability, which turns out to be equal to one in our case. At this point the effective mean field type of model à la Pisarski for $\hat{\ell}$ is similar to the one for the in vacuum $SU(2)$ Yang-Mills theory. So if we make the strong but plausible assumption (as argued above) that all the way up to and above the deconfinement $SU(2)$ color phase transition the effects of the quark superconductive matter can be taken into account just via a non zero dielectric constant, we expect a second order phase transition for the $SU(2)$ of color in 2SC. It is relevant to mention that the order of the transition might change if we include new contributions arising, for example, from the quark masses. The deconfining temperature is expected to be close to our prediction obtained from the glueball model Lagrangian. Even if a non zero magnetic permeability existed the present argument would not be modified. $SU(2)$ Yang-Mills with non zero dielectric constant and magnetic permeability can be simulated, using standard sampling methods, on the lattice. For a large body of work on $SU(2)$–Yang–Mills theory we refer to [@Damgaard].
These results would test at the same time the validity of the glueball model for the prediction of the critical temperature and the order of the phase transition according to the Polyakov loop model, in a framework slightly modified with respect to the in vacuum case. Moreover, the latter would also constitute the first lattice simulations probing the high quark chemical potential but small temperature region of the QCD phase diagram. The disagreement between the first order phase transition predicted by the glueball Lagrangian and the previous argument based on the symmetries obeyed by the order parameter is only an apparent one. In fact any gauge invariant quantity which is not an order parameter does not need to behave as the order parameter itself at the transition [@Pisarski:2001pe], as discussed at length in the introduction (see also [@Sannino:2002wb]). Conclusions {#Conclusions} =========== We studied the temperature effects on the unbroken $SU(2)$ color gauge interactions for the two flavor case at high matter density. Using a simple model based on a light glueball Lagrangian we estimated the $SU(2)$ deconfinement critical temperature for given values of the chemical potential and of the superconductive gap. We have shown that the deconfining/confining critical temperature is smaller than the critical temperature for the superconductive state itself. The breaking of Lorentz invariance (already present at zero temperature), encoded in the glueball velocity, further reduces the critical temperature by a factor $v^{3/4}$ relative to the in vacuum case. By computing the glueball thermal effective potential we have obtained the equation of state for part of the ideal 2SC phase (i.e. zero up and down quark masses and an infinitely massive strange quark). In particular we can compute the pressure, the energy density and the entropy of the system.
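Explicitly, with the free energy per unit volume $F=V$ evaluated at the minimum (as summarized earlier), these quantities follow from the standard thermodynamical relations
$$P=-F\ ,\qquad S=-\frac{\partial F}{\partial T}\ ,\qquad \varepsilon=F+TS=-P+TS\ .$$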
Another relevant point is that we have developed a general framework according to which any parameterization of the glueball field can be used to construct the full thermal effective action arising from the Lagrangian built on the anomalous variation of the dilatation current. Using the Polyakov loop model, adapted to the present case, we also predict the associated phase transition to be second order. In order to apply our model to the physics of compact objects we should extend it to take into account the effects of the up and down quark masses as well as the effects of a not too massive strange quark. Notes added in proof {#notes-added-in-proof .unnumbered} -------------------- About two months after we submitted this paper, Ref. [@Alford:2002kj] appeared, in which it is claimed that the 2SC state might not be present in compact stars. This is a very dynamical issue which deserves further studies. The present paper deals with the properties of part of the ideal 2SC state, and as such our results are not affected by this claim. However, possible astrophysical applications may be affected. Finally, Ref. [@Sannino:2002wb], which clarifies the relation between the order parameter and the gluon condensate, further strengthening our approach, also appeared after this paper was submitted. It is a pleasure for us to thank R. Casalbuoni for suggesting this problem to us. We would like to thank P. Damgaard for enlightening discussions, J. Schechter for discussions and reading of the manuscript, and K. Splittorff for interesting discussions. We also acknowledge discussions with C. Manuel, R. Ouyed and O. Scavenius. The work of F.S. is supported by the Marie–Curie fellowship under contract MCFI-2001-00181, N.M. was supported by the EU Commission under contract HPMT-2000-00010 and by NORDITA, while W.S. acknowledges support by DAAD and NORDITA. See K. Rajagopal and F. Wilczek, hep-ph/0011333; M.G. Alford, Ann. Rev. Nucl. Part. Sci. 
[**51**]{}, 131 (2001) \[arXiv:hep-ph/0102047\] for an overview on the subject; S.D.H. Hsu, hep-ph/0003140 for the renormalization group approach review; D.K. Hong, Acta Phys. Polon. B32:1253, 2001, hep-ph/0101025 for the effective theories close to the fermi surface; R. Casalbuoni, AIP Conf. Proc.  [**602**]{}, 358 (2001) \[arXiv:hep-th/0108195\]. for the effective Lagrangians approach. G. Nardulli, hep-ph/0202037 for the effective theory approach to CSC and the LOFF phase and possible applications of the LOFF phase to the physics of compact stars; F. Sannino, hep-ph/0112029 for the 2SC effective Lagrangians, topological terms and the electroweak sector. G. W. Carter and S. Reddy, Phys. Rev. D [**62**]{}, 103002 (2000), hep-ph/0005228. D. K. Hong, S. D. Hsu and F. Sannino, Phys. Lett. B [**516**]{}, 362 (2001), hep-ph/0107017. R. Ouyed and F. Sannino, astro-ph/0103022. R. Casalbuoni, Z. Duan and F. Sannino, Phys. Rev. D [**62**]{} (2000) 094004, hep-ph/0004207. R. Ouyed and F. Sannino, Phys. Lett. B [**511**]{} (2001) 66 hep-ph/0103168. R. Casalbuoni and R. Gatto, Phys. Lett. B[**464**]{}, 11 (1999). R. Casalbuoni and R. Gatto, Phys. Lett. B [**469**]{}, 213 (1999), hep-ph/9909419. R. Casalbuoni, Z. Duan and F. Sannino, Phys. Rev. D [**63**]{}, 114026 (2001), hep-ph/0011394 F. Sannino, Phys. Lett. B[**480**]{}, 280, (2000). S. D. Hsu, F. Sannino and M. Schwetz, Mod. Phys. Lett. A [**16**]{}, 1871 (2001), hep-ph/0006059. D. H. Rischke, D. T. Son and M. A. Stephanov, Phys. Rev. Lett.  [**87**]{}, 062001 (2001), hep-ph/0011379. F. Karsch, AIP Conf. Proc.  [**602**]{}, 323 (2001) \[arXiv:hep-lat/0109017\]. B. A. Campbell, J. R. Ellis and K. A. Olive, Nucl. Phys. B [**345**]{}, 57 (1990). Yu. A. Simonov, JETP Lett.  [**55**]{}, 627 (1992) \[Pisma Zh. Eksp. Teor. Fiz., 605 (1992)\]. N. O. Agasian, JETP Lett.  [**57**]{}, 208 (1993) \[Pisma Zh. Eksp. Teor. Fiz., 200 (1993)\]. J. Sollfrank and U. W. Heinz, Z. Phys. C [**65**]{}, 111 (1995), nucl-th/9406014. G. W. 
Carter, O. Scavenius, I. N. Mishustin and P. J. Ellis, Phys. Rev. C [**61**]{}, 045206 (2000), nucl-th/9812014; G. W. Carter and P. J. Ellis, Nucl. Phys. A [**628**]{}, 325 (1998), nucl-th/9707051. B. J. Schaefer, O. Bohr and J. Wambach, Phys. Rev. D [**65**]{}, 105008 (2002) \[arXiv:hep-th/0112087\]. A. Drago and M. Gibilisco, hep-ph/0112282. T. Renk, R. A. Schneider and W. Weise, hep-ph/0201048. R. D. Pisarski, hep-ph/0112037; R.D. Pisarski, Phys. Rev. D[**62**]{}, 111501 (2000). A. Dumitru and R. D. Pisarski, Phys. Lett. B[**504**]{}, 282 (2001); O. Scavenius, A. Dumitru and A. D. Jackson, Phys. Rev. Lett., 182302 (2001) \[arXiv:hep-ph/0103219\]; P.N. Meisinger, T.R. Miller, and M.C. Ogilvie, Phys. Rev. D [**65**]{}, 034009 (2002) \[arXiv:hep-ph/0108009\]; P.N. Meisinger and M.C. Ogilvie, Phys. Rev. D [**65**]{}, 056013 (2002) \[arXiv:hep-ph/0108026\] C. P. Korthals Altes, R. D. Pisarski and A. Sinkovics, Phys. Rev. D [**61**]{}, 056007 (2000), hep-ph/9904305. A. Dumitru and R. D. Pisarski, Phys. Lett. B [**525**]{}, 95 (2002), hep-ph/0106176. O. Scavenius, A. Dumitru and J. T. Lenaghan, hep-ph/0201079. J. Wirstam, Phys. Rev. D [**65**]{}, 014020 (2002), hep-ph/0106141. R.D. Pisarski and D.H. Rischke, Phys. Rev. D[**61**]{}, 051501 (2000); Phys. Rev. D[**61**]{}, 074017 (2000). F. Sannino and W. Schäfer, Phys. Lett. B [**527**]{}, 142 (2002) hep-ph/0111098; J. T. Lenaghan, F. Sannino and K. Splittorff, Phys. Rev. D [**65**]{}, 054002 (2002), hep-ph/0107099. M. Alford and K. Rajagopal, arXiv:hep-ph/0204001. F. Sannino, arXiv:hep-ph/0204174. D. F. Litim and C. Manuel, Phys. Rev. Lett.  [**87**]{}, 052002 (2001), hep-ph/0103092. R. Casalbuoni, R. Gatto and G. Nardulli, Phys. Lett. B [**498**]{} (2001) 179, hep-ph/0010321; R. Casalbuoni, R. Gatto, M. Mannarelli and G. Nardulli, Phys.Lett. B [**524**]{}, 144 (2002) \[arXiv:hep-ph/0107024\]. J. Schechter, Phys. Rev. [**D21**]{}, 3393 (1980). For a review on the effective Lagrangians for QCD see hep-ph/0112205. F. 
Sannino and J. Schechter, Phys. Rev. D[**60**]{}, 056004, (1999). F. Sannino and J. Schechter, Phys. Rev. D[**57**]{}, 170 (1998). S. D. Hsu, F. Sannino and J. Schechter, Phys. Lett. B [**427**]{}, 300 (1998), hep-th/9801097. A.A. Migdal and M.A. Shifman, Phys. Lett. [**114B**]{}, 445 (1982). J.M. Cornwall and A. Soni, Phys. Rev. [**D29**]{}, 1424 (1984); [**32**]{}, 764 (1985). A. Salomone, J. Schechter and T. Tudron, Phys. Rev. [**D23**]{}, 1143 (1981). J. Ellis and J. Lanik, Phys. Lett. [**150B**]{}, 289 (1985). H. Gomm and J. Schechter, Phys. Lett. [**158B**]{}, 449 (1985). T. De Grand, R.L. Jaffe, K. Johnson and J. Kiskis, Phys. Rev. [**D12**]{}, 2066 (1975). M. Shifman, A. Vainshtein and V. Zakharov, Nucl. Phys. [**B147**]{}, 385 (1979); [**B147**]{}, 448 (1979). M.S. Chanowitz and J. Ellis, Phys. Rev. D[**7**]{}, 2490 (1973). L. Dolan and R. Jackiw, Phys. Rev. D [**9**]{}, 3320 (1974). P. Bacilieri et al. Phys. Lett. B[**220**]{}, 607, 1989. D. H. Rischke, Phys. Rev. D [**64**]{}, 094003 (2001), nucl-th/0103050. B. Svetitsky and L. G. Yaffe, Nucl. Phys. B [**210**]{}, 423 (1982). L. G. Yaffe and B. Svetitsky, Phys. Rev. D [**26**]{}, 963 (1982). B. Svetitsky, Phys. Rept.  [**132**]{}, 1 (1986). P.H. Damgaard, Phys. Lett. B194 (1987) 107; J. Kiskis, Phys. Rev. D[**41**]{} (1990) 3204; J. Fingberg, D.E. Miller, K. Redlich, J. Seixas, and M. Weber, Phys. Lett. B248 (1990) 347; J. Christensen and P.H. Damgaard, Nucl. Phys. B348 (1991) 226; P.H. Damgaard and M. Hasenbush, Phys. Lett. B331 (1994) 400: J. Kiskis and P. Vranas, Phys. Rev. D[**49**]{} (1994) 528. For a more recent review and a rather complete list of references, see S. Hands, Nucl. Phys. Proc. Suppl.  [**106**]{}, 142 (2002) \[arXiv:hep-lat/0109034\]. [^1]: In the full theory it is more reasonable to expect just a drastic drop of the condensate close to the phase transition. [^2]: Here we absorbed the coefficient $b$ present in the Lagrangian of [@OS2] in the definition for $H$. 
This coefficient is relevant when comparing the results of the glueball Lagrangian derived for different numbers of colors and flavors [@SS].
--- abstract: 'X-ray emission from cool stars is an important tracer for stellar activity. The X-ray luminosity reflects different levels of activity and covers four orders of magnitude in stars of spectral types M-F. Low spectral resolution provided by X-ray observations of stellar coronae in the past allowed the determination of temperature distributions and elemental abundances, making use of atomic databases (listing line emissivities and bremsstrahlung continuum for a given temperature structure). The new missions XMM-Newton and Chandra carry X-ray gratings providing sufficient spectral resolution to measure the fluxes of strategic emission lines. I describe the different approaches applicable to low-resolution and high-resolution spectra, especially focusing on the new grating spectra with X-ray lines. From only a few lines it is possible to determine plasma temperatures and associated densities, to check for any effects from resonant scattering, and to identify particular abundance anomalies. Line-based temperature and density measurements represent only a fraction of the total plasma, but the pressure environment of different fractions can be probed simply by selection of specific lines. Selected results are presented covering all aspects of line-based analyses.' address: 'Hamburger Sternwarte, Gojenbergsweg 112, 21029 Hamburg, Germany' author: - 'J.-U. Ness' bibliography: - 'hist.bib' - 'jhmm.bib' - 'astron.bib' - 'jn.bib' title: 'Advances of plasma diagnostics with high-resolution spectroscopy of stellar coronae' --- [^1] Stellar atmospheres ,Stellar activity ,Main-sequence: late-type stars 97.10.Ex ,97.10.Jb ,97.20.Jg Introduction {#intro} ============ The term activity for the Sun and for late-type stars summarizes phenomena in the outer atmosphere. 
Important connections are known between sunspots (places where magnetic fields pierce the surface and suppress convection) and active regions in the corona (regions with particularly high temperature and strong X-ray emission). Also, the appearance of active regions is more frequent near solar maximum, and the X-ray output exhibits the same cycle as the sunspots. The interface between the surface and the upper corona is the chromosphere, where Ca[ii]{} emission originates. The intensity of the Ca[ii]{} emission (measured in the middle of the Ca[ii]{} photospheric absorption line) sensitively reacts to changes in the solar activity cycle, and is at present (with few exceptions) the only tracer for stellar activity cycles.\ It took considerable effort to actually discover the extreme physical properties of the solar corona (extremely high temperatures and very low densities). Measurements of the optical corona revealed that the spectrum is almost identical to the spectrum of the solar photosphere at all heights, suggesting scattering of photospheric light by electrons. Since most Fraunhofer lines could not be identified and the strongest lines were found smeared out, [@grotrian31] concluded that the electrons must have extremely high mean velocities, which are not consistent with photospheric temperatures. However, only the identification of emission lines from highly ionized species led to the unambiguous conclusion of the million degree corona [e.g., @edlen; @grotrian]. This high temperature requires X-ray observations in order to directly look into the million degree plasma of stellar coronae; no contamination from the stellar photosphere needs to be dealt with in this wavelength region. Past X-ray missions like Einstein and ROSAT were able to discover the ubiquitous occurrence of hot, tenuous coronae around late-type stars by detection of considerable X-ray luminosities for [**all**]{} late-type stars within the immediate neighborhood of the Sun [e.g., @schm97]. 
The X-ray luminosity is a classical activity indicator and a relation between the X-ray luminosity and the rotational velocity $v\sin i$ [@pal81] has been established. This suggests that magnetic dynamo generation is involved in the creation of stellar coronae, and therefore also of the solar corona.\ Common practice in analyzing X-ray spectra has been based on global fit approaches (see Sect. \[lowres\]). With this method a general trend of plasma temperatures increasing with activity was found [e.g., @cjm91]. Since the causes of the heating phenomena have not been discovered to this day, this is an important contribution towards a complete future understanding of the formation and heating of stellar coronae and the solar corona.\ In this spirit, the detailed physical description of stellar coronae is the next step towards this aim. X-ray spectroscopy of the solar corona has been applied to measure temperatures and densities in specific active or quiescent regions. The solar corona was found to be essentially optically thin and the X-ray spectrum is thus dominated by emission lines. Important diagnostic tools have been developed, e.g., the density diagnostics with He-like triplets [@gj69]. All these diagnostics can in principle also be applied to stellar coronae; however, very sensitive instruments are required in order to provide a decent S/N at the required spectral resolution. Also, the results have to be interpreted with the limitation that only average coronal properties can be obtained, because no spatial resolution is possible. The X-ray missions Chandra and XMM-Newton provide the ideal instrumental setup with their slitless grating spectrometers. With these gratings, spectra have been obtained in the last four years which clearly confirm that the X-ray spectra of stellar coronae are also dominated by emission lines originating from highly ionized atomic transitions. 
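The He-like density diagnostic mentioned above can be sketched in a few lines. The forbidden-to-intercombination line ratio $R=f/i$ decreases with electron density as $R(n_e)=R_0/(1+n_e/N_c)$, so a measured ratio can be inverted for $n_e$. The numbers below ($R_0$ and the critical density $N_c$ for O[vii]{}) are representative literature values quoted only for illustration, not results of this paper:

```python
# Sketch of the He-like triplet density diagnostic (Gabriel & Jordan type):
# R(n_e) = R_0 / (1 + n_e/N_c), inverted here for the electron density.
# R_0 ~ 3.95 and N_c ~ 3.1e10 cm^-3 are representative O VII values
# taken from the literature; treat them as illustrative assumptions.

def density_from_ratio(r_measured, r0=3.95, n_crit=3.1e10):
    """Invert R = R_0/(1 + n_e/N_c) for n_e in cm^-3."""
    if r_measured >= r0:
        # Ratio at (or above) the low-density limit: only an upper
        # bound on n_e can be given; return 0 as the nominal value.
        return 0.0
    return n_crit * (r0 / r_measured - 1.0)

# A measured ratio R = 2 indicates a density near the critical density:
print(f"n_e = {density_from_ratio(2.0):.2e} cm^-3")
```

In practice the measured $f$ and $i$ fluxes must first be corrected for blends (e.g., Fe lines in the Ne triplet) before such an inversion is meaningful.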
For more active stars continuum emission is found which is dominated by the bremsstrahlung mechanism [e.g., @ness_alg]. While the continuum can be used in order to obtain the plasma temperature of the hottest regions in the corona, the formation of each individual line reflects the physical conditions of the plasma regions emitting the respective lines. Since no individual emission region can be isolated in a stellar corona the line diagnostics will return averages of all visible emission regions, weighted with the brightness of each region. This limitation implies that only typical activity-related physical properties can be identified. When samples of stellar coronae are investigated, trends between coronal properties and stellar parameters can be found, uncovering the underlying physical processes.\ This paper will give a review of coronal physical parameters that can be deduced from spectral line analysis. While in the past X-ray spectra did not have the power to measure individual lines, the new gratings aboard Chandra and XMM-Newton allow individual line fluxes to be measured for the first time. This requires new analytical approaches. In principle, a complete model spectrum can be synthesized from tables containing all our knowledge of the atomic physics (atomic databases) and be compared with measured spectra of any spectral resolution. This method (global fitting) will be limited by the quality of the spectrum (when applied to low-resolution spectra) or by the quality of the atomic database in use (when applied to high-resolution spectra). I will first describe how low-resolution spectra have been analyzed and what could be learnt and then address a number of aspects which have been deduced from the analysis of individual lines. 
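The global-fitting idea described above can be illustrated with a toy sketch. The emissivity function below is an invented stand-in for a real atomic database, and all numbers are arbitrary; the point is only that, at fixed component temperatures, the model spectrum is linear in the emission measures, so fitting the weights of a three-temperature model reduces to linear least squares:

```python
import numpy as np

def toy_emissivity(temperature, energies):
    # Invented stand-in for a database spectrum: an exp(-E/kT)
    # continuum plus one Gaussian "line" whose position shifts with T.
    cont = np.exp(-energies / temperature)
    line = 2.0 * np.exp(-0.5 * ((energies - temperature) / 0.05) ** 2)
    return cont + line

energies = np.linspace(0.3, 10.0, 500)   # energy grid in keV
temps = np.array([0.3, 1.0, 3.0])        # three isothermal components (keV)
true_em = np.array([2.0, 1.0, 0.5])      # emission-measure weights

# Each column is one isothermal model spectrum; the observed spectrum
# is their emission-measure-weighted sum (noiseless, for clarity).
basis = np.stack([toy_emissivity(T, energies) for T in temps], axis=1)
observed = basis @ true_em

# Recover the emission measures by linear least squares.
fitted_em, *_ = np.linalg.lstsq(basis, observed, rcond=None)
print(fitted_em)
```

A real analysis additionally varies the temperatures and abundances (making the problem nonlinear) and convolves the model with the instrumental response before comparison with the data.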
Analysis of low-resolution spectra {#lowres} ================================== While the earliest X-ray missions allowed only the detection of the X-ray intensity in a broad energy band, more refined missions had some spectral resolution based on the energy sensitivity of the detectors (CCDs, proportional counters). Although individual spectral features could not directly be seen, a lot of useful information could still be obtained by convolving model spectra to the instrumental spectral resolution. These model spectra basically contain the relevant atomic physics, and the physical parameters could be optimized to find surprisingly good agreement with the measured spectra.\ In order to construct a model spectrum an atomic database is needed, which contains the information on the formation of lines induced by atomic transitions under the assumption of an optically thin plasma (i.e., only the production of photons is described, but no absorption; see also Sect. \[opt\]). Since the coronal plasma is in principle dominated by collisional ionizations and excitations, it is sufficient for the modelling of stellar coronal spectra to assume the 'coronal' approximation. Going beyond this assumption would need more refined efforts.\ The parameters put into a model are temperatures, emission measures, and elemental abundances. The temperature is the main parameter entering a spectral model. It will affect the ionization fraction and (along with the density) the population of the excited levels. Also the contribution of the continuum, which consists of bremsstrahlung, recombination, and two-photon components, depends on the temperature. Since nature does not provide isothermal plasma, a temperature distribution has to be assumed. 
This can be approximated by using, e.g., three isothermal components, each of which carries a weighting factor in terms of an emission measure value (specifying the amount of emitting material in the corona at the given temperature). Three spectral models are then constructed and co-added to form the final model. The number of temperature components can be chosen arbitrarily high, but including additional temperature components only makes sense when an improvement in agreement with the measurements can be accomplished. In contrast to a (generally low) number of isothermal components, a smooth temperature distribution can be assumed and optimized [e.g., @schm90]. The next physical parameter is a set of elemental abundances. All lines originating from ions of the same element are linearly scaled by the value of its abundance. In the models the elemental abundances and the temperature components are modelled simultaneously, but a sufficient number of lines must lie in the spectral region for sensible constraints on the abundances. Note, however, that changes in the elemental abundances will also affect the bremsstrahlung continuum, because in hot plasma, highly ionized metals contribute a considerable number of additional electrons. Within the ranges of abundances now typically found in stellar coronae the bremsstrahlung continuum changes only by a few percent for typical hot coronal temperatures.\ With these spectral models it was possible to establish a temperature-activity relation [@cjm91; @schm90]. Also, it was found that the hotter average temperatures in more active stars are mainly caused by an additional hotter temperature component, while only a slight shift of the cooler temperature component was noticed [@guedel97]. 
This can be explained by an increasing number of active regions with increasing activity, as in the solar activity cycle.\ The methods applied to model low-resolution spectra can in principle also be applied to high-resolution spectra (which show individual emission lines). However, the accuracy of the available atomic databases then imposes the major limitation, while the same method applied to low-resolution spectra was limited by the quality of the spectra. The quality of the results has thus been pushed to the limits of the databases, and further progress can only be made by improving the atomic data.\ An alternative approach is to measure the fluxes of strategically chosen individual lines and compute line flux ratios reflecting specific physical aspects. In the next section I will discuss some aspects which can be addressed by the measurement of line fluxes and line flux ratios. Analysis of high-resolution spectra {#highres} =================================== A high-resolution spectrum in the present context is defined as a spectrum which allows one to resolve a minimum number (at least five to ten) of individual emission lines. For the present X-ray missions this implies that the CCD spectra (ACIS-S, ACIS-I, EPIC-PN, and EPIC-MOS) are considered low-resolution spectra, while the grating spectra (LETGS, HETGS, and RGS) are considered high-resolution spectra.\ The same methods applied to low-resolution spectra, especially the global fit approaches, can just as well be applied to high-resolution spectra, and the results gain accuracy from the improved spectral resolution, but not beyond the quality of the atomic databases. At the moment, the limitations of the results are determined by the quality of the databases alone. However, the quality of global fits can be improved further by excluding wavelength regions with high degrees of uncertainty in the databases when calculating the fit goodness parameter, as demonstrated by [@aud03]. 
With this approach one can largely avoid misidentifying measured line features that belong to lines not listed in the databases. In such cases a global fit would seek physical conditions pulling up other line fluxes listed at nearby wavelengths, while line-based approaches would leave these features unidentified, ignoring them for further interpretation.\ The three X-ray gratings cover the wavelength range 1–40Å (the LETGS goes up to 175Å) with different spectral resolutions and sensitivities. The strategic lines in this wavelength region are the H-like and He-like lines of Si, Mg, Ne, O, N, and C (altogether 24 lines). Also, a number of Fe L-shell and K-shell lines are measurable with these instruments. It is possible to obtain a large amount of information from these few lines without the use of global models, which use thousands of lines simultaneously.\ For the interpretation of line fluxes and line ratios the same atomic databases must be used. The line-based analysis uses the strongest lines with the smallest uncertainties (constrained by both theoretical calculations and laboratory measurements), but these strong lines might also be blended with fainter lines from complicated ions, e.g., lines of Fe. The blending can be significant, e.g., for the Ne He-like lines [@nebr], and in these cases the limitations are essentially the same as in global models. Accounting for the blending lines is intrinsically implemented in the global fit approach, while a line-based approach has to predict the blending lines carefully. Measurement of opacities {#opt} ------------------------ Before any analyses based on the information obtained from the atomic databases can be carried out, one has to ensure that any measured photon rate actually represents the photon production rate. In principle, photons produced in lower layers might be absorbed in higher layers and re-emitted into other directions (scattering). 
The solar corona is commonly assumed to be optically thin; however, the strongest resonance lines with high radiative excitation probabilities might place considerable absorption cross sections into the line of sight. Absorbed photons will be re-emitted, but not necessarily back into the line of sight, and some photons will be re-emitted back towards the stellar surface and are thus effectively lost. Unless scattering out of the line of sight is balanced by scattering into the line of sight (a case called "effectively optically thin"), these "resonant scattering" effects will significantly distort the measured line fluxes compared to those produced. This distortion can be modelled, but the modelling requires assumptions about the structure of the absorbing layers (e.g., spherical geometries), while coronal plasma can have extreme geometries (especially when active regions are involved), so the modelling would become extremely complicated. It is therefore common practice to neglect resonant scattering effects, but this practice can be tested. In principle one can see resonant scattering effects by comparing measured (affected) resonance lines with measured forbidden lines, which can be considered 100% optically thin. The ratio of a resonance line and a forbidden line can be compared with the corresponding theoretical ratio from the databases predicting optically thin fluxes. In order to eliminate any temperature and abundance effects, the choice should focus on lines of the same element and the same ionization stage. A prominent example is the ratio of two Fe[xvii]{} lines at 15Å and 15.27Å. The latter is a much weaker line (oscillator strength $f=0.6$, in contrast to $f=2.6$ for the 15Å line), and the ratio $\lambda$15.27/$\lambda$15 will increase with increasing opacity effects. 
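As a back-of-the-envelope illustration of how such a ratio translates into an optical depth, one can use a simple escape-factor approximation for a homogeneous slab, $P(\tau)\approx 1/(1+0.43\,\tau)$ (a Kaastra & Mewe-type approximation, not part of the analysis above); the numerical values below are purely illustrative:

```python
def escape_probability(tau):
    # Escape-factor approximation for a homogeneous slab,
    # P(tau) ~ 1 / (1 + 0.43 * tau) (Kaastra & Mewe-type approximation).
    return 1.0 / (1.0 + 0.43 * tau)

def optical_depth_from_ratio(r_measured, r_thin):
    # The measured lambda15.27/lambda15 ratio exceeds the optically thin
    # value when the 15 A resonance line is suppressed by scattering:
    # R_meas = R_thin / P(tau).  Invert for tau.
    p = r_thin / r_measured
    return (1.0 / p - 1.0) / 0.43

# Purely illustrative numbers: thin ratio 0.25, measured ratio 0.35
tau = optical_depth_from_ratio(0.35, 0.25)  # a modest optical depth, tau < 1
```

In practice the inferred $\tau$ depends on the assumed slab geometry, which is exactly why such estimates are treated with caution in the text.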
[@ness_opt] compared a large number of stellar $\lambda$15.27/$\lambda$15 (and other) ratios and found the measured ratios to be systematically higher than the theoretical predictions for an optically thin plasma (suggestive of significant opacity effects). However, they found no systematic trend with activity, the ratios being similar for all stars in their sample. This means that the amount of emitting plasma has no effect on the line ratios, and [@ness_opt] therefore concluded that the higher measured ratios imply erroneous theoretical ratios rather than identical optical depths for all kinds of stellar coronae. [@testa_opt] analyzed the ratio Ly$_\beta$/Ly$_\alpha$ for the ions of oxygen and neon for a sample of stars, but found only two exceptions from the zero-optical-depth scenario. However, their claim of a unique first-time detection of resonant scattering effects in stellar coronae is confirmed by both ions for only one corona (IM Peg).\ The general conclusion is that it appears reasonable to neglect resonant scattering effects in coronal plasma, but one has to check individual spectra. Abundance anomalies {#aabun} ------------------- For coronae resembling the solar corona an abundance anomaly called the FIP effect was found with EUVE, Chandra, and XMM-Newton. This effect has long been known from the solar corona: all elements with a first ionization potential (FIP) lower than 10eV ($\sim 1216$Å, the wavelength of the hydrogen Ly$_\alpha$ line) are overabundant compared to photospheric abundances. The heating or transport mechanisms obviously prefer to deal with species that are ionized (possibly by photoionization from Ly$_\alpha$ line photons) and can then couple to magnetic fields. For many more active stars an inverse FIP effect has been found with XMM-Newton [e.g., @aud03], which is puzzling. 
The detailed background to the FIP and inverse FIP effects is complicated, but a recent approach by [@laming04] explains the fractionation quite naturally by ponderomotive forces arising as upward-propagating Alfv[é]{}n waves from the chromosphere are transmitted or reflected upon reaching the chromosphere-corona boundary.\ While about half of these results have been obtained with global fits to high-resolution spectra, some methods have been developed to obtain elemental abundances using the specific advantages of measuring individual line fluxes [see summary in @guedelaarev]. [@abun] developed a method using the H-like to He-like line ratios (see Sect. \[temp\]) to construct the temperature distribution independently of elemental abundances. All remaining discrepancies from a spectrum constructed from this temperature distribution then reflect abundance effects. [@telli04] compared global methods applied to limited spectral ranges containing bright lines [@aud03] with a line-based approach, and found good agreement between these methods.\ An interesting effect was discovered by [@cno]. The lack of any carbon line in the Chandra LETGS spectrum of Algol raised the suspicion of an abundance effect, since the nitrogen lines were well detected. Fig. \[c6\_n7\] demonstrates that N[vii]{}/C[vi]{} flux ratios greater than one cannot be explained by a temperature effect, but must be explained by abundance anomalies. This effect is well in line with expectations from stellar evolution. Since Algol is evolved, the anomalous abundance pattern reflects dredged-up CNO-cycled material. All stars for which an enhanced N[vii]{}/C[vi]{} ratio was detected are evolved stars, while the other stars show normal abundances, with $\alpha$Cen and Procyon showing low ratios, probably due to low coronal temperatures. 
Plasma temperatures {#temp} ------------------- While global fits allow one to obtain a complete temperature distribution immediately, individual lines can probe specific regions of the temperature distribution. Again, a smart choice of line ratios allows one to eliminate effects other than temperature, e.g., elemental abundances. Among the available strong lines, the ratio of the H-like line to one of the He-like triplet (r, i, f) lines (usually the resonance line r or the sum of all three lines) of the same element is temperature-sensitive due to the ionization balance. In hotter plasma the H-like line will be stronger, while in cooler plasma the He-like lines dominate. The ratios can then be compared to theoretical predictions and probe all plasma emitting the respective lines. In Fig. \[lyrats\] I show the theoretical predictions of H-like to He-like line ratios for different elements [see @abun]. The steep increase of the ratios indicates a very sensitive temperature diagnostic. In addition I include measured line flux ratios for different stars, and it can be seen that the stars selected have quite different temperature distributions. Algol has systematically higher line ratios (indicative of higher temperatures), whereas Procyon has a high ratio only for carbon, the Si lines being produced at higher temperatures than found in Procyon’s corona. These ratios provide a good starting point for constructing temperature distributions, where the ratios can be used as interpolation points. The shape of the emission measure distribution has to reproduce the measured line ratios, and no abundance effects interfere, because all line ratios are independent of the elemental abundances. Densities {#dens} --------- Density measurements are not possible with low-resolution spectra, because the effects of densities are too subtle to significantly affect a low-resolution spectrum. 
Therefore, no density analyses could be carried out for stellar X-ray coronae with low-resolution spectra, and structural information was only available from eclipsing systems, where one component is X-ray dark [e.g., for $\alpha$CrB or Algol @guedel03; @algolflare estimated densities from the spatial distribution of intensity]. From measurements of the emission measure EM, the total emitting volume $V$ can be inferred if densities $n_e$ are known, by simply applying the relation EM$=0.85n_e^2V$, which defines the volume emission measure; here, a homogeneous geometry is assumed. Densities are also needed in order to apply loop scaling laws (developed for the Sun) to investigate whether stellar coronae can be considered scaled-up versions of solar active regions or whether new concepts have to be developed. Again, a geometry has to be assumed, e.g., a set of identical loop-like structures.\ The measurement of densities from high-resolution spectra exploits the increasing number of collisions with increasing electron density. In a low-density plasma, transitions with low de-excitation probabilities (forbidden lines) will still show up. With increasing electron density, the upper levels of these transitions are increasingly subject to collisionally induced further excitation into higher levels with higher radiative de-excitation probabilities, and those lines will show up instead. All density analyses are based on measuring either the appearance of the latter lines [e.g., Fe[xxi]{} lines in EUVE spectra: @mason] or the disappearance of the former, or both [He-like triplets: @gj69].\ The density measurements are carried out from line flux ratios in order to eliminate abundance and temperature effects. A number of Fe[xxi]{} lines which are expected to show up in high-density plasma and an Fe[xxi]{} resonance line (at 128.73Å) were measured with EUVE. 
The ratios of the fluxes in the former lines to those in the latter were analysed by [@dupr93], who reported extremely high densities for Capella, while, e.g., [@schmitt94] found no evidence at all for deviations from the low-density limits for Procyon. The difficulty with these diagnostics has been described by [@ness_dens]: one can never say whether an emission feature at the expected wavelength of a density-sensitive line actually corresponds to this line, or whether it comes from one or more unidentified lines. [@ness_dens] investigated several Fe[xxi]{} line ratios for several stellar coronae using the LETGS (Chandra) and found not a single star with consistently high densities from all line ratios. Some ratios suggested higher densities (if the measured line fluxes are believed to belong to the expected lines), but these were ruled out again by other line ratios, measured at the same time from the same ion (and therefore formed in exactly the same environments).\ Analyses of the He-like triplets measure the ratio of a forbidden line, f ($^3$S$_1$–$^1$S$_0$), to an intercombination line, i ($^3$P$_1$–$^1$S$_0$) [f/i @gj69; @ness_dens], where with increasing density the f line puts its photons into the i line. The principle is the same for all He-like ions of different species, formed in different plasma regions with different temperatures. The Chandra and XMM-Newton gratings can measure the He-like triplet lines from Si[xiii]{} ($Z=14$) down to C[v]{} ($Z=6$), and plasma regions with temperatures ranging from 15MK down to 1MK can be probed, measuring densities in the range $10^9$–$10^{14}$cm$^{-3}$. Those He-like triplets formed at high temperatures probe only high densities (above $10^{12}$cm$^{-3}$), while low-temperature ions measure only lower densities ($\sim 10^{10}$cm$^{-3}$); this leaves two cases unexplored: low densities in hot plasma and high densities in cool plasma. 
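The f/i diagnostic can be sketched with the standard low-density parametrization $R(n_e) = R_0/(1 + n_e/N_c)$ (photoexcitation of the f level neglected), where $R_0$ is the low-density limit and $N_c$ the critical density of the ion; the atomic constants below are only of the order of published O[vii]{} values and serve purely as an illustration:

```python
def density_from_fi_ratio(r_measured, r0, nc):
    # Invert R(n_e) = R0 / (1 + n_e/Nc) for the electron density.
    # r0: low-density limit of the f/i ratio; nc: critical density (cm^-3).
    # Photoexcitation of the f level is neglected in this parametrization.
    if r_measured >= r0:
        return 0.0  # consistent with the low-density limit
    return nc * (r0 / r_measured - 1.0)

# Illustrative constants, of the order of published O VII values:
r0_ovii, nc_ovii = 3.95, 3.4e10
ne = density_from_fi_ratio(2.0, r0_ovii, nc_ovii)  # a few 1e10 cm^-3
```

A measured ratio at or above $R_0$ only yields an upper limit on the density, which is exactly the situation encountered for many of the low-activity coronae discussed above.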
From the O[vii]{} measurements of stellar coronae the case of high densities at low temperature can be excluded, but the case of low densities at high temperatures remains unexplored. Conclusions =========== The analysis of emission line fluxes from grating X-ray spectra is a powerful tool complementing global fit approaches. It is possible to survey the temperatures in different regions of the temperature distribution, identify abundance anomalies, recognize effects from resonant scattering, and measure densities. Some of these issues can only be addressed with emission line measurements, especially the densities. [@ness_dens] measured O[vii]{} and Ne[ix]{} densities and [@testa04] measured the Mg[xi]{} densities, and the combined results suggest that all three ions originate from different pressure regions. Since the coronal structures implied (generally believed to be loop-like arches) usually do not extend higher than the pressure scale heights of the individual stars, each loop must have constant pressure, and the different pressures from the different density diagnostics thus imply that different classes of loops exist. The O[vii]{} loops are characterized by low pressures and low temperatures (thus small scale heights), and these loops are found to occur in stellar coronae in all stages of activity. In contrast to this, the Ne[ix]{} and the Mg[xi]{} loops have higher pressures (higher temperatures [**and**]{} higher densities) and occur in increasing numbers in more active stars (characterized by higher X-ray surface fluxes). It appears reasonable to conclude that a standard cool temperature corona always exists, while active regions containing hotter plasma are a privilege of the more active stars. Acknowledgments {#acknowledgments .unnumbered} =============== I thank Prof. Carole Jordan for stimulating discussions about the paper. I acknowledge financial support from Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR) under 50OR98010. 
The comments by the first referee, Dr. Manuel Guedel, are highly appreciated and have improved the quality of the paper. [^1]: Present address: Rudolf Peierls Centre for Theoretical Physics, University of Oxford, 1 Keble Road, Oxford OX1 3NP, UK
--- abstract: 'This paper introduces a statistical method to decide whether two blocks in a pair of images match reliably. The method ensures that the selected block matches are unlikely to have occurred “just by chance.” The new approach is based on the definition of a simple but faithful statistical *background model* for image blocks learned from the image itself. A theorem guarantees that under this model not more than a fixed number of wrong matches occurs (on average) for the whole image. This fixed number (the number of false alarms) is the only method parameter. Furthermore, the number of false alarms associated with each match measures its reliability. This [*a contrario*]{} block-matching method, however, cannot rule out false matches due to the presence of periodic objects in the images. But it is successfully complemented by a parameterless *self-similarity threshold*. Experimental evidence shows that the proposed method also detects occlusions and incoherent motions due to vehicles and pedestrians in non-simultaneous stereo.' author: - | Neus Sabater$^*$, Andrés Almansa$^{**}$ and Jean-Michel Morel$^*$\ $^*$ENS Cachan, CNRS-CMLA. France.\ $^{**}$Telecom ParisTech, CNRS-LTCI. France. bibliography: - 'biblio\_article.bib' title: Meaningful Matches in Stereovision --- Stereo vision, Block-matching, Number of False Alarms (NFA), **[*a contrario*]{}** detection. Introduction ============ Stereo algorithms aim at reconstructing a 3D model from two or more images of the same scene acquired at different angles. This work only considers previously stereo-rectified image pairs. In that case the 3D reconstruction requires that the matched points in both images belong to the same horizontal epipolar line. The matching process of stereo image pairs has been studied in depth for more than four decades. [@Brown03] and [@Scharstein02] contain a fairly complete comparison of the main methods. 
According to these surveys there are roughly two main classes of algorithms in binocular stereovision: local matching methods and global methods. Global methods aim at a coherent solution obtained by minimizing an energy functional containing matching fidelity terms and regularity constraints. The most efficient ones seem to be Belief Propagation [@Klaus06; @Yang06], Graph Cuts [@Kolmogorov05], Dynamic Programming [@Ohta85; @Forstmann04] and solvers of the multi-label problem [@Ishikawa03; @Pock08]. They often resolve ambiguous matches by maintaining coherence along the epipolar line (DP) or along and across epipolar lines (BP & GC). They rely on a regularization term to eliminate outliers and reduce the noise. They assign a match to all points which are not detected as occluded. Global methods are, however, at risk of making or propagating errors if the regularization term is not adapted to the scene. A classic example is when a large portion of the scene is nearly constant, for example a scene including a cloudless sky, since there is no information in such a region to compute reliable matches (see Fig. \[fig:flower\_garden\] for an example). On such ambiguous regions, global methods perform an interpolation by using the informative pixels. This interpolation can be lucky, as is the case in most images of the Middlebury benchmark[^1]. But it can also fail, as is apparent in the above example and in many outdoor scenes. Furthermore, the energy in global methods has at least two terms and one parameter weighting them (and sometimes three terms and two parameters [@Kolmogorov05]). These parameters are difficult to tune, and even to model. Thus, it remains a valid question how to rule out, by a parameterless method, the dubious regions where the matches cannot be scientifically demonstrated. On the other hand, local methods are simpler, but equally sensitive to local ambiguities. Local methods start by comparing features of the right and left images. 
These features can be blocks in block-matching methods, or even local descriptors [@Mikolajczyk03] like SIFT descriptors [@Lowe04; @Rabin07], curves [@Schmid00], corners [@Harris88; @Cao04], etc. The drawback of local methods is that they do not provide a dense map as global methods do (meaning that the percentage of matched points is lower than 100%). Recent years have therefore seen a blooming of global methods, which reach the best performance in recent benchmarks such as the Middlebury dataset [@Scharstein02]. But our purpose is to show that local methods can also be competitive. This paper considers the common denominator of most local methods, block-matching. It shows that this tool is amenable to a local statistical decision rule telling us whether a match is reliable. In fact, not all the pixels in an image pair can be reliably matched in real scenes (typically only 40 to 80% of the pixels can). The lack of corresponding points in the second image or the ambiguity at certain points causes gross errors in dense stereovision. In particular, block-matching methods suffer from two mismatching causes that must be tackled one by one: 1. The main mismatch cause in local methods is the absence of a theoretically well-founded threshold to decide whether two blocks really match or not. Our main goal here will be to define such a threshold by an [*a contrario*]{} block-matching (ACBM) rejection rule, ensuring that two blocks do not match “just by chance.” 2. A second, minor mismatch cause is the presence on the epipolar line of repetitive shapes or textures, a problem sometimes called “stroboscopic phenomenon,” or “self-similarity.” The proposed ACBM rule only rules out stochastic similarities, not deterministic ones. While the ACBM rule mismatches repetitive patterns, these mismatches are easily eliminated by a simple self-similarity rule (SS). We shall, however, verify that a self-similarity rule by itself is far from reaching the ACBM performance. 
Both rules are necessary and complementary. The elimination of these two sorts of mismatches is a key issue in block-matching methods. The problem of sifting out matching errors in stereovision has of course been addressed many times. We shall discuss a selection of the significant contributions for each cause of mismatch. [*Occlusions*]{} are still an open problem in stereovision and one of the main causes of mismatch. For this reason numerous stereo approaches focus on detecting them. Global energy methods [@Kolmogorov05] address occlusions by adding a penalty term for occluded pixels in their energy function. In [@Szeliski01] the major contribution is the reasoning about visibility in multi-view stereo. [@Yang06] computes two disparity maps symmetrically and verifies the left-right coherence to detect occluded pixels. [@Ohta85] asserts that if two points on the epipolar line match two points in a different order, then there is an occlusion. Again, this can lead to errors if there are narrow objects in the scene. See also [@Egnal02], which compares a choice of methods to detect occlusions. Matching pixels in [*poorly textured regions*]{}, where noise dominates signal, is clearly the main cause of error. Based on local SNR estimates, [@Delon07] proposed to reject matches by thresholding the second derivative of the correlation function: the flatter the correlation function, the less reliable the match. In [@Sara02], the mismatches due to weakly textured objects or to [*periodic structures*]{} are considered. The author defines a confidently stable matching in order to establish the largest possible unambiguous matching at a given confidence level. Two parameters control the compromise between the percentage of bad matches and the match density of the map. Yet the match density falls dramatically when the percentage of mismatches decreases. We will see that the method presented here is able to produce denser disparity maps with fewer mismatches. 
Similarly, [@Manduchi99] tries to eliminate errors on repeated patterns. Yet their matches seem to concentrate mainly on image edges and therefore have a low density. A more primitive version of the rejection method developed here was applied successfully to the detection of [*moving and disappearing objects*]{} in [@Sabater10]. This is a foremost problem in the quasi-simultaneous stereo usual in aerial or satellite imaging, where vehicles and pedestrians perturb the stereo matching process strongly. The extended method presented here deals with a much broader class of mismatches, including those due to poor signal-to-noise ratio. Anterior Statistical *A Contrario* Decision Methods --------------------------------------------------- Because of the above mentioned reasons one cannot presuppose the existence of uniquely determined correspondences for all pixels in the image. Thus, a decision must be taken on whether a block in the left image actually meaningfully matches its best match in the right image or not. This problem will be addressed by the *a contrario* approach initiated by [@Desolneux07]. This method is generally viewed as an adaptation to image analysis of classic hypothesis testing. But it also has a psychophysical justification in the so-called Helmholtz principle, according to which all perceptions could be characterized as having a low probability of occurring in noise. Early versions of this principle in computer vision are [@Lowe85], [@Grimson91], [@Stewart95]. A probabilistic [*a contrario*]{} argument is also invoked in the SIFT method [@Lowe04], which includes an empirical rejection threshold. A match between two descriptors $S_1$ and $S'_1$ is rejected if the second closest match $S'_2$ to $S_1$ is actually almost as close to $S_1$ as $S'_1$ is. The typical distance ratio rejection threshold is $0.6$, which means that $S'_1$ is accepted if $dist(S'_1,S_1)\leq 0.6\times dist(S'_2, S_1)$ and rejected otherwise. 
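The ratio test just described can be sketched as follows (toy descriptors, threshold $0.6$ as quoted above):

```python
import numpy as np

def ratio_test_match(desc, candidates, ratio=0.6):
    # Lowe-style distance-ratio test: accept the nearest candidate only
    # if it is clearly closer than the second nearest one.
    d = np.linalg.norm(candidates - desc, axis=1)
    order = np.argsort(d)
    best, second = order[0], order[1]
    return int(best) if d[best] <= ratio * d[second] else None

desc = np.zeros(4)
# One clearly closer candidate: the match is accepted (index 0).
m1 = ratio_test_match(desc, np.array([[0.1, 0, 0, 0], [1.0, 0, 0, 0]]))
# Two near-equal candidates: the match is rejected as ambiguous (None).
m2 = ratio_test_match(desc, np.array([[1.0, 0, 0, 0], [1.05, 0, 0, 0]]))
```

The function names and toy data are ours, chosen for illustration; the decision rule itself is exactly the accept/reject criterion quoted in the text.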
Interestingly, Lowe justifies this threshold by a probabilistic argument: if the second best match is almost as good as the first, this only means that both matches are likely to occur by chance. Thus, they must be rejected. Recently, [@Rabin07] proposed a rigorous theory for this intuitive method. SIFT matches are accepted or rejected by an *a contrario* methodology involving the Earth Mover’s Distance. The *a contrario* methodology has also already been used in stereo matching. [@Moisan04] proposed a probabilistic criterion to detect a rigid motion between two point sets taken from a stereo pair, and to estimate the fundamental matrix. This method, ORSA, shows improved robustness compared to a classic RANSAC method. In the context of foreground detection in video, [@Mittal04] proposed an *a contrario* method for discriminating foreground from background pixels that was later refined by [@Patwardhan08]. Even though this problem has some points in common with stereo matching, it is in a way less strict, since it only needs to learn to discriminate two classes of pixels. Hence they do not need to resort to image blocks, but rely only on a 5-dimensional feature vector composed of the color and motion vector of each pixel. Among influential related works, Robin [*et al.*]{} [@Robin09] describe a method for change detection in a time series of Earth observation images. The change region is defined as the complement of the maximal region where the time series does not change significantly. Thus, what is controlled by the [*a contrario*]{} method is the number of false alarms (NFA) of the no-change region. This method can therefore be regarded as an [*a contrario*]{} region matching method. It is fundamentally different from the method we shall present. Indeed, Robin’s method assumes (in addition to the statistical background model) a statistical image model that the time series follows in the regions where no change occurs, which is not feasible in stereo matching. 
The method in [@Nee08] is also worth mentioning. It is an *a contrario* method for detecting similar regions between two images. This method is a classic statistical test rather than an *a contrario* detection method in the sense of [@Desolneux07]. Indeed, the roles of the background model ($H_0$ hypothesis) and the structure to be tested ($H_1$ hypothesis) are reversed: this method only controls the false negative rate and not the false positive rate (as in typical *a contrario* methods). Furthermore, the significance level of the statistical test is set to $\alpha \approx 0.1$ in accordance with classical statistical testing, whereas, as demonstrated in [@Desolneux07], the significance level can be made much more secure, of the order of $10^{-6}$. The [*a contrario*]{} model for region matching in stereo vision used in [@Igual07] is simple and efficient. The gradient orientations at all region pixels are assumed independent and uniformly distributed in the background model. A more elaborate version learns the probability distribution of gradient orientation differences under the hypothesis that the disparity (or motion) is zero, and uses this distribution as a background model. Still, pixels are all considered as independent under the background model. Once this background model is learned, a given disparity (or motion model) is considered as meaningful if the number of aligned gradient orientations is sufficiently large within the tested region. This region matching method works well, but requires an initial over-segmentation of the gray-level image which is later refined by an *a contrario* region merging procedure. Because of the rough background model, false positive region matches can be observed. The key to a good background or [*a contrario*]{} model in block-matching would be to learn a realistic probability distribution of the high-dimensional space of image patches. 
The seminal works [@Muse06] and [@Cao08] in the context of shape matching (where shapes are represented as pieces of level lines of a fixed size) showed that high-dimensional shape distributions can be efficiently approximated by the tensor product of (well chosen) marginal distributions. The marginal laws are one-dimensional, and therefore easily learned. In [@Muse03] these marginals are learned along the orientations of the principal components. The present work can be viewed as an extension of this curve matching method to block-matching. [@Burrus09] proposed an alternative way of choosing detection thresholds such that the number of false detections under a given background model is ensured to stay below a given threshold. The procedure does not require analytical computations or decomposing the probability as a tensor product of marginal distributions. Instead, detection thresholds are learned by Monte-Carlo simulations in a way that ensures the target NFA rate. This method, which was developed in the context of image segmentation, involves the definition of a set of thresholds to determine whether two neighboring regions are similar. However, as in [@Nee08], the detected event whose false positive rate is controlled is *“the two regions are different,”* and not the one we are interested in in the case of region matching, namely *“the two regions are similar.”* In conclusion, the [*a contrario*]{} methodology is expanding to many matching decision rules, but does not seem to have been previously applied to the block-matching problem. We shall now proceed to describe the [*a contrario*]{} or background model for block-matching. The proposed model is the simplest that worked, but the reader may wonder if a still simpler model could actually work. In the next section we analyze a list of simpler proposals, and we explain why they must be discarded. Choosing an Adequate *A Contrario* Model for Patch Comparison. 
--------------------------------------------------------------

The goal of this section is to reject simpler alternatives to the probabilistic block model that will be used. In recent years, patch models and patch spaces have become increasingly popular. We refer to [@Mairal08] and references therein for algorithms generating sparse bases of patch spaces. Here, our goal can be formulated in a single question, which clearly depends on the observed set of patches in one particular image and not on the probability space of [*all*]{} patches. The question is: “[*What is the probability that, given two images and two similar patches in these images, this similarity arises just by chance?*]{}” The “just by chance” implies the existence of a stochastic [*background model*]{}, often called the [*a contrario*]{} model. When trying to define a well suited model for image blocks, many possibilities open up. Simple arguments show, however, that over-simplified models do not work. Let $H$ be the gray-level histogram of the second image $I'$. The simplest [*a contrario model*]{} of all might simply assume that the observed values $I'(\bx)$ are instances of i.i.d. random variables $\mathcal{I}'(\bx)$ with cumulative distribution $H$. This would lead us to declare that pixels $\bq$ in image $I$ and $\bq'$ in image $I'$ are a meaningful match if their gray level difference is unlikely small, $${\mathbb{P}}[ |I(\bq) - \mathcal{I}'(\bq')| \leq |I(\bq) - I'(\bq')| := \theta ] \leq \frac{1}{N_{tests}}.$$ As we shall see later, the number of tests $N_{tests}$ is quite large in this case ($N_{tests} \approx 10^7$ for typical image sizes), since it must account for all possible pairs of pixels $(\bq,\bq')$ that may match. But such a small probability can be achieved (assuming that $H$ is uniform over $[0,255]$) only if the threshold satisfies $\theta = |I(\bq) - I'(\bq')| < 128 \cdot 10^{-7}$. 
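The order of magnitude of this threshold follows from a two-line computation. The sketch below assumes, as in the text, a uniform background model over $[0,255]$, under which ${\mathbb{P}}[|I(\bq) - \mathcal{I}'(\bq')| \leq \theta] \approx 2\theta/256$:

```python
def max_threshold(n_tests, gray_range=256.0):
    # Under the uniform background model, P[|I(q) - I'(q')| <= theta] is
    # approximately 2*theta/gray_range; requiring this to be <= 1/n_tests
    # bounds the admissible threshold theta.
    return gray_range / (2.0 * n_tests)

print(max_threshold(1e7))  # 1.28e-05 gray levels, i.e. 128 * 10^-7
```

This confirms that a single-pixel test can only be meaningful for an absurdly small gray-level difference.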
On the other hand, $|I(\bq) - I'(\bq')|$ cannot be expected to be very small because both images are corrupted by noise, among other distortions. Even in a very optimistic setting, where there would be only a small noise distortion between both images (of about 1 gray level standard deviation), such a small difference would occur only for a tiny proportion ($3.2\cdot 10^{-5}$) of the correct matches. This means that a pixel-wise comparison would require an extremely strict detection threshold to ensure the absence of false matches, but this leads to an extremely sparse detection (about thirty meaningful matches per mega-pixel image). This suggests that the use of local information around the pixel is unavoidable. The next simplest approach could be to compare blocks of a certain size $\sqrt{s} \times \sqrt{s}$ with the $\ell^2$ norm, and with the same background model as before. Thus, we could declare blocks $B_{\bq}$ and $B_{\bq'}$ meaningfully similar if $$\begin{gathered} {\mathbb{P}}\left[ \frac{1}{|B_0|} \sum_{\bx\in B_0} |I(\bq+\bx) - \mathcal{I}'(\bq'+\bx)|^2 \leq \right. \\ \left. \frac{1}{|B_0|} \sum_{\bx\in B_0} |I(\bq+\bx) - I'(\bq'+\bx)|^2 := \theta \right] \leq \frac{1}{N_{tests}} \end{gathered}$$ where $B_0$ is the block of size $\sqrt{s} \times \sqrt{s}$ centered at the position $(0,0)$. Now the test would be passed for a much more reasonable threshold ($\theta = 6, 28, 47$ for blocks of size $3 \times 3$, $5\times 5$, $7\times 7$ respectively), which would ensure a much denser response. However, this [*a contrario*]{} model is far too naive and produces many false matches. Indeed, blocks stemming from natural images are much more regular than the white noise generated by the background model. Considering all pixels in a block as independent leads to underestimating the probability that two observed blocks be similar. It therefore leads to an over-detection. 
In order to fix this problem, we need a background model that better reflects the statistics of natural image blocks. But directly learning such a probability distribution from a single image in dimension 81 (for $9\times 9$ blocks) is hopeless. Fortunately, as pointed out in [@Muse06], high-dimensional distributions of shapes can be approximated by the tensor product of their adequately chosen marginal distributions. Such marginal laws, being one-dimensional, are easily learned from a single image. Ideally, ICA (Independent Component Analysis) should be used to learn which marginal laws are the most independent, but the simpler PCA will prove accurate enough for our purposes. Indeed, it ensures that the principal components are decorrelated, a first approximation to independence. Fig. \[patches\] gives a visual assessment of how well a local PCA model simulates image patches in a class. Nevertheless, the independence assumption will only be used as a tool for building the [*a contrario*]{} model; it is an assumption, not an empirical finding on the set of patches.

Plan
----

Section \[NeighborhoodComparison\] introduces the stochastic block model learned from a reference image. Section \[TheAContrarioModel\] presents the [*a contrario*]{} method applied to disparity estimation in stereo pairs and treats the main problem of deciding whether two pixels match. Theorem \[Laseuleproposition\] is the main result of this section, ensuring a controlled number of false detections. Section \[sec:autosimilarity-threshold\] tackles the stroboscopic problem by a parameterless method, and demonstrates the necessity and complementarity of the [*a contrario*]{} and self-similarity rejections. Experimental results and comparisons with other methods are given in Section \[sec:experimental\_results\]. Section \[Conclusions\] concludes. An appendix summarizes the algorithm and gives its complete pseudo-code. 
The [*a contrario*]{} Model for Block-Matching {#NeighborhoodComparison}
==============================================

We shall denote by $\bq\!\!=\!\!(q_1,q_2)$ a pixel in the reference image $I$ and by $B_\bq$ a block centered at $\bq$. To fix ideas, the block will be a square throughout this paper, but this is by no means a restriction. A different shape (rectangle, disk) would be possible, and even a variable shape. Given a point $\bq$ and its block $B_{\bq}$ in the reference image, block-matching algorithms look for a point $\bq'$ in the second image $I'$ whose block $B_{\bq'}$ is similar to $B_{\bq}$.

Principal Component Analysis {#PCA}
----------------------------

![Left: reference image of a stereo pair. Right: the nine first principal components of the $7\times 7$ blocks. []{data-label="pcs_partition"}](Figure1_1.eps){width="3.5cm"} ![](Figure1_2.eps "fig:"){width="1cm"} ![](Figure1_3.eps "fig:"){width="1cm"} ![](Figure1_4.eps "fig:"){width="1cm"}\
![](Figure1_5.eps "fig:"){width="1cm"} ![](Figure1_6.eps "fig:"){width="1cm"} ![](Figure1_7.eps "fig:"){width="1cm"}\
![](Figure1_8.eps "fig:"){width="1cm"} ![](Figure1_9.eps "fig:"){width="1cm"} ![](Figure1_10.eps "fig:"){width="1cm"}\

For building a simple [*a contrario*]{} model, principal component analysis plays a crucial role, as shown in [@Muse03]. Indeed, it allows for effective dimension reduction and decorrelates the retained dimensions, giving a first approximation to independence. This facilitates the construction of a probability density function for the blocks as a tensor product of its marginal densities. Let $B_{\bq}$ be the block of a pixel $\bq$ in the reference image and $(x_{1}^{\bq}, \ldots ,x_{s}^{\bq})$ the intensity gray levels in $B_{\bq}$, where $s$ is the number of pixels in $B_{\bq}$. Let $n$ be the number of pixels in the image. Consider the matrix $X=(x_{i}^{j})$, $1 \leq i \leq s$, $1 \leq j \leq n$, consisting of the set of all data vectors, one column per pixel in the image. Then the covariance matrix of the blocks is $C= \mathbb{E} (X-\bar{\bx}\textbf{1})(X-\bar{\bx}\textbf{1})^{T}$, where $\bar{\bx}$ is the column vector of size $s \times 1$ storing the mean values of the rows of $X$ and $\textbf{1}=(1, \cdots, 1)$ is a row vector of size $1 \times n$. Notice that $\bar{\bx}$ corresponds to the block whose $k$-th pixel is the average of all $k$-th pixels of all blocks in the image. Thus, $\bar{\bx}$ is very close to a constant block, with the constant equal to the image average. 
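The covariance construction above can be illustrated on a synthetic image. This is a minimal sketch: the toroidal block extraction and the random test image are conveniences for self-containment, not part of the method.

```python
import numpy as np

def block_pca(img, k=7):
    """Collect all k x k blocks of img (one s-dimensional column per pixel,
    toroidal boundary for simplicity) and diagonalize their covariance."""
    s = k * k
    # X has one column per pixel, each column listing the k*k block values.
    cols = []
    for dy in range(k):
        for dx in range(k):
            cols.append(np.roll(np.roll(img, -dy, axis=0), -dx, axis=1).ravel())
    X = np.array(cols)                       # shape (s, n)
    xbar = X.mean(axis=1, keepdims=True)     # mean block, close to a constant block
    C = (X - xbar) @ (X - xbar).T / X.shape[1]
    eigval, eigvec = np.linalg.eigh(C)       # symmetric C: ascending eigenvalues
    order = np.argsort(eigval)[::-1]         # sort by decreasing eigenvalue
    return eigval[order], eigvec[:, order]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
eigval, eigvec = block_pca(img, k=7)
# The principal components form an orthonormal basis of the block space:
print(np.allclose(eigvec.T @ eigvec, np.eye(49)))  # True
```

On a natural image (rather than noise), the leading eigenvector would be close to the constant block, as discussed below.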
The eigenvectors of the covariance matrix are called principal components and are orthogonal. They give the new coordinate system we shall use for blocks. Fig. \[pcs\_partition\] shows the first principal blocks. Usually, the eigenvectors are sorted in order of decreasing eigenvalue. In that way the first principal components are the ones that contribute most to the variance of the data set. By keeping the $N<s$ components with the largest eigenvalues, the dimension is reduced while the significant information is retained. While this global ordering could be used to select the main components, a local ordering for each block will instead be used for the statistical matching rule. In other words, for each block, a new order of the principal components will be established, given by the corresponding PCA coordinates sorted by decreasing absolute value. In that way, comparisons of these components will be made from the most meaningful to the least meaningful one for this particular block. Each block is represented by $N$ ordered coefficients $(c_{\sigma_{\bq}(1)}(\bq),\ldots,c_{\sigma_{\bq}(N)}(\bq))$, where $c_{i}(\bq)$ is the coefficient obtained by projecting $B_{\bq}$ onto the principal component $i \in \{ 1,\ldots,s \}$ and $\sigma_{\bq}$ is the permutation that sorts the absolute values of the components of this particular $\bq$ in decreasing order. By a slight abuse of notation we will write $c_{i}(\bq)$ instead of $c_{\sigma_{\bq}(i)}(\bq)$, knowing that it represents the local order of the best principal components. Notice that $\sigma_{\bq}(1)=1$ for most $\bq$, because of the dominance of the first principal component. Moreover, this first component has a quite different coefficient histogram from the other ones (see Fig. \[pca\_coordinate\_histograms\]), because it approximately computes a mean value of the block. 
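The per-block reordering just described can be sketched as follows (`coeffs` stands for the PCA coordinates $c_1(\bq),\dots,c_s(\bq)$ of one block; the numbers are made up for illustration):

```python
import numpy as np

def local_order(coeffs, N):
    """Return (sigma, ordered): sigma sorts the block's PCA coefficients by
    decreasing absolute value, truncated to the N most meaningful ones."""
    sigma = np.argsort(-np.abs(coeffs), kind="stable")[:N]
    return sigma, coeffs[sigma]

coeffs = np.array([120.0, -3.5, 40.0, 0.2, -41.0])
sigma, ordered = local_order(coeffs, N=3)
print(list(sigma))    # [0, 4, 2]: the first component usually dominates
print(list(ordered))  # [120.0, -41.0, 40.0]
```

Comparisons between blocks are then made component by component in this block-specific order, from most to least meaningful.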
Indeed, the barycenter of all blocks is roughly a constant block whose average grey value is the image average grey level. The set of blocks is elongated in the direction of the average grey level and, therefore, the first component computes roughly an average grey level of the block. This explains why the first component histogram is similar to the image histogram.

![(a) Patches of the reference image, chosen at random. (b) Simulated random blocks following the law of the reference image. This experiment illustrates the (relative) adequacy of the [*a contrario*]{} model. Nevertheless, the PCA components are empirically uncorrelated, but of course not independent. []{data-label="patches"}](Figure2a_1.eps "fig:") ![](Figure2a_2.eps "fig:") ![](Figure2a_3.eps "fig:") ![](Figure2a_4.eps "fig:")\
![](Figure2a_5.eps "fig:") ![](Figure2a_6.eps "fig:") ![](Figure2a_7.eps "fig:") ![](Figure2a_8.eps "fig:")\
![](Figure2a_9.eps "fig:") ![](Figure2a_10.eps "fig:") ![](Figure2a_11.eps "fig:") ![](Figure2a_12.eps "fig:")\
![](Figure2a_13.eps "fig:") ![](Figure2a_14.eps "fig:") ![](Figure2a_15.eps "fig:") ![](Figure2a_16.eps "fig:")\
(a)\
![](Figure2b_1.eps "fig:") ![](Figure2b_2.eps "fig:") ![](Figure2b_3.eps "fig:") ![](Figure2b_4.eps "fig:")\
![](Figure2b_5.eps "fig:") ![](Figure2b_6.eps "fig:") ![](Figure2b_7.eps "fig:") ![](Figure2b_8.eps "fig:")\
![](Figure2b_9.eps "fig:") ![](Figure2b_10.eps "fig:") ![](Figure2b_11.eps "fig:") ![](Figure2b_12.eps "fig:")\
![](Figure2b_13.eps "fig:") ![](Figure2b_14.eps "fig:") ![](Figure2b_15.eps "fig:") ![](Figure2b_16.eps "fig:")\
(b)\

[*A Contrario*]{} Similarity Measure between Blocks {#TheAContrarioModel}
---------------------------------------------------

\[defacontrariomodel\] We call [*a contrario block model*]{} associated with a reference image a random block $\bB$ described by its (random) components $\bB =(\bc_1, \dots, \bc_s)$ on the PCA basis of the blocks of the reference image, satisfying

- the components $\bc_i$, $i=1, \dots, s$, are independent random variables;

- for each $i$, the law of $\bc_i$ is the empirical histogram of the $i$-th PCA component $c_i(\cdot)$ of the blocks of the reference image.

The reference image will be the secondary image $I'$. Fig. \[patches\] shows patches generated according to the above [*a contrario*]{} block model and compares them to blocks picked at random in the reference image. The [*a contrario*]{} model will be used for computing a block resemblance probability as the product of the marginal resemblance probabilities of the $\bc_i$ in the [*a contrario*]{} model, which is justified by the independence of $\bc_i$ and $\bc_j$ for $i\neq j$. There is a strong adequacy of the [*a contrario*]{} model to the empirical model, since the PCA transform ensures that $\bc_i$ and $\bc_j$ are uncorrelated for $i\neq j$, a first approximation of the independence requirement. We start by defining the resemblance probability between two blocks for a single component. Denote by $H_{i}(\cdot):=H_{i}(c_{i}(\cdot))$ the normalized cumulative histogram of the $i$-th PCA block component $c_{i}(\cdot)$ for the secondary image $I'$. \[defempiricalprobability\] Let $B_{\bq}$ be a block in $I$ and $B_{\bq'}$ a block in $I'$. 
Define the probability that a random block $\bB$ of $I'$ resembles $B_{\bq}$ as closely as $B_{\bq'}$ does in the $i$-th component by $$\widehat{p^{i}}_{\bq\, \bq'}= \begin{cases} H_{i}(\bq') & \text{if $H_{i}(\bq')-H_{i}(\bq) > H_{i}(\bq) $;} \\ 1-H_{i}(\bq') & \text{if $H_{i}(\bq)-H_{i}(\bq') > 1-H_{i}(\bq)$;} \\ 2|H_{i}(\bq)-H_{i}(\bq')| & \text{otherwise.} \end{cases}$$ Fig. \[cumulative\_histo\] illustrates how the resemblance probability $\widehat{p^{i}}_{\bq\, \bq'}$ is computed and Fig. \[pca\_coordinate\_histograms\] shows empirical marginal densities. ![Normalized cumulative histogram of the $i$-th PCA coordinates of the secondary image. $c_i(\bq)$ is the $i$-th PCA coordinate value in the first image. The resemblance probability $\widehat{p^{i}}_{\bq\, \bq'}$ for the $i$-th component is twice the distance $|H_{i}(\bq)-H_{i}(\bq')|$ when $H_{i}(\bq)$ is not too close to the values $0$ or $1$.[]{data-label="cumulative_histo"}](Figure3.eps) ![image](Figure4_1.eps){width="16.00000%"} ![image](Figure4_2.eps){width="16.00000%"} ![image](Figure4_3.eps){width="16.00000%"} ![image](Figure4_4.eps){width="16.00000%"} ![image](Figure4_5.eps){width="16.00000%"} ![image](Figure4_6.eps){width="16.00000%"}

Robust Similarity Distance
--------------------------

The first principal components of $B_\bq$, being sorted in decreasing order, contain the most relevant information on the block. Thus, if two blocks are not similar for one of the first components, they should not be matched, even if their next components are similar. For this reason, the components of $B_\bq$ and of another block $B_{\bq'}$ must be compared with a level of exigency that decreases with the component rank. In addition, in the [*a contrario*]{} model, the number of tested correspondences should be as small as possible to reduce the number of false alarms. A quantization of the tested resemblance probabilities is therefore required to limit the number of tests. 
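Definition \[defempiricalprobability\] above translates directly into code. In this sketch, `Hq` and `Hqp` stand for the cumulative-histogram values $H_i(\bq)$ and $H_i(\bq')$ of the two blocks on the $i$-th component:

```python
def resemblance_prob(Hq, Hqp):
    """One-component resemblance probability (Def. [defempiricalprobability]).
    Hq, Hqp are the normalized cumulative histogram values H_i(q), H_i(q')."""
    if Hqp - Hq > Hq:          # H_i(q) too close to 0: one-sided probability
        return Hqp
    if Hq - Hqp > 1.0 - Hq:    # H_i(q) too close to 1: one-sided probability
        return 1.0 - Hqp
    return 2.0 * abs(Hq - Hqp) # generic case: twice the cumulative distance

print(resemblance_prob(0.5, 0.5))    # 0.0: identical components
print(resemblance_prob(0.5, 0.75))   # 0.5: twice the cumulative distance
print(resemblance_prob(0.05, 0.2))   # 0.2: one-sided case near the lower tail
```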
These two remarks lead to defining the quantized resemblance probability as the smallest non-decreasing sequence of quantized probabilities bounding the sequence $\widehat{ \, p^{i}}_{\bq\, \bq'}$ from above. \[def:quantized-non-decreasing-probability\] Let $B_{\bq}$ be a block in $I$. Let $\Pi:=\{ \pi_j=1/2^{j-1}\}_{j=1,\ldots,Q}$ be a set of quantized probability thresholds and let $$\Upsilon := \left\lbrace \, \bp\!=\!(p_1, \ldots, p_N) \, \mid \, p_i \in \Pi , \quad p_i \leqslant p_j \; \mathrm{if} \; i<j \right\rbrace$$ be the family of non-decreasing $N$-tuples in $\Pi^N$, endowed with the order $\ba\geqslant \bb$ if and only if $a_i\geqslant b_i$ for all $i$. The quantized probability sequence associated with the event that the random block $\bB$ resembles $B_{\bq}$ as closely as $B_{\bq'}$ does in the $i$-th component is defined by $$(p^i_{\bq\,\bq'})_{i=1, \dots N} = \underset{t \in \Upsilon}{\inf} \lbrace t\; \mid\; t\geqslant (\widehat{p^{i}}_{\bq\,\bq'})_{i=1,\dots N} \rbrace \,.$$ Notice that the infimum $(p^1_{\bq \,\bq'}, \ldots, p^N_{\bq\,\bq'}) $ is uniquely defined and belongs to $\Upsilon$. Put another way, the quantized probability vector $(p^1_{\bq \,\bq'}, \ldots, p^N_{\bq\,\bq'})$ is the smallest upper bound of the resemblance probabilities $(\widehat{p^{1}}_{\bq\,\bq'},\ldots,\widehat{p^{N}}_{\bq\,\bq'})$ that can be found in $\Upsilon$. Fig. \[non\_decreasing\_proba\] illustrates the quantized probabilities in two cases. ![Two examples of quantized probabilities with $Q=5$ and $N=9$. The probability thresholds are in ordinate and the features in abscissa. The resemblance probabilities are represented with small crosses and the quantized probabilities with small squares. The example on the left has a final probability of $\nicefrac{1}{(16^2 \cdot 8^2 \cdot 4^4 \cdot 2)}$. The right example has the same resemblance probabilities except for features $1$ and $2$, but its final probability is $\nicefrac{1}{2}$. 
Only the configuration on the left corresponds to a meaningful match.[]{data-label="non_decreasing_proba"}](Figure5.eps) \[resemblanceprobability\] Let $B_{\bq} \in I$ and $B_{\bq'} \in I'$ be two blocks. Assume the principal components $i\in \{1, 2, \dots, s\}$ are reordered so that $|c_1(\bq)|\geqslant |c_2(\bq)|\geqslant \dots \geqslant |c_s(\bq)|$. The probability of the event [“the random block $\bB$ has its $N$ first components as similar to those of $B_{\bq}$ as those of $B_{\bq'}$ are”]{} is $$Pr_{\bq\,\bq'} = \prod_{i=1}^{N}p^{i}_{\bq\,\bq'}\;.$$ This is a direct consequence of Def. \[defacontrariomodel\], the principal components of $\bB$ being independent. The resemblance probability is the product of the marginal resemblance probabilities. As is classical in statistical decision theory, we could stop here and use the above resemblance probability. But even though each individual probability $Pr_{\bq\, \bq'}$ may be low, the large number of resemblance tests still allows for a very large number of false matches. Our next goal is therefore to define a number of false alarms, and not a probability, as the right criterion. To this aim, we need to estimate the number of tests.

Number of Tests
---------------

The number of tests for comparing all the blocks of image $I$ with all the blocks in image $I'$ is the product of three factors. The first one is the image size $\# I$. The second is the size of the search region, denoted by $S'\subset I' $. We mentioned before that the search is done on the epipolar line. In practice, a segment of this line is enough. If $\bq=(q_1,q_2)$ is the reference point, it is enough to look for $\bq'=(q'_1,q_2) \in I'$ such that $q'_1 \in [q_1-R,q_1+R]$, where $R$ is a fixed integer larger than the maximal possible disparity. The third and most important factor is the number of different non-decreasing probability distributions $FC_{N,Q}=\#\Upsilon$ that can be envisaged. 
Of course, not all of these tests are performed, but only the one indicated by the observed block $B_{\bq'}$. Yet the choice of this unique test is steered by an [*a posteriori*]{} observation, while the expectation of the number of false alarms (NFA) must be calculated [*a priori*]{}. Thus we must compute the NFA as though all comparisons, for all quantized non-decreasing probabilities, were performed. A test can never be defined [*a posteriori*]{}; it cannot be steered by the observation. Thus the number of tests is not the number of tests effectively performed. There are $\#\Upsilon$ ways each couple of blocks could [*a priori*]{} be compared. In other terms, $\#\Upsilon$ different distances are [*a priori*]{} tested. Thm. \[Laseuleproposition\] will ultimately justify the following definition. \[defnumberoftest\] With the above notation, we call number of tests for matching two images $I$ and $I'$ the integer $ N_{test}= \#I \cdot \#S' \cdot \# \Upsilon \; = n \,(2R+1) \, FC_{N,Q}. $ With the above notation, $$FC_{N,Q}=\sum_{t=0}^{Q-1} (t+1)\cdot \binom{N+Q-t-3}{Q-t-1} \;,$$ where $$FC_{N,Q} :=\# \{ \,f:[1,N]\rightarrow [1,Q] \, \mid\, f(x)\leqslant f(y),\, \forall x \leq y \, \} .$$ In order to prove this result we write $$\begin{aligned} \overline{FC}_{N,Q} :=\# \{ \, & f:[1,N]\rightarrow [1,Q] \, \mid \,f(1)=1, \; f(N)=Q; \notag \\ & f(x)\leqslant f(y),\, \forall x \leqslant y \, \} \;. \notag\end{aligned}$$ Since $\displaystyle{FC_{N,Q}=\sum_{t=0}^{Q-1} (t+1)\overline{FC}_{N,Q-t}}$ and\ $\overline{FC}_{N,Q}=\binom{N+Q-3}{Q-1}$, the result follows. We are now in a position to define a number of false alarms, which will control the overall number of false detections on the whole image. \[defNFA\] Let $B_{\bq} \in I$ and $B_{\bq'} \in I'$ be two observed blocks. Assume the principal components $i\in \{1, 2, \dots, s\}$ are reordered so that $|c_1(\bq)|\geqslant |c_2(\bq)|\geqslant \dots \geqslant|c_s(\bq)|$. 
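The closed form for $FC_{N,Q}$ (valid for $N \geq 2$) can be cross-checked against a brute-force enumeration of the non-decreasing maps, a quick sanity sketch:

```python
from itertools import product
from math import comb

def fc_formula(N, Q):
    # Closed form from the proposition above.
    return sum((t + 1) * comb(N + Q - t - 3, Q - t - 1) for t in range(Q))

def fc_brute(N, Q):
    # Direct count of non-decreasing maps f : [1,N] -> [1,Q].
    return sum(1 for f in product(range(1, Q + 1), repeat=N)
               if all(f[i] <= f[i + 1] for i in range(N - 1)))

for N, Q in [(2, 2), (3, 4), (5, 3), (9, 5)]:
    assert fc_formula(N, Q) == fc_brute(N, Q)
print(fc_formula(9, 5))  # 715, the value of #Upsilon for N = 9, Q = 5
```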
We define the Number of False Alarms associated with the event [“the random block $\bB$ has its $N$ first components as similar to those of $B_{\bq}$ as those of $B_{\bq'}$ are”]{} by $$NFA_{\bq,\bq'} = N_{test} \cdot Pr_{\bq\,\bq'}= N_{test}\cdot \prod_{i=1}^{N}p_{\bq\,\bq'}^{i},$$ where $N_{test}$ comes from Def. \[defnumberoftest\] and $Pr_{\bq\,\bq'}$ is the probability that the random block $\bB$ has its first $N$ PCA components as similar to those of $B_{\bq}$ as those of $B_{\bq'}$ are (Prop. \[resemblanceprobability\]).

\[def:meaningful\_match\] A pair of pixels $\bq$ and $\bq'$ in a stereo pair $(I, I')$ is an $\epsilon$-meaningful match if $$NFA_{\bq\,\bq'} \leqslant \epsilon \;.$$

The Main Theorem
----------------

As computed above, the NFA has the dimensionality of a number of false alarms [*per image*]{}. An alternative would be to measure the NFA as a number of false alarms per pixel, in which case the number of tests would not contain the image cardinality factor $\# I$. With the proposed NFA, it is up to the users to decide which number of false alarms per image they consider tolerable. The NFA of a match actually gives a security level: the smaller the NFA, the more meaningful the match intuitively is. But Thm. \[Laseuleproposition\] will give the real meaning of the NFA. To state it, we will use a clever trick applied by Shannon in his information theory [@shannon2001mathematical], pages 22-23, namely to treat the probability of an event as a random variable and to play with its expectation. Here the NFA will become a random variable, replacing $B_{\bq'}$ with $\bB$ in its definition. In the [*a contrario*]{} model, each comparison of $B_\bq$ with some $B_{\bq'}$ is interpreted as a comparison of $B_\bq$ to a trial of the random block model $\bB$. In total, $B_\bq$ is compared with $(2R+1)$ other blocks for each $\bq\in I$. So, we are led to distinguish for each $\bq$ $(2R+1)$ trials which are as many i.i.d.
random blocks $\bB^{\bq, j}$, $j\in\{1, 2, \dots, 2R+1\}$, all with the same law as $\bB$. They model [*a contrario*]{} the $(2R+1)$ trials by which $B_\bq$ is matched to $(2R+1)$ blocks in $I'$. We are interested in the expectation of the number of such trials being successful (i.e. $\varepsilon$-meaningful), “just by chance.” Consider the event $E_{\bq,j}$ that a random block $\bB^{\bq, j}$ in the [*a contrario*]{} model with reference image $I'$ meaningfully matches $B_\bq$. If this happens, it is obviously a [*false alarm*]{}. We shall denote by $\chi_{\bq,j}$ the random characteristic function associated with this event, with the convention that $\chi_{\bq, j}=1$ if $E_{\bq, j}$ is true, $\chi_{\bq, j}=0$ otherwise. Similarly $NFA_{\bq,j}$ and $p^i_{\bq,j}$ are the NFA and quantized probabilities associated with the event $E_{\bq,j}$.

\[Laseuleproposition\] Let $\Gamma=\sum_{\bq\in I, j\in \{1, \dots, 2R+1\}} \chi_{\bq, j}$ be the random variable representing the number of occurrences of an $\epsilon$-meaningful match between a deterministic patch in the first image and a random patch in the second image. Then the expectation of $\Gamma$ is less than or equal to $\epsilon$.

We have $$\chi_{\bq, j} = \left\{ \begin{array}{cc} 1, & \; \text{if } NFA_{\bq, j}\leqslant\epsilon;\\ 0, & \; \text{if } NFA_{\bq, j} > \epsilon. \end{array} \right.$$ Then, by linearity of expectation, $$\mathbb{E} [\Gamma] = \sum_{\bq, j} \mathbb{E}[\chi_{\bq,j}] = \sum_{\bq, j} {\mathbb{P}}\left[ NFA_{\bq, j} \leqslant \epsilon \right].$$ The probability inside the above sum can be computed by Definitions \[defNFA\] and \[defempiricalprobability\]: $$\mathbb{P} \left[ NFA_{\bq, j} \leqslant \epsilon \right] = \mathbb{P} \left[ \, \prod_{i=1}^{N} p^{i}_{\bq, j} \leqslant \frac{\epsilon}{N_{test}} \, \right]$$ Many probability $N$-tuples $p=(p^i_{\bq, j})_{i=1,\dots, N}$ can satisfy the inequality inside the above probability.
Nevertheless, the probabilities having been quantized, we can reduce it to a (non-disjoint) union of events, namely all $p \in \Upsilon$ such that $ \prod_{i} p_i \leqslant \epsilon/N_{test}$. By the Bonferroni correction, the considered probability can be upper-bounded by the sum of their probabilities. In addition, the intersection below involves only independent events, according to our background model. Thus $$\begin{aligned} \mathbb{P} \left[ \prod_{i=1}^{N} p^{i}_{\bq, j} \leqslant \frac{\epsilon}{N_{test}}\right] & = \mathbb{P} \left[ \bigcup_{\substack{ p \in \Upsilon\\ \prod_{i} p_i \leqslant \epsilon/N_{test} }} \bigcap_{i} \big(p^{i}_{\bq, j}\leqslant p_i \big) \right]\\ &\leqslant \sum_{\substack{p \in \Upsilon \\ \prod_{i} p_i \leqslant \epsilon/N_{test} }} \prod_{i} p_i \\ & \leqslant \; \dfrac{\epsilon}{\#I \, \#S'},\\\end{aligned}$$ where we have also used $N_{test} = \#I \, \#S' \, \#\Upsilon$. So we have shown that $$\mathbb{E}[\Gamma] = \sum_{\bq, j} \mathbb{E} \left[ \chi_{\bq, j} \right] \leqslant \sum_{\bq, j} \dfrac{\epsilon}{\#I \, \#S'} = \epsilon.$$

The $\epsilon$ parameter is the only legitimate parameter of the method; the other ones, namely the block size $\sqrt s$, the number of principal components $N$ and the number of quantized probability thresholds $Q$, can be fixed once and for all for a given SNR (Signal to Noise Ratio). All experiments are made with a common SNR, but a lower SNR would allow smaller blocks and consequently a different set of parameters. The question of how many false alarms should be acceptable in a stereo pair depends on the size of the images. In all experiments with moderate size images, of the order of $10^6$ pixels, the decision was to fix $\varepsilon=1$. Thanks to Theorem \[Laseuleproposition\] this means that one false alarm is expected on average for images with $10^6$ pixels. Then, fixing $\varepsilon$ makes the method into a parameterless method for all moderately sized images.
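Combining Def. \[defnumberoftest\] and Def. \[def:meaningful\_match\], the accept/reject decision reduces to comparing a product of quantized marginal probabilities with $\epsilon/N_{test}$. A minimal sketch (ours; the function names and the toy marginal values are illustrative, not taken from the experiments):

```python
import math

def nfa(p_marginals, n_pixels, R, fc):
    # NFA = N_test * prod_i p^i, with N_test = n (2R+1) FC_{N,Q}
    n_test = n_pixels * (2 * R + 1) * fc
    return n_test * math.prod(p_marginals)

def is_meaningful(p_marginals, n_pixels, R, fc, eps=1.0):
    # epsilon-meaningful match test of Def. [def:meaningful_match]
    return nfa(p_marginals, n_pixels, R, fc) <= eps

# With N_test ~ 2e9 (512x512 image, R = 5, FC_{9,5} = 715), nine marginals
# of 0.1 give NFA ~ 2 (rejected), while marginals of 0.05 give NFA ~ 0.004.
FC_9_5 = 715
print(is_meaningful([0.1] * 9, 512 * 512, 5, FC_9_5))   # False
print(is_meaningful([0.05] * 9, 512 * 512, 5, FC_9_5))  # True
```

The example shows how demanding the criterion is: with $\varepsilon=1$ and a million-pixel image, each marginal resemblance probability must be quite small for a match to be declared meaningful.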
The Self-Similarity Threshold {#sec:autosimilarity-threshold}
=============================

Urban environments contain many periodic local structures (for example the windows on a façade). Since, in general, the number of repetitions is insignificant with respect to the number of blocks that have been used to estimate the empirical [*a contrario*]{} probability distributions, the [*a contrario*]{} model does not learn this repetition and can be fooled by it, signaling a significant match for each repetition of the same structure. Of course, one of those significant matches is the correct one, but chances are that the correct one is not the most significant. In such a situation two choices are left: *(i)* try to match the whole set of self-similar blocks of $I$ as a single multi-block (typically, global methods such as graph-cuts do that implicitly); or *(ii)* remove any (probably wrong) response in the case where the stroboscopic effect is detected. The first alternative would lead to errors anyway, if the similar blocks do not have the same height, or if some of them are out of field in one of the images. Fortunately, stereo pair block-matching yields a straightforward adaptive threshold. Given a distance function $d$ between blocks, let $\bq$ and $\bq'$ be points in the reference and secondary images respectively that are candidates to match with each other. The match of $\bq$ and $\bq'$ will be accepted if the following self-similarity (SS) condition is satisfied: $$\label{test_ss} d(B_{\bq},B_{\bq'}) < \min\{ d(B_{\bq},B_{\br}) \mid \; \br=(r_1,q_2)\in I,\ r_1 \in S(\bq)\}$$ where $S(\bq)=[q_1-R\,,\,q_1+R] \, \backslash \,\{q_1-1,\,q_1,\,q_1+1\}$ and $R$ is the search range. As noted earlier, the search for correspondences can be restricted to the epipolar line. This is why the automatic threshold is restricted to $S(\bq)$.
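The SS condition can be sketched in a few lines. The snippet below is our own illustration (the function names, the convention $\bq=(q_1,q_2)=$ (column, row), and the synthetic noise image are assumptions); it uses the SSD distance that the next section adopts for this test:

```python
import numpy as np

def ssd(a, b):
    # sum of squared differences between two equally sized blocks
    d = a.astype(np.float64) - b.astype(np.float64)
    return float((d * d).sum())

def passes_self_similarity(I, q, B_candidate, R, s=9):
    """Accept the candidate block only if B_q is strictly closer to it than
    to every block of I centred on the same row, at columns q1-R..q1+R
    excluding {q1-1, q1, q1+1}, as in the SS condition above."""
    q1, q2 = q
    h = s // 2
    B_q = I[q2 - h:q2 + h + 1, q1 - h:q1 + h + 1]
    self_dists = [ssd(B_q, I[q2 - h:q2 + h + 1, r - h:r + h + 1])
                  for r in range(q1 - R, q1 + R + 1)
                  if abs(r - q1) > 1 and h <= r < I.shape[1] - h]
    return ssd(B_q, B_candidate) < min(self_dists)

rng = np.random.default_rng(0)
I = rng.normal(size=(40, 40))   # synthetic noise image, no repeated structure
q = (20, 20)
exact_copy = I[16:25, 16:25]    # a perfect 9x9 match: SSD = 0
print(passes_self_similarity(I, q, exact_copy, R=5))  # True
```

On a noise image the test accepts a perfect copy, since no same-row block can tie an SSD of zero; on a periodic façade the minimum over $S(\bq)$ drops to nearly zero and every candidate is rejected, which is exactly the intended behaviour.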
The distance used in the self-similarity threshold is the sum of squared differences (SSD) of all the pixels in the block, and the block size is the same as the block size used for ACBM. Computing the similarity of matches in one of the images is not a new idea in stereovision. In [@Manduchi99] the authors define the *distinctiveness* of an image point $\bq$ as the perceptual distance to the most similar point other than itself in the search window. In particular, they study the case of the auto-SSD function (Sum of Squared Differences computed in the same image). The flatness of this function indicates the expected match accuracy, and the height of the smallest minimum of the auto-SSD function besides the one at the origin gives the risk of mismatch. They are able to match ambiguous points correctly by matching intrinsic curves [@Tomasi98]. However, the proposed algorithm only accepts matches when their quality is above a certain threshold. The obtained disparity maps are rather sparse and the accepted matches are completely concentrated on the edges of the image. According to [@Sara02], ambiguous correspondences should be rejected. In that work a new *stability property* is defined. This property is a condition a set of matches must satisfy to be considered unambiguous at a given confidence level. The stability constraint, together with the tuning of two parameters, makes it possible to handle flat or periodic autocorrelation functions. This last algorithm will be compared with our results in Section \[sec:experimental\_results\].

*A Contrario* vs Self-Similarity
--------------------------------

Is the self-similarity (SS) threshold really necessary? One may wonder whether the *a contrario* decision rule to accept or reject correspondences between patches would be sufficient by itself. Conversely, is the self-similarity threshold enough to reject false matches in a correlation algorithm?
This section addresses both questions and analyzes some simple examples illustrating the necessity and complementarity of both tests. For each example we compare the result of the [*a contrario*]{} test with the result of a classic correlation algorithm combined with the self-similarity threshold alone. First consider two independent Gaussian noise images (Fig. \[fig:noise\]). It is obvious that we would like to reject any possible match between these two images. As expected (this is a sanity check!), the *a contrario* test rejects all the possible patch matches. On the other hand, the correlation algorithm combined with the self-similarity threshold is not sufficient: many false matches are accepted. ![(a) Reference noise image. (b) No match at all has been accepted by the *a contrario* test! (c) Many false correspondences have been accepted by the self-similarity threshold.[]{data-label="fig:noise"}](Figure6_1.eps "fig:"){height="2.5cm"}\ (a)\ ![(a) Reference noise image. (b) No match at all has been accepted by the *a contrario* test! (c) Many false correspondences have been accepted by the self-similarity threshold.[]{data-label="fig:noise"}](Figure6_2.eps "fig:"){height="2.5cm"}\ (b)\ ![(a) Reference noise image. (b) No match at all has been accepted by the *a contrario* test! (c) Many false correspondences have been accepted by the self-similarity threshold.[]{data-label="fig:noise"}](Figure6_3.eps "fig:"){height="2.5cm"}\ (c)\ The second comparative test is about occlusions. If a point of the scene can be observed in only one of the images of the stereo pair, then an estimation of its disparity is simply impossible. The best decision is to reject its matches. A good example to illustrate the performance of both rejection tests, ACBM and SS, is the Map image (Middlebury stereovision database, Fig. \[map\_ac\_as\]), which has a large baseline and therefore an important number of occluded pixels.
ACBM gives again the best result (see Table \[table\_map\_ac\_as\]). The table indicates that the self-similarity test only removes a few additional points. Yet, even if the proportion of eliminated points is tiny, such mismatches can be very annoying and the gain is not negligible at all. ![(a) Reference image (b) Secondary image. The rectangular object occludes part of the background (c) The *a contrario* test does not accept any match for pixels in the occluded areas. (d) With the self-similarity threshold the disparity map is denser, but wrong disparities remain in the occluded region.[]{data-label="map_ac_as"}](Figure7_1.eps "fig:"){height="3cm"}\ (a)\ ![(a) Reference image (b) Secondary image. The rectangular object occludes part of the background (c) The *a contrario* test does not accept any match for pixels in the occluded areas. (d) With the self-similarity threshold the disparity map is denser, but wrong disparities remain in the occluded region.[]{data-label="map_ac_as"}](Figure7_2.eps "fig:"){height="3cm"}\ (c)\ ![(a) Reference image (b) Secondary image. The rectangular object occludes part of the background (c) The *a contrario* test does not accept any match for pixels in the occluded areas. (d) With the self-similarity threshold the disparity map is denser, but wrong disparities remain in the occluded region.[]{data-label="map_ac_as"}](Figure7_3.eps "fig:"){height="3cm"}\ (b)\ ![(a) Reference image (b) Secondary image. The rectangular object occludes part of the background (c) The *a contrario* test does not accept any match for pixels in the occluded areas. 
(d) With the self-similarity threshold the disparity map is denser, but wrong disparities remain in the occluded region.[]{data-label="map_ac_as"}](Figure7_4.eps "fig:"){height="3cm"}\ (d)\

            Bad matches   Total matches
--------- ------------- ---------------
SS                3.35%          85.86%
ACBM              0.37%          64.85%
ACBM+SS           0.36%          64.87%

: Quantitative comparison of several algorithms on Middlebury’s Map image: the block-matching algorithm with the self-similarity threshold (SS), the *a contrario* algorithm (ACBM) and the algorithm combining both (ACBM+SS). The percentage of matches for each algorithm is computed on the whole image, and among these the number of wrong matches is also given. A match is considered wrong if its disparity difference with the ground truth disparity is larger than one pixel.[]{data-label="table_map_ac_as"}

The [*a contrario*]{} methodology cannot detect the ambiguity inherent in periodic patterns. Indeed, periodicity certainly does not occur “just by chance.” The match between a window and another identical window on a building façade is obviously not casual and is therefore legitimately accepted by an [*a contrario*]{} model. In this situation, the self-similarity test is necessary. A synthetic case is considered in Fig. \[brodatz\_ratlla\], where the correspondences accepted by the *a contrario* test are completely wrong for the repeated lines. By contrast, the self-similarity threshold is able to reject matches in this region of the image. ![(a) Reference image with a texture and a stripes periodic motif. The secondary image is a 2 pixels translation of the reference image. The obtained disparity map should be a constant image with value 2. (b) The *a contrario* test gives the right disparity 2 everywhere, except in the stripes region. (c) The repeated stripes are locally similar, so the self-similarity threshold rejects all the patches in this region.
[]{data-label="brodatz_ratlla"}](Figure8_1 "fig:"){height="2.4cm" width="2.4cm"}\ (a)\ ![(a) Reference image with a texture and a stripes periodic motif. The secondary image is a 2 pixels translation of the reference image. The obtained disparity map should be a constant image with value 2. (b) The *a contrario* test gives the right disparity 2 everywhere, except in the stripes region. (c) The repeated stripes are locally similar, so the self-similarity threshold rejects all the patches in this region. []{data-label="brodatz_ratlla"}](Figure8_2.eps "fig:"){height="2.5cm" width="2.5cm"}\ (b)\ ![(a) Reference image with a texture and a stripes periodic motif. The secondary image is a 2 pixels translation of the reference image. The obtained disparity map should be a constant image with value 2. (b) The *a contrario* test gives the right disparity 2 everywhere, except in the stripes region. (c) The repeated stripes are locally similar, so the self-similarity threshold rejects all the patches in this region. []{data-label="brodatz_ratlla"}](Figure8_3.eps "fig:"){height="2.5cm" width="2.5cm"}\ (c)\ In short, ACBM and SS are both necessary and complementary. SS only removes a tiny additional number of errors, but even a few outliers can be very annoying in stereo. From now on, a possible match $(\bq, \bq')$ will therefore be accepted only if it is a meaningful match (ACBM test in Def. \[def:meaningful\_match\]) and satisfies the SS condition given by (\[test\_ss\]).

Comparative Results {#sec:experimental_results}
===================

The algorithm parameters are identical for all experiments throughout this paper. The comparison window size is $9\times9$, the number of considered principal components is $N=9$, and the number of quantized probabilities is $Q=5$. The previous section showed how the proposed method (ACBM + SS) deals with noise, occlusions and repeated structures.
The detection method is also adapted to quasi-simultaneous stereo from aerial or satellite images, where moving objects (cars, pedestrians) are a serious disturbance. Essentially, this is the same problem as the occlusion problem, but the occlusion is caused by camera motion in the presence of a depth difference instead of object motion. Figure \[marseille\] shows a stereo pair of images of the city of Marseille (France). Several cars have changed position between the two images; they are duly detected. The shadow regions, which contain more noise than signal, have also been rejected. We have also compared our results with Kolmogorov’s graph cut implementation [@Kolmogorov05], which rejects incoherent matches [*a posteriori*]{} and labels them as occlusions. In these examples, graph cuts is able to reject some mismatches due to the moving objects in the scene, but a lot of conspicuous errors remain in the final disparity map. Likewise, OpenCV’s stereo matching algorithm [@opencv] fails completely on this kind of pair, even though it obtains correct results in simpler examples like the one in Figure \[map\_ac\_as\].\ [ccc]{} [reference image]{} & ![ From top to bottom: reference image, secondary image, ACBM+SS disparity map, graph cuts disparity map, and OpenCV disparity map. In our disparity map, red points are points which haven’t been matched. Notice that patches containing a moved car or bus haven’t been matched. Poorly textured regions (shadows) where noise dominates have also been rejected. Red points in the graph cuts disparity map are rejected [*a posteriori*]{} and considered as occlusions. The graph cuts disparity map is denser and smoother but several mismatches appear in the low textured areas and regions with moving objects.[]{data-label="marseille"}](Figure9_5.eps "fig:"){width="3cm"} & ![ From top to bottom: reference image, secondary image, ACBM+SS disparity map, graph cuts disparity map, and OpenCV disparity map.
In our disparity map, red points are points which haven’t been matched. Notice that patches containing a moved car or bus haven’t been matched. Poorly textured regions (shadows) where noise dominates have also been rejected. Red points in the graph cuts disparity map are rejected [*a posteriori*]{} and considered as occlusions. The graph cuts disparity map is denser and smoother but several mismatches appear in the low textured areas and regions with moving objects.[]{data-label="marseille"}](Figure9_1.eps "fig:"){width="3cm"}\ [secondary image]{} & ![ From top to bottom: reference image, secondary image, ACBM+SS disparity map, graph cuts disparity map, and OpenCV disparity map. In our disparity map, red points are points which haven’t been matched. Notice that patches containing a moved car or bus haven’t been matched. Poorly textured regions (shadows) where noise dominates have also been rejected. Red points in the graph cuts disparity map are rejected [*a posteriori*]{} and considered as occlusions. The graph cuts disparity map is denser and smoother but several mismatches appear in the low textured areas and regions with moving objects.[]{data-label="marseille"}](Figure9_6.eps "fig:"){width="3cm"} & ![ From top to bottom: reference image, secondary image, ACBM+SS disparity map, graph cuts disparity map, and OpenCV disparity map. In our disparity map, red points are points which haven’t been matched. Notice that patches containing a moved car or bus haven’t been matched. Poorly textured regions (shadows) where noise dominates have also been rejected. Red points in the graph cuts disparity map are rejected [*a posteriori*]{} and considered as occlusions. 
The graph cuts disparity map is denser and smoother but several mismatches appear in the low textured areas and regions with moving objects.[]{data-label="marseille"}](Figure9_2.eps "fig:"){width="3cm"}\ [ACBM+SS]{} & ![ From top to bottom: reference image, secondary image, ACBM+SS disparity map, graph cuts disparity map, and OpenCV disparity map. In our disparity map, red points are points which haven’t been matched. Notice that patches containing a moved car or bus haven’t been matched. Poorly textured regions (shadows) where noise dominates have also been rejected. Red points in the graph cuts disparity map are rejected [*a posteriori*]{} and considered as occlusions. The graph cuts disparity map is denser and smoother but several mismatches appear in the low textured areas and regions with moving objects.[]{data-label="marseille"}](Figure9_7.eps "fig:"){width="3cm"} & ![ From top to bottom: reference image, secondary image, ACBM+SS disparity map, graph cuts disparity map, and OpenCV disparity map. In our disparity map, red points are points which haven’t been matched. Notice that patches containing a moved car or bus haven’t been matched. Poorly textured regions (shadows) where noise dominates have also been rejected. Red points in the graph cuts disparity map are rejected [*a posteriori*]{} and considered as occlusions. The graph cuts disparity map is denser and smoother but several mismatches appear in the low textured areas and regions with moving objects.[]{data-label="marseille"}](Figure9_3.eps "fig:"){width="3cm"}\ [graph-cuts]{} & ![ From top to bottom: reference image, secondary image, ACBM+SS disparity map, graph cuts disparity map, and OpenCV disparity map. In our disparity map, red points are points which haven’t been matched. Notice that patches containing a moved car or bus haven’t been matched. Poorly textured regions (shadows) where noise dominates have also been rejected. 
Red points in the graph cuts disparity map are rejected [*a posteriori*]{} and considered as occlusions. The graph cuts disparity map is denser and smoother but several mismatches appear in the low textured areas and regions with moving objects.[]{data-label="marseille"}](Figure9_8.eps "fig:"){width="3cm"} & ![ From top to bottom: reference image, secondary image, ACBM+SS disparity map, graph cuts disparity map, and OpenCV disparity map. In our disparity map, red points are points which haven’t been matched. Notice that patches containing a moved car or bus haven’t been matched. Poorly textured regions (shadows) where noise dominates have also been rejected. Red points in the graph cuts disparity map are rejected [*a posteriori*]{} and considered as occlusions. The graph cuts disparity map is denser and smoother but several mismatches appear in the low textured areas and regions with moving objects.[]{data-label="marseille"}](Figure9_4.eps "fig:"){width="3cm"}\ [OpenCV]{} & ![ From top to bottom: reference image, secondary image, ACBM+SS disparity map, graph cuts disparity map, and OpenCV disparity map. In our disparity map, red points are points which haven’t been matched. Notice that patches containing a moved car or bus haven’t been matched. Poorly textured regions (shadows) where noise dominates have also been rejected. Red points in the graph cuts disparity map are rejected [*a posteriori*]{} and considered as occlusions. The graph cuts disparity map is denser and smoother but several mismatches appear in the low textured areas and regions with moving objects.[]{data-label="marseille"}](Figure9_9.eps "fig:"){width="3cm"} & ![ From top to bottom: reference image, secondary image, ACBM+SS disparity map, graph cuts disparity map, and OpenCV disparity map. In our disparity map, red points are points which haven’t been matched. Notice that patches containing a moved car or bus haven’t been matched. 
Poorly textured regions (shadows) where noise dominates have also been rejected. Red points in the graph cuts disparity map are rejected [*a posteriori*]{} and considered as occlusions. The graph cuts disparity map is denser and smoother but several mismatches appear in the low textured areas and regions with moving objects.[]{data-label="marseille"}](Figure9_10.eps "fig:"){width="3cm"}\ The proposed algorithm will now be compared with the non-dense algorithms of [@Sara02], [@Veksler02], [@Veksler03] and [@Mordohai06], whose aims are comparable. All of these papers have published experimental results on the first Middlebury dataset [@Scharstein02] (Tsukuba, Sawtooth, Venus and Map pair of images), on the non-occluded mask. These four algorithms compute sparse disparity maps and propose techniques rejecting unreliable pixels. We also show some additional comparison with the block matching method implemented in the OpenCV library version 2.2.0 [@opencv], because it is possibly the most widely used one since it comes close to real-time performance. The authors of [@Mordohai06] compute an initial classic correlation disparity map and select correct matches based on the support these pixels receive from their neighboring candidate matches in 3D after tensor voting. 3D points are grouped into smooth surfaces using color and geometric information and the points which are inconsistent with the surface color distribution are removed. The rejection of wrong pixels is not complete, because the algorithm fails when some objects appear only in one image, or when occluded surfaces change orientation. A variation of the critical rejection parameters can lead to quite different results. 
[@Veksler02] detects and matches so-called “dense features,” which consist of a connected set of pixels in the left image and a corresponding set of pixels in the right image such that the intensity edges on the boundary of these sets are stronger than their matching error on the boundary (which is the absolute intensity difference between corresponding boundary pixels). They call this the “boundary condition.” The idea is that even the boundary of a non-textured region can give a correspondence. Then, each dense feature is associated with a disparity. The main limitation is the way dense features are extracted. They are extracted using a local algorithm which processes each scan line independently of the others. As a result, top and bottom boundaries are lost. By contrast, [@Veksler03] uses graph cuts to extract “dense features” (which of course does not necessarily imply a dense disparity map), thus enforcing the boundary condition. The results in [@Veksler02] are rather dense and the error rate is one of the most competitive ones. Yet these good results are also due to the particularly well-adapted structure of the benchmark. Indeed, the Sawtooth, Venus and Map scenes consist of piecewise planar surfaces, with almost fronto-parallel surface patches. The ground truth of Tsukuba is a piecewise constant disparity map with six different disparities. Table \[table\_compar\_sparse\] summarizes the percentage of matched pixels (density) and the percentage of mismatches (where the estimated disparity differs by more than one pixel from the ground truth). This table first reports the results of ACBM+SS, whose error rate is very small and whose match densities are larger than those of Sara [@Sara02]. To compare with other algorithms yielding denser disparity maps, the results of ACBM+SS have been densified by the most straightforward proximal interpolation (a 3$\times$3 spatial median filter).
Doing this, the match density rises significantly while keeping small error rates. Still, large regions containing poor textures, typically shadows in aerial imaging, are impossible to fill in because they contain no information at all. Besides the algorithms compared in Table \[table\_compar\_sparse\], [@Szeliski01] also published non-dense results for the Tsukuba image (error rate of $2.1\%$ with a density of $45\%$), but since non-dense results on the other images are not published, it does not appear in our table. Fig. \[fig:shrub\] compares the ACBM+SS results with OpenCV, graph cuts and Sara’s published results on the classic CMU Shrub pair[^2]. Sara’s disparity map has several mismatches and the ACBM+SS results are obviously denser. On the other hand, Kolmogorov’s graph cut implementation is denser but the mismatches have risen considerably. OpenCV’s disparity map is denser than Sara’s and less dense than Kolmogorov’s, but it also has the highest number of wrong matches. So, the proposed algorithm ACBM+SS has a better trade-off between density and mismatches. In the Kolmogorov graph cuts implementation the occlusions are detected, providing a non-dense disparity map. It is clear that detecting occlusions in real images is not enough to avoid mismatches. Another example is shown in Fig. \[fig:flower\_garden\], where the almost dense disparity map obtained with graph cuts is compared with the ACBM+SS disparity map. With graph cuts, the top left of the image gets a completely wrong disparity: the sky and the tree branches are clearly not at the same depth in the scene. This type of error is unavoidable with global methods. The depth of the smooth sky is inherently ambiguous. By the minimization process it inherits the depth of the twigs through which it is seen. The comparative results raise an interesting question about the error/density trade-off. We have seen that our algorithm gives very low error percentages with densities between 40% and 90%.
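The densification step mentioned above (a 3$\times$3 spatial median filter applied to the sparse disparity map) can be sketched as follows. This is our own illustration; in particular, marking rejected pixels with NaN is our convention:

```python
import numpy as np

def densify_median3x3(disp):
    """Fill each rejected pixel (NaN) with the median of the valid
    disparities in its 3x3 neighbourhood; pixels with no valid neighbour
    remain rejected. One pass of the proximal interpolation above."""
    out = disp.copy()
    H, W = disp.shape
    for y in range(H):
        for x in range(W):
            if np.isnan(disp[y, x]):
                win = disp[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
                valid = win[~np.isnan(win)]
                if valid.size:
                    out[y, x] = np.median(valid)
    return out

d = np.full((4, 4), 2.0)
d[1, 1] = np.nan   # an isolated rejected pixel surrounded by disparity 2
d[3, 3] = np.nan   # a rejected corner pixel with valid neighbours
print(densify_median3x3(d))
```

Isolated holes inside a region of constant disparity are filled with that disparity, while large rejected regions (several pixels wide, as in shadows) are left untouched by a single pass, which matches the behaviour reported in the text.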
The parameter $\varepsilon$ can be increased, but then the error rate will rise. Our goal is to reliably match the points between two images and reject any possible false match. So the choice of one expected false alarm ($\varepsilon=1$) is conservative but ensures a very small error percentage. [*Discussion on the other parameters:* ]{} We have mentioned that the number of considered principal components $N$ and the number of quantized probabilities $Q$ can be increased without noticeable alteration of the results. In practice, the two values are set (for computational reasons) to the minimal values that do not affect the quality of the result. They are fixed once and for all to $N=9$ and $Q=5$ respectively. Another parameter is the search region size ($2R+1$), but it is easy to set since we only need $R$ to be larger than the largest disparity in the image, which is a classic assumption in stereovision algorithms (in practice $R$ can be estimated from the sparse matching of interest points that was previously obtained for the epipolar rectification step). Finally, the last parameter is the size of the block. We know that very small blocks are affected by image noise but, at the same time, the bigger the block, the bigger the fattening error (also named adhesion error). This error becomes apparent at the object borders of the scene, causing a dilation of their real size which is proportional to the block size. The fattening phenomenon is not the object of this paper, but different solutions have already been suggested to avoid it [@Delon07]. Fixing the size of the block to $9 \times 9$ seems to be a good compromise between noise and fattening for realistic SNR conditions, ranging from 200 to 20 (the SNR is measured as the ratio between the average grey level and the noise standard deviation.)
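For concreteness, the SNR convention used above can be written out (a trivial sketch, ours; the grey level and noise values below are illustrative of the stated 200-to-20 range):

```python
import numpy as np

def snr(image, noise_std):
    # SNR as defined above: average grey level divided by the noise
    # standard deviation
    return float(np.mean(image)) / noise_std

img = np.full((8, 8), 100.0)          # hypothetical mean grey level of 100
print(snr(img, 0.5), snr(img, 5.0))   # 200.0 20.0
```

An 8-bit image with mean grey level around 100 thus spans the considered SNR range as the noise standard deviation goes from 0.5 to 5 grey levels.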
[*Computational time:* ]{} For the sake of computational speed, the PCA basis is learnt beforehand on a set of representative images and stored once and for all.[^3] This basis is then used to compute all image coefficients. Notice that only the image coefficients of the second image need to be sorted in order to compute the resemblance probability between all possible matches. With our implementation, which is still not highly optimized for speed, an experiment with a pair of images of size $512 \times 512$ and disparity range $[-5,5]$ takes 4.5 seconds running on a 2.4 GHz Intel Core 2 Duo processor.\
A similar experiment with the OpenCV stereo algorithm takes between 5 and 500 milliseconds. This is much closer to real-time requirements, but the results are also much more data-dependent: good on easy examples like the Middlebury pairs, but much less dense and less reliable than our method on more difficult scenes like shrub, marseille or even the stereo pairs provided with OpenCV.

------------------------------------ ---------- ------------ ---------- ------------ ---------- ------------ ---------- ------------
                                      Error(%)   Density(%)   Error(%)   Density(%)   Error(%)   Density(%)   Error(%)   Density(%)
ACBM + SS                             **0.31**   45.6         **0.09**   65.7         0.02       54.1         **0.0**    84.8
ACBM + SS + Median filter             0.33       54.3         0.14       77.9         **0.0**    66.6         **0.0**    93.0
Sara [@Sara02]                        1.4        45           1.6        52           0.8        40           0.3        74
Veksler 02 [@Veksler02]               0.38       66           1.62       76           1.83       68           0.22       87
Veksler 03 [@Veksler02]               0.36       75           0.54       87           0.16       73           0.01       87
Mordohai and Medioni [@Mordohai06]    1.18       74.5         0.27       78.4         0.20       74.1         0.08       94.2
------------------------------------ ---------- ------------ ---------- ------------ ---------- ------------ ---------- ------------

![CMU Shrub scene. (a) and (b) Reference and secondary images. (c) Method of Sara [@Sara02]. Red points are rejected. Density: 24%. (d) Kolmogorov’s Graph-Cuts [@Kolmogorov05]. Red points are points detected as occlusions. Density: 77%. (e) Proposed method ACBM+SS. Red points are rejected points. Density: 42%. Sara’s disparity map has a lower density and several evident mismatches. Kolmogorov’s disparity map is denser but has many obvious errors. (f) The block matching algorithm included in OpenCV is also not very dense and contains many errors; it is only provided as a reference of what can easily be obtained with a freely available quasi-real-time block matching algorithm.[]{data-label="fig:shrub"}](Figure10_1.eps "fig:"){height="3cm"}\
(a) left image\
![](Figure10_2.eps "fig:"){height="3cm"}\
(c) Sara [@Sara02]\
![](Figure10_3.eps "fig:"){height="3cm"}\
(e) Proposed algorithm\
![](Figure10_4.eps "fig:"){height="3cm"}\
(b) right image\
![](Figure10_5.eps "fig:"){height="3cm"}\
(d) Kolmogorov [@Kolmogorov05]\
![](Figure10_6 "fig:"){height="3cm"}\
(f) OpenCV SGBM

![Flower-garden scene. (a) and (b) Reference and secondary images. (c) Graph Cuts (method of [@Kolmogorov05]). Red points are occluded points. (d) ACBM+SS. Red points are rejected points. Density: 59%. Most rejected points are obviously mismatched by the graph cut algorithm, which equates the depths of trees, sky and house.[]{data-label="fig:flower_garden"}](Figure11_1.eps "fig:"){height="2.5cm"}\
(a)\
![](Figure11_2.eps "fig:"){height="2.5cm"}\
(c)\
![](Figure11_3.eps "fig:"){height="2.5cm"}\
(b)\
![](Figure11_4.eps "fig:"){height="2.45cm"}\
(d)\

Conclusion {#Conclusions}
==========

The absence of reliable thresholds for rejecting wrong matches was, in our opinion, the principal drawback of block-matching algorithms in stereovision. The [*a contrario*]{} block-matching threshold, which was the principal object of the present paper, combined with the self-similarity threshold, is able to detect mismatches systematically, with an algorithm that is essentially parameter-free. Indeed, the only user parameter is the expected number of false matches, which can be fixed once and for all in most applications. The method indiscriminately detects occlusions, moving objects and poorly or periodically textured regions. Mismatches in block-matching have led to the overall dominance of global energy methods. However, global methods have no validation procedure, and the proposed [*a contrario*]{} method can be viewed as a validation procedure, no matter what the stereo matching process was. Block-matching, together with the reliability thresholds established in this paper, gives a fairly dense set of reliable matches (usually from 50$\%$ to 80$\%$). It may be objected that the obtained disparity map is not dense. This objection is not crucial, for two reasons. First, having only validated matches opens the path to benchmarks based on accuracy, and raises the challenge of which precision can ultimately be attained (on [*validated*]{} matches only). Second, knowing which matches are reliable allows one to complete a given disparity map by fusing several stereo pairs. Since multiple observations of the same scene by several cameras and/or at several different times are by now a common setting, it becomes more and more important to be able to fuse 3D information obtained from many stereo pairs.
Having almost only reliable matches in each pair promises an easy fusion. A straightforward solution in our case would be the following: given $m>2$ images, the disparity map between each possible pair of images is computed with ACBM+SS. The final disparity map is then the accumulated disparity map, taking into account all meaningful matches computed on all the image pairs whenever the computed disparities for the same pixel are coherent.

Acknowledgements
================

The authors thank Pascal Getreuer for helpful comments on this work. Work partially supported by the following projects: FREEDOM (ANR07-JCJC-0048-01), Callisto (ANR-09-CORD-003), ECOS Sud U06E01 and STIC Amsud (11STIC-01 - MMVPSCV).

[^1]: http://vision.middlebury.edu/stereo/

[^2]: http://vasc.ri.cmu.edu/idb/html/jisct

[^3]: In our experience the (computationally intensive) choice of this basis does not significantly affect the results, but the (computationally fast) learning of marginal distributions for a particular image on this basis does.
---
abstract: 'We present the zero-temperature phase diagram of the one-dimensional $t_{\rm 2g}$-orbital Hubbard model, obtained using the density-matrix renormalization group and Lanczos techniques. Emphasis is given to the case of electron density $n$=5, corresponding to five electrons per site, of relevance for some Co-based compounds. However, several other cases for electron densities between $n$=3 and 6 are also studied. At $n$=5, our results indicate a first-order transition between a paramagnetic (PM) insulator phase and a fully-polarized ferromagnetic (FM) state by tuning the Hund’s coupling. The results also suggest a transition from the $n$=5 PM insulator phase to a metallic regime by changing the electron density, either via hole or electron doping. The behavior of the spin, charge, and orbital correlation functions in the FM and PM states is also described and discussed. The robustness of these two states as parameters are varied suggests that they may be of relevance in more realistic higher-dimensional systems as well.'
author:
- 'J. C. Xavier'
- 'H. Onishi'
- 'T. Hotta'
- 'E. Dagotto'
date: 'July 20, 2005'
title: |
    Spin, charge, and orbital correlations\
    in the one-dimensional $t_{\rm 2g}$-orbital Hubbard model
---

Introduction
============

The study of the exotic properties of cobalt oxides is an area of investigation that is currently attracting considerable attention in the research field of condensed matter physics. Among the main reasons for this wide effort, the recent discovery of superconductivity in layered two-dimensional triangular lattices of Co atoms with the composition Na$_{\rm x}$CoO$_2$ has certainly triggered a rapid increase of research activities on cobalt oxides. This material becomes superconducting after H$_2$O is intercalated,[@cobalt-SC] opening an exciting area of investigations.
The experimentally unveiled phase diagram of this compound as the Na composition is varied has revealed the existence of several other competing tendencies: Charge-ordered as well as magnetic states are stabilized, in addition to superconductivity.[@cobalt-cava] In the related compound $\rm (Ca_2 Co O_3)(CoO_2)$, an incommensurate spin-density wave has recently been reported.[@cobalt-IC] The existence of such a rich phase diagram is a characteristic of strongly correlated electron systems, where complex behavior typically emerges due to the presence of competing states that have similar energies but vastly different transport and magnetic properties.[@complexity] Additional motivation for the study of Co-oxides arises from recent experimental studies of hole-doped cobaltites in the perovskite form, such as $\rm La_{1-x}Sr_{x}CoO_3$, where clear tendencies toward phase separation between ferromagnetic (FM) metallic and paramagnetic (PM) insulating regions have been found.[@cobalt-PS] This establishes an intriguing qualitative connection between Co-oxides and the famous manganites that exhibit the colossal magnetoresistance effect, widely believed to originate in an analogous mixed-phase tendency exhibited by Mn-oxides.[@book] In fact, a large magnetoresistance has also been observed in some cobaltites, and its origin appears related to phase competition.[@raveau] As a consequence, establishing the dominant ground-state tendencies of simple models for cobaltites is important to envision the possible phase mixtures that may lead to exotic behavior. As a third motivation for the study of cobaltites, it is known that some Co-based compounds have interesting thermoelectric properties. In particular, a huge thermoelectric power has recently been discovered in NaCo$_2$O$_4$ by Terasaki [*et al.*]{},[@terasaki] opening another area of investigations, with a focus on thermoelectric materials mainly for industrial applications.
For all these reasons, the theoretical study of models for Co-oxides is timely and needed in order to guide further experimental developments. Ab-initio calculations have already provided important information in this context,[@singh] and the inclusion of many-body effects is the natural next step. Previous theoretical studies of Co-based systems including Coulombic repulsion have mainly focused on triangular lattices. In this context, recent Monte Carlo investigations unveiled the presence of magnetic correlations.[@maekawa] Fluctuation-exchange approximations also revealed tendencies toward ferromagnetism and possible triplet-pairing instabilities in a multiorbital model.[@ogata] Several approximate studies of $t$-$J$ [@single-band-t-J] and single-band Hubbard models [@single-band-Hubbard] have also been presented. To understand the behavior of complex oxides, the analysis of the many possible tendencies in the ground state, namely the study of the various competing states stabilized as electron density and couplings are modified, is of particular importance. Unfortunately, this task is difficult due to the lack of reliable unbiased analytical techniques. For this reason, the first effort toward a detailed numerical analysis of models for cobaltites is presented here. Instead of directly emphasizing the triangular lattice with approximate techniques, or exactly studying small systems, we have preferred to perform a systematic study of a one-dimensional multiorbital Hamiltonian, exploring in detail the coupling and electron-density parameter space using computationally exact techniques. This level of accuracy is achieved through the use of reliable methods such as the density-matrix renormalization group (DMRG) [@white] and the Lanczos technique.[@review] We envision this effort as a first step toward a systematic computational analysis of more complicated quasi-two-dimensional triangular-lattice systems. The paper is organized as follows. In Sec.
II, the multiorbital model is introduced and the many-body computational techniques used here are briefly discussed. In Sec. III, the main results are presented. These results are organized by the observable studied: First, the $n$=5 phase diagram is discussed, where $n$ denotes the number of electrons per site. Then, the spin correlations are presented at several values of $n$. This is followed by the charge and orbital correlations. Finally, conclusions are presented in Sec. IV. The main result of the paper is the clear dominance of two rather different ground states: (1) a FM state and (2) a PM state with short-range correlations. Both are very robust as couplings and densities are varied. Their higher-dimensional versions may be of relevance for present and future Co-oxide experiments.

Model and Technique
===================

In the investigation reported in this manuscript, we consider a three-orbital Hubbard model, defined on a one-dimensional chain along the $x$-axis with $L$ sites. The three orbitals represent the $t_{\rm 2g}$ orbitals of relevance for cobaltites.
The model is given by $$\begin{aligned} H &=& -\sum_{j,\gamma,\gamma',\sigma}t_{\gamma,\gamma'} \left(d_{j,\gamma\sigma}^{\dagger} d_{j+1,\gamma'\sigma}^{\phantom{\dagger}}+ \mathrm{H.}\,\mathrm{c.}\right) \nonumber\\ &&+U\sum_{j,\gamma}\mathbf{\rho}_{j,\gamma\uparrow} \mathbf{\rho}_{j,\gamma\downarrow} +\frac{U'}{2}\sum_{j,\sigma,\sigma',\gamma\ne\gamma'} \mathbf{\rho}_{j,\gamma\sigma}\mathbf{\rho}_{j,\gamma'\sigma'} \nonumber\\ &&+\frac{J}{2}\sum_{j,\sigma,\sigma',\gamma\ne\gamma'} d_{j,\gamma\sigma}^{\dagger}d_{j,\gamma'\sigma'}^{\dagger} d_{j,\gamma\sigma'}d_{j,\gamma'\sigma} \nonumber\\ &&+\frac{J'}{2}\sum_{j,\sigma\ne\sigma',\gamma\ne\gamma'} d_{j,\gamma\sigma}^{\dagger}d_{j,\gamma\sigma'}^{\dagger} d_{j,\gamma'\sigma'}d_{j,\gamma'\sigma},\end{aligned}$$ where the index $j$ denotes the site of the chain, $\gamma$ indicates the orbitals $xy$, $yz$, and $zx$, and $\sigma$ is the spin projection along the $z$-axis. The rest of the notation is standard. The hopping amplitudes are $t_{xy,xy}$=$t_{zx,zx}$=$t$=$1$, and zero for the other cases. These simple values for the hopping amplitudes can be easily derived from the overlap of $d_{xy}$, $d_{yz}$, and $d_{zx}$ orbitals between nearest-neighbor sites along the $x$-axis. The interaction parameters $U$, $U'$, $J$, and $J'$ are the standard ones for multiorbital Hamiltonians, and a detailed description can be found in Ref. . These couplings are not independent, but they satisfy the well-known relations $J'$=$J$ and $U$=$U'$+$2J$, due to the reality of the wave function and the rotational symmetry in the orbital space. We investigate the model described above mainly using the DMRG technique with open boundary conditions.[@white] The finite-size algorithm is employed for sizes up to $L$=$48$, keeping up to $m$=$350$ states per block. The truncation errors are kept around $10^{-5}$ or smaller. The center blocks in our DMRG procedure are composed of 64 states due to the three orbitals. 
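As a concrete check of the interaction terms above, the following sketch (our own illustration, not part of the paper's DMRG code) builds the purely on-site part of the Hamiltonian for a single site via a Jordan-Wigner construction over the six spin-orbital modes, and verifies Hund's first rule for two electrons: the lowest two-electron states are the inter-orbital spin triplets with energy $U'-J$, consistent with the relations $J'$=$J$ and $U$=$U'$+$2J$. The values of $U'$ and $J$ below are illustrative only.

```python
import numpy as np
from functools import reduce

# six fermionic modes on one site: mode = 2*gamma + sigma,
# gamma in {xy, yz, zx} -> {0, 1, 2}, sigma in {up, down} -> {0, 1}
N_MODES = 6
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
A = np.array([[0.0, 1.0], [0.0, 0.0]])  # |0><1|: annihilates an occupied mode

def annihilate(m):
    # Jordan-Wigner: string of Z factors on the modes before m
    return reduce(np.kron, [Z] * m + [A] + [I2] * (N_MODES - m - 1))

c = [annihilate(m) for m in range(N_MODES)]
d = lambda g, s: c[2 * g + s]
n = lambda g, s: d(g, s).T @ d(g, s)

Up, J = 6.0, 1.0            # illustrative U' and J (arbitrary units)
U, Jp = Up + 2 * J, J       # the constraints U = U' + 2J and J' = J

dim = 2 ** N_MODES
H = np.zeros((dim, dim))
for g in range(3):          # intra-orbital repulsion U
    H += U * n(g, 0) @ n(g, 1)
for g in range(3):
    for gp in range(3):
        if g == gp:
            continue
        for s in range(2):
            for sp in range(2):
                # inter-orbital repulsion U' and Hund exchange J
                H += 0.5 * Up * n(g, s) @ n(gp, sp)
                H += 0.5 * J * (d(g, s).T @ d(gp, sp).T @ d(g, sp) @ d(gp, s))
                if s != sp:  # pair hopping J'
                    H += 0.5 * Jp * (d(g, s).T @ d(g, sp).T @ d(gp, sp) @ d(gp, s))

# restrict to the two-electron sector (Fock states with two occupied modes)
two = [b for b in range(dim) if bin(b).count("1") == 2]
evals = np.linalg.eigvalsh(H[np.ix_(two, two)])
print(evals.min())                        # lowest two-electron energy: U' - J
print(np.sum(np.isclose(evals, Up - J)))  # number of states at U' - J
```

The lowest level comes out ninefold degenerate (three spin-triplet states for each of the three orbital pairs), while inter-orbital singlets sit at $U'+J$ and doubly occupied orbitals at still higher energies.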
Note that, for instance, the $t$-$J$ model has only 3 states in these center blocks. As a consequence, keeping $m$=$350$ states per block in the $t_{\rm 2g}$-orbital Hubbard model is analogous to keeping $m$$\sim$$7000$ states per block in the $t$-$J$ model. Although in related investigations specific values of $U$ and $J$ for the triangular Co-oxides were discussed,[@ogata] here we prefer to vary these couplings independently, analyzing the possible ground states that are stabilized by this procedure. In fact, the ratio $U/J$ may change among the many interesting Co-oxides, and, in addition, it is important to classify the states that could be stabilized by proper isovalent chemical doping, external fields, or perturbations.

Results
=======

Phase diagram for density $n$=5
-------------------------------

![\[fig1\] Ground-state phase diagram for the one-dimensional three-orbital Hubbard model, using a 6-site chain and working at electron density $n$=5. FM and PMI denote the regions with ferromagnetism and paramagnetism (insulator), respectively. We also present a schematic picture of the electron configurations. AFO indicates the staggered population of orbitals in the FM state. The reader should consult the text for more details, as well as the next figure. ](fig1.eps){width="0.8\linewidth"}

In Fig. 1, the ground-state phase diagram $J$ versus $U_{\rm eff}$=$U'$$-$$J$ is presented. For large $J$, a fully polarized FM phase is obtained, while for small $J$, a PM regime is found. This PM phase is insulating, as shown below. Note that the phase diagram is obtained by comparing the energies for different sectors of the $z$-projection of the total spin, $S_{\rm total}^{z}$, mainly using a system of size $L$=6. Other values of $L$ have also been studied, and it is observed that for $L$=4, 6, 8, and 10, in the PM regime the ground state has total spin 0, 1, 0, and 1, respectively, for a large set of couplings investigated.
As a consequence, it is reasonable to assume that the transition line separates states with the minimum and maximum total spin, without intermediate partially polarized regimes. The phase diagram we have found has similarities with that already reported by two of the authors at density $n$=$4$,[@onishihotta] in the context of spin-1 chains. As described later, our results for the spin-spin, charge-charge, and orbital correlations suggest, roughly, the short-distance electron distribution schematically presented in Fig. 1. The electron configuration in the FM phase is quite simple: 5 electrons per site, with a polarized net spin 1/2 and antiferro-orbital (AFO) correlations. The existence of FM correlations is a direct consequence of the multiorbital nature of the model and the robust value of $J$ in the FM regime. ![\[fig2\] States with the largest weight in the ground state of a 4-site chain solved exactly. Note that each state has an 8-fold degeneracy. At $J$=0, these three states have the same weight. On the other hand, for nonzero $J$, state (a) (and its 8-fold degenerate partners) has the largest weight, with a spin (orbital) structure factor peaked at $\pi/2$ ($\pi$). States (b) and (c) (each also with degeneracy 8) have the second- and third-largest weights, respectively, for nonzero $J$. ](fig2.eps){width="0.5\linewidth"} On the other hand, a more complex electron configuration emerges in the PM phase. In the PM-phase inset of Fig. 1, each full circle denotes either a spin up or a spin down. Note, however, that quantum fluctuations are strong and the configuration shown in Fig. 1 is just a guide. To obtain insight into the ground-state wave function, it is useful to consider the case of a four-site chain, where results can be obtained exactly by using the Lanczos method.
In the strong-coupling limit $U_{\rm eff} \gg J \gg 1$ (or, more precisely, $1/(U'-J) \ll 1$), it is found that the most important portion of the ground-state wave function is expressed as $$\begin{aligned} |\psi\rangle&=&\frac{1}{\sqrt{24}}\sum_{\rm P}(-1)^{n_{\rm P}} \nonumber\\ && \times \left(\begin{array}{c} \uparrow\downarrow\\ \uparrow\downarrow\\ \downarrow \end{array}\right) \otimes \left(\begin{array}{c} \downarrow\\ \uparrow\downarrow\\ \uparrow\downarrow \end{array}\right) \otimes \left(\begin{array}{c} \uparrow\downarrow\\ \uparrow\downarrow\\ \uparrow \end{array}\right) \otimes \left(\begin{array}{c} \uparrow\\ \uparrow\downarrow\\ \uparrow\downarrow \end{array}\right),\end{aligned}$$ where the sum is taken over the permutations of the four spinors and $n_{\rm P}$ is the number of permutations needed to recover the original configuration. Namely, the electron configuration presented in the PM phase of Fig. 1 should be regarded as the equivalent of the 4 spinors contained in $|\psi\rangle$. Note that this is not a rigid configuration: all permutations are equally important at small $J$. In particular, all 24 states have the same weight in the ground state at $J$=0, while at finite $J$, the 24 states split into three classes with 8 states each, as shown in Fig. 2. Note that each of these classes leads to a distinct peak position in the spin and orbital structure factors. Denoting the peak positions in these channels by $q_{\rm spin}$ and $q_{\rm orbital}$, class (a) has $q_{\rm spin}$=$\pi/2$ and $q_{\rm orbital}$=$\pi$, class (b) $q_{\rm spin}$=$\pi/2$ and $q_{\rm orbital}$=$\pi/2$, and class (c) $q_{\rm spin}$=$\pi$ and $q_{\rm orbital}$=$\pi/2$. The $yz$ orbital is fully occupied due to the one-dimensionality of the system, which prevents the movement of electrons in this orbital because of a vanishing hopping.
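This classification of the 24 permutations can be reproduced with a short enumeration. The sketch below is our own illustration (not code from the paper): each spinor is reduced to a (spin, orbital-pseudospin) pair, with the singly occupied orbital encoded as $xy$ = $+1/2$ and $zx$ = $-1/2$. For every arrangement, the dominant wavevector of the spin and orbital sequences is located among $q$=$\pi/2$ and $q$=$\pi$ (the value $q$=0 is excluded because both sequences sum to zero, and $q$=$3\pi/2$ mirrors $\pi/2$ on four sites).

```python
from itertools import permutations
from collections import Counter
import numpy as np

# the four distinct spinors as (spin, orbital pseudospin) pairs
spinors = [(+0.5, +0.5), (+0.5, -0.5), (-0.5, +0.5), (-0.5, -0.5)]

def peak(seq):
    """Dominant wavevector of S(q) = |sum_j s_j e^{iqj}|^2 / L over {pi/2, pi}."""
    L = len(seq)
    qs = [np.pi / 2, np.pi]
    vals = [abs(sum(s * np.exp(1j * q * j) for j, s in enumerate(seq)))**2 / L
            for q in qs]
    return qs[int(np.argmax(vals))]

classes = Counter()
for perm in permutations(spinors):
    spins = [s for s, o in perm]
    orbs = [o for s, o in perm]
    classes[(peak(spins), peak(orbs))] += 1
print(classes)  # 8 arrangements in each of the three (q_spin, q_orbital) classes
```

Running the enumeration yields the 8/8/8 split into the classes (a) $(\pi/2,\pi)$, (b) $(\pi/2,\pi/2)$, and (c) $(\pi,\pi/2)$ of Fig. 2.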
For the active $xy$ and $zx$ orbitals, the electrons are not distributed in a rigid charge-ordered pattern, but instead the density is to an excellent approximation equal to 1.5 at every site. Note that a similar representation of the ground-state wave function for four sites has been found for the $SU(4)$ spin-orbital model.[@zhangsu4] As discussed in more detail below, these two models are related to each other. Spin correlations at several densities -------------------------------------- To understand more quantitatively the magnetic order present in the PM phase, it is useful to measure the spin-spin correlation function, defined as $$C_{\rm spin}(l)=\frac{1}{M}\sum_{|i-j|=l} \left\langle S_{i}^{z}S_{j}^{z}\right\rangle,$$ where $S_{i}^{z}$=$\sum_{\gamma}(\rho_{i\gamma\uparrow}-\rho_{i\gamma\downarrow})/2$ is the $z$-projection of the total spin at each site and $M$ is the number of site pairs $(i,j)$ satisfying $l$=$|i-j|$. We average over all pairs of sites separated by distance $l$, in order to minimize boundary effects. In Fig. 3(a), $C_{\rm spin}(l)$ is shown for the PM phase. It is found that the numerical data of $C_{\rm spin}(l)$ are well reproduced by the function $$\label{Eq:fit} {\tilde C}_{\rm spin}(j)= \frac{a}{j^{2}}+b\frac{\cos(\frac{\pi}{2}j)}{j^{3/2}},$$ as shown by the dashed curve. Note that $a$ and $b$ are appropriate fitting parameters. The result indicates that the spin-spin correlation function has a four-site periodicity and decays as a power law with critical exponent 3/2. In the inset of Fig. 3(a), we also present the Fourier transform of the spin-spin correlation function, $$S\left(q\right)= \frac{1}{L}\sum_{j,k}e^{iq\left(j-k\right)} \left\langle S_{j}^{z}S_{k}^{z}\right\rangle,$$ for $L$=$16$ and $L$=$48$. As observed in this figure, finite-size effects appear to be very small. Here we clearly find a peak in $S(q)$ at $q$=$\pi/2$, corresponding to the four-site periodicity of the spin-spin correlation function. 
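The connection between the fitted real-space form and the $q$=$\pi/2$ peak can be illustrated numerically. The sketch below (with arbitrary illustrative values of $a$, $b$ and of $C_{\rm spin}(0)$, none taken from the paper) evaluates the structure factor of ${\tilde C}_{\rm spin}$ on a momentum grid and locates its maximum at $q$=$\pi/2$.

```python
import numpy as np

L = 48
a, b, C0 = 0.05, 0.25, 0.375     # illustrative parameters only

def C_fit(j):
    # fitted spin-spin correlation: a/j^2 + b cos(pi j / 2) / j^(3/2)
    return a / j**2 + b * np.cos(np.pi * j / 2) / j**1.5

js = np.arange(1, L // 2 + 1)
qs = np.pi * np.arange(L // 2 + 1) / (L // 2)   # grid on [0, pi], contains pi/2
# S(q) = C(0) + 2 sum_{j>0} C(j) cos(q j)
S = C0 + 2 * np.sum(C_fit(js)[None, :] * np.cos(qs[:, None] * js[None, :]),
                    axis=1)
q_peak = qs[np.argmax(S)]
print(q_peak / np.pi)   # the maximum sits at q = pi/2
```

The $\cos(\pi j/2)/j^{3/2}$ term adds coherently only at $q$=$\pi/2$, which is why the peak survives the smooth $a/j^{2}$ background.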
![\[fig3\] (a) The spin-spin correlation function $C_{\rm spin}(j)$ vs. $j$ for $L$=$48$ and density $n$=$5$. The dashed line indicates a fit using Eq. (\[Eq:fit\]). The inset shows the spin structure factor $S\left(q\right)$ for $L$=$16$ and $L$=$48$. (b) The linear-log plot of the modulus of the spin-spin correlation $|C_{\rm spin}(l)|$ corresponding to densities $n$=$3$, 4, and 5 with $L$=$48$, as well as the fit used in (a). For details, see the main text. (c) Spin structure factor $S\left(q\right)$ for several densities $n$, using $L$=$16$. The arrows indicate the peak positions. In all plots $U_{\rm eff}$=$10$ and $J$=$1$, as indicated. The inset shows $S(q)$ for $n$=$4$ and 5. ](fig3.eps){width="0.75\linewidth"} Let us now discuss the physical meaning of the four-site periodicity. Since the $yz$ orbital is fully occupied in our studies, the $t_{\rm 2g}$-orbital Hubbard model can be regarded as a two-orbital Hubbard model composed only of the $xy$ and $zx$ orbitals. Note that for this two-orbital model the hopping amplitudes are symmetric and there are no off-diagonal elements. Moreover, when $J$=$0$, this two-orbital model has an extra $SU(4)$ symmetry involving both spin and orbital degrees of freedom. In such a case, the effective Hamiltonian in the strong-coupling limit is given by the $SU(4)$ spin-orbital model, which has been investigated intensively in recent years.[@Troyer1; @shibatasu4b; @afflecksu4] For cases less symmetric than $SU(4)$, the effect of $J$ has also been discussed,[@shibatasu4; @boulatsu4] where anisotropic exchange interactions arise in the orbital part. Note that the spin-spin correlation function, presented in Fig. 3(a), is found to have a four-site periodicity and to decay as a power law with critical exponent 3/2.[@comment-3/2] These results are consistent with previous analytical work [@afflecksu4b] and numerical analysis [@Troyer1; @shibatasu4] for the $SU(4)$ spin-orbital model.
Here we stress that the spin-spin correlation functions for $n$=$5$ clearly present behavior distinct from the results already reported at $n$=$4$,[@onishihotta] where an exponential decay has been observed, as depicted in the linear-log plot of Fig. 3(b). The result indicates a $gapless$ spin-excitation spectrum for $n$=$5$, with power-law decaying correlations, in contrast to a gapped behavior for $n$=$4$. Note that, for better comparison, we have normalized $C_{\rm spin}$ in such a way that the correlations are the same at distance one. We have eliminated the odd sites for $n$=$5$, since the results there are close to zero (see Fig. 3(a)). Note also that, working with $m$=$350$, it is difficult to reach good accuracy for the correlations at large distances, since they are very small. For this reason, we present only the first 19 sites.[@comment-gap] In Fig. 3(b), we also show the spin-spin correlation function for $n$=3, reported here for the first time to our knowledge. In the case of $n$=3, it is naively expected that a local spin $S$=3/2 is formed at each site. By analogy with the half-odd-integer-spin antiferromagnetic Heisenberg chains, we would then expect a power-law decay of the spin-spin correlation function and a gapless spin-excitation spectrum.[@Haldane-1983] However, we observe in Fig. 3(b) that the spin-spin correlation function shows an exponential decay, similar to the case of integer-spin chains. To understand this peculiar behavior, it is necessary to take into account the effect of the $t_{\rm 2g}$ orbitals. As mentioned above, electrons in the $yz$ orbital cannot hop, while electrons in the $xy$ and $zx$ orbitals move to adjacent sites with the same amplitude. It is then expected that only electrons in the $xy$ and $zx$ orbitals contribute to the exchange interaction, so that the $n$=3 system is regarded as an effective $S$=1 chain, leading to the exponentially decaying spin-spin correlation function. In Fig.
3(c), $S(q)$ is shown for several densities. Since the finite-size effects seem to be small, we consider $L$=$16$. The results suggest that the peak position changes linearly with the electron density as $q$=$(6-n)\pi/2$ (mod $\pi$). Note that this peak is clearly robust for $n$=$4$, as shown in the inset of Fig. 3(c), and its intensity substantially decreases as the density $n$ increases. It is important to remark that the inset of Fig. 3(a) is very similar to the results found by Ogata and Shiba in their study of the one-dimensional Hubbard model at quarter-filling and $U$=$\infty$ (see Fig. 9 of Ref. ). Clearly, in the model studied in this paper, the electrons in the two bands with a nonzero hopping behave like those of one-band models with a strong on-site repulsion, at least from the perspective of the spin correlations. Note, however, that these two one-band models are connected via the Coulombic repulsion which, as discussed below, opens a gap in the spectrum of charge excitations.

Charge correlations at several densities
----------------------------------------

![\[fig4\] (a) The charge gap $\Delta$ vs. $1/L$ at particular values of $U_{\rm eff}$ and $J$, and densities $n$=$4$ and $n$=$5$. (b) Same as (a) but for non-integer densities, with $U_{\rm eff}$=$10$ and $J$=$1$. (c) and (d) show the charge gap for density $n$=$5$ and $L$=$12$: (c) contains $\Delta$ vs. $U'$ at $J$=1, while (d) shows $\Delta$ as a function of $J$ at $U'$=$11$. ](fig4.eps){width="0.8\linewidth"}

To investigate the charge excitations, it is useful to measure the charge gap, defined as $\Delta$=$E(N_{e}+2)+E(N_{e}-2)-2E(N_{e})$, where $E(N_{e})$ denotes the lowest energy in the subspace with the total number of electrons $N_{e}$. In Fig. 4(a), the charge gap is shown as a function of $1/L$ at densities $n$=$4$ and $5$, for particular values of $U_{\rm eff}$ and $J$.
Clearly, at these densities the charge gap extrapolates to a nonzero value in the thermodynamic limit, indicating that the system is an $insulator$. On the other hand, as shown in Fig. 4(b), for non-integer electron densities, the charge gaps seem to extrapolate to zero in the thermodynamic limit, suggesting a metallic behavior. These results indicate that a transition from an insulating phase to a metallic regime is obtained by changing the density away from $n$=5. In Fig. 4(c), the charge gap for the density $n$=$5$ and $L$=$12$ is presented. It appears that $U'$ is the main driver of the system into an insulating phase. In contrast, the Hund’s coupling $J$ has the opposite effect: as observed in Fig. 4(d), by increasing $J$ the charge gap decreases. Note that $U'$ plays a role similar to that of the nearest-neighbor Coulomb repulsion $V$ in the two-leg ladder extended Hubbard model (with the two legs playing the role of the two orbitals in our model). In the ladder case, it is known that $V$ drives the system to an insulator at quarter-filling.[@daulnoack]

![\[fig5\] The charge structure factor $N^{\gamma,\gamma'}(q)$ and the charge-charge correlation function $C(j)$, at density $n$=$5$. (a) and (b) are for $U_{\rm eff}$=$10$, $J$=$10$ and $L$=$64$. This corresponds to the FM regime of Fig. 1. The dashed line is a fit using the function $a\cos(\pi j)/j$ with an appropriate fitting parameter $a$. (c) and (d) are for $U_{\rm eff}$=$10$, $J$=$1$, and $L$=$32$. This is in the PM regime of Fig. 1. (e) and (f) are the same as (c) and (d), respectively, but for $J$=$5$. ](fig5.eps){width="0.8\linewidth"}

We have also investigated the charge structure factor, defined as $$N^{\gamma,\gamma'}(q)= \frac{1}{2L} \sum_{j,k}e^{iq(j-k)} \left(N^{\gamma,\gamma'}(j,k)+N^{\gamma',\gamma}(j,k)\right),$$ where $N^{\gamma,\gamma'}(j,k)$=$\langle \delta n_{\gamma}(j)\delta n_{\gamma'}(k)\rangle$ and $\delta n_{\gamma}(j)$=$n_{\gamma}(j)-\langle n_{\gamma}(j)\rangle$.
In a periodic system $N^{\gamma,\gamma'}(j,k)$=$N^{\gamma',\gamma}(j,k)$. However, with open boundary conditions, as used in our investigation, this is no longer valid, due to the presence of Friedel oscillations. Using the definition discussed above, $N^{\gamma,\gamma'}(q=0)$ is always *zero*. In our calculations, we obtained $N^{\gamma,\gamma'}(q=0)$$<$$10^{-4}$, indicating that we have retained enough states in the truncation process to satisfy this constraint.

![\[fig6\] The charge structure factor $N^{zx,zx}(q)$ for several densities and using $U_{\rm eff}$=$10$, $J$=$1$, and $L$=$16$. The arrows indicate the cusp positions. ](fig6.eps){width="0.8\linewidth"}

The best indication of true long-range order (LRO) can be obtained from the system-size dependence of $N^{\gamma,\gamma'}(q)$. If $N^{\gamma,\gamma'}(q^{*})/L\rightarrow \mathrm{const.}$ as $L\rightarrow\infty$, at some particular $q^{*}$, a true LRO characterized by $q^{*}$ is present. Carrying out this analysis, we have found no evidence of LRO in the charge sector at $n$=5. In Figs. 5(a), (c), and (e), typical examples of the charge structure factor for the FM and PM phases at density $n$=$5$ are presented. In the FM phase, we are able to explore very large system sizes, since we can measure the correlations in the sector of $S_{\rm total}^{z}$=max, with a much smaller Hilbert space than for the PM phase. Although we did not find LRO, the behavior of the structure factor suggests that in the FM phase the charge-charge correlation presents a quasi-LRO due to the presence of a robust peak at $q$=$\pi$. In fact, in the charge-charge correlation function, defined as $$C(l)=\frac{1}{M}\sum_{|i-j|=l} \left\langle \delta n_{zx}(i)\delta n_{zx}(j)\right\rangle,$$ we observe a slow power-law decay, as shown in Fig. 5(b). This correlation oscillates as $\cos(\pi j)/j$, as indicated by the dashed curve in Fig. 5(b). The DMRG data agree very nicely with a fit using this function.
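Such a one-parameter fit can be sketched as follows. The correlation values below are hypothetical, generated from the fitting form $a\cos(\pi l)/l$ itself plus small noise (the amplitude and noise level are invented for illustration, not our DMRG data); the amplitude is then recovered by linear least squares.

```python
import numpy as np

# Hypothetical correlation data C(l) ~ a*cos(pi*l)/l for l = 1..19;
# 'a_true' and the noise scale are illustrative only.
rng = np.random.default_rng(0)
l = np.arange(1, 20)
a_true = 0.25
C = a_true * np.cos(np.pi * l) / l + rng.normal(0.0, 1e-3, l.size)

# Least-squares estimate of the single amplitude parameter a:
# minimise sum_l (C(l) - a*f(l))^2 with f(l) = cos(pi*l)/l,
# which gives a = <C, f> / <f, f>.
f = np.cos(np.pi * l) / l
a_fit = np.dot(C, f) / np.dot(f, f)
print(a_fit)  # close to a_true
```

Since the model is linear in $a$, no iterative fitting is needed; the closed-form projection above is the least-squares solution.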
These strong charge oscillations suggest that the system may develop LRO rapidly when a coupling in the direction perpendicular to the chains is introduced. Then, spin-FM charge-ordered states should be seriously considered as a possibility for Co-oxide materials, although more detailed calculations are needed to confirm this speculation. Note also that the negative values of $N^{zx,xy}(q=\pi)$ suggest an alternation of charge occupation between the $zx$ and $xy$ orbitals, as in the schematic representation in Fig. 1 (FM phase). Indeed, as discussed later in more detail, there is quasi-long-range AFO order. A similar result has already been found in the FM phase for the density $n$=$4$.[@onishihotta] On the other hand, in the PM phase, $N^{\gamma,\gamma'}\left(q\right)$ does not present a peak as sharp as for the FM phase, as shown in Fig. 5(c). In fact, the magnitude of the charge correlations is drastically different between the PM and FM phases, as can be seen from the absolute values of these correlations on the vertical axes of Figs. 5(b) and (d). Also note that the appearance of the cusp at $q$=$\pi/2$ is related to the four-site periodicity of the correlation $C(l)$, as shown in Fig. 5(d). Our results also suggest that the charges behave differently in two distinct regimes in the PM phase. At small $J$, the correlation $C(l)$ presents a four-site periodicity, while for larger $J$, only a two-site periodicity is found, as observed in Fig. 5(f). In addition, the cusp of $N^{zx,zx}(q)$ present in the small-$J$ regime disappears at larger $J$ (Fig. 5(e)), apparently continuously. We have also observed that at small $J$, the position of the cusp changes with the electron density in a similar way as $S(q)$, as shown in Fig. 6.

Orbital correlations at $n$=5
-----------------------------

![\[fig7\] (a) The orbital structure factor $T(q)$ versus momentum for $U_{\rm eff}$=$10$, $J$=$10$, and $L$=$64$ with $\theta_i$=$\theta$=$0$.
(b) The orbital correlation $C_{\rm orbital}(j)$ for the same parameters as used in (a). The dashed line is a fit using the function $a\cos(\pi j)/j$. (c) and (d) are the same as (a) and (b), but for $U_{\rm eff}$=$10$, $J$=$1$, and $L$=$32$. (e) and (f) contain the correlations $C_{\rm spin}(j)$ and $C_{\rm orbital}(j)$ for $U_{\rm eff}$=$10$, $J$=$0$, and $L$=$32$. All the results are for the density $n$=$5$. ](fig7.eps){width="0.8\linewidth"}

![\[fig8\] (a) The size dependence of the orbital structure factor $T(q)$ at $q$=$\pi$ with $\theta_{i}$=$\theta$=$0$, at density $n$=$5$. (b) $T(q)$ vs. $\theta$ for particular values of $q$. ](fig8.eps){width="0.7\linewidth"}

Consider now the possibility of orbital order. In the PM phase and for $n$=$5$, we have found that the $xy$ and $zx$ orbitals are those of relevance, since the $yz$ orbitals are fully occupied. Note that in the PM phase and at $n$=$4$, the orbital degree of freedom becomes inactive due to the ferro-orbital order.[@onishihotta] We then take the pseudospin representation for the $xy$ and $zx$ orbitals, and measure the orbital correlations to determine the orbital structure. For this purpose, we introduce an angle $\theta_{j}$ to characterize the orbital shape at each site. Using the angle $\theta_{j}$, we define the phase-dressed operators as $$\left\{ \begin{array}{l} f_{j,a,\sigma}= e^{i\theta_{j}/2} \left(\cos(\theta_{j}/2)d_{j,xy,\sigma} +\sin(\theta_{j}/2)d_{j,zx,\sigma}\right),\\ f_{j,b,\sigma}= e^{i\theta_{j}/2} \left(-\sin(\theta_{j}/2)d_{j,xy,\sigma} +\cos(\theta_{j}/2)d_{j,zx,\sigma}\right). \end{array} \right.$$ The optimal orbitals, $a$ and $b$, are determined so as to maximize the orbital structure factor, defined by $$T\left(q\right)= \frac{1}{L}\sum_{j,k}e^{iq(j-k)}\langle T^{z}(j)T^{z}(k) \rangle,$$ where $T^{z}(j)$=$\sum_{\sigma}(f_{j,a,\sigma}^{\dagger}f_{j,a,\sigma} -f_{j,b,\sigma}^{\dagger}f_{j,b,\sigma})/2$. Let us first focus on the case $\theta_{i}$=$\theta$=0. In Figs.
7(a) and (c), typical examples of the orbital structure factor in the FM and PM phases at density $n$=$5$ are presented. Note that these results are similar to those of the charge structure factor shown in Figs. 5(a) and (c), as previously anticipated. Also, as shown in Figs. 7(b) and (d), for the orbital correlation function, defined as $$C_{\rm orbital}(l) =\frac{1}{M}\sum_{|i-j|=l}\langle T^{z}(i)T^{z}(j) \rangle,$$ we find the same form as for $C(l)$, as observed in Figs. 5(b) and (d). In the FM phase, as shown in Fig. 7(b), $C_{\rm orbital}(l)$ decays as $\cos(\pi j)/j$, which is the signature of quasi-long-range AFO. On the other hand, in the PM phase, we observe a four-site periodicity of $C_{\rm orbital}(l)$ as well as of $C_{\rm spin}(l)$, while the peak position of $T(q)$ is at $q$=$\pi$ for $U_{\rm eff}$=10 and $J$=1. Note that the spin-spin correlation function shows the four-site periodicity and $S(q)$ has the peak at $q$=$\pi/2$ for $U_{\rm eff}$=10 and $J$=1, as shown in Fig. 3(a). To clarify the similarity between the two-orbital model composed of the $xy$ and $zx$ orbitals and the $SU(4)$ spin-orbital model, we investigate $C_{\rm spin}(l)$ and $C_{\rm orbital}(l)$ for the present $t_{\rm 2g}$ model, at $U_{\rm eff}$=$10$ and $J$=0. As shown in Figs. 7(e) and (f), it is clearly observed that $C_{\rm orbital}(l)$ and $C_{\rm spin}(l)$ present exactly the same behavior with a four-site periodicity, due to the presence of the $SU(4)$ symmetry at $J$=0. When we include the effect of $J$, the spin and orbital degrees of freedom are no longer equivalent, but the four-site periodicity survives in both $C_{\rm orbital}(l)$ and $C_{\rm spin}(l)$ as a remnant of the $SU(4)$-symmetric point. Thus, the short-range orbital correlation for small $J$ originates in the $SU(4)$ singlet at $J$=0.
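The distance-averaged correlation functions of the form $C(l)=\frac{1}{M}\sum_{|i-j|=l}\langle\cdots\rangle$ used above can be sketched numerically as follows. The site-site correlation matrix below is a toy input, chosen only so that the resulting $C(l)$ exhibits the four-site periodicity discussed in the text; it is not an actual DMRG correlation matrix.

```python
import numpy as np

def distance_averaged(corr):
    """C(l) = (1/M) * sum over the M pairs with |i-j| = l of corr[i, j]."""
    L = corr.shape[0]
    out = []
    for l in range(1, L):
        vals = [corr[i, i + l] for i in range(L - l)]
        out.append(sum(vals) / len(vals))
    return np.array(out)  # out[0] is C(1), out[1] is C(2), ...

# Toy site-site correlation with a four-site periodicity, cos(pi*(i-j)/2).
L = 16
i, j = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
corr = np.cos(np.pi * (i - j) / 2)

C = distance_averaged(corr)
# C(l) inherits the four-site periodicity: approximately 0, -1, 0, +1, ...
print(np.round(C[:4], 6))
```

The same routine applies unchanged to spin, charge, or orbital correlation matrices, which is why $C(l)$, $C_{\rm spin}(l)$, and $C_{\rm orbital}(l)$ can be compared directly.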
It should be mentioned that there is no indication of orbital LRO in the PM phase, since $T(\pi)$ converges to a finite value in the thermodynamic limit, as shown in Fig. 8(a). On the other hand, although we have found no signature of orbital order between the $xy$ and $zx$ orbitals through the orbital structure factor $T(q)$ for $\theta_{i}$=$\theta$=0, a more complex combination of these orbitals could exist, which we cannot observe directly from $T(q)$ with $\theta$=0. In order to consider other combinations, we set $\theta_{i}$=$\theta$ and change the value of $\theta$. However, even in this case, we do not observe any changes in $T(q)$, as shown in Fig. 8(b). That is, the orbital correlation does not change under the rotation in the orbital space, and we cannot determine the optimal orbitals. Note that even if we optimize $\theta_{i}$ at each site, $T(\pi)$ is always maximum. Thus, we conclude that the states considered in our investigations do not have long-range orbital order in the PM phase.

Conclusions
===========

In this paper, we have investigated the properties of the one-dimensional Hubbard model with three active orbitals, with emphasis on electron densities of relevance for cobalt oxides. We envision this work as a first step toward a numerically accurate study of many-body Hamiltonians for Co-oxides including the Coulombic repulsion. Our main result is the identification of two dominant tendencies in the ground state. For example, at sufficiently large Hund’s coupling, a tendency toward a fully saturated ferromagnetic state exists. This state may develop long-range charge order in the case when the interorbital repulsion $U'$ is large, at density $n$=5, and for nonzero values of the coupling in the direction perpendicular to the chains. In previous investigations of Co-oxide models, tendencies toward FM were also discussed (see, for instance, Refs.  and and references therein).
Thus, evidence is accumulating that magnetic states should be of relevance for these materials. This is clearly compatible with experiments on perovskite cobaltites.[@cobalt-PS] In triangular lattices, however, magnetism has thus far been observed experimentally only at large Na doping.[@cobalt-cava] This result may arise from the competition with the higher-dimensional version of the PM state discussed in this work. This state has short-range correlations in all channels, and in some limits it has an extra $SU(4)$ symmetry as in two-orbital models. While this exact symmetry may appear only in one dimension and for $J$=0, remnants may remain under more realistic conditions. To the extent that our results can be qualitatively extended to higher dimensions, the main competition in Co-oxide models originates from FM and PM states, with long-range and short-range spin and charge order, respectively. Of course, the effect of geometrical frustration could also be an important ingredient to bring about the complex spin-charge-orbital structure in the triangular-lattice systems. In fact, two of the authors have shown that the spin frustration is suppressed due to the orbital ordering in the $e_{\rm g}$-orbital model on a zigzag chain.[@Onishi-Hotta] In addition, in the present work we have identified metal-insulator transitions with doping away from $n$=5, while the main properties in the spin and charge sectors remain similar to those at the integer density $n$=5. The next challenge is to increase the dimensionality of the $t_{\rm 2g}$ system toward two dimensions by studying ladders and/or zigzag chains. Work is in progress in this direction.

This work was supported by DMR-0454504 (E. D. and J. C. X.) and FAPESP-04/09689-2 (J. C. X.). T. H. is supported by the Japan Society for the Promotion of Science and by the Ministry of Education, Culture, Sports, Science, and Technology of Japan.

[99]{} K. Takada, H. Sakurai, E. Takayama-Muromachi, F. Izumi, R. A. Dilanian, and T.
Sasaki, Nature **422**, 53 (2003); R. E. Schaak, T. Klimczuk, M. L. Foo, and R. J. Cava, Nature **424**, 527 (2003). Y. Wang, N. S. Rogado, R. J. Cava, and N. P. Ong, Nature **423**, 425 (2003); T. Motohashi, R. Ueda, E. Naujalis, T. Tojo, I. Terasaki, T. Atake, M. Karppinen, and H. Yamauchi, Phys. Rev. B **67**, 64406 (2003). J. Sugiyama, H. Itahara, T. Tani, J. H. Brewer, and E. J. Ansaldo, Phys. Rev. B **66**, 134413 (2002); J. Sugiyama, J. H. Brewer, E. J. Ansaldo, B. Hitti, M. Mikami, Y. Mori, and T. Sasaki, Phys. Rev. B **69**, 214423 (2004). E. Dagotto, [*Complexity in Strongly Correlated Electronic Systems*]{}, to appear in Science, 2005. R. Caciuffo, D. Rinaldi, G. Barucca, J. Mira, J. Rivas, M. A. Senaris-Rodriguez, P. G. Radaelli, D. Fiorani, and J. B. Goodenough, Phys. Rev. B **59**, 1068 (1999); P. L. Kuhns, M. J. R. Hoch, W. G. Moulton, A. P. Reyes, J. Wu, and C. Leighton, Phys. Rev. Lett. **91**, 127202 (2003); J. Wu, J. W. Lynn, C. J. Glinka, J. Burley, H. Zheng, J. F. Mitchell, and C. Leighton, Phys. Rev. Lett. **94**, 037201 (2005); Y. Sun, Y.-K. Tang, and Z.-H. Cheng, cond-mat/0505189. E. Dagotto, T. Hotta, and A. Moreo, Phys. Rep. **344**, 1 (2001); E. Dagotto, *Nanoscale Phase Separation and Colossal Magnetoresistance*, Springer-Verlag, Berlin, 2002. T. Motohashi, V. Caignaert, V. Pralong, M. Hervieu, A. Maignan, and B. Raveau, cond-mat/0504379 and references therein. I. Terasaki, Y. Sasago, K. Uchinokura, Phys. Rev. B **56**, R12685 (1997). See also G. Mahan, B. Sales, and J. Sharp, Physics Today, March 1997, page 42; F. J. Di Salvo, Science **285**, 703 (1999); H. W. Eng, W. Prellier, S. Hebert, D. Grebille, L. Mechin, and B. Mercey, cond-mat/0410177 and references therein. D. J. Singh, Phys. Rev. B **68**, 020503 (2003). N. Bulut, W. Koshibae, and S. Maekawa, cond-mat/0502347 and references therein. M. Mochizuki, Y. Yanase, and M. Ogata, J. Phys. Soc. Jpn. **74**, 1670 (2005) and references therein. M. Ogata, J. Phys. Soc. Jpn. 
**72**, 1839 (2003); G. Baskaran, Phys. Rev. Lett. **91**, 097003 (2003); B. Kumar and B. S. Shastry, Phys. Rev. B **68**, 104508 (2003) and references therein. H. Ikeda, Y. Nisikawa, and K. Yamada, J. Phys. Soc. Jpn. **73**, 17 (2004); Y. Nishikawa, H. Ikeda, and K. Yamada, J. Phys. Soc. Jpn. **73**, 1127 (2004). S. R. White, Phys. Rev. Lett. **69**, 2863 (1992). E. Dagotto, Rev. Mod. Phys. **66**, 763 (1994). H. Onishi and T. Hotta, Phys. Rev. B **70**, 100402(R), (2003). Y. Q. Li, M. Ma, D. N. Shi, and F. C. Zhang, Phys. Rev. Lett. **81**, 3527 (1998). B. Frischmuth, F. Mila, and M. Troyer, Phys. Rev. Lett. **82**, 835 (1999). Y. Yamashita, N. Shibata, and K. Ueda, Phys. Rev. B **61**, 4012 (2000). C. Itoi, S. Qin, and I. Affleck, Phys. Rev. B **61**, 6747 (2000). Y. Yamashita, N. Shibata, and K. Ueda, Phys. Rev. B **58**, 9114 (1998). H. C. Lee, P. Azaria, and E. Boulat, Phys. Rev. B **69**, 155109 (2005). Actually, it is expected that this exponent $3/2$ will depend on $U$, $J$ and the density $n$. Here, we do not intend to present a systematic study of this exponent. Our intention is only to show that at large $U$, it has the same exponent as the $SU(4)$ model. Note also the similarity with the 1D Hubbard model, where this exponent for $U\rightarrow\infty$ is also $3/2$ (see, for example, J. Voit, Rep. Prog. Phys. **58**, 977 (1995)). I. Affleck, Nucl. Phys. **B265**, 409 (1986). Note that we tried to calculate directly the finite spin-gap anticipated from the $n$=3 and 4 results. However, this is a difficult task, since even for small systems the spin gap is already very small (less than $10^{-2}$). F. D. M. Haldane, Phys. Lett. **93A**, 464 (1983); Phys. Rev. Lett. **50**, 1153 (1983). M. Ogata and H. Shiba, Phys. Rev. B **41**, 2326 (1990). S. Daul and R. M. Noack, Phys. Rev. B **58**, 2635 (1998). H. Onishi and T. Hotta, Phys. Rev. B **71**, 180410(R) (2005).
--- abstract: 'Canonical correlation analysis is a family of multivariate statistical methods for the analysis of paired sets of variables. Since its proposition, canonical correlation analysis has for instance been extended to extract relations between two sets of variables when the sample size is insufficient in relation to the data dimensionality, when the relations have been considered to be non-linear, and when the dimensionality is too large for human interpretation. This tutorial explains the theory of canonical correlation analysis including its regularised, kernel, and sparse variants. Additionally, the deep and Bayesian CCA extensions are briefly reviewed. Together with the numerical examples, this overview provides a coherent compendium on the applicability of the variants of canonical correlation analysis. By bringing together techniques for solving the optimisation problems, evaluating the statistical significance and generalisability of the canonical correlation model, and interpreting the relations, we hope that this article can serve as a hands-on tool for applying canonical correlation methods in data analysis.' author: - 'VIIVI UURTIO JOÃO M. MONTEIRO JAZ KANDOLA JOHN SHAWE-TAYLOR DELMIRO FERNANDEZ-REYES JUHO ROUSU' bibliography: - 'csur\_refs.bib' title: A Tutorial on Canonical Correlation Methods --- Author’s addresses: V. Uurtio ([email protected]) [and]{} J. Rousu ([email protected]), Helsinki Institute for Information Technology HIIT, Department of Computer Science, Aalto University, Konemiehentie 2, 02150 Espoo, Finland; J. M.
Monteiro ([email protected]), Department of Computer Science, University College London, and Max Planck Centre for Computational Psychiatry and Ageing Research, University College London, Gower Street, London WC1E 6BT, UK; J. Shawe-Taylor ([email protected]) [and]{} D. Fernandez-Reyes ([email protected]), Department of Computer Science, University College London, Gower Street, London WC1E 6BT, UK; J. Kandola ([email protected]), Division of Brain Sciences, Imperial College London, DuCane Road, London WC12 0NN.

Introduction
============

When a process can be described by two sets of variables corresponding to two different aspects, or views, analysing the relations between these two views may improve the understanding of the underlying system. In this context, a relation is a mapping of the observations corresponding to a variable of one view to the observations corresponding to a variable of the other view. For example, in the field of medicine, one view could comprise variables corresponding to the symptoms of the disease and the other to the risk factors that can have an effect on the disease incidence. Identifying the relations between the symptoms and the risk factors can improve the understanding of the disease exposure and give indications for prevention and treatment. Examples of these kinds of two-view settings, where the analysis of the relations could provide new information about the functioning of the system, occur in several other fields of science. These relations can be determined by means of canonical correlation methods that have been developed specifically for this purpose. Since the proposition of canonical correlation analysis (CCA) by H. Hotelling [@hotelling1935most; @hotelling1936relations], relations between variables have been explored in various fields of science. CCA was first applied to examine the relation of wheat characteristics to flour characteristics in an economics study by F. Waugh in 1942 [@waugh1942regressions].
Since then, studies in the fields of psychology [@hopkins1969statistical; @dunham1975canonical], geography [@monmonier1973improving], medicine [@lindsey1985canonical], physics [@wong1980study], chemistry [@tu1989canonical], biology [@sullivan1982distribution], time-series modeling [@heij1991modified], and signal processing [@schell1995programmable] constitute examples of the early application fields of CCA. Since the beginning of the 21$^{st}$ century, the applicability of CCA has been demonstrated in modern fields of science such as neuroscience, machine learning and bioinformatics. Relations have been explored for developing brain-computer interfaces [@cao2015sequence; @nakanishi2015comparison] and in the field of imaging genetics [@fang2016joint]. CCA has also been applied for feature selection [@ogura2013variable], feature extraction and fusion [@shen2013orthogonal], and dimension reduction [@wang2013dimension]. Examples of application studies conducted in the fields of bioinformatics and computational biology include [@rousu2013biomarker; @seoane2014canonical; @baur2015canonical; @sarkar2015dna; @cichonska2016metacca]. The vast range of application domains emphasises the utility of CCA in extracting relations between variables. Originally, CCA was developed to extract linear relations in overdetermined settings, that is, when the number of observations exceeds the number of variables in either view. To extend CCA to underdetermined settings that often occur in modern data analysis, methods of regularisation have been proposed. When the sample size is small, Bayesian CCA also provides an alternative way to perform CCA. The applicability of CCA to underdetermined settings has been further improved through sparsity-inducing norms that facilitate the interpretation of the final result. Kernel methods and neural networks have been introduced for uncovering non-linear relations.
At present, canonical correlation methods can be used to extract linear and non-linear relations in both over- and underdetermined settings. In addition to the already described variants of CCA, alternative extensions have been proposed, such as the semi-paired and multi-view CCA. In general, CCA algorithms assume one-to-one correspondence between the observations in the views; in other words, the data is assumed to be paired. However, in real datasets some of the observations may be missing in either view, which means that the observations are semi-paired. Examples of semi-paired CCA algorithms include [@blaschko2008semi], [@kimura2013semicca], [@chen2012unified], and [@zhang2014semi]. CCA has also been extended to more than two views by [@horst1961relations], [@carroll1968generalization], [@kettenring1971canonical], and [@van1984linear]. In multi-view CCA the relations are sought among more than two views. Some of the modern extensions of multi-view CCA include its regularised [@tenenhaus2011regularized], kernelised [@tenenhaus2015kernel], and sparse [@tenenhaus2014variable] variants. Application studies of multi-view CCA and its modern variants can be found in neuroscience [@kang2013sparse], [@chen2014removal], feature fusion [@yuan2011novel] and dimensionality reduction [@yuan2014fractional]. However, both the semi-paired and multi-view CCA are beyond the scope of this tutorial. This tutorial begins with an introduction to the original formulation of CCA. The basic framework and statistical assumptions are presented. The techniques for solving the CCA optimisation problem are discussed. After solving the CCA problem, the approaches to interpret and evaluate the result are explained. The variants of CCA are illustrated using worked examples. Of the extended versions of CCA, the tutorial concentrates on the topics of regularised, kernel, and sparse CCA. Additionally, the deep and Bayesian CCA variants are briefly reviewed.
This tutorial acquaints the reader with canonical correlation methods, and discusses where they are applicable and what kind of information can be extracted.

Canonical Correlation Analysis
==============================

The Basic Principles of CCA {#basic}
---------------------------

CCA is a two-view multivariate statistical method. In multivariate statistical analysis, the data comprises multiple variables measured on a set of observations or individuals. In the case of CCA, the variables of an observation can be partitioned into two sets that can be seen as the two views of the data. This can be illustrated using the following notations. Let the views $a$ and $b$ be denoted by the matrices $X_a$ and $X_b$, of sizes $n \times p$ and $n \times q$ respectively. The row vectors $\mathbf{x}_a^k \in \mathbb{R}^p$ and $\mathbf{x}_b^k \in \mathbb{R}^q$ for $k=1,2,\dots,n$ denote the sets of empirical multivariate observations in $X_a$ and $X_b$ respectively. The observations are assumed to be jointly sampled from a multivariate normal distribution. A reason for this is that the multivariate normal model approximates well the distribution of continuous measurements in many sampled populations [@anderson1958introduction]. The column vectors $\mathbf{a}_i \in \mathbb{R}^n$ for $i=1,2,\dots,p$ and $\mathbf{b}_j \in \mathbb{R}^n$ for $j=1,2,\dots,q$ denote the variable vectors of the $n$ observations respectively. The inner product between two vectors is denoted either by $\langle \mathbf{a},\mathbf{b} \rangle$ or by $\mathbf{a}^T \mathbf{b}$. Throughout this tutorial, we assume that the variables are standardised to zero mean and unit variance. In CCA, the aim is to extract the linear relations between the variables of $X_a$ and $X_b$. CCA is based on linear transformations.
We consider the following transformations $$X_a {\mathbf{w}_a}= {\mathbf{z}_a}\quad \text{and } \quad X_b {\mathbf{w}_b}= {\mathbf{z}_b}$$ where $X_a \in \mathbb{R}^{n \times p}$, ${\mathbf{w}_a}\in \mathbb{R}^p$, ${\mathbf{z}_a}\in \mathbb{R}^n$, $X_b \in \mathbb{R}^{n \times q}$, ${\mathbf{w}_b}\in \mathbb{R}^q$, and ${\mathbf{z}_b}\in \mathbb{R}^n$. The data matrices $X_a$ and $X_b$ represent linear transformations of the positions ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ onto the images ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$ in the space $\mathbb{R}^n$. The positions ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ are often referred to as canonical weight vectors and the images ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$ are also termed as canonical variates or scores. The constraints of CCA on the mappings are that the position vectors of the images ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$ are unit norm vectors and that the enclosing angle, $\theta \in [0, \frac{\pi}{2}]$ [@golub1995canonical; @dauxois1997canonical], between ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$ is minimised. The cosine of the angle, also referred to as the canonical correlation, between the images ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$ is given by the formula $\cos({\mathbf{z}_a},{\mathbf{z}_b}) = \langle {\mathbf{z}_a},{\mathbf{z}_b}\rangle / ||{\mathbf{z}_a}||||{\mathbf{z}_b}||$ and due to the unit norm constraint $\cos({\mathbf{z}_a},{\mathbf{z}_b}) = \langle {\mathbf{z}_a},{\mathbf{z}_b}\rangle.$ Hence the basic principle of CCA is to find two positions ${\mathbf{w}_a}\in \mathbb{R}^p$ and ${\mathbf{w}_b}\in \mathbb{R}^q$ that after the linear transformations $X_a \in \mathbb{R}^{n \times p}$ and $X_b \in \mathbb{R}^{n \times q}$ are mapped onto an $n$-dimensional unit ball and located in such a way that the cosine of the angle between the position vectors of their images ${\mathbf{z}_a}\in \mathbb{R}^n$ and ${\mathbf{z}_b}\in \mathbb{R}^n$ is maximised. 
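These mappings can be illustrated concretely. The sketch below uses synthetic two-view data (the sample size, dimensions, and weight vectors are arbitrary choices for illustration, not optimised canonical weights): it standardises the variables, forms the images $\mathbf{z}_a = X_a \mathbf{w}_a$ and $\mathbf{z}_b = X_b \mathbf{w}_b$, normalises them onto the unit ball, and evaluates the cosine of their enclosing angle.

```python
import numpy as np

# Synthetic two-view data: n = 50 observations, p = 3 and q = 2 variables,
# both views built around a common signal so the views are related.
rng = np.random.default_rng(1)
n = 50
shared = rng.normal(size=n)
Xa = np.column_stack([shared + 0.1 * rng.normal(size=n) for _ in range(3)])
Xb = np.column_stack([shared + 0.1 * rng.normal(size=n) for _ in range(2)])

# Standardise each variable to zero mean and unit variance, as assumed.
Xa = (Xa - Xa.mean(0)) / Xa.std(0)
Xb = (Xb - Xb.mean(0)) / Xb.std(0)

# Arbitrary (non-optimised) positions w_a, w_b and their images z = X w.
wa = np.array([1.0, 1.0, 1.0])
wb = np.array([1.0, 1.0])
za, zb = Xa @ wa, Xb @ wb

# Normalise the images onto the unit ball; their inner product is then
# the cosine of the enclosing angle.
za = za / np.linalg.norm(za)
zb = zb / np.linalg.norm(zb)
cos_theta = float(za @ zb)
print(cos_theta)
```

Here the cosine is already close to one because both views share a strong common signal; CCA proper would search over $\mathbf{w}_a$ and $\mathbf{w}_b$ to maximise this quantity.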
The images ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$ of the positions ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ that result in the smallest angle, $\theta_1$, determine the first canonical correlation, which equals $\cos \theta_1$ [@bjorck1973numerical]. The smallest angle is given by $$\label{alg_cca} \begin{gathered} \cos \theta_1 = \max_{{\mathbf{z}_a}, {\mathbf{z}_b}\in \mathbb{R}^n} \langle {\mathbf{z}_a},{\mathbf{z}_b}\rangle, \\ ||{\mathbf{z}_a}||_2 = 1 \quad ||{\mathbf{z}_b}||_2 = 1 \end{gathered}$$ Let the maximum be obtained by ${\mathbf{z}_a}^1$ and ${\mathbf{z}_b}^1$. The pair of images ${\mathbf{z}_a}^2$ and ${\mathbf{z}_b}^2$, which has the second-smallest enclosing angle $\theta_2$, is found in the orthogonal complements of ${\mathbf{z}_a}^1$ and ${\mathbf{z}_b}^1$. The procedure is continued until no more pairs are found. Hence the $r$ angles $\theta_r \in [0, \frac{\pi}{2}]$ for $r=1,2,\dots,\min(p,q)$ that can be found are recursively defined by $$\begin{gathered} \cos \theta_r = \max_{{\mathbf{z}_a}, {\mathbf{z}_b}\in \mathbb{R}^n} \langle {\mathbf{z}_a}^{r}, {\mathbf{z}_b}^r \rangle, \\ ||{\mathbf{z}_a}^r||_2 = 1 \quad ||{\mathbf{z}_b}^r||_2 = 1 \\ \langle {\mathbf{z}_a}^{r}, {\mathbf{z}_a}^{j} \rangle = 0 \quad \langle {\mathbf{z}_b}^{r}, {\mathbf{z}_b}^{j} \rangle = 0, \\ \forall j \neq r: \quad j,r = 1, 2, \dots, \min(p,q).\end{gathered}$$ The number of canonical correlations, $r$, corresponds to the dimensionality of CCA. Qualitatively, the dimensionality of CCA can also be seen as the number of patterns that can be extracted from the data. When the dimensionality of CCA is large, it may not be relevant to solve all the positions ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ and images ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$. In general, the value of the canonical correlation and the statistical significance are considered to convey the importance of the pattern.
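One simple way to assess such significance is a permutation test: the row pairing between the two views is destroyed by randomly permuting the observations of one view, and the observed first canonical correlation is compared with its permutation distribution. The sketch below is an illustration of this idea on synthetic data; the solver (a whitened-SVD computation of the first canonical correlation, one of the standard solution techniques discussed later), the data, and the permutation count are all illustrative assumptions rather than a method prescribed by the tutorial.

```python
import numpy as np

def first_canonical_correlation(Xa, Xb):
    # First singular value of the whitened cross-covariance matrix.
    n = Xa.shape[0]
    Xa = Xa - Xa.mean(0)
    Xb = Xb - Xb.mean(0)

    def inv_sqrt(C):
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    K = (inv_sqrt(Xa.T @ Xa / (n - 1))
         @ (Xa.T @ Xb / (n - 1))
         @ inv_sqrt(Xb.T @ Xb / (n - 1)))
    return np.linalg.svd(K, compute_uv=False)[0]

# Synthetic paired views with a genuine shared signal.
rng = np.random.default_rng(3)
n = 100
shared = rng.normal(size=(n, 1))
Xa = np.hstack([shared, rng.normal(size=(n, 1))])
Xb = np.hstack([shared + 0.3 * rng.normal(size=(n, 1)), rng.normal(size=(n, 1))])

rho_obs = first_canonical_correlation(Xa, Xb)

# Permute the rows of one view to break the pairing, and count how often
# the permuted correlation reaches the observed one.
n_perm = 200
count = sum(
    first_canonical_correlation(Xa, Xb[rng.permutation(n)]) >= rho_obs
    for _ in range(n_perm)
)
p_value = (1 + count) / (1 + n_perm)
print(p_value)
```

A small p-value indicates that the observed canonical correlation is unlikely to arise from unpaired data, so the corresponding pattern can be considered relevant.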
The first estimation strategy for finding the number of statistically significant canonical correlation coefficients was proposed in [@bartlett1941statistical]. The techniques have been further developed in [@fujikoshi1979estimation; @tu1991bootstrap; @gunderson1997estimating; @yamada2006permutation; @lee2007canonical; @sakurai2009asymptotic]. In summary, the principle behind CCA is to find two positions in the two data spaces respectively that have images on a unit ball such that the angle between them is minimised and consequently the canonical correlation is maximised. The linear transformations of the positions are given by the data matrices. The number of relevant positions can be determined by analysing the values of the canonical correlations or by applying statistical significance tests.

Finding the positions and the images in CCA {#solving}
-------------------------------------------

The position vectors ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$, whose images ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$ on the unit ball have the maximum cosine of the enclosing angle, can be obtained using techniques of functional analysis. The eigenvalue-based methods comprise solving a standard eigenvalue problem, as originally proposed by Hotelling in [@hotelling1936relations], or a generalised eigenvalue problem [@bach2002kernel; @hardoon2004canonical]. Alternatively, the positions and the images can be found using the singular value decomposition (SVD), as introduced in [@healy1957rotation]. These techniques can be considered standard ways of solving the CCA problem.

#### Solving CCA Through the Standard Eigenvalue Problem

In the technique of Hotelling, both the positions ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ and the images ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$ are obtained by solving a standard eigenvalue problem. The Lagrange multiplier technique [@hotelling1936relations; @hooper1959simultaneous] is employed to obtain the characteristic equation.
Let $X_a$ and $X_b$ denote the data matrices of sizes $n \times p$ and $n \times q$ respectively. The sample covariance matrix $C_{ab}$ between the variable column vectors in $X_a$ and $X_b$ is $C_{ab}=\frac{1}{n-1} X_a^T X_b$. The empirical variance matrices between the variables in $X_a$ and $X_b$ are given by $C_{aa}=\frac{1}{n-1}X_a^T X_a$ and $C_{bb}=\frac{1}{n-1} X_b^T X_b$ respectively. The joint covariance matrix is then $$\label{covmat} \begin{pmatrix} C_{aa} & C_{ab} \\ C_{ba} & C_{bb} \end{pmatrix}.$$ The first and greatest canonical correlation that corresponds to the smallest angle is between the first pair of images $\mathbf{z}_a=X_a {\mathbf{w}_a}$ and $\mathbf{z}_b=X_b {\mathbf{w}_b}$. Since the correlation between $\mathbf{z}_a$ and $\mathbf{z}_b$ does not change with the scaling of $\mathbf{z}_a$ and $\mathbf{z}_b$, we can constrain ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ to be such that $\mathbf{z}_a$ and $\mathbf{z}_b$ have unit variance. This is given by $$\begin{aligned} \mathbf{z}_a^T \mathbf{z}_a =& {\mathbf{w}_a}^T X_a^T X_a {\mathbf{w}_a}= {\mathbf{w}_a}^T C_{aa} {\mathbf{w}_a}= 1, \label{aa} \\ \mathbf{z}_b^T \mathbf{z}_b =& {\mathbf{w}_b}^T X_b^T X_b {\mathbf{w}_b}= {\mathbf{w}_b}^T C_{bb} {\mathbf{w}_b}= 1. \label{bb}\end{aligned}$$ Due to the normality assumption and comparability, the variables of $X_a$ and $X_b$ should be centered such that they have zero means. 
In this case, the covariance between $\mathbf{z}_a$ and $\mathbf{z}_b$ is given by $$\label{ab} \mathbf{z}_a^T \mathbf{z}_b = {\mathbf{w}_a}^T X_a^T X_b {\mathbf{w}_b}= {\mathbf{w}_a}^T C_{ab} {\mathbf{w}_b}.$$ Substituting (\[ab\]), (\[aa\]) and (\[bb\]) into the algebraic problem in Equation (\[alg\_cca\]), we obtain: $$\begin{gathered} \cos \theta = \max_{{\mathbf{z}_a}, {\mathbf{z}_b}\in \mathbb{R}^n} \langle {\mathbf{z}_a},{\mathbf{z}_b}\rangle = \max_{{\mathbf{w}_a}\in \mathbb{R}^p, {\mathbf{w}_b}\in \mathbb{R}^q} {\mathbf{w}_a}^T C_{ab} {\mathbf{w}_b}, \\ ||{\mathbf{z}_a}||_2 = \sqrt{{\mathbf{w}_a}^T C_{aa} {\mathbf{w}_a}} = 1 \quad ||{\mathbf{z}_b}||_2 = \sqrt{{\mathbf{w}_b}^T C_{bb} {\mathbf{w}_b}} = 1.\end{gathered}$$ In general, the constraints (\[aa\]) and (\[bb\]) are expressed in squared form, ${\mathbf{w}_a}^T C_{aa} {\mathbf{w}_a}= 1$ and ${\mathbf{w}_b}^T C_{bb} {\mathbf{w}_b}= 1$. The problem can be solved using the Lagrange multiplier technique. Let $$L = {\mathbf{w}_a}^T C_{ab} {\mathbf{w}_b}- \frac{\rho_1}{2} ({\mathbf{w}_a}^T C_{aa} {\mathbf{w}_a}-1) - \frac{\rho_2}{2} ({\mathbf{w}_b}^T C_{bb} {\mathbf{w}_b}- 1)$$ where $\rho_1$ and $\rho_2$ denote the Lagrange multipliers. Differentiating $L$ with respect to ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ gives $$\begin{aligned} \frac{\delta L}{\delta {\mathbf{w}_a}} = C_{ab} {\mathbf{w}_b}- \rho_1 C_{aa} {\mathbf{w}_a}= \mathbf{0} \label{dif1}\\ \frac{\delta L}{\delta {\mathbf{w}_b}} = C_{ba} {\mathbf{w}_a}- \rho_2 C_{bb} {\mathbf{w}_b}= \mathbf{0} \label{dif2}\end{aligned}$$ Multiplying (\[dif1\]) from the left by ${\mathbf{w}_a}^T$ and (\[dif2\]) from the left by ${\mathbf{w}_b}^T$ gives $$\begin{aligned} {\mathbf{w}_a}^T C_{ab} {\mathbf{w}_b}- \rho_1 {\mathbf{w}_a}^T C_{aa} {\mathbf{w}_a}= 0 \\ {\mathbf{w}_b}^T C_{ba} {\mathbf{w}_a}- \rho_2 {\mathbf{w}_b}^T C_{bb} {\mathbf{w}_b}= 0. 
\\ $$ Since ${\mathbf{w}_a}^T C_{aa} {\mathbf{w}_a}= 1$ and ${\mathbf{w}_b}^T C_{bb} {\mathbf{w}_b}= 1$, we obtain that $$\label{res} \rho_1 = \rho_2 = \rho.$$ Substituting (\[res\]) into Equation (\[dif1\]) we obtain $$\label{wa} {\mathbf{w}_a}= \frac{C_{aa}^{-1} C_{ab} {\mathbf{w}_b}}{\rho}.$$ Substituting (\[wa\]) into (\[dif2\]) we obtain $$\frac{1}{\rho} C_{ba} C_{aa}^{-1} C_{ab} {\mathbf{w}_b}- \rho C_{bb} {\mathbf{w}_b}= 0$$ which is equivalent to the generalised eigenvalue problem of the form $$C_{ba} C_{aa}^{-1} C_{ab} {\mathbf{w}_b}= \rho^2 C_{bb} {\mathbf{w}_b}.$$ If $C_{bb}$ is invertible, the problem reduces to a standard eigenvalue problem of the form $$C_{bb}^{-1} C_{ba} C_{aa}^{-1} C_{ab} {\mathbf{w}_b}= \rho^2 {\mathbf{w}_b}.$$ The eigenvalues of the matrix $C_{bb}^{-1} C_{ba} C_{aa}^{-1} C_{ab}$ are found by solving the characteristic equation $$|C_{bb}^{-1} C_{ba} C_{aa}^{-1} C_{ab} - \rho^2 I | = 0.$$ The square roots of the eigenvalues correspond to the canonical correlations. The technique of solving the standard eigenvalue problem is shown in Example \[cca\_standard\]. \[cca\_standard\] We generate two data matrices $X_a$ and $X_b$ of sizes $n \times p$ and $n \times q$, where $n=60$, $p=4$ and $q=3$, respectively as follows. The variables of $X_a$ are generated from a random univariate normal distribution, $\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3, \mathbf{a}_4 \sim N(0,1)$. We generate the following linear relations $$\begin{aligned} \mathbf{b}_1 &= \mathbf{a}_3 + \boldsymbol \xi_1 \\ \mathbf{b}_2 &= \mathbf{a}_1 + \boldsymbol \xi_2 \\ \mathbf{b}_3 &= -\mathbf{a}_4 + \boldsymbol \xi_3\end{aligned}$$ where $\boldsymbol \xi_1 \sim N(0,0.2), \boldsymbol \xi_2 \sim N(0,0.4),$ and $\boldsymbol \xi_3 \sim N(0,0.3)$ denote vectors of normal noise. The data is standardised such that every variable has zero mean and unit variance. 
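For concreteness, this standard-eigenvalue route can be sketched in NumPy on data generated as above (a sketch, not from the text; the realised numbers differ from the worked example because the random draws differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 60, 4, 3

# first view: four standard-normal variables
Xa = rng.standard_normal((n, p))
# second view: planted linear relations with additive noise
Xb = np.column_stack([
    Xa[:, 2] + 0.2 * rng.standard_normal(n),   # b1 ~ a3
    Xa[:, 0] + 0.4 * rng.standard_normal(n),   # b2 ~ a1
    -Xa[:, 3] + 0.3 * rng.standard_normal(n),  # b3 ~ -a4
])

# standardise: zero mean, unit variance
Xa = (Xa - Xa.mean(0)) / Xa.std(0, ddof=1)
Xb = (Xb - Xb.mean(0)) / Xb.std(0, ddof=1)

# covariance blocks
Caa = Xa.T @ Xa / (n - 1)
Cbb = Xb.T @ Xb / (n - 1)
Cab = Xa.T @ Xb / (n - 1)

# standard eigenvalue problem: Cbb^{-1} Cba Caa^{-1} Cab wb = rho^2 wb
M = np.linalg.solve(Cbb, Cab.T) @ np.linalg.solve(Caa, Cab)
eigvals, eigvecs = np.linalg.eig(M)
order = np.argsort(eigvals.real)[::-1]
rho = np.sqrt(eigvals.real[order])           # canonical correlations
Wb = eigvecs.real[:, order]                  # columns: wb^i
Wa = np.linalg.solve(Caa, Cab @ Wb) / rho    # wa^i = Caa^{-1} Cab wb^i / rho_i
```

The columns of `Wa` and `Wb` give the successive position pairs, and the sample correlation between `Xa @ Wa[:, 0]` and `Xb @ Wb[:, 0]` reproduces `rho[0]`.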
The joint covariance matrix $C$ in (\[covmat\]) of the generated data is given by $$C = \left( \begin{array}{cccc|ccc} 1.00 & 0.34 & -0.11 & 0.21 & -0.10 & 0.92 & -0.21 \\ 0.34 & 1.00 & -0.08 & 0.03 & -0.10 & 0.34 & 0.06 \\ -0.11 & -0.08 & 1.00 & -0.30 & 0.98 & -0.03 & 0.30 \\ 0.21 & 0.03 & -0.30 & 1.00 & -0.25 & 0.12 & -0.94 \\ \hline -0.10 & -0.10 & 0.98 & -0.25 & 1.00 & -0.03 & 0.25 \\ 0.92 & 0.34 & -0.03 & 0.12 & -0.03 & 1.00 & -0.13 \\ -0.21 & 0.06 & 0.30 & -0.94 & 0.25 & -0.13 & 1.00 \\ \end{array} \right) = \left( \begin{array}{c|c} C_{aa} & C_{ab} \\ \hline C_{ba} & C_{bb} \\ \end{array} \right).$$ Now we compute the eigenvalues of the characteristic equation $$|C_{bb}^{-1} C_{ba} C_{aa}^{-1} C_{ab} - \rho^2 I| = 0.$$ The square roots of the eigenvalues of $ C_{bb}^{-1} C_{ba} C_{aa}^{-1} C_{ab}$ are $\rho_1 = 0.99$, $\rho_2 = 0.94$, and $\rho_3 = 0.92$. The eigenvectors ${\mathbf{w}_b}$ satisfy the equation $$(C_{bb}^{-1} C_{ba} C_{aa}^{-1} C_{ab} - \rho^2 I) {\mathbf{w}_b}= 0.$$ Hence we obtain $${\mathbf{w}_b}^1 = \begin{pmatrix} -0.97 \\ -0.04 \\ -0.22 \end{pmatrix} {\mathbf{w}_b}^2 = \begin{pmatrix} -0.39 \\ -0.37 \\ 0.85 \end{pmatrix} {\mathbf{w}_b}^3 = \begin{pmatrix} 0.19 \\ -0.86 \\ -0.46 \end{pmatrix}$$ and ${\mathbf{w}_a}$ vectors satisfy $$\begin{aligned} {\mathbf{w}_a}^1 &= \frac{C_{aa}^{-1} C_{ab} {\mathbf{w}_b}^1}{\rho_1} = \begin{pmatrix} -0.04 \\ -0.00 \\ -0.99 \\ 0.18 \end{pmatrix} {\mathbf{w}_a}^2 &= \frac{C_{aa}^{-1} C_{ab} {\mathbf{w}_b}^2}{\rho_2} = \begin{pmatrix} -0.41 \\ 0.09 \\ -0.41 \\ -0.83 \end{pmatrix} {\mathbf{w}_a}^3 &= \frac{C_{aa}^{-1} C_{ab} {\mathbf{w}_b}^3}{\rho_3} = \begin{pmatrix} -0.84 \\ -0.10 \\ 0.14 \\ 0.52 \end{pmatrix}.\end{aligned}$$ The vectors ${\mathbf{w}_b}^1,{\mathbf{w}_b}^2$, and ${\mathbf{w}_b}^3$ and ${\mathbf{w}_a}^1,{\mathbf{w}_a}^2$, and ${\mathbf{w}_a}^3$ correspond to the pairs of positions $({\mathbf{w}_a}^1,{\mathbf{w}_b}^1),({\mathbf{w}_a}^2,{\mathbf{w}_b}^2)$ and 
$({\mathbf{w}_a}^3,{\mathbf{w}_b}^3)$ that have the images $({\mathbf{z}_a}^1,{\mathbf{z}_b}^1),({\mathbf{z}_a}^2,{\mathbf{z}_b}^2)$ and $({\mathbf{z}_a}^3,{\mathbf{z}_b}^3)$. In linear CCA, the canonical correlations equal the square roots of the eigenvalues; that is, $\langle {\mathbf{z}_a}^1,{\mathbf{z}_b}^1 \rangle = 0.99$, $\langle {\mathbf{z}_a}^2,{\mathbf{z}_b}^2 \rangle = 0.94$, and $\langle {\mathbf{z}_a}^3,{\mathbf{z}_b}^3 \rangle = 0.92$. $\qed$ #### Solving CCA Through the Generalised Eigenvalue Problem The positions ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ and their images ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$ can also be found by solving a generalised eigenvalue problem [@bach2002kernel; @hardoon2004canonical]. The equations in (\[dif1\]) and (\[dif2\]) can be represented as simultaneous equations $$\begin{aligned} C_{ab} {\mathbf{w}_b}= & \rho C_{aa} {\mathbf{w}_a}\\ C_{ba} {\mathbf{w}_a}= & \rho C_{bb} {\mathbf{w}_b}\end{aligned}$$ that are equivalent to $$\label{geneig} \begin{pmatrix} \mathbf{0} & C_{ab} \\ C_{ba} & \mathbf{0} \end{pmatrix} \begin{pmatrix} {\mathbf{w}_a}\\ {\mathbf{w}_b}\end{pmatrix}= \rho \begin{pmatrix} C_{aa} & \mathbf{0} \\ \mathbf{0} & C_{bb} \end{pmatrix} \begin{pmatrix} {\mathbf{w}_a}\\ {\mathbf{w}_b}\end{pmatrix}.$$ The equation (\[geneig\]) represents a generalised eigenvalue problem of the form $\beta A \mathbf{x} = \alpha B \mathbf{x}$ where the pair $(\beta,\alpha)=(1,\alpha)$ is an eigenvalue of the pair $(A,B)$ [@saad2011numerical; @golub2012matrix]. The pair of matrices $A \in \mathbb{R}^{(p+q)\times(p+q)}$ and $B \in \mathbb{R}^{(p+q)\times(p+q)}$ is also referred to as a matrix pencil. In particular, $A$ is symmetric and $B$ is symmetric positive-definite. The pair $(A,B)$ is then called a symmetric pair. As shown in [@watkins2004fundamentals], a symmetric pair has real eigenvalues and $(p+q)$ linearly independent eigenvectors.
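On the same kind of simulated data, this pencil formulation can be sketched with `scipy.linalg.eig` (an illustrative, self-contained sketch; the data are regenerated inside the block):

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n, p, q = 60, 4, 3
Xa = rng.standard_normal((n, p))
Xb = np.column_stack([
    Xa[:, 2] + 0.2 * rng.standard_normal(n),
    Xa[:, 0] + 0.4 * rng.standard_normal(n),
    -Xa[:, 3] + 0.3 * rng.standard_normal(n),
])
Xa = (Xa - Xa.mean(0)) / Xa.std(0, ddof=1)
Xb = (Xb - Xb.mean(0)) / Xb.std(0, ddof=1)
Caa = Xa.T @ Xa / (n - 1)
Cbb = Xb.T @ Xb / (n - 1)
Cab = Xa.T @ Xb / (n - 1)

# pencil (A, B) of the generalised problem  A x = rho B x
A = np.block([[np.zeros((p, p)), Cab], [Cab.T, np.zeros((q, q))]])
B = np.block([[Caa, np.zeros((p, q))], [np.zeros((q, p)), Cbb]])

vals, vecs = eig(A, B)
vals = np.sort(vals.real)[::-1]
# eigenvalues come in +/- pairs plus |p - q| zeros; the positive ones
# are the canonical correlations
canonical = vals[:min(p, q)]
```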
To express the generalised eigenvalue problem in the form $A \mathbf{x} = \rho B \mathbf{x}$, the generalised eigenvalue is given by $\rho=\frac{\alpha}{\beta}$. Since the generalised eigenvalues come in pairs $\{\rho_1,-\rho_1,\rho_2,-\rho_2,\dots,\rho_s,-\rho_s\}$, where $s=\min(p,q)$, together with $|p-q|$ zero eigenvalues, the positive generalised eigenvalues correspond to the canonical correlations. \[cca\_geneig\] Using the data in Example \[cca\_standard\], we apply the formulation of the generalised eigenvalue problem to obtain the positions ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$. The resulting generalised eigenvalues are $$\{0.99,0.94,0.92,0.00,-0.92,-0.94,-0.99\}.$$ The generalised eigenvectors that correspond to the positive generalised eigenvalues in descending order are $${\mathbf{w}_a}^1 = \begin{pmatrix} -0.04 \\ -0.00 \\ -1.00 \\ 0.18 \end{pmatrix} {\mathbf{w}_a}^2 = \begin{pmatrix} 0.48 \\ -0.11 \\ 0.48 \\ 0.98 \end{pmatrix} {\mathbf{w}_a}^3 = \begin{pmatrix} -0.97 \\ -0.11 \\ 0.16 \\ 0.60 \end{pmatrix}$$ $${\mathbf{w}_b}^1 = \begin{pmatrix} -0.98 \\ -0.04 \\ -0.23 \end{pmatrix} {\mathbf{w}_b}^2 = \begin{pmatrix} 0.46 \\ 0.43 \\ -1.00 \end{pmatrix} {\mathbf{w}_b}^3 = \begin{pmatrix} 0.22 \\ -1.00 \\ -0.54 \end{pmatrix}$$ The vectors ${\mathbf{w}_a}^1,{\mathbf{w}_a}^2$, and ${\mathbf{w}_a}^3$ and ${\mathbf{w}_b}^1,{\mathbf{w}_b}^2$, and ${\mathbf{w}_b}^3$ correspond to the pairs of positions $({\mathbf{w}_a}^1,{\mathbf{w}_b}^1),({\mathbf{w}_a}^2,{\mathbf{w}_b}^2)$ and $({\mathbf{w}_a}^3,{\mathbf{w}_b}^3).$ The canonical correlations are $\langle {\mathbf{z}_a}^1,{\mathbf{z}_b}^1 \rangle = 0.99$, $\langle {\mathbf{z}_a}^2,{\mathbf{z}_b}^2 \rangle = 0.94$, and $\langle {\mathbf{z}_a}^3,{\mathbf{z}_b}^3 \rangle = 0.92$. The entries of the position pairs differ to some extent from the solutions to the standard eigenvalue problem in Example \[cca\_standard\]. This is because eigenvectors are determined only up to scaling, and different numerical eigensolvers normalise them differently.
Additionally, the signs may also be opposite. This can be seen when comparing the second pairs of positions with Example \[cca\_standard\]. This results from the fact that eigenvectors are determined only up to sign. $\qed$ #### Solving CCA Using the SVD The technique of applying the SVD to solve the CCA problem was first introduced by [@healy1957rotation] and described by [@ewerbring1989canonical] as follows. First, the variance matrices $C_{aa}$ and $C_{bb}$ are transformed into identity matrices. Due to the symmetric positive definite property, the square root factors of the matrices can be found using a Cholesky or eigenvalue decomposition: $$C_{aa} = C_{aa}^{1/2} C_{aa}^{1/2} \quad \text{and} \quad C_{bb} = C_{bb}^{1/2} C_{bb}^{1/2}.$$ Applying the inverses of the square root factors symmetrically on the joint covariance matrix in (\[covmat\]) we obtain $$\begin{pmatrix} C_{aa}^{-1/2} & \mathbf{0} \\ \mathbf{0} & C_{bb}^{-1/2} \end{pmatrix} \begin{pmatrix} C_{aa} & C_{ab} \\ C_{ba} & C_{bb} \end{pmatrix} \begin{pmatrix} C_{aa}^{-1/2} & \mathbf{0} \\ \mathbf{0} & C_{bb}^{-1/2} \end{pmatrix} = \begin{pmatrix} I_{p} & C_{aa}^{-1/2} C_{ab} C_{bb}^{-1/2} \\ C_{bb}^{-1/2} C_{ba} C_{aa}^{-1/2} & I_{q} \end{pmatrix}.$$ The position vectors ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ can hence be obtained by solving the following SVD $$C_{aa}^{-1/2} C_{ab} C_{bb}^{-1/2} = U S V^T$$ where the columns of the matrices $U$ and $V$ correspond to the sets of orthonormal left and right singular vectors respectively. The singular values of matrix $S$ correspond to the canonical correlations. The positions ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ are obtained from $$W_a = C_{aa}^{-1/2} U \quad \text{and} \quad W_b = C_{bb}^{-1/2} V$$ where the columns of $W_a$ and $W_b$ contain the successive position vectors. The method is shown in Example \[cca\_svd\]. \[cca\_svd\] The method of solving CCA using the SVD is demonstrated using the data of Example \[cca\_standard\].
We compute the matrix $$C_{aa}^{-1/2} C_{ab} C_{bb}^{-1/2} = \begin{pmatrix} -0.02 & 0.90 & -0.06 \\ -0.07 & 0.20 & 0.11 \\ 0.98 & 0.04 & 0.04 \\ 0.01 & -0.02 & -0.93 \end{pmatrix}.$$ The (economy-size) SVD gives $$\begin{gathered} C_{aa}^{-1/2} C_{ab} C_{bb}^{-1/2} = \\ \underbrace{\begin{pmatrix} -0.03 & -0.47 & -0.86 \\ -0.03 & 0.03 & -0.26 \\ 0.95 & -0.28 & 0.11 \\ -0.30 & 0.84 & 0.44 \end{pmatrix}}_{U} \underbrace{\begin{pmatrix} 0.99 & 0.00 & 0.00\\ 0.00 & 0.94 & 0.00 \\ 0.00 & 0.00 & 0.92 \end{pmatrix}}_{S} \underbrace{\begin{pmatrix} 0.95 & 0.01 & 0.33 \\ -0.29 & -0.44 & 0.85 \\ 0.15 & -0.90 & -0.41 \end{pmatrix}}_{V^T}.\end{gathered}$$ The singular values of the matrix $S$ correspond to the canonical correlations. The positions ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ are given by $${\mathbf{w}_a}^1 = C_{aa}^{-1/2} \mathbf{u}^1 = \begin{pmatrix} 0.04 \\ 0.00 \\ 0.94 \\ -0.17 \end{pmatrix} {\mathbf{w}_a}^2 = C_{aa}^{-1/2} \mathbf{u}^2 = \begin{pmatrix} -0.43 \\ 0.10 \\ -0.43 \\ -0.87 \end{pmatrix} {\mathbf{w}_a}^3 = C_{aa}^{-1/2} \mathbf{u}^3 = \begin{pmatrix} -0.91 \\ -0.10 \\ 0.14 \\ 0.56 \end{pmatrix}$$ $${\mathbf{w}_b}^1 = C_{bb}^{-1/2} \mathbf{v}^1 = \begin{pmatrix} 0.93 \\ 0.04\\ 0.21 \end{pmatrix} {\mathbf{w}_b}^2 = C_{bb}^{-1/2} \mathbf{v}^2 = \begin{pmatrix} -0.40 \\ -0.38 \\ 0.89 \end{pmatrix} {\mathbf{w}_b}^3 = C_{bb}^{-1/2} \mathbf{v}^3 = \begin{pmatrix} 0.21 \\ -0.93 \\ -0.50 \end{pmatrix}$$ where $\mathbf{u}^i$ and $\mathbf{v}^i$ for $i=1,2,3$ correspond to the left and right singular vectors, that is, the columns of $U$ and $V$.
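The SVD route can be sketched as follows (a self-contained sketch on data generated as in the examples; the inverse square roots are computed via an eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 60, 4, 3
Xa = rng.standard_normal((n, p))
Xb = np.column_stack([
    Xa[:, 2] + 0.2 * rng.standard_normal(n),
    Xa[:, 0] + 0.4 * rng.standard_normal(n),
    -Xa[:, 3] + 0.3 * rng.standard_normal(n),
])
Xa = (Xa - Xa.mean(0)) / Xa.std(0, ddof=1)
Xb = (Xb - Xb.mean(0)) / Xb.std(0, ddof=1)
Caa = Xa.T @ Xa / (n - 1)
Cbb = Xb.T @ Xb / (n - 1)
Cab = Xa.T @ Xb / (n - 1)

def inv_sqrt(C):
    """Inverse square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** -0.5) @ V.T

Caa_is, Cbb_is = inv_sqrt(Caa), inv_sqrt(Cbb)
U, S, Vt = np.linalg.svd(Caa_is @ Cab @ Cbb_is)

Wa = Caa_is @ U[:, :q]   # columns: positions wa^i
Wb = Cbb_is @ Vt.T       # columns: positions wb^i
# the singular values S are the canonical correlations, in descending order
```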
The vectors ${\mathbf{w}_a}^1,{\mathbf{w}_a}^2$, and ${\mathbf{w}_a}^3$ and ${\mathbf{w}_b}^1,{\mathbf{w}_b}^2$, and ${\mathbf{w}_b}^3$ correspond to the pairs of positions $({\mathbf{w}_a}^1,{\mathbf{w}_b}^1),({\mathbf{w}_a}^2,{\mathbf{w}_b}^2)$ and $({\mathbf{w}_a}^3,{\mathbf{w}_b}^3).$ The canonical correlations are $\langle {\mathbf{z}_a}^1,{\mathbf{z}_b}^1 \rangle = 0.99$, $\langle {\mathbf{z}_a}^2,{\mathbf{z}_b}^2 \rangle = 0.94$, and $\langle {\mathbf{z}_a}^3,{\mathbf{z}_b}^3 \rangle = 0.92$. $\qed$ The main motivation for improving the eigenvalue-based technique was the computational complexity. The standard and generalised eigenvalue methods scale with the cube of the input matrix dimension, in other words, the time complexity is $\mathcal{O}(n^3)$, for a matrix of size $n \times n$. The input matrix $C_{aa}^{-1/2} C_{ab} C_{bb}^{-1/2}$ in the SVD-based technique is rectangular. This gives a time complexity of $\mathcal{O}(mn^2)$, for a matrix of size $m \times n$. Hence the SVD-based technique is computationally more tractable for very large datasets. To recapitulate, the images ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$ of the positions ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ that successively maximise the canonical correlation can be obtained by solving a standard [@hotelling1936relations] or a generalised eigenvalue problem [@bach2002kernel; @hardoon2004canonical] or by applying the SVD [@healy1957rotation; @ewerbring1989canonical]. The CCA problem can also be solved using alternative techniques. The only requirements are that the successive images on the unit ball are orthogonal and that the angle is minimised. Evaluating the Canonical Correlation Model ------------------------------------------ The pair of position vectors that have images on the unit ball with a minimum enclosing angle correspond to the canonical correlation model obtained from the training data. 
The entries of these position vectors convey the relations between the variables obtained from the sampling distribution. In general, a statistical model is validated in terms of statistical significance and generalisability. To assess the statistical significance of the relations obtained from the training data, Bartlett’s sequential test procedure [@bartlett1941statistical] can be applied. Although the technique was presented in 1941, it is still applied in recent CCA application studies such as [@marttinen2013genome; @kabir2014canonical; @song2016canonical]. The generalisability of the canonical correlation model determines whether the relations obtained from the training data can be considered to represent general patterns occurring in the sampling distribution. The methods of testing the statistical significance and generalisability of the extracted relations represent standard ways to evaluate the canonical correlation model. The entries of the position vectors ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ can be used as a means to analyse the linear relations between the variables. The linear relation corresponding to the value of the canonical correlation is found between the entries of greatest magnitude. The values of the entries of the position vectors ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ are visualised in Figure \[coefficients\]. The linear relation that corresponds to the canonical correlation of $\langle {\mathbf{z}_a}^1,{\mathbf{z}_b}^1 \rangle=0.99$ is found between the variables $\mathbf{a}_3$ and $\mathbf{b}_1$. Since the signs of both entries are negative, the relation is positive. The second pair of positions $({\mathbf{w}_a}^2,{\mathbf{w}_b}^2)$ conveys the negative relation between $\mathbf{a}_4$ and $\mathbf{b}_3$. The positive relation between $\mathbf{a}_1$ and $\mathbf{b}_2$ can be identified from the entries of the third pair of positions $({\mathbf{w}_a}^3,{\mathbf{w}_b}^3)$.
In [@meredith1964canonical], structure correlations were introduced as a means to analyse the relations between the variables. Structure correlations are the correlations of the original variables, $\mathbf{a}_i \in \mathbb{R}^n$ for $i=1,2,\dots,p$ and $\mathbf{b}_j \in \mathbb{R}^n$ for $j=1,2,\dots,q$, with the images, ${\mathbf{z}_a}\in \mathbb{R}^n$ or ${\mathbf{z}_b}\in \mathbb{R}^n$. In general, the structure correlations convey how the images ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$ are aligned in the space $\mathbb{R}^n$ in relation to the variable axes. In [@ter1990interpreting], the structure correlations were visualised on a biplot to facilitate the interpretation of the relations. To plot the variables on the biplot, the correlations of the original variables of both sets with two successive images, for example the images $({\mathbf{z}_a}^1,{\mathbf{z}_a}^2)$, of one of the sets are computed. The plot is interpreted by the cosine of the angles between the variable vectors which is given by $\cos(\mathbf{a},\mathbf{b}) = \langle \mathbf{a},\mathbf{b} \rangle / ||\mathbf{a}||||\mathbf{b}||.$ Hence a positive linear relation is shown by an acute angle while an obtuse angle depicts a negative linear relation. A right angle corresponds to a zero correlation. Three biplots of the data and results of Example \[cca\_standard\] are shown in Figure \[biplot\]. In each of the biplots, the same relations that were identified in Figure \[coefficients\] can be found by analysing the angles between the variable vectors. The extraction of the relations can be enhanced by changing the pairs of images with which the correlations are computed. The statistical significance tests of the canonical correlations evaluate whether the obtained pattern can be considered to occur non-randomly. The sequential test procedure of Bartlett [@bartlett1938further] determines the number of statistically significant canonical correlations in the data. 
The procedure to evaluate the statistical significance of the canonical correlations is described in [@fujikoshi1979estimation]. We test the hypothesis $$H_0 : \min(p,q) = k \text{ against } H_1: \min(p,q) > k$$ where $k = 0,1,\dots,\min(p,q)-1$. If the hypothesis $H_0: \min(p,q) = j$ is rejected for $j = 0,1,\dots,k-1$ but the hypothesis $H_0: \min(p,q) = k$ is accepted, the number of statistically significant canonical correlations can be estimated as $k$. For the test, the Bartlett-Lawley statistic $L_k$ is used, $$L_k = - \big(n-k-\frac{1}{2}(p+q+1) + \sum_{j=1}^{k} r_j^{-2} \big) \ln \big(\prod_{j=k+1}^{\min(p,q)} (1-r_j^2) \big)$$ where $r_j$ denotes the $j^{th}$ canonical correlation. The asymptotic null distribution of $L_k$ is chi-squared with $(p-k)(q-k)$ degrees of freedom. Hence we first test that no canonical relation exists between the two views. If we reject the hypothesis $H_0$ we continue to test that one canonical relation exists. If all the canonical correlations are statistically significant, even the hypothesis $H_0$ with $k = \min(p,q)-1$ is rejected. \[bartlett\] We demonstrate the sequential test procedure of Bartlett using the simulated setting of Examples \[cca\_standard\], \[cca\_geneig\] and \[cca\_svd\]. In the setting, $n=60$, $p=4$ and $q=3$. Hence $\min(p,q)=3$. First, we test that there are no canonical correlations $$H_0 : \min(p,q) = 0 \text{ against } H_1: \min(p,q) > 0$$ The Bartlett-Lawley statistic is $L_0 = 296.82$. Since $L_0 \sim \chi^2(12)$ the critical value at the significance level $\alpha = 0.01$ is $P(\chi^2 \geq 26.2) = 0.01$. Since $L_0 = 296.82 > 26.2$ the hypothesis $H_0$ is rejected. Next we test that there is one canonical correlation. $$H_0 : \min(p,q) = 1 \text{ against } H_1: \min(p,q) > 1$$ The Bartlett-Lawley statistic is $L_1 = 154.56$ and $L_1 \sim \chi^2(6)$. The critical value at the significance level $\alpha = 0.01$ is $P(\chi^2 \geq 16.8) = 0.01$. Since $L_1 = 154.56 > 16.8$ the hypothesis $H_0$ is rejected.
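Each step of the sequential procedure applies the same computation, so it is convenient to wrap it in a small routine; a sketch (the function name is illustrative, critical values come from `scipy.stats.chi2`):

```python
import numpy as np
from scipy.stats import chi2

def bartlett_lawley(r, n, p, q, alpha=0.01):
    """Estimate the number of statistically significant canonical
    correlations by the sequential Bartlett-Lawley test."""
    r = np.sort(np.asarray(r, dtype=float))[::-1]
    s = min(p, q)
    for k in range(s):
        # Bartlett-Lawley statistic for H0: only k nonzero correlations
        stat = -(n - k - (p + q + 1) / 2 + np.sum(r[:k] ** -2.0)) \
            * np.log(np.prod(1.0 - r[k:s] ** 2))
        crit = chi2.ppf(1 - alpha, (p - k) * (q - k))
        if stat <= crit:          # H0 with this k is accepted
            return k
    return s                      # all correlations significant
```

With the correlations of the running example, `bartlett_lawley([0.99, 0.94, 0.92], 60, 4, 3)` returns 3, in agreement with the example's conclusion (the statistic values differ slightly from the quoted ones, which were computed from unrounded correlations).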
We continue to test that there are two canonical correlations $$H_0 : \min(p,q) = 2 \text{ against } H_1: \min(p,q) > 2$$ The Bartlett-Lawley statistic is $L_2 = 70.95$ and $L_2 \sim \chi^2(2)$. The critical value at the significance level $\alpha = 0.01$ is $P(\chi^2 \geq 9.21) = 0.01$. Since $L_2 = 70.95 > 9.21$ the hypothesis $H_0$ is rejected. Hence the hypothesis $H_1: \min(p,q) > 2$ is accepted and all three canonical patterns are statistically significant. $\qed$ To determine whether the extracted relations can be considered generalisable, or in other words general patterns in the sampling distribution, the linear transformations of the position vectors ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ need to be performed using test data. Unlike training data, test data originates from the sampling distribution but was not used in the model computation. Let the matrices $X_{a}^{test} \in \mathbb{R}^{m \times p}$ and $X_{b}^{test} \in \mathbb{R}^{m \times q}$ denote the test data of $m$ observations. The linear transformations of the position vectors ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ are then $$X_a^{test} {\mathbf{w}_a}= {\mathbf{z}_a}^{test} \quad \text{and } \quad X_b^{test} {\mathbf{w}_b}= {\mathbf{z}_b}^{test}$$ where the images ${\mathbf{z}_a}^{test}$ and ${\mathbf{z}_b}^{test}$ are in the space $\mathbb{R}^m$. The cosine of the angle between the test images, $\cos({\mathbf{z}_a}^{test},{\mathbf{z}_b}^{test})= \langle {\mathbf{z}_a}^{test}, {\mathbf{z}_b}^{test} \rangle / \big( ||{\mathbf{z}_a}^{test}|| \, ||{\mathbf{z}_b}^{test}|| \big)$, indicates the generalisability. If the canonical correlations computed from test data also result in high correlation values we can deduce that the relations can generally be found from the particular sampling distribution. \[generalisability\] We evaluate the generalisability of the canonical correlation model obtained in Example \[cca\_standard\].
The test data matrices $X_a^{test}$ and $X_b^{test}$ of sizes $m \times p$ and $m \times q$ where $m=40, p = 4,$ and $q=3$ are from the same distributions as described in Example \[cca\_standard\]. The $40$ observations were not included in the computation of the model. The test canonical correlations corresponding to the positions $({\mathbf{w}_a}^1,{\mathbf{w}_b}^1),({\mathbf{w}_a}^2,{\mathbf{w}_b}^2)$ and $({\mathbf{w}_a}^3,{\mathbf{w}_b}^3)$ are $\langle {\mathbf{z}_a}^1,{\mathbf{z}_b}^1 \rangle = 0.98$, $\langle {\mathbf{z}_a}^2,{\mathbf{z}_b}^2 \rangle = 0.98$, $\langle {\mathbf{z}_a}^3,{\mathbf{z}_b}^3 \rangle = 0.98.$ The high values indicate that the extracted relations can be considered generalisable. $\qed$ The canonical correlation model can be evaluated by assessing the statistical significance and testing the generalisability of the relations. The statistical significance of the model can be determined by testing whether the extracted canonical correlations are non-zero not merely by chance. The generalisability of the relations can be assessed using new observations from the sampling distribution. These evaluation methods can generally be applied to test the validity of the extracted relations obtained using any variant of CCA. Extensions of Canonical Correlation Analysis ============================================ Regularisation Techniques in Underdetermined Systems {#reg} ---------------------------------------------------- CCA finds linear relations in the data when the number of observations exceeds the number of variables in each view. This condition is generally necessary for the non-singularity of the variance matrices when solving the CCA problem. In the case of the standard eigenvalue problem, the matrices $C_{aa}$ and $C_{bb}$ should be non-singular so that they can be inverted. In the case of the SVD method, singular $C_{aa}$ and $C_{bb}$ do not have invertible square root factors.
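The singularity issue is easy to verify numerically; a minimal sketch with more variables than observations:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 60, 70                      # fewer observations than variables
Xa = rng.standard_normal((n, p))
Xa = Xa - Xa.mean(0)               # centering removes one more degree of freedom

Caa = Xa.T @ Xa / (n - 1)
rank = np.linalg.matrix_rank(Caa)  # at most n - 1 < p, hence Caa is singular
```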
If the number of observations is less than the number of variables, the variables are necessarily linearly dependent and the corresponding variance matrix is singular. Hence a sufficient sample size reduces the collinearity of the variables and is required for the non-singularity of the variance matrices. The first proposal to solve the problem of insufficient sample size was presented in [@vinod1976canonical]. A more recent technique to regularise CCA has been proposed in [@cruz2014fast]. In the following, we present the original method of regularisation [@vinod1976canonical] due to its popularity in CCA applications [@gonzalez2009highlighting], [@yamamoto2008canonical], and [@soneson2010integrative]. In [@vinod1976canonical], the singularity problem is solved by regularisation. In general, the idea is to improve the invertibility of the variance matrices $C_{aa}$ and $C_{bb}$ by adding positive constants $c_1 > 0$ and $c_2 > 0$ to the diagonal, giving $C_{aa} + c_1 I$ and $C_{bb} + c_2 I.$ The constraints of CCA become $$\begin{aligned} {\mathbf{w}_a}^T \big(C_{aa} + c_1 I \big) {\mathbf{w}_a}=& 1 \\ {\mathbf{w}_b}^T \big(C_{bb} + c_2 I \big) {\mathbf{w}_b}=& 1 \end{aligned}$$ and hence the magnitudes of the position vectors ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ are smaller when regularisation, $c_1 > 0$ and $c_2 > 0$, is applied.
The regularised CCA optimisation problem is given by $$\begin{gathered} \cos \theta = \max_{{\mathbf{w}_a}\in \mathbb{R}^p, {\mathbf{w}_b}\in \mathbb{R}^q} {\mathbf{w}_a}^T C_{ab} {\mathbf{w}_b}, \\ {\mathbf{w}_a}^T \big(C_{aa} + c_1 I \big) {\mathbf{w}_a}= 1 \quad {\mathbf{w}_b}^T \big(C_{bb} + c_2 I \big) {\mathbf{w}_b}= 1.\end{gathered}$$ The positions ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ can be found by solving the standard eigenvalue problem $$\big(C_{bb} + c_2 I \big)^{-1} C_{ba} \big(C_{aa} + c_1 I \big)^{-1} C_{ab} {\mathbf{w}_b}= \rho^2 {\mathbf{w}_b}$$ or the generalised eigenvalue problem $$\label{geneigreg} \begin{pmatrix} \mathbf{0} & C_{ab} \\ C_{ba} & \mathbf{0} \end{pmatrix} \begin{pmatrix} {\mathbf{w}_a}\\ {\mathbf{w}_b}\end{pmatrix}= \rho \begin{pmatrix} C_{aa} + c_1 I & \mathbf{0} \\ \mathbf{0} & C_{bb} + c_2 I \end{pmatrix} \begin{pmatrix} {\mathbf{w}_a}\\ {\mathbf{w}_b}\end{pmatrix}.$$ As in the case of linear CCA, the canonical correlations correspond to the inner products between the consecutive image pairs $\langle {\mathbf{z}_a}^{i},{\mathbf{z}_b}^{i} \rangle$ where $i=1,2,\dots,\min(p,q)$. The regularisation proposed by [@vinod1976canonical] makes the CCA problem solvable but introduces new parameters $c_1 > 0$ and $c_2 > 0$ that have to be chosen. The first proposal of applying a leave-one-out cross-validation procedure to automatically select the regularisation parameters was presented in [@leurgans1993canonical]. Cross-validation is a well-established nonparametric model selection procedure to evaluate the validity of statistical predictions. One of its earliest applications was presented in [@larson1931shrinkage]. A cross-validation procedure entails partitioning the observations into subsamples, estimating a statistic on one subsample, and then validating it on the held-out subsample.
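Putting the pieces together, a cross-validated grid search for the regularisation parameter can be sketched as follows (a simplified illustration that tunes only $c_1$ and scores the first canonical pair on held-out folds; all names and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def rcca_first_pair(Xa, Xb, c1, c2=1e-6):
    """First position pair of regularised CCA (illustrative sketch)."""
    n = Xa.shape[0]
    Caa = Xa.T @ Xa / (n - 1) + c1 * np.eye(Xa.shape[1])
    Cbb = Xb.T @ Xb / (n - 1) + c2 * np.eye(Xb.shape[1])
    Cab = Xa.T @ Xb / (n - 1)
    M = np.linalg.solve(Cbb, Cab.T) @ np.linalg.solve(Caa, Cab)
    vals, vecs = np.linalg.eig(M)
    wb = vecs[:, np.argmax(vals.real)].real
    wa = np.linalg.solve(Caa, Cab @ wb)
    return wa, wb

# underdetermined toy data: n < p, three planted linear relations
n, p, q = 60, 70, 10
Xa = rng.standard_normal((n, p))
Xb = rng.standard_normal((n, q))
Xb[:, 0] = Xa[:, 2] + 0.1 * rng.standard_normal(n)
Xb[:, 1] = Xa[:, 0] + 0.1 * rng.standard_normal(n)
Xb[:, 2] = -Xa[:, 3] + 0.1 * rng.standard_normal(n)
Xa = (Xa - Xa.mean(0)) / Xa.std(0, ddof=1)
Xb = (Xb - Xb.mean(0)) / Xb.std(0, ddof=1)

def cv_score(c1, k=5):
    """Mean held-out correlation of the first pair over k folds."""
    folds = np.array_split(rng.permutation(n), k)
    scores = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        wa, wb = rcca_first_pair(Xa[train_idx], Xb[train_idx], c1)
        r = np.corrcoef(Xa[test_idx] @ wa, Xb[test_idx] @ wb)[0, 1]
        scores.append(abs(r))
    return float(np.mean(scores))

grid = [0.01, 0.1, 1.0]
best_c1 = max(grid, key=cv_score)
```

In a full implementation the search would run over a two-dimensional grid for $(c_1, c_2)$ and repeat the cross-validation several times, as discussed next.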
The method of cross-validation is discussed in detail for example in [@stone1974cross], [@efron1979computers], [@browne2000cross], and more recently in [@arlot2010survey]. The cross-validation approach specifically developed for CCA has been further extended in [@waaijenborg2008quantifying; @yamamoto2008canonical; @gonzalez2009highlighting; @soneson2010integrative]. In cross-validation, the size of the hold-out subsample varies depending on the size of the dataset. A leave-one-out cross-validation procedure is an option when the sample size is small and partitioning of the data into several folds, as is done in $k$-fold cross-validation, is not feasible. $5$-fold cross-validation saves computation time in relation to leave-one-out cross-validation if the sample size is large enough to partition the observations into five folds where each fold is used as a test set in turn. In general, as demonstrated for example in [@krstajic2014cross], a $k$-fold cross-validation procedure should be repeated when an optimal set of parameters is searched for. Repetitions decrease the variance of the average values measured across the test folds. Algorithm \[alg:one\] outlines an approach to determine the optimal regularisation parameters in CCA:

1. Pre-define ranges of values for $c_1$ and $c_2$.
2. For each combination of $c_1$ and $c_2$, repeat a $k$-fold cross-validation $R$ times, measuring $\cos({\mathbf{z}_a},{\mathbf{z}_b})$ on the test folds.
3. Compute the mean of the $R$ values of $\cos({\mathbf{z}_a},{\mathbf{z}_b})$ obtained at $c_1$ and $c_2$.
4. Return the combination of $c_1$ and $c_2$ that maximises the mean $\cos({\mathbf{z}_a},{\mathbf{z}_b})$.

\[reg\_cca\] To demonstrate the procedure of regularisation in underdetermined settings, we use the same simulated data as in the previous examples but we include additional normally distributed variables. The data matrices $X_a$ and $X_b$ of sizes $n \times p$ and $n \times q$, where $n=60$, $p=70$ and $q=10$ respectively, are generated as follows. The variables of $X_a$ are generated from a random univariate normal distribution, $\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_{70} \sim N(0,1)$.
We generate the following linear relations $$\begin{aligned} \mathbf{b}_1 &= \mathbf{a}_3 + \boldsymbol \xi_1 \\ \mathbf{b}_2 &= \mathbf{a}_1 + \boldsymbol \xi_2 \\ \mathbf{b}_3 &= -\mathbf{a}_4 + \boldsymbol \xi_3\end{aligned}$$ where $\boldsymbol \xi_1 \sim N(0,0.01), \boldsymbol \xi_2 \sim N(0,0.03), \boldsymbol \xi_3 \sim N(0,0.02)$ denote vectors of normal noise. The remaining variables of $X_b$ are generated from a random univariate normal distribution, $\mathbf{b}_4, \mathbf{b}_5, \dots, \mathbf{b}_{10} \sim N(0,1)$. The data is standardised such that every variable has zero mean and unit variance. To construct the matrix $C_{bb}^{-1}C_{ba}C_{aa}^{-1}C_{ab}$, the variance matrices $C_{aa}$ and $C_{bb}$ need to be non-singular. Since $C_{aa}$ is obtained from a data matrix with fewer rows than columns, it is singular. We therefore add a positive constant to the diagonal $C_{aa} + c_1 I$ to make it invertible. $C_{bb}$ is invertible since the data matrix $X_b$ has more rows than columns. The optimal value for the regularisation parameter $c_1$ can be determined for instance through repeated $k$-fold cross-validation. As shown in Figure \[test\_cancor\], the optimal value $c_1 = 0.09$ was obtained through 5-fold cross-validation repeated 50 times, using the procedure presented in Algorithm \[alg:one\]. The positions ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ and their respective images ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$ on a unit ball are found by solving the characteristic equation $$|C_{bb}^{-1} C_{ba} \big(C_{aa} + c_1 I \big)^{-1} C_{ab} - \rho^2 I| = 0.$$ The number of relations equals $\min(p,q) = 10$. The square roots of the first three eigenvalues are $\rho_1 = 0.98$, $\rho_2 = 0.97$ and $\rho_3 = 0.96$.
The respective three eigenvectors that correspond to the positions ${\mathbf{w}_b}$ satisfy the equation $$(C_{bb}^{-1} C_{ba} \big(C_{aa} + c_1 I \big)^{-1} C_{ab} - \rho^2 I) {\mathbf{w}_b}= 0.$$ The positions ${\mathbf{w}_a}$ are found using the formula $${\mathbf{w}_a}^i = \frac{\big(C_{aa} + c_1 I \big)^{-1} C_{ab} {\mathbf{w}_b}^i}{\rho_i}$$ where $i=1,2,3$ corresponds to the sorted eigenvalues and eigenvectors. Rounded to three decimal places, the first three canonical correlations are $\langle {\mathbf{z}_a}^1,{\mathbf{z}_b}^1 \rangle = 0.999$, $\langle {\mathbf{z}_a}^2,{\mathbf{z}_b}^2 \rangle = 0.998$, $\langle {\mathbf{z}_a}^3,{\mathbf{z}_b}^3 \rangle = 0.996.$ The extracted linear relations are visualised in Figure \[regcca\]. $\qed$ When either or both of the data views consist of more variables than observations, regularisation can be applied to make the variance matrices non-singular. This involves finding optimal non-negative scalar parameters that, when added to the diagonal entries, improve the invertibility of the variance matrices. Once the variance matrices are invertible, the regularised CCA problem can be solved using the standard techniques. Bayesian Approaches for Robustness {#bayes} ---------------------------------- Probabilistic approaches have been proposed to improve the robustness of CCA when the sample size is small and to allow more flexible distributional assumptions. A robust method generates a valid model regardless of outlying observations. In the following, a brief introduction to Bayesian CCA is provided. A detailed review on Bayesian CCA and its recent extensions can be found in [@klami2013bayesian]. An extension of CCA to probabilistic models was first proposed in [@bach2005probabilistic].
The probabilistic model contains the latent variables $\mathbf{y}^k \in \mathbb{R}^o$, where $o=\min(p,q)$, that generate the observations $\mathbf{x}_a^k \in \mathbb{R}^p$ and $\mathbf{x}_b^k \in \mathbb{R}^q$ for $k=1,2,\dots,n$. The latent variable model is defined by $$\begin{gathered} \mathbf{y} \sim \mathcal{N}(0,I_d), \quad o \geq d \geq 1 \\ \mathbf{x}_a | \mathbf{y} \sim \mathcal{N}(S_a\mathbf{y}+\boldsymbol{\mu}_a, \Psi_a), \quad S_a \in \mathbb{R}^{p \times d}, \Psi_a \succeq 0 \\ \mathbf{x}_b | \mathbf{y} \sim \mathcal{N}(S_b\mathbf{y}+\boldsymbol{\mu}_b, \Psi_b), \quad S_b \in \mathbb{R}^{q \times d}, \Psi_b \succeq 0 \\ \end{gathered}$$ where $\mathcal{N}(\boldsymbol{\mu},\Sigma)$ denotes the multivariate normal distribution with mean $\boldsymbol{\mu}$ and covariance $\Sigma$. $S_a$ and $S_b$ correspond to the transformations of the latent variables, and $\Psi_a$ and $\Psi_b$ denote the noise covariance matrices. The maximum likelihood estimates of the parameters $S_a, S_b, \Psi_a, \Psi_b, \boldsymbol{\mu}_a$ and $\boldsymbol{\mu}_b$ are given by $$\begin{gathered} \hat{S_a} = C_{aa} W_{ad} M_a \quad \hat{S_b} = C_{bb} W_{bd} M_b \\ \hat{\Psi_a} = C_{aa}-\hat{S_a}\hat{S_a}^T \quad \hat{\Psi_b} = C_{bb}-\hat{S_b}\hat{S_b}^T \\ \hat{\boldsymbol{\mu}}_a = \frac{1}{n} \sum_{k=1}^n \mathbf{x}_a^k \quad \hat{\boldsymbol{\mu}}_b = \frac{1}{n} \sum_{k=1}^n \mathbf{x}_b^k \\ \end{gathered}$$ where $M_a,M_b \in \mathbb{R}^{d \times d}$ are arbitrary matrices such that $M_a M_b^T=P_d$ and the spectral norms of $M_a$ and $M_b$ are smaller than one. $P_d$ is the diagonal matrix of the first $d$ canonical correlations. The $d$ columns of $W_{ad}$ and $W_{bd}$ correspond to the positions $\mathbf{w}_a^i$ and $\mathbf{w}_b^i$ for $i=1,2,\dots,d$ obtained using any of the standard techniques described in section \[basic\].
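These formulas are easy to verify numerically. The sketch below (our own illustration, not code from the cited work) computes the CCA solution by whitening, takes the admissible choice $M_a = M_b = P_d^{1/2}$, and checks that with $d = \min(p,q)$ the model-implied cross-covariance $\hat{S}_a \hat{S}_b^T$ reproduces $C_{ab}$:

```python
import numpy as np

def inv_sqrt(C):
    """Symmetric inverse square root of a positive definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** -0.5) @ V.T

rng = np.random.default_rng(0)
n, p, q = 500, 4, 3
Xa = rng.standard_normal((n, p))
Xb = Xa[:, :q] + 0.5 * rng.standard_normal((n, q))
Xa -= Xa.mean(0)
Xb -= Xb.mean(0)
Caa, Cbb, Cab = Xa.T @ Xa / n, Xb.T @ Xb / n, Xa.T @ Xb / n

# CCA via whitening: C_aa^{-1/2} C_ab C_bb^{-1/2} = U P V^T
Ra, Rb = inv_sqrt(Caa), inv_sqrt(Cbb)
U, Pdiag, Vt = np.linalg.svd(Ra @ Cab @ Rb)
d = min(p, q)
Wa, Wb = Ra @ U[:, :d], Rb @ Vt.T[:, :d]   # positions w_a^i, w_b^i
P = np.diag(Pdiag[:d])                     # canonical correlations

# admissible choice M_a = M_b = P_d^{1/2}, so that M_a M_b^T = P_d and
# the spectral norms stay below one when the correlations are below one
Ma = Mb = np.sqrt(P)
Sa_hat, Sb_hat = Caa @ Wa @ Ma, Cbb @ Wb @ Mb
Psi_a = Caa - Sa_hat @ Sa_hat.T            # ML noise covariance

# with d = min(p, q) the model reproduces the cross-covariance exactly
assert np.allclose(Sa_hat @ Sb_hat.T, Cab)
```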
The posterior expectations of $\mathbf{y}$ given $\mathbf{x}_a$ and $\mathbf{x}_b$ are $E(\mathbf{y}|\mathbf{x}_a)=M_a^T W_{ad}^T (\mathbf{x}_a-\hat{\boldsymbol{\mu}}_a)$ and $E(\mathbf{y}|\mathbf{x}_b)=M_b^T W_{bd}^T (\mathbf{x}_b-\hat{\boldsymbol{\mu}}_b)$. As stated in [@bach2005probabilistic], regardless of what $M_a$ and $M_b$ are, $E(\mathbf{y}|\mathbf{x}_a)$ and $E(\mathbf{y}|\mathbf{x}_b)$ lie in the $d$-dimensional subspaces of $\mathbb{R}^p$ and $\mathbb{R}^q$ which are identical to those obtained by linear CCA. The generative model of [@bach2005probabilistic] was further developed in [@archambeau2006robust] by replacing the normal noise with the multivariate Student’s t distribution. This improves the robustness against outlying observations, which are then better modelled by the noise term [@klami2013bayesian]. A Bayesian extension of CCA was proposed by [@klami2007local] and [@wang2007variational]. To perform Bayesian analysis, the probabilistic model has to be supplemented with prior distributions of the model parameters. In [@klami2007local] and [@wang2007variational], the prior distribution of the covariance matrices $\Psi_a$ and $\Psi_b$ was chosen to be the inverse-Wishart distribution. The automatic relevance determination [@neal2012bayesian] prior was selected for the linear transformations $S_a$ and $S_b$. The inference on the posterior distribution was made by applying a variational mean-field algorithm [@wang2007variational] and Gibbs sampling [@klami2007local]. As in the case of the linear CCA, the variance matrices obtained from high-dimensional data make the inference of the probabilistic and Bayesian CCA models difficult [@klami2013bayesian]. This is because the variance matrices need to be inverted in the inference algorithms. To perform Bayesian CCA on high-dimensional data, dimensionality reduction techniques should be applied as a preprocessing step, as has been done for example in [@huopaniemi2010multivariate].
An advantage of Bayesian CCA, in relation to linear CCA, is the application of prior distributions, which make it possible to take possible underlying structure in the data into account. Examples of studies where sparse models were obtained by means of the prior distribution include [@archambeau2009sparse] and [@rai2009multi]. In addition to modelling the structure of the data, in [@klami2012bayesian] the Bayesian CCA was extended such that the noise can be modelled by any exponential family distribution, not only the normal. In summary, probabilistic and Bayesian CCA provide alternative ways to interpret CCA by means of latent variables. Bayesian CCA may be more feasible in settings where knowledge regarding the data can be incorporated through the prior distributions. Additionally, noise can be modelled by exponential family distributions other than the normal distribution. Uncovering Linear and Non-Linear Relations ------------------------------------------ CCA [@hotelling1936relations] finds linear relations between variables belonging to two views that are both overdetermined. The first proposition to extend CCA to uncover non-linear relations using an optimal scaling method was presented in [@burg1983non]. At the turn of the 21$^{st}$ century, artificial neural networks were incorporated in the CCA framework for finding non-linear relations [@lai1999neural; @fyfe2000canonical; @hsieh2000nonlinear]. Deep CCA [@andrew2013deep] is an example of a recent non-linear CCA variant employing artificial neural networks. Shortly after the introduction of the neural networks, propositions of applying kernel methods in CCA were presented in [@lai2000kernel; @akaho2001kernel; @van2001kernel; @melzer2001nonlinear; @bach2002kernel].
Since then, the kernelised version of CCA has received considerable attention in terms of theoretical foundations [@hardoon2004canonical; @fukumizu2007statistical; @alam2008sensitivity; @blaschko2008semi; @hardoon2009convergence; @cai2013distance] and applications [@melzer2003appearance; @wang2005nonlinear; @hardoon2007unsupervised; @larson2014kernel]. In the following, we present how kernel CCA can be applied to uncover non-linear relations between the variables. We then provide a brief overview on deep CCA. To extract linear relations, CCA is performed in the data spaces of $X_a \in \mathbb{R}^{n \times p}$ and $X_b \in \mathbb{R}^{n \times q}$ where the $n$ rows correspond to the observations and the $p$ and $q$ columns correspond to the variables. The relations between the variables are found by analysing the positions ${\mathbf{w}_a}\in \mathbb{R}^p$ and ${\mathbf{w}_b}\in \mathbb{R}^q$ whose images ${\mathbf{z}_a}= X_a {\mathbf{w}_a}$ and ${\mathbf{z}_b}= X_b {\mathbf{w}_b}$ on a unit ball in $\mathbb{R}^n$ have a minimum enclosing angle. The extracted relations are linear since the positions ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ and their images ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$ are obtained in the Euclidean space. To extract non-linear relations, the positions ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ should be found in a space where the distances, or measures of similarity, between objects are non-linear. This can be achieved using kernel methods, that is, by transforming the original observations $\mathbf{x}_a^i \in \mathbb{R}^p $ and $\mathbf{x}_b^i \in \mathbb{R}^q$, where $i=1,2,\dots,n$, to Hilbert spaces $\mathcal{H}_a$ and $\mathcal{H}_b$ through feature maps $\phi_a: \mathbb{R}^p \mapsto \mathcal{H}_a$ and $\phi_b: \mathbb{R}^q \mapsto \mathcal{H}_b$.
The similarity of the objects is captured by symmetric positive semi-definite kernels corresponding to the inner products in the Hilbert spaces. The feature maps are typically non-linear and result in high-dimensional intrinsic spaces $\boldsymbol{\phi}_a(\mathbf{x}_a^i) \in \mathcal{H}_a$ and $\boldsymbol{\phi}_b(\mathbf{x}_b^i) \in \mathcal{H}_b$ for $i=1,2,\dots,n$. Through kernels, CCA can be used to extract non-linear correlations, relying on the fact that the CCA solution can always be found within the span of the data [@bach2002kernel; @scholkopf1998nonlinear]. The basic principles behind kernel CCA are similar to those of CCA. First, the observations are transformed to the Hilbert spaces $\mathcal{H}_a$ and $\mathcal{H}_b$ using symmetric positive semi-definite kernels $$K_a(\mathbf{x}^i_a,\mathbf{x}^j_a) = \langle \boldsymbol{\phi}_a(\mathbf{x}^i_a),\boldsymbol{\phi}_a(\mathbf{x}^j_a) \rangle_{\mathcal{H}_a} \text{ and } K_b(\mathbf{x}^i_b,\mathbf{x}^j_b) = \langle \boldsymbol{\phi}_b(\mathbf{x}^i_b),\boldsymbol{\phi}_b(\mathbf{x}^j_b)\rangle_{\mathcal{H}_b}$$ where $i,j=1,2,\dots,n$. As derived in [@bach2002kernel], the original data matrices $X_a \in \mathbb{R}^{n \times p}$ and $X_b \in \mathbb{R}^{n \times q}$ can be substituted by the Gram matrices $K_a \in \mathbb{R}^{n \times n}$ and $K_b \in \mathbb{R}^{n \times n}$. Let ${\boldsymbol\alpha}$ and ${\boldsymbol\beta}$ denote the positions in the kernel space $\mathbb{R}^n$ that have the images ${\mathbf{z}_a}=K_a {\boldsymbol\alpha}$ and ${\mathbf{z}_b}=K_b {\boldsymbol\beta}$ on the unit ball in $\mathbb{R}^n$ with a minimum enclosing angle between them.
The kernel CCA problem is hence $$\begin{gathered} \cos({\mathbf{z}_a},{\mathbf{z}_b}) = \max_{{\mathbf{z}_a}, {\mathbf{z}_b}\in \mathbb{R}^n} \langle {\mathbf{z}_a},{\mathbf{z}_b}\rangle = {\boldsymbol\alpha}^T K_a^T K_b {\boldsymbol\beta}, \\ ||{\mathbf{z}_a}||_2=\sqrt{{\boldsymbol\alpha}^T K_a^2 {\boldsymbol\alpha}} = 1 \quad ||{\mathbf{z}_b}||_2=\sqrt{{\boldsymbol\beta}^T K_b^2 {\boldsymbol\beta}} = 1\end{gathered}$$ As in CCA, the optimisation problem can be solved using the Lagrange multiplier technique. $$L = {\boldsymbol\alpha}^T K_a^T K_b {\boldsymbol\beta}- \frac{\rho_1}{2} ({\boldsymbol\alpha}^T K_a^2 {\boldsymbol\alpha}-1) - \frac{\rho_2}{2} ({\boldsymbol\beta}^T K_b^2 {\boldsymbol\beta}- 1)$$ where $\rho_1$ and $\rho_2$ denote the Lagrange multipliers. Differentiating $L$ with respect to ${\boldsymbol\alpha}$ and ${\boldsymbol\beta}$ gives $$\begin{aligned} \frac{\delta L}{\delta {\boldsymbol\alpha}} = K_a K_b {\boldsymbol\beta}- \rho_1 K_a^2 {\boldsymbol\alpha}= \mathbf{0} \label{kdif1}\\ \frac{\delta L}{\delta {\boldsymbol\beta}} = K_b K_a {\boldsymbol\alpha}- \rho_2 K_b^2 {\boldsymbol\beta}= \mathbf{0} \label{kdif2}\end{aligned}$$ Multiplying (\[kdif1\]) from the left by ${\boldsymbol\alpha}^T$ and (\[kdif2\]) from the left by ${\boldsymbol\beta}^T$ gives $$\begin{aligned} {\boldsymbol\alpha}^T K_a K_b {\boldsymbol\beta}- \rho_1 {\boldsymbol\alpha}^T K_a^2 {\boldsymbol\alpha}= 0 \label{ks1}\\ {\boldsymbol\beta}^T K_b K_a {\boldsymbol\alpha}- \rho_2 {\boldsymbol\beta}^T K_b^2 {\boldsymbol\beta}= 0.
\label{ks2}\end{aligned}$$ Since ${\boldsymbol\alpha}^T K_a^2 {\boldsymbol\alpha}= 1$ and ${\boldsymbol\beta}^T K_b^2 {\boldsymbol\beta}= 1$, we obtain that $$\label{kres} \rho_1 = \rho_2 = \rho.$$ Substituting (\[kres\]) into Equation (\[kdif1\]) we obtain $$\label{kwa} {\boldsymbol\alpha}= \frac{K_a^{-1} K_a^{-1} K_a K_b {\boldsymbol\beta}}{\rho}=\frac{K_a^{-1} K_b {\boldsymbol\beta}}{\rho}.$$ Substituting (\[kwa\]) into (\[kdif2\]) we obtain $$\frac{1}{\rho} K_b K_a K_a^{-1} K_b {\boldsymbol\beta}- \rho K_b^2 {\boldsymbol\beta}= 0$$ which is equivalent to the generalised eigenvalue problem of the form $$K_b^2 {\boldsymbol\beta}= \rho^2 K_b^2 {\boldsymbol\beta}.$$ If $K_b^2$ is invertible, the problem reduces to a standard eigenvalue problem of the form $$I {\boldsymbol\beta}= \rho^2 {\boldsymbol\beta}.$$ Clearly, in the kernel space, if the Gram matrices are invertible the resulting canonical correlations are all equal to one. Regularisation is therefore needed to solve the kernel CCA problem. Kernel CCA can be regularised in a similar manner as presented in Section \[reg\] [@bach2002kernel; @hardoon2004canonical]. We constrain the norms of the position vectors ${\boldsymbol\alpha}$ and ${\boldsymbol\beta}$ by adding constants $c_1$ and $c_2$ to the diagonals of the Gram matrices $K_a$ and $K_b$ $$\begin{aligned} {\boldsymbol\alpha}^T \big(K_a + c_1 I \big)^2 {\boldsymbol\alpha}=& 1 \label{kvara} \\ {\boldsymbol\beta}^T \big(K_b + c_2 I \big)^2 {\boldsymbol\beta}=& 1. \label{kvarb}\end{aligned}$$ The solution can then be found by solving the standard eigenvalue problem $$\big(K_b + c_2 I \big)^{-2} K_b K_a \big(K_a + c_1 I \big)^{-2} K_a K_b {\boldsymbol\beta}= \rho^2 {\boldsymbol\beta}.$$ As in the case of CCA, kernel CCA can also be solved through the generalised eigenvalue problem [@bach2002kernel].
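Before turning to that formulation, the regularised standard eigenvalue route just derived can be sketched numerically. This is a toy illustration with our own function names (Gaussian kernels, centring, and the eigenproblem in ${\boldsymbol\beta}$), not a library implementation:

```python
import numpy as np

def gaussian_gram(X, sigma):
    """Gram matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * sigma ** 2))

def centre(K):
    """Centre a Gram matrix in feature space."""
    n = len(K)
    J = np.eye(n) - np.ones((n, n)) / n
    return J @ K @ J

def kernel_cca_first_pair(Ka, Kb, c1, c2):
    """First canonical pair of regularised kernel CCA via the
    eigenproblem (K_b+c2 I)^-2 K_b K_a (K_a+c1 I)^-2 K_a K_b b = r^2 b."""
    n = len(Ka)
    A = np.linalg.inv(Ka + c1 * np.eye(n))
    B = np.linalg.inv(Kb + c2 * np.eye(n))
    M = B @ B @ Kb @ Ka @ A @ A @ Ka @ Kb
    vals, vecs = np.linalg.eig(M)
    top = np.argmax(np.real(vals))
    rho = np.sqrt(max(np.real(vals[top]), 1e-12))
    beta = np.real(vecs[:, top])
    alpha = A @ A @ Ka @ Kb @ beta / rho     # recover alpha from beta
    za, zb = Ka @ alpha, Kb @ beta
    return za, zb, abs(za @ zb) / (np.linalg.norm(za) * np.linalg.norm(zb))

rng = np.random.default_rng(0)
a = rng.standard_normal((80, 1))
b = np.exp(a) + 0.1 * rng.standard_normal((80, 1))   # non-linear relation
Ka = centre(gaussian_gram(a, 1.0))
Kb = centre(gaussian_gram(b, 1.0))
za, zb, cos_ab = kernel_cca_first_pair(Ka, Kb, 0.1, 0.1)
```

The strongly related views produce a first canonical cosine close to one even though the relation is non-linear in the original space.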
Since the data matrices $X_a$ and $X_b$ can be substituted by the corresponding Gram matrices $K_a$ and $K_b$, the formulation becomes $$\label{kgeneig} \begin{pmatrix} \mathbf{0} & K_a K_b \\ K_b K_a & \mathbf{0} \end{pmatrix} \begin{pmatrix} {\boldsymbol\alpha}\\ {\boldsymbol\beta}\end{pmatrix}= \rho \begin{pmatrix} \big(K_a + c_1 I \big)^2 & \mathbf{0} \\ \mathbf{0} & \big(K_b + c_2 I \big)^2 \end{pmatrix} \begin{pmatrix} {\boldsymbol\alpha}\\ {\boldsymbol\beta}\end{pmatrix}$$ where the constants $c_1$ and $c_2$ denote the regularisation parameters. In Example \[kcca\_ex\], kernel CCA, solved through the generalised eigenvalue problem, is performed on simulated data. \[kcca\_ex\] We generate a simulated dataset as follows. The data matrices $X_a$ and $X_b$ are of sizes $n \times p$ and $n \times q$, where $n=150$, $p=7$ and $q=8$. The seven variables of $X_a$ are generated from a random univariate normal distribution, $\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_7 \sim N(0,1)$. We generate the following relations $$\begin{aligned} \mathbf{b}_1 &= \exp(\mathbf{a}_3) + \boldsymbol \xi_1 \\ \mathbf{b}_2 &= \mathbf{a}_1^3 + \boldsymbol \xi_2 \\ \mathbf{b}_3 &= -\mathbf{a}_4 + \boldsymbol \xi_3\end{aligned}$$ where $\boldsymbol \xi_1 \sim N(0,0.4)$, $\boldsymbol \xi_2 \sim N(0,0.2)$ and $\boldsymbol \xi_3 \sim N(0,0.3)$ denote vectors of normal noise. The five other variables of $X_b$ are generated from a random univariate normal distribution, $\mathbf{b}_4, \mathbf{b}_5, \dots, \mathbf{b}_8 \sim N(0,1)$. The data is standardised such that every variable has zero mean and unit variance. In kernel CCA, the choice of the kernel function affects what kind of relations can be extracted. In general, a Gaussian kernel $K(\mathbf{x},\mathbf{y})=\exp(-\frac{||\mathbf{x}-\mathbf{y}||^2}{2\sigma^2})$ is used when the data is assumed to contain non-linear relations.
The width parameter $\sigma$ determines the non-linearity in the distances between the data points computed in the form of inner products. Increasing the value of $\sigma$ makes the space closer to Euclidean, while decreasing it makes the distances more non-linear. The optimal value for $\sigma$ is best determined using a re-sampling method such as a cross-validation scheme, for example a procedure similar to the one presented in Algorithm \[alg:one\]. In this example, we applied the “median trick”, presented in [@song2010hilbert], according to which $\sigma$ is set to the median of the Euclidean distances computed between all pairs of observations. The median distances for the data in this example were $\sigma_a = 3.53 $ and $\sigma_b = 3.62$ for the views $X_a$ and $X_b$ respectively. The kernels were centred by $\tilde{K} = K - \frac{1}{n} \mathbf{j} \mathbf{j}^T K - \frac{1}{n} K \mathbf{j} \mathbf{j}^T + \frac{1}{n^2} (\mathbf{j}^T K\mathbf{j}) \mathbf{j} \mathbf{j}^T $ where $\mathbf{j}$ is a vector of ones [@shawe2004kernel]. In addition to the kernel parameters, the regularisation parameters $c_1$ and $c_2$ also need to be optimised to extract the correct relations. As in the case of regularised CCA, a repeated cross-validation procedure can be applied to identify the optimal pair of parameters. For the data in this example, the optimal regularisation parameters were $c_1=1.50$ and $c_2=0.60$ when 5-fold cross-validation with 20 repetitions was applied. The first three canonical correlations at the optimal parameter values were $\langle {\mathbf{z}_a}^1,{\mathbf{z}_b}^1 \rangle = 0.95$, $\langle {\mathbf{z}_a}^2,{\mathbf{z}_b}^2 \rangle=0.89$, and $\langle {\mathbf{z}_a}^3,{\mathbf{z}_b}^3 \rangle=0.87.$ The interpretation of the relations cannot be performed from the positions ${\boldsymbol\alpha}$ and ${\boldsymbol\beta}$ since they are obtained in the kernel spaces.
In the case of simulated data, we know what kind of relations are contained in the data. We can compute the linear correlation coefficient between the simulated relations and the pairs of images ${\mathbf{z}_a}$ and ${\mathbf{z}_b}$ [@chang2013canonical]. The correlation coefficients are shown in Table \[tab:one\]. The exponential relation was extracted in the second pair $({\mathbf{z}_a}^2,{\mathbf{z}_b}^2)$, the 3$^{rd}$ order polynomial relation was extracted in the third pair $({\mathbf{z}_a}^3,{\mathbf{z}_b}^3)$ and the linear relation in the first pair $({\mathbf{z}_a}^1,{\mathbf{z}_b}^1)$. $\qed$ In [@hardoon2004canonical], an alternative formulation of the standard eigenvalue problem was presented for the case when the data contains a large number of observations. If the sample size is large, the dimensionality of the Gram matrices $K_a$ and $K_b$ can cause computational problems. Partial Gram-Schmidt orthogonalization (PGSO) [@cristianini2002latent] was proposed as a matrix decomposition method. PGSO results in $$\begin{aligned} K_a \simeq& R_a R_a^T \\ K_b \simeq& R_bR_b^T.\end{aligned}$$ Substituting these into the Equations (\[kdif1\]) and (\[kdif2\]) and multiplying by $R_a^T$ and $R_b^T$ respectively we obtain $$\begin{aligned} R_a^T R_a R_a^T R_b R_b^T {\boldsymbol\beta}- \rho R_a^T R_a R_a^T R_a R_a^T {\boldsymbol\alpha}= \mathbf{0} \label{4.1}\\ R_b^T R_b R_b^T R_a R_a^T {\boldsymbol\alpha}- \rho R_b^T R_b R_b^T R_b R_b^T {\boldsymbol\beta}= \mathbf{0} \label{4.2}.\end{aligned}$$ Let $D_{aa}=R_a^TR_a$, $D_{ab}=R_a^TR_b$, $D_{ba}=R_b^TR_a$, and $D_{bb}=R_b^TR_b$ denote the blocks of the new sample covariance matrix. Let $\tilde{{\boldsymbol\alpha}}=R_a^T{\boldsymbol\alpha}$ and $\tilde{{\boldsymbol\beta}}=R_b^T{\boldsymbol\beta}$ denote the positions ${\boldsymbol\alpha}$ and ${\boldsymbol\beta}$ in the reduced space.
Using these substitutions in (\[4.1\]) and (\[4.2\]) we obtain $$\begin{aligned} D_{aa}D_{ab}\tilde{{\boldsymbol\beta}} - \rho D_{aa}^2 \tilde{{\boldsymbol\alpha}} = \mathbf{0} \label{4.3}\\ D_{bb}D_{ba}\tilde{{\boldsymbol\alpha}} - \rho D_{bb}^2 \tilde{{\boldsymbol\beta}} = \mathbf{0} \label{4.4}.\end{aligned}$$ If $D_{aa}$ and $D_{bb}$ are invertible we can multiply (\[4.3\]) by $D_{aa}^{-1}$ and (\[4.4\]) by $D_{bb}^{-1}$ which gives $$\begin{aligned} D_{ab}\tilde{{\boldsymbol\beta}} - \rho D_{aa} \tilde{{\boldsymbol\alpha}} = \mathbf{0} \label{4.5}\\ D_{ba}\tilde{{\boldsymbol\alpha}} - \rho D_{bb} \tilde{{\boldsymbol\beta}} = \mathbf{0} \label{4.6}.\end{aligned}$$ and hence $$\label{hardoon_beta} \tilde{{\boldsymbol\beta}} = \frac{D_{bb}^{-1} D_{ba} \tilde{{\boldsymbol\alpha}}}{\rho}$$ which, after a substitution into (\[4.5\]), results in a generalised eigenvalue problem $$\label{kgene} D_{ab}D_{bb}^{-1} D_{ba} \tilde{{\boldsymbol\alpha}} = \rho^2 D_{aa} \tilde{{\boldsymbol\alpha}}.$$ To formulate the problem as a standard eigenvalue problem, let $D_{aa}=SS^T$ denote the complete Cholesky decomposition where $S$ is a lower triangular matrix and let $\hat{{\boldsymbol\alpha}}=S^T\tilde{{\boldsymbol\alpha}}$. Substituting these into (\[kgene\]) we obtain $$S^{-1}D_{ab}D_{bb}^{-1} D_{ba}(S^T)^{-1} \hat{{\boldsymbol\alpha}} = \rho^2 \hat{{\boldsymbol\alpha}}.$$ If regularisation using the parameter $\kappa$ is combined with dimensionality reduction, the problem becomes $$\label{hardoon_standard} S^{-1}D_{ab}\big(D_{bb} + \kappa I \big)^{-1} D_{ba}(S^T)^{-1} \hat{{\boldsymbol\alpha}} = \rho^2 \hat{{\boldsymbol\alpha}}.$$ A numerical example of the method presented by [@hardoon2004canonical] is given in Example \[hardoon\_kcca\]. \[hardoon\_kcca\] We generate a simulated dataset as follows. The data matrices $X_a$ and $X_b$ are of sizes $n \times p$ and $n \times q$, where $n=10000$, $p=7$ and $q=8$.
The seven variables of $X_a$ are generated from a random univariate normal distribution, $\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_7 \sim N(0,1)$. We generate the following relations $$\begin{aligned} \mathbf{b}_1 &= \exp(\mathbf{a}_3) + \boldsymbol \xi_1 \\ \mathbf{b}_2 &= \mathbf{a}_1^3 + \boldsymbol \xi_2 \\ \mathbf{b}_3 &= -\mathbf{a}_4 + \boldsymbol \xi_3\end{aligned}$$ where $\boldsymbol \xi_1 \sim N(0,0.4)$, $\boldsymbol \xi_2 \sim N(0,0.2)$ and $\boldsymbol \xi_3 \sim N(0,0.3)$ denote vectors of normal noise. The five other variables of $X_b$ are generated from a random univariate normal distribution, $\mathbf{b}_4, \mathbf{b}_5, \dots, \mathbf{b}_8 \sim N(0,1)$. The data is standardised such that every variable has zero mean and unit variance. A Gaussian kernel is used for both views. The width parameter is set using the median trick to $\sigma_a = 3.56$ and $\sigma_b = 3.60.$ The kernels were centred. The positions ${\boldsymbol\alpha}$ and ${\boldsymbol\beta}$ are found by solving the standard eigenvalue problem in (\[hardoon\_standard\]) and applying Equation (\[hardoon\_beta\]). We set the regularisation parameter $\kappa=0.5$. The first three canonical correlations were $\langle {\mathbf{z}_a}^1,{\mathbf{z}_b}^1 \rangle=0.97$, $\langle {\mathbf{z}_a}^2,{\mathbf{z}_b}^2 \rangle=0.97$, and $\langle {\mathbf{z}_a}^3,{\mathbf{z}_b}^3 \rangle=0.96.$ The correlation coefficients between the simulated relations and the transformed variables are shown in Table \[tab:two\]. The exponential relation was extracted in the first pair $({\mathbf{z}_a}^1,{\mathbf{z}_b}^1)$, the 3$^{rd}$ order polynomial relation was extracted in the second pair $({\mathbf{z}_a}^2,{\mathbf{z}_b}^2)$ and the linear relation in the third pair $({\mathbf{z}_a}^3,{\mathbf{z}_b}^3)$. $\qed$ Non-linear relations can also be captured by neural networks, which are employed in deep CCA [@andrew2013deep].
In deep CCA, every observation $\mathbf{x}_a^k \in \mathbb{R}^p$ and $\mathbf{x}_b^k \in \mathbb{R}^q$ for $k=1,2,\dots,n$ is non-linearly transformed multiple times in an iterative manner through a layered network. The number of units in a layer determines the dimension of the output vector which is fed into the next layer. As is explained in [@andrew2013deep], let the first layer have $c_1$ units and the final layer $o$ units. The output vector of the first layer for the observation $\mathbf{x}_a^1 \in \mathbb{R}^p$ is $\mathbf{h}_1=s(S_1^1 \mathbf{x}_a^1 + b_1^1) \in \mathbb{R}^{c_1}$, where $S_1^1 \in \mathbb{R}^{c_1 \times p}$ is a matrix of weights, $b_1^1 \in \mathbb{R}^{c_1}$ is a bias vector, and $s: \mathbb{R} \mapsto \mathbb{R}$ is a non-linear function applied to each element. The logistic and tanh functions are examples of popular non-linear functions. The output vector $\mathbf{h}_1$ is then used to compute the output of the following layer in a similar manner. The final transformed vector $f_1(\mathbf{x}_a^1)=s(S_d^1 h_{d-1}+b_d^1)$ is in the space of $\mathbb{R}^{o}$, for a network with $d$ layers. The same procedure is applied to the observations $\mathbf{x}_b^k \in \mathbb{R}^q$ for $k=1,2,\dots,n$. In deep CCA, the aim is to learn the optimal parameters $S_d$ and $b_d$ for both views such that the correlation between the transformed observations is maximised. Let $H_a \in \mathbb{R}^{o \times n}$ and $H_b \in \mathbb{R}^{o \times n}$ denote the matrices that have the final transformed output vectors in their columns. Let $\tilde{H_a}=H_a-\frac{1}{n}H_a\mathbf{1}$ denote the centred data matrix and let $\hat{C}_{ab}=\frac{1}{n-1}\tilde{H}_a\tilde{H}_{b}^T$ and $\hat{C}_{aa}=\frac{1}{n-1}\tilde{H}_a\tilde{H}_{a}^T+r_a I$, where $r_a$ is a regularisation constant, denote the covariance and variance matrices. The same formulae are used to compute the covariance and variance matrices for view $b$.
As in section \[basic\], the total correlation of the top $k$ components of $H_a$ and $H_b$ is the sum of the top $k$ singular values of the matrix $T=\hat{C}_{aa}^{-1/2}\hat{C}_{ab}\hat{C}_{bb}^{-1/2}$. If $k=o$, the correlation is given by the trace norm of $T$, that is $$\mathrm{corr}(H_a,H_b)=\mathrm{tr}\big((T^T T)^{1/2}\big).$$ The optimal parameters $S_d$ and $b_d$ maximise the trace norm using gradient-based optimisation. The details of the algorithm can be found in [@andrew2013deep]. In summary, kernel and deep CCA provide alternatives to the linear CCA when the relations in the data can be considered to be non-linear and the sample size is small in relation to the data dimensionality. When applying kernel CCA on a real dataset, prior knowledge of the relations of interest can help in the analysis of the results. The choice of the kernel function depends on what kind of relations the data can be considered to contain; if the data is assumed to contain both linear and non-linear relations, a Gaussian kernel could be a first option. The possible relations can be extracted by testing how the image pairs correlate with functions of the variables. Deep CCA provides an alternative to compute maximal correlation between the views, although the neural network makes the identification of the type of relations difficult. Improving the Interpretability by Enforcing Sparsity ---------------------------------------------------- The extraction of the linear relations between the variables in CCA and regularised CCA relies on the values of the entries of the position vectors that have images on the unit ball with a minimum enclosing angle. The relations can be inferred when the number of variables is not too large for a human to interpret. However, in modern data analysis, it is common that the number of variables is of the order of tens of thousands.
In this case, the values of the entries of the position vectors should be constrained such that only a subset of the variables would have a non-zero value. This would facilitate the interpretation since only a fraction of the total number of variables needs to be considered when inferring the relations. To constrain some of the values of the entries of the position vectors to zero, which is also referred to as enforcing sparsity, tools of convex analysis can be applied. In the literature, sparsity has been enforced on the position vectors using soft-thresholding operators [@parkhomenko2007genome], elastic net regularisation [@waaijenborg2008quantifying], penalised matrix decomposition combined with soft-thresholding [@witten2009penalized], and convex least squares optimisation [@hardoon2011sparse]. The sparse CCA formulations presented in [@parkhomenko2007genome; @waaijenborg2008quantifying; @witten2009penalized] find sparse position vectors that can be applied to infer linear relations between the variables with non-zero entries. The formulation in [@hardoon2011sparse] differs from the preceding propositions in terms of the optimisation criterion. The canonical correlation is found between the image obtained from the linear transformation defined by the data space of one view and the image obtained from the linear transformation defined by the kernel of the other view. The selection of which sparse CCA should be applied for a specific task depends on the research question and prior knowledge regarding the variables. The sparse CCA algorithm of [@parkhomenko2007genome] can be applied when the aim is to find sparse position vectors and no prior knowledge regarding the variables is available. The positions and images are solved using the SVD, as presented in Section \[solving\].
Sparsity is enforced on the entries of the positions by iteratively applying the soft-thresholding operator [@donoho1995adapting] on the pair of left and right orthonormal singular vectors until convergence. The soft-thresholding operator is a proximal mapping of the $L_1$ norm [@bach2011convex]. The consecutive pairs of sparse left and right singular vectors are obtained by deflating the identified pattern from the matrix on which the SVD is computed. The sparse CCA hence results in a sparse set of linearly related variables. The elastic net CCA [@waaijenborg2008quantifying] finds sparse position vectors but also considers possible groupings in the variables. The elastic net [@zou2005regularization] combines the LASSO [@tibshirani1996regression] and the ridge [@hoerl1970ridge] penalties. The elastic net penalty incorporates a grouping effect in the variable selection. The term variable selection refers to a variable having a non-zero entry in the position vector. In the soft-thresholding CCA of [@parkhomenko2007genome], the assignment of a non-zero entry is independent of the other entries within the vector. In the elastic net CCA, the ridge penalty groups the variables by the values of the entries and the LASSO penalty either eliminates a group by shrinking the entries of the variables within the group to zero or leaves them as non-zero. The algorithm is based on an iterative scheme of multiple regression. As in [@parkhomenko2007genome], the computations are performed in the data space and therefore the extracted relations are also linear. The penalised matrix decomposition (PMD) formulation of sparse CCA [@witten2009penalized] is based on finding low-rank approximations of the covariance matrix $C_{ab}$.
An $n \times p$ matrix $X$ with rank $K \leq \min(n,p)$ can be approximated using the SVD [@eckart1936approximation] by $$\sum_{k=1}^r \sigma_k \mathbf{u}_k \mathbf{v}_k^T = \underset{\tilde{X} \in M(r)}{\text{argmin}} ||X-\tilde{X}||_F^2$$ where $\mathbf{u}_k$ denotes the $k^{th}$ column of the matrix $U$, $\mathbf{v}_k$ denotes the $k^{th}$ column of the matrix $V$, $\sigma_k$ denotes the $k^{th}$ singular value on the diagonal of $S$, $M(r)$ is the set of rank $r$ $n \times p$ matrices and $r \ll K$. In the case of CCA, the matrix to be approximated is the covariance matrix $X=C_{ab}.$ The optimisation problem in the PMD context is given by $$\begin{gathered} \min_{{\mathbf{w}_a}\in \mathbb{R}^p, {\mathbf{w}_b}\in \mathbb{R}^q} \frac{1}{2} || C_{ab} - \sigma {\mathbf{w}_a}{\mathbf{w}_b}^T||_F^2, \\ ||{\mathbf{w}_a}||_2 = 1 \quad ||{\mathbf{w}_b}||_2 = 1, \\ ||{\mathbf{w}_a}||_1 \leq c_1 \quad ||{\mathbf{w}_b}||_1 \leq c_2, \quad \sigma \geq 0\end{gathered}$$ which is equivalent to $$\begin{gathered} \cos \theta = \max_{{\mathbf{w}_a}\in \mathbb{R}^p, {\mathbf{w}_b}\in \mathbb{R}^q} {\mathbf{w}_a}^T C_{ab} {\mathbf{w}_b}, \\ ||{\mathbf{w}_a}||_2 \leq 1 \quad ||{\mathbf{w}_b}||_2 \leq 1, \\ ||{\mathbf{w}_a}||_1 \leq c_1 \quad ||{\mathbf{w}_b}||_1 \leq c_2.\end{gathered}$$ The aim is to find $r$ pairs of sparse position vectors ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ such that their outer products represent low-rank approximations of the original $C_{ab}$ and hence extract the $r$ linear relations from the data. The exact derivation of the algorithm to solve the PMD optimisation problem is given in [@witten2009penalized]. In general, the position vectors, which generate rank-1 approximations of the covariance matrix, are found in an iterative manner. To find one rank-1 approximation, the soft-thresholding operator is applied as follows. Let the soft-thresholding operator be given by $$S(a,c) = \mathrm{sign}(a)(|a|-c)_{+}$$ where $c>0$ is a constant.
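The soft-thresholding operator is a one-liner in code; a small numpy illustration of the definition above:

```python
import numpy as np

def soft_threshold(a, c):
    """S(a, c) = sign(a) * (|a| - c)_+, applied element-wise."""
    return np.sign(a) * np.maximum(np.abs(a) - c, 0.0)

# entries with magnitude below c are set exactly to zero, the rest
# are shrunk towards zero by c, e.g. [-2.0, -0.3, 0.1, 1.5] with
# c = 0.5 becomes [-1.5, 0.0, 0.0, 1.0]
x = soft_threshold(np.array([-2.0, -0.3, 0.1, 1.5]), 0.5)
```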
The following formula is applied in the derivation of the algorithm $$\begin{gathered} \max_{\mathbf{u}} \langle \mathbf{u}, \mathbf{a} \rangle, \\ s.t. \quad ||\mathbf{u}||_2^2 \leq 1, ||\mathbf{u}||_1 \leq c.\end{gathered}$$ The solution is given by $\mathbf{u}= \frac{S(\mathbf{a},\delta)}{||S(\mathbf{a},\delta)||_2}$ with $\delta=0$ if $||\mathbf{u}||_1 \leq c$. Otherwise, $\delta$ is selected such that $||\mathbf{u}||_1 = c.$ Sparse position vectors are then obtained by Algorithm \[alg:pmd\]. At every iteration, $\delta_1$ and $\delta_2$ are selected by binary search. The algorithm initialises ${\mathbf{w}_b}$ with $||{\mathbf{w}_b}||_2=1$ and, on convergence, sets $\sigma \leftarrow {\mathbf{w}_a}^T C_{ab} {\mathbf{w}_b}$. To obtain several 1-rank approximations, a deflation step is included such that when the converged vectors ${\mathbf{w}_a}$ and ${\mathbf{w}_b}$ are found, the extracted relation is subtracted from the covariance matrix, $C_{ab}^{k+1} \leftarrow C_{ab}^{k}-\sigma_k {\mathbf{w}_a}^k {{\mathbf{w}_b}^{k}}^{T}.$ In this way, the successive solutions remain orthogonal, which is a constraint of CCA. \[pmd\] To demonstrate the PMD formulation of sparse CCA, we generate the following data. The data matrices $X_a$ and $X_b$, of sizes $n \times p$ and $n \times q$ where $n=50$, $p=100$ and $q=150$, are generated as follows. The variables of $X_a$ are generated from a random univariate normal distribution, $\mathbf{a}_1, \mathbf{a}_2, \cdots, \mathbf{a}_{100} \sim N(0,1)$. We generate the following linear relations $$\begin{aligned} \mathbf{b}_1 &= \mathbf{a}_3 + \boldsymbol \xi_1 \label{rel1}\\ \mathbf{b}_2 &= \mathbf{a}_1 + \boldsymbol \xi_2 \label{rel2}\\ \mathbf{b}_3 &= -\mathbf{a}_4 + \boldsymbol \xi_3 \label{rel3}\end{aligned}$$ where $\boldsymbol \xi_1 \sim N(0,0.08), \boldsymbol \xi_2 \sim N(0,0.07),$ and $ \boldsymbol \xi_3 \sim N(0,0.05)$ denote vectors of normal noise. 
The other variables of $X_b$ are generated from a random univariate normal distribution, $\mathbf{b}_4, \mathbf{b}_5, \cdots, \mathbf{b}_{150} \sim N(0,1)$. The data is standardised such that every variable has zero mean and unit variance. We apply the R implementation of [@witten2009penalized] which is available in the PMA package. We extract three rank-1 approximations. The values of the entries of the pairs of position vectors $({\mathbf{w}_a}^1,{\mathbf{w}_b}^1),({\mathbf{w}_a}^2,{\mathbf{w}_b}^2)$ and $({\mathbf{w}_a}^3,{\mathbf{w}_b}^3)$ corresponding to canonical correlations $\langle {\mathbf{z}_a}^1,{\mathbf{z}_b}^1 \rangle = 0.95$, $\langle {\mathbf{z}_a}^2,{\mathbf{z}_b}^2 \rangle = 0.92$, $\langle {\mathbf{z}_a}^3,{\mathbf{z}_b}^3 \rangle = 0.91$ are shown in Figure \[pmdfig\]. The first 1-rank approximation extracted (\[rel3\]), the second (\[rel2\]), and the third (\[rel1\]). $\qed$ The sparse CCA of [@hardoon2011sparse] is a sparse convex least squares formulation that differs from the preceding versions. The canonical correlation is found between the linear transformations of a data space view and a kernel space view. The aim is to find a sparse set of variables in the data space view that relate to a sparse set of observations, represented in terms of relative similarities, in the kernel space view. An example of a setting where relations of this type can provide useful information is bilingual analysis, as described in [@hardoon2011sparse]. When finding relations between words of two languages, it may be useful to know in what kinds of contexts a word can be used in the translated language. 
The optimisation problem is given by $$\begin{gathered} \cos({\mathbf{z}_a},{\mathbf{z}_b}) = \max_{{\mathbf{z}_a}, {\mathbf{z}_b}\in \mathbb{R}^n} \langle {\mathbf{z}_a},{\mathbf{z}_b}\rangle = {\mathbf{w}_a}^T X_a^T K_b {\boldsymbol\beta}, \\ ||{\mathbf{z}_a}||_2=\sqrt{{\mathbf{w}_a}^T X_a^T X_a {\mathbf{w}_a}} = 1 \quad ||{\mathbf{z}_b}||_2=\sqrt{{\boldsymbol\beta}^T K_b^2 {\boldsymbol\beta}} = 1\end{gathered}$$ which is equivalent to the convex sparse least squares problem $$\begin{gathered} \min_{{\mathbf{w}_a},{\boldsymbol\beta}} ||X_a {\mathbf{w}_a}- K_b {\boldsymbol\beta}||^2 + \mu ||{\mathbf{w}_a}||_1 + \gamma ||\tilde{{\boldsymbol\beta}}||_1 \\ s.t \quad ||{\boldsymbol\beta}||_{\infty}=1\end{gathered}$$ where $\mu$ and $\gamma$ are fixed parameters that control the trade-off between function objective and the level of sparsity of the entries of the position vectors ${\mathbf{w}_a}$ and ${\boldsymbol\beta}$. The constraint $||{\boldsymbol\beta}||_{\infty}=1$ is set to avoid the trivial solution (${\mathbf{w}_a}=\mathbf{0},{\boldsymbol\beta}=\mathbf{0}$). The $k^{th}$ entry of ${\boldsymbol\beta}$ is set to $\beta_k=1$ and the remaining entries in $\tilde{{\boldsymbol\beta}}$ are constrained by 1-norm. The idea is to fix one sample as a basis for comparison and rank the other similar samples based on the fixed sample. The optimisation problem is solved by iteratively minimising the gap between the primal and dual Lagrangian solutions. The procedure is outlined in Algorithm \[alg:scca\]. The exact computational steps can be found in [@hardoon2011sparse]. The Algorithm \[alg:scca\] is used to extract one relation or pattern from the data. To extract the successive patterns, deflation is applied to obtain the residual matrices from which the already found pattern is removed. In Example \[scca\_ex\], the extraction of the first pattern is shown. 
\[scca\_ex\] In the sparse CCA of [@hardoon2011sparse], the idea is to determine the relations of the variables in the data space view $X_a$ to the observations in the kernel space view $K_b$ where the observations comprise the variables of the view $b$. This setting differs from all of the previous examples where the idea was to find relations between the variables. Since one of the views is kernelised, the relations cannot be explicitly simulated. We therefore demonstrate the procedure on data generated from random univariate normal distribution as follows. The data matrices $X_a$ and $X_b$ of sizes $n \times p$ and $n \times q$, where $n=50$, $p=100$ and $q=150$, respectively are generated as follows. The variables of $X_a$ and $X_b$ are generated from random univariate normal distribution, $\mathbf{a}_1, \mathbf{a}_2, \cdots, \mathbf{a}_{100} \sim N(0,1)$ and $\mathbf{b}_1, \mathbf{b}_2, \cdots, \mathbf{b}_{150} \sim N(0,1)$ respectively. The data is standardised such that every variable has zero mean and unit variance. The Gaussian kernel function $K(\mathbf{x},\mathbf{y})=exp(-||\mathbf{x}-\mathbf{y}||^2/2\sigma^2)$ is used to compute the similarities for the view $b$. The choice of the kernel is justified since the underlying distribution is normal. The width parameter is set to $\sigma=17.25$ using the median trick. The kernel matrix is centred by $\tilde{K} = K - \frac{1}{n} \mathbf{j} \mathbf{j}^T K - \frac{1}{n} K \mathbf{j} \mathbf{j}^T + \frac{1}{n^2} (\mathbf{j}^T K\mathbf{j}) \mathbf{j} \mathbf{j}^T$ where $\mathbf{j}$ contains only entries of value one [@shawe2004kernel]. To find the positions ${\mathbf{w}_a}$ and ${\boldsymbol\beta}$, we solve $$\begin{gathered} f = \min_{{\mathbf{w}_a},{\boldsymbol\beta}} ||X_a {\mathbf{w}_a}- K_b {\boldsymbol\beta}||^2 + \mu ||{\mathbf{w}_a}||_1 + \gamma ||\tilde{{\boldsymbol\beta}}||_1 \\ s.t \quad ||{\boldsymbol\beta}||_{\infty}=1\end{gathered}$$ using the implementation proposed in [@uurtio2015canonical]. 
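The kernel construction used in this example can be sketched as follows (a NumPy sketch of the Gaussian kernel, the median heuristic for the width $\sigma$, and the centring formula above; the function names are ours):

```python
import numpy as np

def gaussian_kernel(X, sigma):
    """Pairwise Gaussian kernel K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def median_trick(X):
    """Median of pairwise distances, a common heuristic for the kernel width."""
    d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    iu = np.triu_indices(len(X), k=1)
    return np.sqrt(np.median(d2[iu]))

def centre_kernel(K):
    """Centre K in feature space: K~ = (I - J/n) K (I - J/n) with J = j j^T / n,
    equivalent to K - JK - KJ + JKJ in the notation of the text."""
    n = len(K)
    j = np.ones((n, n)) / n
    return K - j @ K - K @ j + j @ K @ j
```

For standard normal data of the dimensions used in the example, the median heuristic indeed lands near the quoted width $\sigma \approx 17.25$.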
As stated in [@hardoon2011sparse], to determine which variable in the data space view $X_a$ is most related to the observation in $K_b$, the algorithm needs to be run for all possible values of $k$. This means that every observation is in turn set as a basis for comparison and a sparse set of the remaining observations $\tilde{{\boldsymbol\beta}}$ is computed. The optimal value of $k$ gives the minimum objective value $f$. We run the algorithm by initially setting the value of the entry $\beta_k=1$ for $k=1,2,\dots,n$. The minimum objective value $f=0.03$ was obtained at $k=29$. This corresponds to a canonical correlation of $\langle {\mathbf{z}_a},{\mathbf{z}_b}\rangle = 0.88$. The values of the entries of ${\mathbf{w}_a}$ and ${\boldsymbol\beta}$ are shown in Figure \[scca\_exfig\]. The observation corresponding to $k=29$ in the kernelised view $K_b$ is most related to the variables $\mathbf{a}_{15}, \mathbf{a}_{16}, \mathbf{a}_{18}, \mathbf{a}_{20},$ and $\mathbf{a}_{24}$. $\qed$ The sparse versions of CCA can be applied to settings where a large number of variables hinders the inference of the relations. When the interest is to extract sparse linear relations between the variables, the proposed algorithms of [@parkhomenko2007genome; @waaijenborg2008quantifying; @witten2009penalized] provide a solution. The algorithm of [@hardoon2011sparse] can be applied if the focus is to find how the variables of one view relate to the observations that correspond to the combined sets of the variables in the other view. In other words, the approach is useful if the focus is not to uncover the explicit relations between the variables but to gain insight into how a variable relates to a complete set of variables of an observation. Discussion ========== This tutorial presented an overview of the methodological evolution of canonical correlation methods focusing on the original linear, regularised, kernel, and sparse CCA. 
Succinct reviews were also conducted on the Bayesian and neural network-based deep CCA variants. The aim was to explain the theoretical foundations of the variants using the linear algebraic interpretation of CCA. The methods to solve the CCA problems were described using numerical examples. Additionally, techniques to assess the statistical significance of the extracted relations and the generalisability of the patterns were explained. The aim was to delineate the applicabilities of the different CCA variants in relation to the properties of the data. In CCA, the aim is to determine linear relations between variables belonging to two sets. From a linear algebraic point of view, the relations can be found by analysing the linear transformations defined by the two views of the data. The most distinct relations are obtained by analysing the entries of the first pair of position vectors in the two data spaces that are mapped onto a unit ball such that their images have a minimum enclosing angle. The less distinct relations can be identified from the successive pairs of position vectors that correspond to the images with a minimum enclosing angle obtained from the orthogonal complements of the preceding pairs of images. This tutorial presented three standard ways of solving the CCA problem, that is by solving either a standard [@hotelling1935most; @hotelling1936relations] or a generalised eigenvalue problem [@bach2002kernel; @hardoon2004canonical], or by applying the SVD [@healy1957rotation; @ewerbring1989canonical]. The position vectors of the two data spaces, that convey the related pairs of variables, can be obtained using alternative techniques than the ones selected for this tutorial. The three methods were chosen because they have been much applied in CCA literature and they are relatively straightforward to explain and implement. Additionally, to understand the further extensions of CCA, it is important to know how it originally has been solved. 
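For concreteness, the SVD route mentioned above can be sketched in a few lines (a sketch assuming centred data and a small ridge term for numerical invertibility; not tied to any particular package):

```python
import numpy as np

def cca_svd(Xa, Xb, eps=1e-8):
    """Canonical correlations via the SVD of the whitened cross-covariance
    C_aa^{-1/2} C_ab C_bb^{-1/2}, one of the standard solution routes."""
    n = Xa.shape[0]
    Xa = Xa - Xa.mean(axis=0)
    Xb = Xb - Xb.mean(axis=0)
    Caa = Xa.T @ Xa / n + eps * np.eye(Xa.shape[1])  # small ridge for invertibility
    Cbb = Xb.T @ Xb / n + eps * np.eye(Xb.shape[1])
    Cab = Xa.T @ Xb / n

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Ka, Kb = inv_sqrt(Caa), inv_sqrt(Cbb)
    U, s, Vt = np.linalg.svd(Ka @ Cab @ Kb)
    Wa = Ka @ U       # position vectors for view a (columns)
    Wb = Kb @ Vt.T    # position vectors for view b (columns)
    return Wa, Wb, s  # s holds the canonical correlations, largest first
```

The singular values of the whitened cross-covariance are exactly the canonical correlations, and the back-transformed singular vectors are the position vectors.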
The extensions are often further developed versions of the standard techniques. For didactic purposes, the synthetic datasets used for the worked examples were designed to represent optimal data settings for the particular CCA variants to uncover the relations. The relations were generated to be one-to-one, in other words one variable in one view was related to only one variable in the other view. In real datasets, which are often much larger than the synthetic ones in this paper, the relations may not be one-to-one but rather many-to-many (one-to-two, two-to-three, etc.). As in the worked examples, these relations can also be inferred by examining the entries of the position vectors of the two data spaces. However, understanding how the one-to-one relations are extracted provides the means to uncover the more complex relations. To apply the linear CCA, the sample size needs to exceed the number of variables of both views, which means that the system is required to be overdetermined. This is to guarantee the non-singularity of the variance matrices. If the sample size is not sufficient, regularisation [@vinod1976canonical] or Bayesian CCA [@klami2013bayesian] can be applied. The feasibility of regularisation has not been studied in relation to the number of variables exceeding the number of observations. Improving the invertibility by introducing additional bias has been shown to work in various settings, but the point at which the system becomes so underdetermined that regularisation can no longer assist in recovering the underlying relations has not been established. Bayesian CCA is more robust against outlying observations, when compared with linear CCA, due to its generative model structure. In addition to linear relations, non-linear relations are taken into account in kernelised and neural network-based CCA. Kernel methods enable the extraction of non-linear relations through the mapping to a Hilbert space [@bach2002kernel; @hardoon2004canonical]. 
When applying kernel methods in CCA, the disparity between the number of observations and variables can be huge due to the very high-dimensional kernel-induced feature spaces, a challenge that is tackled by regularisation. The types of relations that can be extracted are determined by the kernel function selected for the mapping. Linear relations are extracted by a linear kernel and non-linear relations by non-linear kernel functions such as the Gaussian kernel. Although kernelisation extends the range of extractable relations, it also complicates the identification of the type of relation. A method to determine the type of relation involves testing how the image vectors correlate with a certain type of function. However, this may be difficult if no prior knowledge of the relations is available. Further research on how to select the optimal kernel functions to determine the most distinct relations underlying the data could facilitate the final inference making. Neural network-based deep CCA is an alternative to kernelised CCA when the aim is to find a high correlation between the final output vectors obtained through multiple non-linear transformations. However, due to the network structure, it is not straightforward to identify the relations between the variables. As a final branch of the CCA evolution, this tutorial covered sparse versions of CCA. Sparse CCA variants have been developed to facilitate the extraction of the relations when the data dimensionality is too high for human interpretation. This has been addressed by enforcing sparsity on the entries of the position vectors [@parkhomenko2007genome; @waaijenborg2008quantifying; @witten2009penalized]. As an alternative to operating in the data spaces, [@hardoon2011sparse] proposed a primal-dual sparse CCA in which the relations are obtained between the variables of one view and the observations of the other. 
The sparse variants of CCA in this tutorial were selected based on how much they have been applied in literature. As a limitation of the selected variants, sparsity is enforced on the entries of the position vectors without regarding the possible underlying dependencies between the variables which has been addressed in the literature of structured sparsity [@chen2012structured]. In addition to studying the techniques of solving the optimisation problems of CCA variants, this tutorial gave a brief introduction to evaluating the canonical correlation model. Bartlett’s sequential test procedure [@bartlett1938further; @bartlett1941statistical] was given as an example of a standard method to assess the statistical significance of the canonical correlations. The techniques of identifying the related variables through visual inspection of biplots [@meredith1964canonical; @ter1990interpreting] were presented. To assess whether the extracted relations can be considered to occur in any data with the same underlying sampling distribution, the method of applying both training and test data was explained. As an alternative method, the statistical significance of the canonical correlation model could be assessed using permutation tests [@rousu2013biomarker]. The visualisation of the results using the biplots is mainly applicable in the case of linear relations. Alternative approaches could be considered to visualise the non-linear relations extracted by kernel CCA. To conclude, this tutorial compiled the original, regularised, kernel, and sparse CCA into a unified framework to emphasise the applicabilities of the four variants in different data settings. The work highlights which CCA variant is most applicable depending on the sample size, data dimensionality and the type of relations of interest. Techniques for extracting the relations are also presented. Additionally, the importance of assessing the statistical significance and generalisability of the relations is emphasised. 
The tutorial hopefully advances both the practice of CCA variants in data analysis and further development of novel extensions. The software used to produce the examples in this paper are available for download at https://github.com/aalto-ics-kepaco/cca-tutorial. The work by Viivi Uurtio and Juho Rousu has been supported in part by Academy of Finland (grant 295496/D4Health). João M. Monteiro was supported by a PhD studentship awarded by Fundação para a Ciência e a Tecnologia (SFRH/BD/88345/2012). John Shawe-Taylor acknowledges the support of the EPSRC through the C-PLACID project Reference: EP/M006093/1.
--- abstract: 'We study the third quantization of a Kaluza-Klein toy model. In this model time ($x$) is defined by the scale factor of the universe, and the space coordinate ($y$) is defined by the ratio of the scales of the ordinary space and the internal space. We calculate the number density of the universes created from nothing and examine whether the compactification can be explained statistically by the idea of the third quantization.' --- MMC-M-11\ November  1997\ [Third Quantization of Kaluza-Klein Cosmology and Compactification]{}\ Yoshiaki OHKUWA [^1]\ Department of Mathematics, Miyazaki Medical College, Kiyotake,\ Miyazaki 889-16, Japan\ Introduction ============= The problem of time is now considered as one of the deepest problems in quantum cosmology$.^{\sst [1]}$ It has many complicated aspects and is still controversial, though many ideas have been proposed to solve it$.^{\sst [1,2]}$ Usually, the Wheeler-DeWitt equation is considered as the fundamental equation in quantum cosmology$.^{\sst [3]}$ However, because the Wheeler-DeWitt equation is a hyperbolic second-order differential equation (the Klein-Gordon type), there is a problem in the naive interpretation that $|\Psi|^2$ is a probability, where $\Psi$ is a solution to the Wheeler-DeWitt equation. One of the proposed ideas to solve this problem is the third quantization in analogy with the second quantization of the Klein-Gordon equation$.^{\sst [4-15]}$ The Kaluza-Klein theory is one of the unified theories of gravity and matter fields$.^{\sst [16]}$ In this theory it is assumed that the space-time has higher dimensions, the higher-dimensional space is a product of the ordinary (external) space and the internal space, and the latter is small, which is called compactification. The gravitational field and matter field are contained in the metric tensors of the higher-dimensional space-time. 
The quantum cosmology of the Kaluza-Klein theory has been studied by many authors$,^{\sst [17-22]}$ and the third quantization of it has also been studied$.^{\sst [23-26]}$ However, as far as the present author knows, the idea of the third quantization has not been utilized directly to explain the compactification. In this paper we will examine the third quantization of a Kaluza-Klein cosmology, in which time ($x$) is defined by the scale factor of the universe, and the space coordinate ($y$) is defined by the ratio of the scale of the ordinary space and that of the internal space. We will then calculate the number density of the universes created from nothing. The compactification could be explained statistically, if many of the universes created from nothing had a value of $y$ that corresponds to compactification. We will find that there is a possibility to explain the compactification, when both the external and internal spaces are the three-dimensional flat space $R^3$. In §2 we will consider the quantum cosmology of a Kaluza-Klein toy model, which will be third quantized in §3. In §4 we will calculate the number density of universes created from nothing, and in §5 we will examine the possibility to explain the compactification statistically through the idea of the third quantization. We summarize in §6. Quantum Cosmology of Kaluza-Klein Toy Model =========================================== Let us start from a $(1+n+m)$-dimensional space-time. We consider the following minisuperspace model in which the $(n+m)$-dimensional space is a product of a space with $n$ dimensions and a space with $m$ dimensions$.^{\sst [17, 18, 23-26]}$ The metric is assumed to be $$\begin{aligned} ds^2 &= g_{M N}\, d x^{M} d x^{N}\,, \\ &= -N^2 (t)\, d t^2 + a^2 (t)\, {\tilde g}_{\mu \nu}\, d x^\mu d x^\nu + b^2 (t)\, {\hat g}_{m n}\, d x^m d x^n\,.\end{aligned}$$ 
Here $N(t)$ is the lapse function, $a(t)$ and $b(t)$ are the scale factors of the two spaces, $g_{\sst M N}$ are the (1+n+m)-dimensional metric tensors, and ${\tilde g}_{\mu \nu}$, $ {\hat g}_{m n}$ are metric tensors of $M^n$, $M^m$, respectively, where $M^n$ is $S^n$, $R^n$ or $H^n$. The Einstein action with a cosmological constant $\Lambda$ is written as $$S = \int \! d^{1+n+m} x \, {\cal L}\,, \qquad {\cal L} = \frac{\sqrt{-g}}{16 \pi G}\, ( R - 2 \Lambda )\,.$$ Substituting Eqs. (1) into Eqs. (2), we have S &= & d t L  ,\ L &= & N a\^n b\^m  , where $v_{n m} = \int \! d^{n+m} x \sqrt{{\tilde g}{\hat g}}$, ${\tilde g} = det {\tilde g}_{\mu \nu}$, ${\hat g} = det {\hat g}_{m n}$, ${\dot a}= \frac{d a}{d t}$ and $k_n = 1, 0, -1$ when $M^n$ is $S^n, R^n, H^n$, respectively. Since the action (3) is not diagonal with respect to $a, b$, we change variables as $$a = r \, \gamma^m \,, \qquad b = r \, \gamma^{-n} \,,$$ where $r$ is a scale factor and $\gamma$ determines the ratio of $a / b$. With these variables the Lagrangian (3) becomes L &= &-\^2 + - N U  ,\ $c_r = (n+m)(n+m-1)\,, \quad c_\gamma = nm(n+m)\,$,\ U &= &-{ r\^[n+m-2]{} - 2r\^[n+m]{} }  . Then the Hamiltonian reads H &= &N [H]{}  ,\ [H]{} &= &- + + U  , where $ \ p_r =\frac{\partial L}{\partial {\dot r}} = -c_r N^{-1} r^{n+m-2} {\dot r}\ , \ \, p_\gamma =\frac{\partial L}{\partial {\dot \gamma}} = c_\gamma N^{-1} r^{n+m} \gamma^{-2} {\dot \gamma}\ . $ From the Hamiltonian constraint ${\cal H} \approx 0$ , we obtain the Wheeler-DeWitt equation, &(r, ) = 0  ,\ &V = - { r\^[2(n+m-2)]{} - 2r\^[2(n+m-1)]{} }  , where $\Psi (r, \gamma)$ is a wave function of the universe, $\oop, \ooq$ are parameters of operator ordering. Changing variables by $$r = e^x \,, \qquad \gamma = e^y \,,$$ where $x$ and $y$ determine the scale of the universe and the ratio of the two spaces, respectively, we obtain { + (-1) - + e\^[2x]{} V } (x , y) = 0  . If we choose $\oop = 1 , \ \ooq = 1$ , Eq. (8) becomes (x , y) = 0  . The Wheeler-DeWitt equation (9) is the Klein-Gordon type, and $|\Psi|^2$ is not conserved. 
Therefore, there is a difficulty in the naive interpretation that $|\Psi|^2$ is a probability. We will investigate the third quantization of this model in the next section. Third Quantization ================== Let us regard $x$ as time and $y$ as the space coordinate. The third quantized action to yield the Wheeler-DeWitt equation (9) is S\_[3Q]{} &= & dx dy [L]{}\_[3Q]{}  ,\ [L]{}\_[3Q]{} &= &  . The canonical momentum is given by $$\Pi_\Psi = \frac{\partial {\cal L}_{\sst 3Q}}{\partial \dpdx} = \dpdx \ ,$$ and the Hamiltonian reads $${\cal H}_{\sst 3Q} = \frac{1}{2} \biggl[ \Pi_\Psi^2 + \frac{c_r}{c_\gamma}\biggl( \dpdy \biggr)^2 + e^{2x} V \Psi^2 \biggr] \ .$$ To quantize this model, we impose the canonical commutation relations &= &i (y-y\^)  ,\  \[(x , y) , (x , y\^)\] &= &\[\_(x , y) , \_(x , y\^)\] = 0  . Let us write a complete set of normalized positive frequency solutions of Eq. (9) as $\{ u_p (x , y) \}$ , where $p$ labels the mode function and $u_p$ satisfies the normalization condition, i d y (u\_p\^\* u\_q - u\_q u\_p\^\* ) = (p-q)  . Using these normal modes, we expand $\Psi (x , y)$ as (x , y) = d p \[ a\_p u\_p(x , y) + a\_p\^u\_p\^\* (x , y) \]  , where $a_p$ and $a_p^\dagger$ satisfy = (p-q)  ,   \[a\_p, a\_q\] = \[a\_p\^, a\_q\^\]  = 0  . Therefore, $a_p$ and $a_p^\dagger$ are annihilation and creation operators of a universe with $p$, respectively. The vacuum state $|0 \rangle$ is defined by a\_p |0 = 0   p  , and the Fock space is spanned by $ a_{p_1}^\dagger a_{p_2}^\dagger \cdots |0 \rangle \ . $ Universe Creation from Nothing ============================== Since the potential $V$ in Eqs. (6) is time ($x$) and space ($y$) dependent, universes are created from nothing$.^{\sst [7, 10, 27]}$ In order to see this and for simplicity, let us consider the case that both the ordinary space and the internal space are flat ($k_n = 0, k_m = 0$) $.^{\sst [24-26]}$ We assume that $v_{n m}$ is some properly fixed finite constant. In this case Eq. 
(9) is (x, y) = 0 with $c_{\sst \Lambda} = \frac{c_r v_{n m}}{4 \pi G} \Lambda$ . The normal mode function $u_p (x , y)$ of Eq. (16) can be calculated as u\_p (x , y) = [N]{}\_p Z\_ (z) e\^[ipy]{}  , where we have assumed $\Lambda > 0$ , $z = \frac{\sqrt{c_{\sst \Lambda}}}{n+m} e^{(n+m)x}$ , $\nu = \frac{-i }{n+m} \sqrtcrcg |p|$ , ${\cal N}_p$ is a normalization factor that satisfies Eq. (12), $Z_\nu$ is a Bessel function, and $p$ can be regarded as a canonical momentum of $y$ . We define in-mode function $u_p^{in}(x , y)$ as u\_p\^[in]{}(x , y) &= &[N]{}\_p\^[in]{} J\_(z) e\^[i p y]{}  ,\ [N]{}\_p\^[in]{} &= & ( [sinh]{} | p | )\^[-]{}  , which satisfies $$u_p^{in}(x , y) \propto exp \Biggl[-i \Biggl( \sqrtcrcg \, |p| x - py \Biggr) \Biggr]\ ,$$ when $x \rightarrow - \infty$ . The expansion of $\Psi$ is $$\Psi(x , y) = \int \! d p \, [ a_p^{in} u_p^{in}(x , y) + {a_p^{in}}^\dagger {u_p^{in}}^* (x , y) ] \ ,$$ and the in vacuum $|0 , in \rangle$ , which we regard as “nothing”, is defined by a\_p\^[in]{} |0 , in = 0   p  . In the same way we define out-mode function $u_p^{out}(x , y)$ as u\_p\^[out]{}(x , y) &= &[N]{}\_p\^[out]{} H\_[-]{}\^[(2)]{} (z) e\^[i p y]{}  ,\ [N]{}\_p\^[out]{} &= & [exp]{} ( | p | )  , which satisfies $$u_p^{out}(x , y) \propto {\rm exp} \biggl[ -\frac{n+m}{2} x -i \biggl( \znm -py \biggr) \biggr] \, ,$$ when $x \rightarrow \infty \, .\footnote{If we choose $H_\nu^{(2)} (z)$ instead of $H_{-\nu}^{(2)} (z)$ in Eqs. (20), Eqs. (21)-(33) will not change, but the adiabatic vacuum $\vaca$ in Eqs. (36) corresponds to $H_{-\nu}^{(2)} (z)$. } $ The expansion of $\Psi$ is $$\Psi(x , y) = \int \! d p \, [ a_p^{out} u_p^{out}(x , y) + {a_p^{out}}^\dagger {u_p^{out}}^* (x , y) ] \ ,$$ and the out vacuum $|0 , out \rangle$ is defined by a\_p\^[out]{} |0 , out = 0   p  . The Bogoliubov coefficients $c_i (p , q) \ ( i = 1,2 )$ are defined by u\_p\^[out]{} (x , y) = d q \[ c\_1 (p , q) u\_q\^[in]{}(x , y) + c\_2 (p , q ) [u\_q\^[in]{}]{}\^\* (x , y) \]  . 
Using the relation $ H_{-\nu}^{(2)} (z) = \frac{i}{{\rm sin} \pi \nu} [ e^{-i \pi \nu} J_{-\nu} (z) - J_{\nu} (z) ] $ and Eqs. (18), (20), (22), we can calculate $c_i (p , q)$ as c\_1 (p,q) &= & (p-q)  ,\ c\_2 (p,q) &= & (p+q)  . The number density of the universe with $p$ created from nothing is defined as = 0 , in | [a\_p\^[out]{}]{}\^ a\_p\^[out]{} | 0 , in  . From Eqs. (19), (23), (24) we obtain = d q | c\_2 (p , q) |\^2  , where we have omitted an irrelevant constant$.^{\sst [24-26]}$ Note that this is a Planck distribution with respect to $|p|$ . Compactification of Internal Space ================================== Now let us examine whether the compactification can be explained directly from the third quantization or not. In order to know the number density of created universe with respect to $y$ , we define the operators $a_y^{out}$ as a\_y\^[out]{} = d p e\^[-i p y]{} a\_p\^[out]{}  , which satisfy = (y-y\^)  , \[a\_y\^[out]{} , [a\_[y\^]{}\^[out]{}]{} \] =   \[[a\_y\^[out]{}]{}\^, [a\_[y\^]{}\^[out]{}]{}\^\] = 0  . We can regard $a_y^{out}$ and ${a_y^{out}}^\dagger $ as the annihilation and creation operators of a universe with $y$, respectively. Then the number density with respect to $y$ can be defined by = 0 , in | [a\_y\^[out]{}]{}\^ a\_y\^[out]{} | 0 , in  . Using Eqs. (19),(23),(26),(28), we find &= & d p\^d p e\^[i (p\^- p) y]{} 0 , in | [a\_[p\^]{}\^[out]{}]{}\^a\_p\^[out]{} | 0 , in\ &= & d p\^d p d q e\^[i (p\^- p) y]{} c\_2 (p\^,q) c\_2\^\* (p,q)\ &= & , where $\cN$ is a constant which does not depend on $y$ . If we define a = e\^ , b = e\^ , = | - |  , = e\^ , these equations and Eqs. (4), (7) mean - = (n+m) y  , = { [rl]{} &(a b)\ &(a b) .  , that is $\Gamma$ represents the ratio of the scales of the larger space and the smaller space. Then we can calculate the number densities $\ndz , \nGz$ with respect to $\delta, \Gamma$ as &= & +   = (y 0) ,  \ &= & = (1 )  . 
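The passage from the $y$-density to the $\Gamma$-density above can be summarised as a change of variables. The following is a consistency sketch in our own notation ($n_\delta$, $n_\Gamma$ denote the number densities with respect to $\delta$ and $\Gamma$, and $n_\infty$ stands for the assumed non-zero large-$\delta$ limit of $n_\delta$), not a derivation from the exact mode sums:

```latex
% With a = e^{\alpha}, b = e^{\beta}: \delta = |\alpha - \beta| = (n+m)|y|
% and \Gamma = e^{\delta}. Conservation of number under the change of variables:
n_\Gamma(\Gamma)\, d\Gamma = n_\delta(\delta)\, d\delta
\quad \Longrightarrow \quad
n_\Gamma(\Gamma) = \frac{n_\delta(\ln \Gamma)}{\Gamma}
\;\sim\; \frac{n_\infty}{\Gamma} \qquad (\Gamma \to \infty)\,.
% The 1/\Gamma tail integrates to a logarithm, so for any finite \Gamma_0:
Prob\,(\Gamma > \Gamma_0)
= \lim_{L \to \infty}
\frac{\int_{\Gamma_0}^{L} n_\Gamma(\Gamma)\, d\Gamma}
     {\int_{1}^{L} n_\Gamma(\Gamma)\, d\Gamma}
= \lim_{L \to \infty} \frac{\ln (L/\Gamma_0)}{\ln L} = 1\,.
```

The logarithmically divergent tail is what makes arbitrarily large scale ratios dominate the ensemble of created universes.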
Hence, for any finite $\Gamma_0$ , we obtain = \_ = \_  , where $Prob$ is a probability. This result means that many of the universes created from nothing have a large ratio of the size of two spaces. Note that in this toy model the two spaces are completely symmetric. Therefore, if we assume n = m = 3 ($M^3 = R^3$) and regard the greater space as the ordinary (external) space and the smaller space as the internal space, there seems to be a possibility that the compactification can be explained statistically.[^2] However, there remain some problems in the above discussion. First, let us take another model where, for example, n=3 , m=1 , the three-dimensional space is flat ($M^3 = R^3, k_3 = 0$) and the one-dimensional space is a circle ($M^1 = S^1 , k_1 =1$). In this case Eq. (9) becomes (x, y) = 0 with $c_{\sst \Lambda} = \frac{3}{8} \Bigl( \frac{v_{n m}}{\pi G} \Bigr)^2 \Lambda$ , and the same results as Eqs. (29), (33) hold. In this model we must regard the flat space $R^3$ as the external space and the circle $S^1$ as the internal space. Then Eqs. (29), (33) mean that there are both many universes which are compactified and those which are not compactified. Therefore, our discussion is model dependent. If we will be also able to obtain the same result as Eqs. (33) , in a more realistic model, for example, with n = m = 3 and $M^3 = S^3 , k_3 = 1$ , then there will be a possibility to explain the compactification statistically in this case. It will be also interesting if the compactification can be explained when the space-time has the topology $R \times S^3 \times S^3 \times S^3$ in ten dimensions. In this case, it will be required that one space $S^3$ is large and two other spaces $S^3 \times S^3$ are small. So it seems that further investigation will be necessary on more realistic models. Second, thus far we have interpreted Eq. 
(16) as a field equation in a flat metric, $ d s_{\sst 3Q}^2 = - d x^2 + d {\tilde y}^2 \ , \quad {\tilde y} = \sqrtcgcr \ y \ , $ with a time dependent potential $c_\Lambda e^{2(n+m)x } \ . $ However, Eq. (16) can also be regarded as a field equation with a mass $\sqrt{c_\Lambda}$ in the Milne metric, d s\_[3Q]{}\^2 &= &e\^[2(n+m)x]{} (- d x\^2 + d [y]{}\^2 ) = - d \^2 + \^2 d \^2 = - d X\^2 + d Y\^2  ,\ ( -c\_) &= &e\^[-2(n+m) x]{} = 0  , where $ \tau = \frac{1}{n+m} e^{(n+m) x} = \frac{z}{\sqrt{c_\Lambda}} , \ \chi = (n+m) {\tilde y} \ , \ X = \tau {\rm cosh} \chi , \ Y = \tau {\rm sinh} \chi \ , $ and $\Box$ is a d’Alembertian in the Milne metric $ .^{\sst [26, 27]} $ Following Ref. \[27\], let us define two vacua: = |0, in  , = |0, out   .\^ According to Ref. \[27\], the first vacuum $\vacc$ becomes the conformal vacuum in the limit $\Lambda \to 0$ , the second vacuum $\vaca$ is the adiabatic vacuum, and a comoving observer who has proper time $\tau \propto X$ will see no created universe in this vacuum. So, if we choose $\vaca$ as the initial state, no universe will be created and the compactification cannot be explained even in the case that n = m = 3 and $M^3 = R^3 , k_3 = 0$. It seems that further investigation will be needed on which vacuum should be preferred. Summary ======= We have studied the third quantization of a Kaluza-Klein toy model, in which time ($x$) is defined by the scale factor of the universe, and the space coordinate ($y$) is defined by the ratio of the scales of the ordinary space and the internal space. We calculated the number density of the universes created from nothing and found that there is a possibility to explain the compactification using the third quantization, when both the external and internal spaces are the three-dimensional flat space $R^3$. However, our discussion is model dependent, and further study will be necessary. 
Acknowledgments {#acknowledgments .unnumbered} =============== The author would like to thank Prof. C. Isham, Prof. T.W.B. Kibble, Dr. J.J. Halliwell, Prof. A. Hosoya and Prof. T. Kitazoe for valuable discussions and encouragement. He would also like to thank Imperial College for hospitality where a part of this work was done. This work was supported in part by Japanese Ministry of Education, Science, Sports and Culture. [99]{} C.J. Isham, in [*Integrable Systems, Quantum Groups, and Quantum Field Theories*]{}, eds. L.A. Ibort and M.A. Rodriguez (Kluwer, London, 1993); K.V. Kuchař, in [*Proceedings of the 4th Canadian Conference on General Relativity and Relativistic Astrophysics*]{}, eds. G. Kunstatter, D.E. Vincent and J.G. Williams (World Scientific, Singapore, 1992). See also, e.g., H. Kodama, ; ; P. Hájiček, ; J.B. Hartle and D. Marolf, ; S. Kauffman and L. Smolin, “A Possible Solution to the Problem of Time in Quantum Cosmology”, gr-qc/9703026; R. Brout and R. Parentani, “Time in Cosmology”, gr-qc/9705072. J.J. Halliwell, in [*Quantum Cosmology and Baby Universes*]{}, eds. S. Coleman, J.B. Hartle, T. Piran and S. Weinberg (World Scientific, Singapore, 1991). T. Banks, . S. Giddings and A. Strominger, . M. McGuigan, ; . A. Hosoya and M. Morikawa, . V.A. Rubakov, . W. Fischler, I. Klebanov, J. Polchinski and L. Susskind, . Y. Xiang and L. Liu, . H. Pohle, . S. Abe, . T. Horiguchi, . M.A. Castagnino, A. Gangui, F.D. Mazzitelli and I.I. Tkachev, . A. Vilenkin, . T. Appelquist, A. Chodos and P.G.O. Freund, [*Modern Kaluza-Klein Theories*]{} (Addison-Wesley, Reading, 1987). Z.C. Wu, ; X.M. Hu and Z.C. Wu, ; Z.C. Wu, . Y. Okada and M. Yoshimura, . J.J. Halliwell, ; . U. Carow-Watamura, T. Inami and S. Watamura, . Y. Zhong and X. Li, . F. Mellor, . A. Zhuk, . E.I. Guendelman and A.B. Kaganovich, . A.I. Zhuk, . A.I. Zhuk, . N.D. Birrell and P.C.W. Davies, [*Quantum Fields in Curved Space*]{} (Cambridge Univ. Press, Cambridge, 1982) . 
The following papers also examined another possibility to explain the compactification statistically: R.C. Myers, ; Y. Ohkuwa and W. Ogura, . See also, K. Yamamoto, T. Tanaka and M. Sasaki, . [^1]: E-mail address: [email protected] [^2]: Many ideas have been proposed to explain the compactification$,^{\sst [16]}$ but this possibility is a new one to explain it statistically$.^{\sst [28]}$
--- author: - 'Joana Ascenso1$^,$2$^,$' - Marco Lombardi - 'Charles J. Lada' - João Alves bibliography: - '/Users/jascenso/Dropbox/Science/bib.bib' title: 'The extinction law from photometric data: linear regression methods[^1]' --- [The properties of dust grains, in particular their size distribution, are expected to differ from the interstellar medium to the high-density regions within molecular clouds. Since the extinction at near-infrared wavelengths is caused by dust, the extinction law in cores should depart from that found in low-density environments if the dust grains have different properties.]{} [We explore methods to measure the near-infrared extinction law produced by dense material in molecular cloud cores from photometric data. ]{} [Using controlled sets of synthetic and semi-synthetic data, we test several methods for linear regression applied to the specific problem of deriving the extinction law from photometric data. We cover the parameter space appropriate to this type of observations.]{} [We find that many of the common linear-regression methods produce biased results when applied to the extinction law from photometric colors. We propose and validate a new method, LinES, as the most reliable for this application. We explore the use of this method to detect whether or not the extinction law of a given reddened population has a break at some value of extinction.]{} Introduction {#sec:introduction} ============ The properties of interstellar dust appear to be fairly constant throughout the interstellar medium (ISM) of the Galaxy [@RiekeLebofsky85; @Kenyon:1998mz; @Lombardi:2006gf; @Jones:1980fr; @Martin:1990zr], reflecting the homogeneous physical conditions that characterize it. In the cold molecular cores, however, under lower temperatures and higher densities, the dust grains are believed to change, namely to grow by coalescence and/or develop ice mantles [e.g., @Whittet88; @Ossenkopf93; @Whittet01; @Draine03; @Roman-Zuniga07; @Steinacker10].
Measuring these differences using methods other than detailed spectral analysis has proved somewhat challenging, but the advent of larger and more sensitive telescopes has started to reveal extinction laws toward these regions that depart from the ISM typical curves, particularly in the near- and mid-infrared regime[^2], putting forward the extinction law as a good indicator of grain properties. Whereas the extinction law in low-density regions and the ISM is well characterized by a power-law ($A_\lambda \propto \lambda^{-\beta}$) of index $\beta\sim1.8$, several authors have found a pronounced flattening of the extinction law in high density regions. @Lutz:1996aa and @Lutz:1999aa first noted a flat extinction law toward the Galactic center in this wavelength range using spectroscopy of hydrogen recombination lines. @Nishiyama:2006ve confirmed a flat extinction law toward the Galactic center using the colors of red clump stars, later confirmed independently by @Fritz11. Another example was found by @Indebetouw05, using a different method based on [ *Spitzer*]{} photometry, in the direction of and around the star forming region RCW 49. A number of other studies on the extinction law using [*Spitzer*]{} followed, namely @Flaherty:2007aa, @Chapman:2009ab and @McClure:2009aa, who found a gray extinction law for star forming regions and molecular clouds; @Chapman:2009aa and @Roman-Zuniga07, who found a gray extinction law for cloud cores; and @Nishiyama:2009aa, who found a similar law for the Galactic center. @Chapman:2009aa, @Chapman:2009ab and @McClure:2009aa go a step further by analyzing the dependence of the extinction law with extinction, finding that the extinction law becomes grayer at higher extinction regimes. More recently, @Cambresy11 measured an actual change of the extinction law within the same region for a threshold of $A_V=20$ mag in the Trifid Nebula. 
In this paper we address the issue of determining the extinction law from photometric data alone and the biases inherent to some of the fitting methods used frequently in the literature. We begin by defining the mathematical problem and the possible methods to extract a linear fit (Sect. \[sec:extlaw\]), validating each method with synthetic data (Sect. \[sec:tests\]). In Sect. \[sec:detect-break-extinct\] we address the issue of detecting a flattening of the extinction law with extinction. This is the first paper in a series of two. The following paper (Ascenso et al., [*in preparation*]{}) will apply the results reported here to actual observations of cores in the Pipe Nebula. The extinction law from photometric data {#sec:extlaw} ======================================== Defining the problem {#sec:problem} -------------------- The problem of deriving the extinction law from photometric data is one of linear regression: the goal is to determine the slope of the line that best fits the reddening-displaced positions of the stars in a color-color diagram. This slope, $\beta$, is the ratio of two color excesses that compose the color-color diagram, e.g., $\beta=E_{\lambda-K}/E_\mathit{H-K}$ in a $\lambda-K$ [*vs.*]{} $H-K$ diagram, and is, in this case, related to the extinction law $A_\lambda/A_K$ by: $$\label{eq:9} \frac{A_\lambda}{A_K}=\left(\frac{A_H}{A_K}-1\right)\frac{E_{\lambda-K}}{E_\mathit{H-K}}+1$$ Linear regression, however, is not a simple science. The presence of errors in both coordinates, the fact that these errors vary with the quantities being analyzed (heteroscedasticity) and may be correlated, and the presence of intrinsic scatter make the required linear regression analysis far more complex than the typical chi-squared minimization of residuals.
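For concreteness, Eq. \[eq:9\] translates into a one-line conversion; this is our illustrative sketch, where the default $A_H/A_K = 1.55$ is the value adopted later in Sect. \[sec:synthetic-data\]:

```python
# Eq. (9): convert the fitted slope beta = E_{lambda-K}/E_{H-K} of a
# (lambda-K) vs (H-K) color-color diagram into the extinction law
# A_lambda/A_K.  The default A_H/A_K = 1.55 is the value adopted in the
# synthetic-data section (from Indebetouw et al. 2005).
def extinction_law_from_slope(beta, ah_ak=1.55):
    return (ah_ak - 1.0) * beta + 1.0

# Sanity check: for lambda = H the color-excess ratio is 1 by
# construction, and the conversion returns A_H/A_K itself.
assert abs(extinction_law_from_slope(1.0) - 1.55) < 1e-12
```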
Problems of this nature, in particular applied to astronomical analysis, have been debated in the literature for over 50 years [@Seares:1944aa; @Trumpler:1953aa], although every specific case seems to require a careful consideration of the methods to use. The particular distribution to which we presently wish to fit a linear function has the following characteristics: - The $X$ and $Y$ variables are photometric colors, obtained by subtracting the magnitudes of a star in two bands. Both variables are therefore subject to photometric errors that increase with magnitude but not in an entirely predictable way with color. - Because we observe (parts of) a non-homogeneous cloud, the amount of extinction is not the same for all stars, and since most of the area is at low extinction, the distribution of points in a color-color diagram is denser in the bluer end, and scarcer in the redder end. - The $X$ and $Y$ colors will usually have one band in common ([*e.g.*]{}, $J-H$ and $H-K$), which causes the errors to be correlated (anti-correlated in the example). - Because we observe the colors of a random sample of stars background to the cloud, the data will have intrinsic scatter caused by the range in spectral types of the stars. The intrinsic scatter is unrelated to extinction. To summarize, the position of each datapoint in the color-color diagram is determined by three factors: the intrinsic scatter from the dispersion in spectral types of field stars, the reddening caused by dust extinction, and the measurement (photometric) error. The first alone would trace the loci occupied by the unreddened colors of the population of stars in the field, mostly giant and main-sequence stars; the effect of extinction is to (dim the objects and) move each point along the line whose slope we want to determine; and the photometric errors scatter the points in an ellipse around the intrinsic, reddened position (not a circle because the errors are correlated). 
A proper method to measure the slope of this distribution should be robust enough to disentangle these effects and return the single slope of the reddening vector. Methods for linear regression {#sec:methods} ----------------------------- In this paper we test the following methods for linear regression applied to the problem described in the previous section. ### Least squares fitting {#sec:least-squar-fitt} The ordinary least-squares (OLS) fit is the simplest approach to linear regression. It works by minimizing the sum of the squared vertical distances of all data points to candidate lines in the 2-D (slope and intercept) parameter space. Formally, it is equivalent to finding $\alpha_\mathit{OLS}$ and $\beta_\mathit{OLS}$ that minimize the quantity: $$\label{eq:3} \chi^2(\alpha_\mathit{OLS}, \beta_\mathit{OLS})=\sum_{i=1}^{N}(y_i - \alpha_\mathit{OLS} - \beta_\mathit{OLS} x_i)^2$$ where $(x_i, y_i)$ is the $i^{th}$ data point, and $\alpha$ and $\beta$ are the intercept and the slope one is trying to find. This method treats all data points the same, even though the position of some points will be more uncertain than that of others. The knowledge of the measurement errors for each data point can be used to attribute different weights to the different data points, thus optimizing the fit; this is equivalent to finding $\alpha_\mathit{WLS}$ and $\beta_\mathit{WLS}$ that minimize the quantity: $$\label{eq:4} \chi^2(\alpha_\mathit{WLS}, \beta_\mathit{WLS})=\sum_{i=1}^{N}\frac{(y_i - \alpha_\mathit{WLS} - \beta_\mathit{WLS} x_i)^2}{\sigma_{yi}^2 + \beta_\mathit{WLS}^2\sigma_{xi}^2}$$ where $\sigma_{xi}$ and $\sigma_{yi}$ are the measurement errors of the $i^{th}$ data point. This method is called the weighted least squares (WLS) method.
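As a minimal sketch (ours, not the authors' code), both estimators can be written in a few lines of Python. Because the weights in Eq. \[eq:4\] depend on the slope itself, the WLS solution below is obtained by a simple fixed-point iteration, one of several possible choices:

```python
import numpy as np

def ols_fit(x, y):
    """Ordinary least squares (Eq. 3): closed-form intercept and slope."""
    beta = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    alpha = y.mean() - beta * x.mean()
    return alpha, beta

def wls_fit(x, y, sx, sy, n_iter=50):
    """Weighted least squares (Eq. 4) with errors on both axes.

    The weights 1/(sy^2 + beta^2 sx^2) depend on the slope itself, so we
    iterate: freeze the weights, solve the weighted problem in closed
    form, and repeat until the slope settles.
    """
    _, beta = ols_fit(x, y)            # OLS slope as starting point
    for _ in range(n_iter):
        w = 1.0 / (sy ** 2 + beta ** 2 * sx ** 2)
        xm = np.average(x, weights=w)
        ym = np.average(y, weights=w)
        beta = np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)
    return ym - beta * xm, beta        # intercept, slope
```

The $\beta^2\sigma_{x}^2$ term in the denominator of Eq. \[eq:4\] is what distinguishes this "effective variance" form of WLS, appropriate when both coordinates carry errors, from the textbook weighted fit.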
### Symmetrical methods {#sec:symmetrical-methods} These applications of the least squares method have been suggested as most appropriate when the intrinsic scatter in the data dominates over the measurement errors (see @Isobe:1990aa for a detailed discussion and formal description). As opposed to the method described above, they treat both variables symmetrically, so that it is no longer meaningful to speak of dependent and independent variables. Two of these methods are based on calculating the OLS slope of the Y $vs.$ X distribution, $\hat{\beta_1}=$ OLS(Y$|$X) following the notation of @Isobe:1990aa, and that of the X $vs.$ Y distribution, $\hat{\beta_2}=$ $1/$OLS(X$|$Y). The best fit is the line that bisects the OLS(Y$|$X) and the OLS(X$|$Y) lines in the bisector method (eq. \[eq:5\]), or the geometric mean of the OLS(Y$|$X) and OLS(X$|$Y) slopes in the geometric mean method (eq. \[eq:6\]). A third method, orthogonal regression (eq. \[eq:7\]), minimizes the distances of the points to a model line, but perpendicularly to the line instead of vertically as in the regular least-squares method. @Isobe:1990aa clearly state that these three methods do not produce the same or equivalent solutions. The plus or minus sign in the equations below refers to the sign of the covariance of the two variables: positive if they are correlated, negative if anti-correlated. $$\label{eq:5} \hat{\beta}_\mathit{bisector}=\frac{\hat{\beta}_1\hat{\beta}_2-1+\sqrt{(1+\hat{\beta}_1^2)(1+\hat{\beta}_2^2)}}{\hat{\beta}_1+\hat{\beta}_2}$$ $$\label{eq:6} \hat{\beta}_\mathit{geom}=\pm\sqrt{\hat{\beta}_1\hat{\beta}_2}$$ $$\label{eq:7} \hat{\beta}_\mathit{orth}=\frac{\hat{\beta}_2-\hat{\beta}_1^{-1}\pm\sqrt{4+(\hat{\beta}_2-\hat{\beta}_1^{-1})^2}}{2}$$ ### Binning {#sec:binning} Binning data always entails a loss of information, but what is lost in information is sometimes gained in simplicity of analysis and results.
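The three symmetrical estimators of Eqs. \[eq:5\] to \[eq:7\] reduce to simple combinations of the two OLS slopes $\hat{\beta}_1$ and $\hat{\beta}_2$; as an illustrative sketch (ours, not the authors' code, using the sign convention noted above):

```python
import numpy as np

def symmetric_slopes(x, y):
    """Bisector, geometric-mean and orthogonal slopes (Eqs. 5-7),
    built from b1 = OLS(Y|X) and b2 = 1/OLS(X|Y)."""
    cov = np.cov(x, y, bias=True)[0, 1]
    b1 = cov / np.var(x)          # OLS(Y|X)
    b2 = np.var(y) / cov          # 1/OLS(X|Y)
    sign = np.sign(cov)           # +1 if correlated, -1 if anti-correlated
    bisector = (b1 * b2 - 1.0
                + np.sqrt((1.0 + b1 ** 2) * (1.0 + b2 ** 2))) / (b1 + b2)
    geometric = sign * np.sqrt(b1 * b2)
    orthogonal = (b2 - 1.0 / b1
                  + sign * np.sqrt(4.0 + (b2 - 1.0 / b1) ** 2)) / 2.0
    return bisector, geometric, orthogonal
```

On scatter-free data all three reduce to the common slope; they differ, as @Isobe:1990aa stress, as soon as the two OLS slopes disagree.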
In their studies of extinction by molecular clouds, @Lombardi06 developed a method to determine the extinction law that consisted of binning the colors in both axes and fitting the bins using the weighted least-squares fitting described in Sect. \[sec:least-squar-fitt\]. The characterization of the data changes entirely, because (1) the data points are no longer measurements but the weighted average of many measurements, (2) the errors are no longer measurement errors but the dispersion of colors within each bin, and (3) the fewer high extinction points are given more weight than before, as they are represented by a bin with the same weight as those bins containing more points. The latter point implies that every extinction range carries the same weight, regardless of where the majority of datapoints lie, although at the potential cost of introducing small-number statistics issues. In this way, instead of the fit being weighted by the more abundant points at low extinction, as is the case for the fit to all datapoints, it is equally weighted by a much larger range of extinction, thus allowing a better constraint of the extinction law. It also implies that part of the intrinsic scatter problem is eliminated, as the high extinction population will be dominated by giants, whose range in intrinsic colors is much narrower. For the purpose of these experiments we have binned the data in two ways: (1) along the color in the $X$-axis in bins of color, and (2) iteratively along an assumed reddening vector in bins of $A_V$, following the method described by @Lombardi06. ### BCES method {#sec:bces-methods} The standard BCES method [@Akritas:1996fk] starts from the simple observation that the slope $\beta$ that minimizes the standard OLS Eq.
(1) can be alternatively written as $$\beta_\mathit{OLS} = \frac{\mathrm{Cov}(x,y)}{\mathrm{Var}(x)} \; ,$$ where $\mathrm{Cov}(x,y)$ is the observed covariance between the data $\{ x_i \}$ and $\{ y_i \}$, and $\mathrm{Var}(x)$ is the variance of $\{ x_i \}$. These two quantities can be evaluated from the usual equations $$\begin{gathered} \mathrm{Var}(x) = \frac{1}{N} \sum_{i=1}^{N} \bigl(x_i - \bar x \bigr)^2 \; , \\ \mathrm{Cov}(x,y) = \frac{1}{N} \sum_{i=1}^{N} \bigl(x_i - \bar x \bigr) \bigl(y_i - \bar y \bigr) \; ,\end{gathered}$$ where the bar indicates the average values. Note that for the specific purposes of the BCES method the variance and covariance of the data should be evaluated using $N$ in the denominator (instead of the more common $N-1$, used when estimating the variance and the mean from the same dataset). The equation for $\beta$ above suggests that we can easily take into account the presence of measurement errors on both $x$ and $y$ by simply subtracting their effect from the estimates of the covariance and variance. To this purpose, suppose that each point $(x_i, y_i)$ in our dataset is affected by a statistical error, so that the measured values are $(\hat x_i, \hat y_i) = (x_i, y_i) + (\epsilon^x_i, \epsilon^y_i)$. The quantities $(\epsilon^x_i, \epsilon^y_i)$ represent the errors, and are drawn from some distribution; the errors on $x$ and $y$ are not assumed to be uncorrelated here. The presence of the errors changes the covariance and variance in the expression of $\beta$ above as follows $$\begin{gathered} \mathrm{Cov}(x,y) \mapsto \mathrm{Cov}(x, y) + \mathrm{Cov}(\epsilon^x, \epsilon^y) \; , \\ \mathrm{Var}(x) \mapsto \mathrm{Var}(x) + \mathrm{Var}(\epsilon^x) \; ,\end{gathered}$$ where we have introduced the variance and covariance of the measurement errors.
Since the true, underlying variance and covariance are what matter for the estimate of $\beta$, the equations above suggest that, in the presence of measurement errors, we can replace the equation above with $$\label{eq:10} \beta_\mathit{BCES} = \frac{\mathrm{Cov}(x,y) - \mathrm{Cov}(\epsilon^x, \epsilon^y)}{\mathrm{Var}(x) - \mathrm{Var}(\epsilon^x)} \; .$$ Note that, in contrast to $(x,y)$, the variance and covariance of the measurement errors are assumed to be known and cannot be derived from the data alone. For example, in our case, if $(x,y)$ are two colors, say $x = H - K$ and $y = J - H$, we will have $$\begin{gathered} \mathrm{Cov}(\epsilon^x, \epsilon^y) = -\sigma^2_H \; , \\ \mathrm{Var}(\epsilon^x) = \sigma^2_H + \sigma^2_K \; ,\end{gathered}$$ where $\sigma_H$ and $\sigma_K$ are the mean photometric errors in the $H$ and $K$ bands, respectively. ### A new method for linear regression: LinES {#sec:new-method-lines} The standard BCES method is very simple to implement but, although in principle it should be able to cope with intrinsic scatter, in practice, at least in the original version presented by @Akritas:1996fk, it does not. The authors do mention the possibility of accounting for intrinsic scatter in the data, but only when $x$ is measured without error, which is not applicable to the case of the extinction law. Additionally, in the situations that they consider, the amount of intrinsic scatter is taken to be unknown. In our case, however, we can estimate this quantity directly from a control field with no extinction: the variance and covariance of the colors there are just the sums of the contributions from the intrinsic scatter in the colors of the stars and from the photometric errors of the control field.
This way we can derive the slope using the following expression: $$\beta_\mathit{LinES} = \frac{\mathrm{Cov}(x,y) - \mathrm{Cov}(\epsilon^x, \epsilon^y) - \mathrm{Cov}(x^{cf},y^{cf}) + \mathrm{Cov}(\epsilon^{cfx}, \epsilon^{cfy})}{\mathrm{Var}(x) - \mathrm{Var}(\epsilon^x) - \mathrm{Var}(x^{cf}) + \mathrm{Var}(\epsilon^{cfx})} \; .$$ We call this estimate the LinES method ([**Lin**]{}ear regression with [**E**]{}rrors and [**S**]{}catter)[^3]. As shown in section \[sec:lines-method\], this method is robust against the presence of correlated errors in both variables, and against the presence of intrinsic scatter. Method validation {#sec:tests} ================= We tested the methods described above using controlled sets of synthetic data. The following sections discuss these data, and the effects of varying the data parameters on the results produced by each method. Synthetic data {#sec:synthetic-data} -------------- We constructed a set of simulations meant as controlled, realistic datasets comparable to the observed data for the Pipe Nebula cores [@Lombardi06; @Alves07]. The simulated data consisted of a number of points (stars) characterized by brightnesses in three bands - arbitrarily $J$, $H$ and $K$ - affected by a value of extinction that obeys a predetermined extinction law. We simulated three sets of data to test the different aspects of each method. For all sets, the $J$ brightnesses were drawn randomly from the distribution of $J$ luminosities from an observed unreddened control field toward the galactic bulge (Fig. \[fig:model-jlf\]). The synthetic extinction profile was defined as a lognormal distribution (Eq. ) centered at $\mu=\log(2.5)$, the logarithm of the median extinction of the Pipe, and with a width of $\sigma=0.46$, chosen so that the number of stars at high extinctions approximately matches that for the observed data of the Pipe cores (Fig. \[fig:model-av\]). 
$$\label{eq:1} \mathrm{PDF} = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(\log(A_V)-\mu)^2}{2\sigma^2}}.$$ Because the probability distribution function (PDF) for the extinction is a lognormal, there will be many stars at low extinction and progressively fewer stars at high extinction. $A_K$ was then calculated from the relation $A_K/A_V = 0.112$ [@RiekeLebofsky85] and the values of extinction drawn from eq. were applied to the $J$, $H$ and $K$ brightnesses using the extinction law characterized by: $$\begin{gathered} \label{eq:2} A_H/A_K = 1.55 \\ \label{eq:2.1} A_J/A_K = 1.55(\beta + 1) - \beta.\end{gathered}$$ The second equation, derived from eq. for a $(J-H)$ [*vs.*]{} $(H-K)$ diagram, defines $\beta=E(J-H)/E(H-K)$, which means the reddening vector has a slope of $\beta$ in a $(J-H)$ [*vs.*]{} $(H-K)$ color-color diagram. $A_H/A_K=1.55$ is adopted from @Indebetouw05. ### Set 1: Homoscedastic data, no intrinsic scatter {#sec:set1} For the first set, the $H$ and $K$ brightnesses were derived from $J$ assuming that all stars have the typical intrinsic color of giants [$J-H = 0.7$ and $H-K = 0.15$, @BesselBret88][^4]. Each star was then assigned a value of extinction drawn randomly from the extinction profile (eqs. - , Fig. \[fig:model-av\]). In addition to reddening, each star was assigned errors in $J$, $H$ and $K$, to simulate the photometric errors inherent to real data. The errors were first applied in the simplest possible way: independently of brightness. The magnitudes of the errors were drawn randomly and independently for $J$, $H$ and $K$, from a normal distribution with $\mu = 0$ and $\sigma = 0.05$, so that 95% of the stars will have errors below 0.1 mag, the typical acceptable errors for photometry. A random value from these distributions was then added to the magnitudes of each star in all bands independently. 
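For illustration, this construction together with a BCES-type recovery (Eq. \[eq:10\]) can be sketched end-to-end in Python. The sample size, random seed and error value are our arbitrary choices, and we read Eq. \[eq:1\] as a base-10 lognormal (the base does not affect the slope recovery):

```python
import numpy as np

rng = np.random.default_rng(42)
n_stars = 4000
beta_in = 1.8                        # input slope E(J-H)/E(H-K)
jh0, hk0 = 0.7, 0.15                 # single giant intrinsic color (Set 1)

# Lognormal extinction profile (Eq. 1), read here as base-10:
av = 10.0 ** rng.normal(np.log10(2.5), 0.46, n_stars)
ak = 0.112 * av                      # A_K/A_V (Rieke & Lebofsky 1985)

# Extinction law of Eqs. (2), (2.1) turned into color excesses:
ah_ak = 1.55
aj_ak = 1.55 * (beta_in + 1.0) - beta_in
e_hk = (ah_ak - 1.0) * ak            # E(H-K) = A_H - A_K
e_jh = (aj_ak - ah_ak) * ak          # E(J-H) = A_J - A_H

# Homoscedastic photometric errors, sigma = 0.05 mag per band:
sig = 0.05
ej, eh, ek = (rng.normal(0.0, sig, n_stars) for _ in range(3))
x = hk0 + e_hk + (eh - ek)           # observed H-K color
y = jh0 + e_jh + (ej - eh)           # observed J-H color

# BCES recovery (Eq. 10): subtract the known error (co)variances.
cov_err = -sig ** 2                  # shared H band -> anti-correlation
var_err = 2.0 * sig ** 2             # Var(eps_H) + Var(eps_K)
beta_out = (np.cov(x, y, bias=True)[0, 1] - cov_err) / (np.var(x) - var_err)
print(f"input slope {beta_in}, recovered {beta_out:.3f}")
```

Note how the anti-correlation introduced by the shared $H$ band enters as a negative $\mathrm{Cov}(\epsilon^x, \epsilon^y)$, exactly as in the BCES correction of Sect. \[sec:bces-methods\].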
The synthetic data produced in this way will henceforth be referred to as Set 1, or homoscedastic dataset, since the errors do not scale with the variables. ### Set 2: Heteroscedastic data, no intrinsic scatter {#sec:set2} A second approach was designed to reproduce the fact that the error in real observations does in fact increase with magnitude. We modeled this dependence using an error distribution in the form of a power-law $S(m) = Cm^x$, where $S$ is the typical error associated with magnitude $m$ (Fig. \[fig:model-err\]). The normalization constant $C$ was set so that 90% of the $25^{th}$ magnitude stars have an error up to 0.3 mag, and the index $x$, that defines how rapidly the errors increase with magnitude, was set to $4$. Both parameters were empirically chosen to produce a curve that resembles a typical error distribution of the NIR data, but other combinations around these values would also be good representations of the general distribution of photometric errors in a sample, and do not change the results. This function was then used to determine the width of the Gaussian from which the errors for each star were drawn according to its magnitude, the error for the bright stars being drawn from a narrow Gaussian, and that for the faint stars being drawn from a wider Gaussian (see the insets in Fig. \[fig:model-err\]). The synthetic data produced in this way will hereafter be referred to as Set 2, or heteroscedastic data, since the errors do scale with magnitude. ![image](fig4.eps){width="\textwidth"} Figure \[fig:gen-synth-data\] shows the construction of the synthetic data in the color-color diagrams for a given realization of heteroscedastic data without intrinsic scatter. 
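This error model can be sketched as follows; the $1.645$ factor is our reading of the 90% condition (for a Gaussian, 90% of draws fall within $1.645\,\sigma$, so $S(25) = 0.3/1.645$), which the text does not spell out:

```python
import numpy as np

# Power-law error model of Set 2: S(m) = C * m**4 gives the width of the
# Gaussian from which a star's photometric error is drawn.  C is fixed
# so that 90% of stars at m = 25 have |error| <= 0.3 mag.
X_INDEX = 4
C = (0.3 / 1.645) / 25.0 ** X_INDEX

def error_width(m):
    """Gaussian width of the photometric error at magnitude m."""
    return C * np.asarray(m, dtype=float) ** X_INDEX

# Draw one error per star: bright stars come from a narrow Gaussian,
# faint stars from a wide one.
rng = np.random.default_rng(0)
mags = np.array([15.0, 20.0, 25.0])
errors = rng.normal(0.0, error_width(mags))
```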
### Set 3: Heteroscedastic data with intrinsic scatter {#sec:set3} The final experiment was meant to represent the fact that the stars being observed through the cloud do not have a single color, but are rather distributed along the main-sequence and giant loci according to their individual masses and ages. This intrinsic “scatter” about a single color was modeled by assuming a fraction $f$ of main-sequence-to-giant stars, and randomly populating the loci accordingly, regardless of the stars’ brightnesses. This does not produce a completely realistic set of data because the position of the stars would depend on their brightness (see Sect. \[sec:real-data\]), but our goal is to test how the presence of a generic, non-symmetrical intrinsic scatter affects the measurement of the slope of the extinction law. In this dataset there are then three contributions to the distribution of points in the color-color diagram: the distribution of intrinsic colors of stars background to the cloud, which spreads the stars along the main-sequence and giant loci; the extinction, which moves each point from its intrinsic position along a line whose slope is determined by the properties of the dust; and the magnitude-dependent photometric error, which, in a color-color diagram, is equivalent to each point being drawn from an ellipse around each intrinsic and reddened point, whose dimensions depend on the observed brightness of the corresponding star. The data produced in this way will be referred to as Set 3, or heteroscedastic data with intrinsic scatter. ### Synthetic control fields {#sec:synth-control-fields} For each set intended to pose as science data, we generated a corresponding set intended to pose as data from a control field. For Sets 1 and 2 the synthetic control fields had equivalent homoscedastic or heteroscedastic errors, respectively, drawn randomly but independently from the same distributions as the errors for the synthetic “science” fields.
For Set 3, apart from the heteroscedastic errors, the control field simulation also included an amount of intrinsic scatter equivalent to that of the synthetic “science” field. Neither of the synthetic control fields included extinction, as they are meant to represent the population background to the cloud that is causing the reddening on the “science” field stars. “Real” data {#sec:real-data} ----------- There is one aspect of real observations that we cannot test with the synthetic datasets described above: whereas at low extinction both main-sequence and giants can be observed, at high extinction the main-sequence stars are more efficiently dimmed below the detection limit, leaving a population dominated by the intrinsically brighter giants at redder colors. In our definitions for the synthetic data, this means that $f$ should change with extinction, being larger at low extinctions and progressively smaller at high extinctions. As already hinted in Sect. \[sec:set3\], in set 3 of our synthetic data we do simulate intrinsic scatter, but the brightness of each star does not scale with its spectral type, which translates into having a constant $f$ throughout the entire extinction range. To test the effect of a varying amount of intrinsic scatter with extinction on the methods we applied them to actual observations of control fields (courtesy of C. Róman-Zúñiga), which we reddened in the same way as we did the synthetic data. We refer to these datasets as “real”, keeping the quotes to make clear that the actual observed data were then artificially modified. The first dataset contains data taken with the SOFI instrument at ESO’s New Technology Telescope, in the direction of the galactic disk ($10^{h}38^{m}12^{s}$, $-59^\circ12'02''$, J2000.0), on the night of March 31$^{st}$, 2006. This dataset, which we refer to as “disk dataset”, contains 548 stars with PSF photometry in the $J$, $H$ and $K_s$ filters. 
This is not the ideal dataset for two reasons: first, it contains few stars, and second, since this is a field in the galactic disk, it already has some extinction. The second dataset contains data also taken with the SOFI instrument at ESO’s New Technology Telescope on the night of June 22$^{nd}$, 2002, but in the direction of the galactic bulge ($17^{h}08^{m}10^{s}$, $-28^\circ03'03''$, J2000.0). This dataset, which we refer to as “bulge dataset”, contains 1071 stars with PSF photometry in the $J$, $H$ and $K_s$ filters. 5000 “science” subsets were drawn randomly from each of these datasets, and extinction was applied to each star from the extinction distribution (see Eq. , Figure \[fig:model-av\]). Similarly, 5000 “control field” subsets were drawn randomly from each dataset, and used as they were. 450 and 800 stars were drawn from the disk and the bulge datasets, respectively, in order to keep the possibility of the “control” and “science” subsets being made of different stars. For the purposes of these experiments, the two sets differ in three aspects: the bulge dataset contains more stars, does not have extinction, and is made up mostly of giant stars, whereas the disk dataset contains fewer stars, already has some extinction, and is more likely to have a higher fraction of main-sequence objects. The $(J-H)$ [*vs.*]{} $(H-K)$ color-color diagrams of the two datasets are shown in Figures \[fig:disk-dataset\] and \[fig:bulge-dataset\], and illustrate these differences. In both figures, the left panels are the original datasets, and the middle and right panels show one realization of the extracted “control field” and “science” subsets used for the tests, respectively. ![image](fig5.eps){width="\textwidth"} ![image](fig6.eps){width="\textwidth"} The only caveat regarding these datasets is that, when reddening the stars to pose as science subsets, their magnitudes change, so their associated errors should change accordingly.
However, since these are real observations and given the statistical nature of the errors, that adjustment is not possible. In practice, this means that there will be stars at high extinction with an underestimated associated error, but for the purpose of our tests this is not critical, since there continues to be no clear dependence of the error on extinction, as is the case in real data subject to extinction. Parameters {#sec:synth-parameters} ---------- To test the robustness of the methods, the synthetic datasets were generated using a range of parameters, namely input slopes of the reddening vector, amount of intrinsic scatter, size of the sample, and other specific parameters only relevant to some of the methods. #### Input slope {#sec:input-slope} Each set was generated with seven values of input slope $\beta$ in the range $[-1.0, 3.0]$ to cover the range expected for an extinction law in the near- and mid-infrared. The methods were tested under ideal conditions of number of stars and $A_V$ coverage to test only the ability of the methods to deal with different values of $\beta$. The input slope was varied to guarantee that our conclusions are not only valid for one specific value of $\beta$. While varying the remaining parameters the input slope was fixed at 1.8. #### Magnitude limit {#sec:magnitude-limit} The magnitude limit was parameterized by $m_c$. It corresponds to setting a brightness limit in real data, and it was applied identically in $J$, $H$ and $K$ such that all the synthetic stars fainter than $m_c$ in any band after applying the extinction are discarded from the fit (see rightmost panel in Fig. \[fig:gen-synth-data\]). A magnitude limit is naturally set in real data (detection limit), but is also something one might consider doing artificially to eliminate those stars with the largest photometric errors.
Decreasing $m_c$ is equivalent to reducing the size of the sample, both in number and in range of extinction (the stars at high extinction will likely be fainter), while simultaneously reducing the range of errors in Sets 2 and 3 (fainter stars will have larger errors). By construction, a value of $m_c=25$ corresponds to allowing the largest errors to be 0.3 mag (see sect. \[sec:set2\]).

  $m_c$   $N_S$   $N_S^{\mathit{eff}}$   $r_{\mathit{sci}}$   [*[(H$-$K)]{}$_{\mathit{max}}$*]{}
  ------- ------- ---------------------- -------------------- ------------------------------------
  25      5000    4929                   0.99                 2.32
  23      5000    4850                   0.97                 1.98
  21      5000    4550                   0.91                 1.65
  19      5000    2930                   0.59                 1.35
  17      5000    680                    0.14                 1.04

  : Effective number of stars as a function of magnitude cut.[]{data-label="tab:nstars_eff"}

Table \[tab:nstars\_eff\] shows the average[^5] number of stars from Set 3 that survive each magnitude cut (effective number of stars, $N_S^{\mathit{eff}}$) from an initial sample of $N_S = 5000$ synthetic stars. The ratio $r_{\mathit{sci}}=N_S^{\mathit{eff}}/N_S$ reflects the functional form of the model distributions and, as such, would be the same for any other input number of stars $N_S$ for each magnitude cut. The table also shows the average $H-K$ color of the most heavily reddened datapoint attained for each magnitude cut, illustrating the loss in $A_V$ coverage with magnitude limitation. The same magnitude cuts were applied to the control field, but because the control field does not have extinction, the effect of the cut on the effective number of stars is much more subtle. The ratio of effective to initial number of stars for the control field is $1.00$ for $m_c \ge 21$ mag, and drops to $0.90$ and $0.25$ for $m_c$ of 19 mag and 17 mag, respectively.

#### Number of stars {#sec:number-stars}

Generating fewer stars in the first place also changes the size of the sample.
The (subtle) difference with respect to implementing magnitude cuts is that the sample with fewer stars and no magnitude cut will most likely have a broader range of $A_V$ than would a richer sample with a magnitude cut, since some of the stars that are generated and kept may, through the random drawing, still be faint and heavily extincted. In real observations, generating fewer stars without imposing magnitude cuts would be comparable to observing a sparse region of the sky where the faint and highly reddened stars can be detected. Imposing a magnitude limit, on the other hand, would correspond to having shallow observations regardless of the richness of the observed field; in this case, the fainter and more reddened stars would not be detected. We tested the methods against varying numbers of stars in the range $[100, 5000]$, both in the science and the control field.

#### Amount of intrinsic scatter {#sec:amount-intr-scatt}

The amount of intrinsic scatter for Set 3 was parameterized by $f$, the fraction of stars in the main-sequence locus with respect to those in the giant locus, taking values of 0.01, 0.15 and 0.50. Given the shape of the loci, a larger scatter is obtained for larger values of $f$, although the giant locus alone still produces some intrinsic scatter. Larger values of $f$ would correspond, in real data, to a large fraction of the stars behind the cloud being main-sequence stars, as opposed to mainly giants. When varying the remaining parameters, $f$ is fixed at 0.15 for Set 3 ($f$ is not a parameter in Sets 1 and 2).

#### Number of control-field stars {#sec:number-control-field}

The synthetic control field is only used for the LinES method, as the other methods rely solely on the science data. Unless explicitly stated, the control field was generated with the same number of stars and was subject to the same magnitude cuts as the synthetic “science” datasets.
This means that, after applying a given magnitude cut, there will effectively be more stars in the control field sample than in the science sample, since a fraction of the stars in the science dataset will have been dimmed below the magnitude cut by the effect of extinction. This mimics real observations in that the control field should be obtained from a region of comparable background stellar density (same number of stars generated in the simulations) and using the same instrumental setup (same magnitude cut in the simulations) as the science field. To test the effect of a less than ideal control field, we tested the LinES method against varying numbers of stars also in the control-field datasets.

Results from synthetic data {#sec:synth-results}
---------------------------

Each method was applied to 5000 realizations of the synthetic data, producing an average value $\hat{\beta}$ and a dispersion $\sigma_{\hat{\beta}}$ around the average for each parameter within Sets 1, 2, and 3. We define the bias $b$ as the difference between $\hat{\beta}$ and the input value of the slope ($b=\hat{\beta}-\beta$), and consider a method unbiased if it produces an estimate within the 1-$\sigma$ dispersion of the input value ([*i.e.*]{}, $b \le \sigma_{\hat{\beta}}$). We note that the quoted absolute values of the biases are formally only valid for the conditions of these simulations, namely for the magnitude of the error and the functional form of the error distribution with magnitude. The results are summarized below. Figures \[fig:results-ols\] through \[fig:results-cbces\] show the bias as a function of the varied parameters.

### Ordinary least-squares {#sec:ols}

![image](fig7.eps){width="17cm"}

The OLS method has long been known to be biased when there are errors in both coordinates. This was also observed in our simulations (Fig. \[fig:results-ols\]), with the method failing to recover the right value of the slope in the majority of the tests performed.
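The classical attenuation of the OLS slope when the abscissa carries noise can be reproduced in a few lines; the slope and noise levels below are illustrative, not those of the simulations:

```python
import numpy as np

rng = np.random.default_rng(1)

beta_true, n = 1.8, 20000
x_true = rng.uniform(0.0, 2.0, n)            # noiseless x-color
y_true = 0.3 + beta_true * x_true            # noiseless reddening vector

sx = sy = 0.3                                # hypothetical photometric errors
x = x_true + rng.normal(0.0, sx, n)
y = y_true + rng.normal(0.0, sy, n)

# OLS of y on x: with noise on x the slope shrinks by the classical
# attenuation factor var(x_true) / (var(x_true) + sx**2)
beta_ols = np.polyfit(x, y, 1)[0]
attenuation = x_true.var() / (x_true.var() + sx**2)
```

Here the recovered slope is roughly `beta_true * attenuation`, systematically below the input value; noise in $y$ alone would not bias the slope, which is why the homoscedastic, error-free axis case behaves so differently.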
It did provide an unbiased result for the following combinations of input slopes and datasets: $\beta=-0.5$ for Set 1, $\beta=-1.0$ and $\beta=-0.5$ for Set 2, and $\beta=1.0$ for Set 3, making this method unsuitable for a precise determination of $\beta$. This method is also not robust against variations in $m_c$, the bias and the dispersion both increasing for brighter $m_c$ in the heteroscedastic sample, except for the brightest magnitude cut considered, where the bias is suddenly reduced. It also reacts, although to a lesser extent, to changes in the amount of intrinsic scatter in the heteroscedastic sample (Set 3), the bias increasing with $f$. This method is extremely robust against variations in the number of stars within the same magnitude cut, although the dispersion in the slope increases for fewer stars, as would be expected from poor statistics, and the absolute value remains biased. Overall, this method is not a reliable estimator of the extinction law.

### Weighted least-squares {#sec:wols}

![image](fig8.eps){width="17cm"}

The WLS method performs well on homoscedastic or heteroscedastic data without intrinsic scatter (Fig. \[fig:results-wls\]), suggesting that, under these conditions, the method can deal properly with the presence of errors in both coordinates, and with them being correlated. For these samples the dispersion in the slope is remarkably small, making it a very accurate estimator. In the presence of intrinsic scatter, however, the method systematically fails to recover the input slope whatever its value in the range $[-1.0, 3.0]$, although it does come close around $\beta=2.5$. The bias as a function of input slope for this method and dataset (Fig. \[fig:results-wls\], blue line) suggests that there could be another unbiased value of $\beta$ between $-0.5$ and $0.5$, but tests suggest that there is instead a discontinuity around $\beta=0$.
Although biased for Set 3, this method is robust against variations in the number of stars to within 1.5% in the considered range, but the dispersion increases steadily for fewer stars. Being biased in the presence of even a small intrinsic scatter, this method is not a reliable estimator of the extinction law.

### Symmetrical methods {#sec:symm-methods-results}

![image](fig9.eps){width="17cm"}

The three symmetrical methods returned biased results for all tests (Fig. \[fig:results-symm\]), except for input slopes of $-1.0$ and $1.0$ in datasets 1 and 2 (without intrinsic scatter). The bisector and geometric mean methods produce very similar results. The orthogonal regression method presents the largest biases of the three. Besides being biased, none of these methods is robust against variations in $m_c$ or $f$ in the presence of intrinsic scatter, the bias increasing for bright magnitude cuts and more intrinsic scatter. The methods are highly robust to variations in the number of stars, the dispersion increasing steadily for fewer stars. These methods are not reliable estimators of the extinction law.

### Binning in $(H-K)$ {#sec:binning-hk}

![image](fig10.eps){width="17cm"}

This method is biased for most datasets and parameters tested; the exceptions are for Set 3 with the brightest magnitude cut ($m_c=17$ mag) or for $\beta=0.5$ and $1.0$, and for $\beta=-0.5$ and no intrinsic scatter (Fig. \[fig:results-binhk\]). The slope is mostly underestimated for all datasets, with the bias increasing for brighter $m_c$ for Set 1, and remaining relatively stable against varying $m_c$ for Sets 2 and 3. For Set 3 the bias slightly increases with $f$, as does the dispersion. This method reacts to the number of stars for a given magnitude cut, the bias increasing toward fewer stars. Overall, this is not a reliable estimator of the extinction law.
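One plausible reading of a binning-in-$(H-K)$ estimator, sketched with plain averages and an ordinary fit through the bin means (the paper's exact implementation is not spelled out, so this is an assumption):

```python
import numpy as np

def slope_binning_x(x, y, nbins=20):
    """Fit a slope through the per-bin mean colors after binning in x.

    Sketch of a binning-in-(H-K)-style estimator: bin the data in the
    x-color, average both colors in each bin, and fit a line to the
    bin means.
    """
    edges = np.linspace(x.min(), x.max(), nbins + 1)
    idx = np.digitize(x, edges[1:-1])        # bin index 0..nbins-1
    xm, ym = [], []
    for i in range(nbins):
        sel = idx == i
        if sel.any():
            xm.append(x[sel].mean())
            ym.append(y[sel].mean())
    return np.polyfit(xm, ym, 1)[0]
```

With noise only on the ordinate this recovers the slope; the biases reported above appear once the binning variable itself carries noise, since stars then migrate across bin edges preferentially in one direction.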
### Binning in $A_V$ {#sec:binning-a_v}

![image](fig11.eps){width="17cm"}

This method is formally unbiased for all tested values of $\beta$ when applied to the three datasets (Fig. \[fig:results-binav\]). However, whereas the bias is always close to zero for the datasets without intrinsic scatter, it becomes slightly larger for most values of $\beta$ when intrinsic scatter is introduced. The method is robust against variations in the amount of intrinsic scatter within the considered range, and it is stable against variations in magnitude cut, except for the brightest value of $m_c$ for Set 3, where the smaller number of stars and range in $A_V$ result in very few bins for the fit. The dispersion increases toward brighter magnitude cuts and with the amount of intrinsic scatter. Because it is based on binning, this method reacts significantly to the number of stars within the same magnitude cut, the bias and the dispersion increasing for fewer stars. Small variations occur when the size of the bin is varied, with the slope being increasingly overestimated for smaller bins, but the method remains unbiased within the dispersion. This method is a reliable estimator of the extinction law.

### BCES method {#sec:bces-method}

![image](fig12.eps){width="17cm"}

This method is highly reliable and unbiased for homoscedastic data and for heteroscedastic data without intrinsic scatter (Sets 1 and 2). However, in the presence of intrinsic scatter (Set 3), it becomes biased for all input slope values except for $\beta=2.1$ (Fig. \[fig:results-bces\]) for $f=0.15$. Since the other parameters were tested using an input slope of 1.8 (very close to 2.1), the bias seems small in the $m_c$, $N_\mathit{stars}$ and $f$ plots, but we can nevertheless analyze the sensitivity of the method to these parameters. For Set 3, the bias is mildly sensitive to $m_c$, robust against variations in the number of stars, and only minimally sensitive to variations in the amount of intrinsic scatter $f$.
Since real data will be similar, in essence, to Set 3, we do not consider this method a reliable estimator in the specific case of the extinction law.

### LinES {#sec:lines-method}

![image](fig13.eps){width="17cm"}

This method is the most unbiased and robust of all presented here for homoscedastic or heteroscedastic data, with or without intrinsic scatter, for all tested values of the input slope (Fig. \[fig:results-cbces\]). The method is robust against variations in $m_c$ and $f$, number of stars in the science field, and number of stars in the control field, although the dispersion follows the same tendency as before: it increases for brighter magnitude cuts, increases slightly with $f$, and decreases with the number of science and/or control field stars. The bias is larger when there are simultaneously very few science and control field stars and small $A_V$ coverage, but it is nevertheless smaller than that of any of the other methods under the same conditions. The dispersion is, in all cases, significantly smaller than that of the next least biased method, the binning in $A_V$, granting it more precision. Since LinES relies on the characterization of the data through the properties of a control field, we tested the stability of the method against variations in the number of stars in the control field. Given a reasonable number of stars in the science field, the method is robust against variations in the number of control field stars. However, if the science field itself does not have enough stars or $A_V$ coverage, the bias increases further for few control field stars. Invariably, the dispersion increases toward fewer control field stars. This method has proven to be robust as long as the control field is a good representation of the underlying population of the science field, even if it contains a smaller number of stars. The excellent performance of this method while varying all relevant parameters validates the LinES method for our case study.
In general, it will provide accurate results for observations of cores with either rich or poor background populations, regardless of their spread in spectral types, even for relatively shallow observations, as long as there is a reasonable spread in extinction and the control field is a good representation of the reddened, background population.

#### Limitations {#sec:limitations}

The simulations show that LinES is not reliable for distributions that do not cover a wide enough range of extinction when the errors are large. For reasonable errors, like those described for the simulations, LinES starts overestimating the slope by more than 10% for ranges in $x$-color (i.e., the color plotted on the $x$-axis) smaller than 0.25 to 0.45 mag for slopes between 0.6 and 3.0, respectively, and underestimating the slope by more than 10% for the same ranges in $x$-color for slopes between 0.3 and 0.5. This method should therefore not be applied to data that span less than these values in $x$-color.

#### Error estimation {#sec:error-estimation}

We used the bootstrap method [e.g., @Wall:2003uq] to estimate the uncertainty in the slope derived by LinES. This method consists of randomly dividing each sample into two equal-sized subsets, and measuring the slope of the extinction law in each subset. This produces two values of $\beta$ from which we derive the standard deviation $\sigma_{\beta_i}$. This was repeated $N=1000$ times and the uncertainty in the slope was defined as: $$\label{eq:8} \sigma_\beta=\frac{1}{\sqrt{2}N}\displaystyle\sum_{i=1}^N \sigma_{\beta_i}$$ We applied this method to the synthetic data and compared the results with the dispersion $\sigma_{\hat{\beta}}$ from the simulations. We found that the bootstrap uncertainty is typically $80\%$ of $\sigma_{\hat{\beta}}$, the dispersion from the synthetic data, which we believe is a better estimate of the actual dispersion expected for real data.
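The half-sample procedure behind Eq. (8) can be sketched as follows; `fit_slope` is a placeholder for the LinES fit (any slope estimator will do for the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

def halving_uncertainty(x, y, fit_slope, n_real=1000):
    """Slope uncertainty following Eq. (8): split the sample into two
    equal halves at random, measure the slope in each half, take the
    standard deviation of the pair, repeat N times, and average with
    the 1/sqrt(2) scaling.

    `fit_slope` stands in for the LinES fit; any (x, y) -> slope
    callable will do (an assumption of this sketch).
    """
    n = len(x)
    total = 0.0
    for _ in range(n_real):
        perm = rng.permutation(n)
        half_a, half_b = perm[: n // 2], perm[n // 2:]
        s_a = fit_slope(x[half_a], y[half_a])
        s_b = fit_slope(x[half_b], y[half_b])
        total += np.std([s_a, s_b])          # sigma_beta_i for this split
    return total / (np.sqrt(2.0) * n_real)
```

The $1/\sqrt{2}$ factor compensates for the halved sample size of each fit relative to the full sample.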
Given the consistency of the results against the variation of the different parameters, we take the uncertainty on the LinES result of our case study to be $1.25$ times the uncertainty derived by the bootstrap method described above.

Results from “real” data {#sec:real-results}
------------------------

![image](fig14.eps){width="17cm"}

![image](fig15.eps){width="17cm"}

Figures \[fig:results-set4beta\] and \[fig:results-set4mc\] show the bias of each method [*vs.*]{} varying input slope and varying magnitude cut, respectively, for the “real” datasets described in Sec. \[sec:real-data\]. Except for the cases described below, these results follow those from the synthetic data, maintaining the same behavior and changing only the magnitude of the bias. The methods that deserve some further consideration are the binning in $A_V$ method, the BCES, and LinES. The BCES ([*bottom-middle panel*]{} in the figures) applied to these datasets shows the same type of behavior as with the synthetic data against varying magnitude cuts (Fig. \[fig:results-set4mc\]). The bias is slightly larger than before because there are fewer stars and more dispersion. For varying input slopes, however, the bias is considerably larger, showing that the method is severely biased for some ranges of $\beta$ when applied to less than ideal data. The binning in $A_V$ method, which we concluded was reasonably unbiased when applied to the synthetic data, also shows a severe bias for some values of the input slope $\beta$, in particular for the disk dataset ([*bottom-left panel*]{} in the figures). However, this happens because this dataset has very few stars and a very limited range in $A_V$ (see Figure \[fig:disk-dataset\]), which translates into too few $A_V$ bins to constrain the extinction vector slope adequately. This is a limitation of the method; away from this regime, it can be used with relatively good precision. In the “real” datasets, LinES continues to perform beautifully.
Except for the first point in Figure \[fig:results-set4mc\] ([*bottom-right panel*]{}), where the magnitude cut is such that it limits the $A_V$ to a very narrow range, the bias is consistent with zero for all other magnitude cuts and for all values of the input slope we tested.

Detecting a break in the extinction law {#sec:detect-break-extinct}
=======================================

Grain growth at high densities has been proposed by a number of studies (see references in Sect. \[sec:introduction\]). If it does occur, then in dense cores the extinction law will become grayer toward higher extinctions, which should translate into a variation of the slope of the reddening vector with extinction, either smooth or abrupt depending on the nature of the transition between grain sizes. Such a break was detected and measured in the Trifid Nebula at an extinction of $A_V=20$ mag by @Cambresy11. We used the synthetic and “real” datasets to test whether we could detect such a break in the extinction law using LinES. The data were generated using exactly the same method as described above for Set 3 with $f=0.15$ and for the “real” dataset (see Sect. \[sec:set3\], \[sec:real-data\]), but the reddening vector was made flatter at some value of $A_V$, or, equivalently, at some value of $(H-K)$ color, since the color scales linearly with $A_V$. This produced color-color diagrams similar to that of Fig. \[fig:chisq\_break\_ccd\], where the break is more or less obvious depending on the difference between the two slopes. For each realization, we divided the sample into low-extinction ($(H-K)$ less than a value $(H-K)_\mathit{limit}$) and high-extinction ($(H-K) > (H-K)_\mathit{limit}$) groups, and determined the best fits to the reddening vector in the two groups using the LinES method, obtaining two slopes $\hat\beta_\mathit{low}$ and $\hat\beta_\mathit{high}$. This was done for increasing values of $(H-K)_\mathit{limit}$ in steps of 0.2, starting at $(H-K)_\mathit{limit} = 0.4$.
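The scan over $(H-K)_\mathit{limit}$ can be sketched as below; an ordinary least-squares fit stands in for LinES purely for illustration, and the toy broken law (slopes 1.5 and 1.0, break at $(H-K)=1.2$) is made up:

```python
import numpy as np

rng = np.random.default_rng(3)

def two_slope_scan(hk, jh, fit_slope, start=0.4, step=0.2):
    """For each threshold (H-K)_limit, fit the reddening-vector slope
    separately in the low- and high-extinction subsets, as in the
    break-detection test.

    `fit_slope` stands in for LinES; an ordinary fit is used below
    purely for illustration.
    """
    limits = np.arange(start, hk.max() - step, step)
    lo = np.array([fit_slope(hk[hk <= t], jh[hk <= t]) for t in limits])
    hi = np.array([fit_slope(hk[hk > t], jh[hk > t]) for t in limits])
    return limits, lo, hi

# Toy broken extinction law: slope 1.5 below the break, 1.0 above it
hk = rng.uniform(0.0, 3.0, 4000)
break_hk = 1.2
jh = np.where(hk <= break_hk,
              1.5 * hk,
              1.5 * break_hk + 1.0 * (hk - break_hk))
jh = jh + rng.normal(0.0, 0.05, hk.size)

ols = lambda x, y: np.polyfit(x, y, 1)[0]
limits, beta_lo, beta_hi = two_slope_scan(hk, jh, ols)
```

When the threshold sits at the break, the low- and high-extinction fits recover the two input slopes; a single-slope law would give indistinguishable curves, which is the diagnostic used below.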
![image](fig17.eps){width="15cm"}

![image](fig18.eps){width="15cm"}

Figures \[fig:slopes\_break\_synth\] and \[fig:slopes\_break\_real\] show the behavior of $\hat\beta_\mathit{low}$ ([*black*]{}) and $\hat\beta_\mathit{high}$ ([*red*]{}) as a function of $(H-K)_\mathit{limit}$ for the synthetic dataset 3 and the “real” bulge dataset, respectively, when applied to 5000 realizations. The solid lines show the median curves, and the shaded regions represent the $1-\sigma$ scatter from the 5000 realizations. The [*upper panels*]{} are for a single-slope reddening vector with slopes 1.0 ([*left*]{}) and 1.5 ([*right*]{}). The [*bottom panels*]{} show the same distributions but for broken extinction laws, with slopes of 1.5 at low extinction and 1.0 at high extinction, and breaks located around $A_K=0.4$ mag ([*left*]{}) and $A_K=1.5$ mag ([*right*]{}). For completeness, we find the same results using the disk dataset, albeit with a larger dispersion. In the absence of a break, the two curves are indistinguishable except for the lowest and highest values of $(H-K)_\mathit{limit}$; this is because the subsets used for the fits in these two extremes span too narrow a range in $(H-K)$ to constrain the LinES method. When a break does exist, however, the two curves separate distinguishably, even if the break is at low extinction. This then provides a simple method to test whether the same extinction law applies to the full $A_V$ range of a given dataset, or whether it would be better described as a two-segment law. Unfortunately this method does not allow for the determination of the actual value of the break, but the figures show that the slope of the extinction vector at high extinction can be determined with reasonable accuracy.
In particular, the procedure of measuring the slopes in the low-$A_V$ and high-$A_V$ regimes provides a much better handle on the extinction law at high extinction ([*red line*]{} in the figures) than measuring the slope of the entire dataset as a whole ([*blue line*]{} in the figures).

Summary {#sec:conclusions}
=======

We tested several methods of linear regression associated with the problem of measuring the extinction law from photometric data. We found that many of the commonly used methods provide biased results, caused by the presence of errors in both coordinates (which are colors), by the fact that these errors are correlated, and by the presence of scatter intrinsic to the underlying distribution. We adapted the BCES method of @Akritas:1996fk to allow a compensation for intrinsic scatter, using a control field to characterize the background, unreddened population. We called this method LinES ([**Lin**]{}ear regression with [**E**]{}rrors and [**S**]{}catter). Using synthetic data, we showed that this method provides unbiased results, and that it is robust against the variation of all relevant parameters (at least) within reasonable limits, such as size of sample, range of extinction, and amount of intrinsic scatter. We found that dividing a dataset at sliding values of $A_V$ and measuring the slope of each subset can robustly differentiate between an extinction law characterized by a single slope and one with a break. These results can be applied to observations of background stars seen through dense cores of molecular clouds, or through regions that span a reasonable range of dust density. The characterization of the extinction law through deep, photometric data is a very useful tool to probe the properties of the dust grains in these regions, and a “cheap” one when compared with, for example, spectral analysis of many individual sources. We thank C. Róman-Zúñiga for kindly providing the SOFI data, and the referee, L.
Cambresy, for helpful comments that contributed to making the paper more robust. J. Ascenso acknowledges financial support from FCT grant number SFRH/BPD/62983/2009. The research leading to these results has received funding from the European Community’s Seventh Framework Programme (/FP7/2007-2013/) under grant agreement No 229517. Support for this work was also provided by NASA through an award issued by JPL/Caltech, contract 1279166. [^1]: Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile (ESO programmes 069.C-0426 and 074.C-0728.) [^2]: We will refer to near-infrared as the wavelength regime from 1 to 2.5 $\mu$m, and the mid-infrared from 3 to 8 $\mu$m. [^3]: A code for fitting data with LinES will be available online. [^4]: The exact value of these colors is not relevant for the tests. [^5]: Average obtained from 5000 realizations.
---
address: 'Department of Physics and Astronomy, University of Missouri, Columbia, MO 65211, USA'
author:
- Hongjian Feng
title: 'Magnetism and electronic properties of BiFeO$_3$ under lower pressure'
---

G-type AFM structure; Phase transition; Dzyaloshinskii-Moriya interaction (DMI)

75.30.Et,75.30.GW,71.15.Mb

Introduction
=============

Multiferroic materials exhibit two or more of ferroelectric/antiferroelectric, ferromagnetic/antiferromagnetic, and even ferroelastic/antiferroelastic orderings in the same phase [@1; @2; @3; @4]. This feature gives them potential applications in information storage, spintronics, and sensors. The study of BiFeO$_3$, a multiferroic which possesses weak ferromagnetism and ferroelectric properties simultaneously at room temperature, has been revived recently [@5; @6; @7]. It has long been known to be ferroelectric with a Curie temperature of about 1103 K and G-type antiferromagnetic (AFM) with a Néel temperature of 643 K. The large difference between the magnetic and ferroelectric ordering temperatures makes the linear magnetoelectric coefficient small. Moreover, the $R3c$ space group permits a spiral spin structure in which the AFM axis rotates through the crystal with a long-wavelength period of 620 [Å]{}, which further reduces the observed G-type AFM magnetization. Fortunately, the decrease of magnetization can be compensated by doping at the B-site of the perovskite and by fabricating thin film samples [@8; @9; @10]. The ferroelectricity is produced by the stereochemically active Bi 6s lone pair, which can only occur if the cation site has broken inversion symmetry, while the weak magnetism is mainly attributed to the Fe$^{3+}$ ions. The coupling between the magnetic and ferroelectric order parameters is weak because they are driven by different ionic sites, and this agrees with the large difference between the ferroelectric Curie temperature and the AFM Néel temperature.
Through first-principles calculations, we have shown that the rotation of the oxygen octahedra (the antiferrodistortive (AFD) distortion) couples with the weak ferromagnetism through the Dzyaloshinskii-Moriya interaction (DMI), considering the spin-orbit (SO) coupling effect and the noncollinear spin configuration [@11]. The study of BiFeO$_3$ under pressure should give us much insight into the magnetoelectric coupling because it involves interesting structural and magnetic changes under pressure. A metal-insulator transition and a magnetic anomaly have already been reported recently [@12; @13]. Meanwhile, A. J. Hatt and coworkers performed first-principles calculations on the strain-induced phase transition of BiFeO$_3$ films on (001)-oriented substrates [@14; @15]. An isosymmetric phase transition accompanied by a dramatic structural change has been found. The two isosymmetric phases have the same space-group symmetry due to the constraints caused by coherence and epitaxy. The isosymmetric transition also indicates the coexistence of the rhombohedral and tetragonal phases. The rhombohedral film tends to be expanded under large compressive strain, and the lattice expansion can be accommodated by the growth of the tetragonal domain. Further theoretical calculations on the phase transition of BiFeO$_3$ are needed to explain the detailed mechanism under pressure. We have performed first-principles calculations to investigate the magnetic and electronic properties of BiFeO$_3$ in the lower pressure range corresponding to the experiments, to shed light on this mechanism. The remainder of this article is structured as follows: in section 2, the computational details of our calculations are given; we discuss the results in section 3; our conclusions are given in section 4.

Computational details
======================

We perform calculations within the local spin density approximation (LSDA) to DFT using the ABINIT package [@16; @17].
The ion-electron interaction is modeled by projector augmented wave (PAW) potentials [@18; @19] with an energy cutoff of 500 eV. We treat the Bi 5d, 6s, and 6p electrons, the Fe 4s, 4p, and 3d electrons, and the O 2s and 2p electrons as valence states. A $10\times10\times10$ Monkhorst-Pack sampling of the Brillouin zone is used in the calculations. The LSDA+U method is employed, in which the strong Coulomb repulsion between localized $d$ states is considered by adding a Hubbard-like term to the effective potential [@20; @21; @22].

Results and discussion
=======================

FM behavior
-----------

According to the experimental results and previous calculations [@13; @23], we have constructed three structures, rhombohedral, monoclinic, and orthorhombic, with space groups $R3c$, $Cm$, and $Pnma$, respectively. The lattice parameters are given in Table 1. The phase transitions under pressure have been discussed elsewhere; we suggest there exists a first-order phase transition from the $R3c$ to the $Cm$ structure around 9-10 GPa, and gradually to a pure $Pnma$ structure around 12 GPa, which is close to the experimental values [@13]. We only report the magnetic and electronic behavior in this work. A FM spin configuration for each structure is used to analyze the magnetization under pressure. The FM magnetization is shown in Fig. 1. One can see that the spin value for the monoclinic structure is generally larger than for the other two structures, and it exhibits a decrease of the spin magnetization with increasing pressure, leading to a flat curve after passing 9-10 GPa. The monoclinic and rhombohedral structures have the same spin value over the whole pressure range, indicating that they have the same structure-dependent spin configuration; moreover, the phase transition between these two can be much easier than the others. The transition between these two structures happens at 9-10 GPa. That between the rhombohedral and orthorhombic structures takes place at a higher pressure of 12 GPa.
AFM-FM spin order transition
----------------------------

Meanwhile, for the rhombohedral structure we set up three spin configurations: FM, AFM, and G-type AFM. The total energy per unit cell for the different spin configurations has been calculated and is shown in Fig. 2, both with and without the on-site Coulomb interaction term $U$. In the LSDA+U calculations, $U$ and $J$ are defined as $$U=\frac{1}{(2l+1)^2}\sum_{m,m'}<m,m'|V_{ee}|m,m'>=F^0,$$ $$J=\frac{1}{2l(2l+1)}\sum_{m\neq m'}<m,m'|V_{ee}|m',m>=\frac{F^2+F^4}{14},$$ where $V_{ee}$ is the screened Coulomb interaction among the $nl$ electrons, and $F^0$, $F^2$, and $F^4$ are the radial Slater integrals for the $d$ electrons in Fe. It is apparent that there are intersection points between the FM and AFM energy curves. As the pressure approaches the intersection points, both LSDA+U and LSDA give a lower AFM energy. However, the FM energy becomes favorable after the pressure exceeds the intersections, indicating an AFM-FM phase transition around the pressure value of 9-10 GPa. LSDA+U produces relatively higher energies. The on-site Coulomb interaction $U$ is the energy needed to put two electrons on the same site. The value of $U$ among the electrons in transition-metal $d$ orbitals is one order of magnitude larger than the Stoner parameter. An appropriate band gap can be obtained in transition-metal oxides by properly choosing the $U$ parameter. In our calculation we choose $U$=2 eV, as it is just below the critical value that would preclude the DMI in the G-type AFM spin configuration, where the Fe ions are arranged antiferromagnetically along the $x$ direction [@24]. We take into account the non-collinear spin structure and the spin-orbit (SO) interaction in the G-type configuration. One can see that the G-type spin structure leads to a lower energy compared with the FM and AFM ones within the LSDA+U scheme. Besides the anomaly around the first transition pressure of 9-10 GPa, there is another one around 12 GPa.
The relaxation results show that the structure changes into an orthorhombic phase at this pressure.

Exchange interaction
--------------------

Considering the Heisenberg model, $$\Delta E=-1/2\sum_{i,j}J_{i,j}\mathbf{S_i} \cdot \mathbf{S_j},$$ the total energy of the FM spin configuration, including the spin exchange interaction, can be written as $$E_{FM}=E_t-Z_cJ_zS^2,$$ where $E_t$ is the total energy without the spin contribution, $Z_c$ is the number of nearest-neighboring Fe ions, and $J_z$ is the exchange integral. The AFM total energy has the form $$E_{AFM}=E_t+Z_cJ_zS^2.$$ Therefore, we can determine the exchange parameters from the energy difference between the different spin configurations. We set up the FM structure with the spin direction along the $z$ axis, while the G-type structure has its spins along the $x$ direction. From Fig. 3, it is obvious that the $R3c$ space group tends to produce a more favorable G-type structure. LSDA+U gives a lower AFM exchange integral than LSDA. It is worth pointing out again that the anomaly takes place around the critical pressure value, where the exchange integrals even lie at the same level in the two schemes. The G-type structure has the lower exchange interaction over the whole pressure range. Two anomalies can be found, at 9 and 12 GPa, respectively, consistent with the energy calculations. When the pressure exceeds the critical value, the AFM exchange integral changes into a positive value appropriate to a FM spin configuration. This shows that the AFM-FM transition occurs at the critical pressure, accompanied by the structural phase transition, which agrees well with the previous total energy calculations. The phase graph under pressure is shown in Fig. 4. We suggest that the rhombohedral structure persists below the critical pressure value of 9 GPa. A combination of the three structures exists between 9 and 12 GPa, accompanied by an AFM-FM spin transition. A pure orthorhombic phase is found after 12 GPa, while the FM spin structure remains.
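Solving the two total-energy expressions above for the exchange integral gives $J_z=(E_{AFM}-E_{FM})/(2Z_cS^2)$; a small helper with illustrative energies (the numbers below are made up, not our computed values):

```python
def exchange_integral(e_fm, e_afm, z_c=6, s=2.5):
    """J_z from E_FM = E_t - Z_c J_z S^2 and E_AFM = E_t + Z_c J_z S^2.

    z_c = 6 nearest-neighbor Fe ions in the perovskite lattice and
    S = 5/2 for high-spin Fe3+ are illustrative choices.
    """
    return (e_afm - e_fm) / (2.0 * z_c * s**2)

# A negative J_z (E_AFM below E_FM) favors the AFM configuration; the
# sign change of J_z under pressure marks the AFM-FM transition
# discussed above.
j_z = exchange_integral(e_fm=-100.0, e_afm=-100.6)
```

Note that $E_t$ cancels in the difference, so only the two total energies per unit cell are needed to extract $J_z$ at each pressure.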
G-type AFM vectors
------------------

In the G-type spin structure, we take into account the non-collinear and SO coupling effects. The AFM vectors under pressure are reported in Fig. 5. It is apparent that the AFM spin components along the $x$ direction cancel out, and a resultant magnetization along the $y$ and $z$ directions is obtained due to the DMI. The DMI is caused by the interaction of neighboring Fe sites, which in a second-order perturbation calculation[@25; @26] can be described by $$E_{Fe1,Fe2}^{(2)}=\mathbf{J}_{Fe1,Fe2}^{(2)}(\mathbf{S_1}\cdot\mathbf{S_2})+ \mathbf{D}_{Fe1,Fe2}^{(2)}(\mathbf{S_1}\times\mathbf{S_2})+\mathbf{S}_1\cdot \Gamma_{Fe1,Fe2}^{(2)}\cdot \mathbf{S}_2.$$ The first term on the right-hand side of Eq. (6) corresponds to the usual isotropic superexchange interaction, and the second term is the DMI. The Hamiltonian for the system reads $$H_{BiFeO_3}=-2\sum_{<1i,2j>}\textbf{J}_{1i,2j}\mathbf{S}_{1i}\cdot\mathbf{S}_{2j}+\sum_{<1i,2j>}\textbf{D}_{1i,2j}\mathbf{S}_{1i}\times\mathbf{S}_{2j}.$$ The first term is the symmetric superexchange, and the second one is the antisymmetric DMI contribution. $\textbf{J}_{1i,2j}$ is a constant similar to the exchange interaction. **D** is the DMI constant, determined by the sense of rotation of the neighboring oxygen octahedra. In second-order perturbation theory, for the case of one electron per ion, **D** reads $$\textbf{D}_{Fe1,Fe2}^{(2)}=(4i/U)[b_{nn'}(Fe1-Fe2)C_{n'n}(Fe2-Fe1)-C_{nn'}(Fe1-Fe2)b_{n'n}(Fe2-Fe1)],$$ where $U$ is the energy required to transfer one electron from one site to its nearest neighbor, a parameter similar to the on-site Coulomb interaction in our $ab$ $initio$ computation; **D** is inversely proportional to $U$. The spin component along the $z$ direction is suppressed after pressure exceeds the critical value, while the spin magnetization along the $y$ direction shows the opposite trend and increases with increasing pressure.
Firstly, the net magnetization has components along the $z$ and $y$ directions simultaneously and deviates away from the $z$ direction as pressure exceeds 12 GPa, resulting in a vanishing $z$ component, while the $y$ component increases and maintains a constant value after 12 GPa. The magnetization per unit cell is calculated with the LSDA+U method, which underestimates it; a greater value is therefore expected in LSDA calculations[@24].

Electronic properties
---------------------

In order to shed light on the electronic properties under pressure unambiguously, the total density of states (DOS) before and after exerting pressure is given in Fig. 6, and the orbital-resolved DOS (ODOS) for the Fe $3d$ orbitals is given in Figs. 7 and 8, respectively. From Fig. 6 it can be seen that a semiconducting band gap of 1.7 eV is produced within LSDA+U, while the gap vanishes under a pressure of 12 GPa, indicating an obvious insulator-metal (IM) transition. The IM transition is mainly caused by the shift of the Fe $3d$ states in the vicinity of the Fermi energy: the finite DOS of these electrons cuts through the Fermi level and forms an FM spin structure at 12 GPa. From Fig. 7 one can see that all up-spin states are occupied while the down-spin states lie in the conduction band, whereas under pressure (Fig. 8) almost all up- and down-spin states are partially filled. The three-fold degenerate $t_{2g}$ states, coming from the $d_{xy}$, $d_{yz}$, and $d_{xz}$ orbitals, remain degenerate, while the $d_{z^2}$ orbital splits off from the two-fold degenerate $e_g$ states. It is this orbital that reduces the $e_g$-$e_g$ AFM interaction. The splitting of this orbital under pressure is significant, suggesting the complete suppression of the AFM interaction and the occurrence of the FM spin structure. This is also the reason for the decrease of the $z$ component of the magnetization of the G-type AFM structure under pressure.
Meanwhile, the down-spin Fe $3d$ states under pressure cut across the Fermi energy and lead to the conducting behavior.

Conclusion
==========

The total energy, magnetic and electronic properties of BiFeO$_3$ under pressure are calculated within the LSDA and LSDA+U schemes. The results show two anomalies, at 9-10 GPa and 12 GPa, respectively. The first is the critical pressure of a first-order phase transition accompanied by an AFM-FM transition; the behavior at this critical pressure also involves an IM transition. The second anomaly involves no further magnetic transition, only a structural one. The $y$ component of the magnetization of the G-type AFM spin structure increases while the $z$ component decreases, which can be explained by the splitting of the $d_{z^2}$ orbital off the doubly degenerate $e_g$ states.

[99]{} M. Gajek, M. Bibes, and S. Fusil, Nature Mater. 6, 296 (2007). Y. H. Chu, L. W. Martin, and M. B. Holcomb, Nature Mater. 7, 478 (2008). H. Schmid, Ferroelectrics 162, 317 (1994). M. Fiebig, J. Phys. D: Appl. Phys. 38, R123 (2005). J. Wang, J. B. Neaton, H. Zheng, V. Nagarajan, S. B. Ogale, B. Liu, D. Viehland, V. Vaithyanathan, D. G. Schlom, U. V. Waghmare, N. A. Spaldin, K. M. Rabe, M. Wuttig, and R. Ramesh, Science 299, 1719 (2003). G. Catalan and J. F. Scott, Adv. Mater. 21, 2463 (2009). Manoj K. Singh, S. Dussan, W. Prellier, and R. S. Katiyar, J. Magn. Magn. Mater. 321, 1706 (2009). P. Baettig and N. A. Spaldin, Appl. Phys. Lett. 86, 012505 (2005). S. Kamba, D. Nuzhnyy, R. Nechache, K. Zaveta, D. Niznansky, E. Santava, C. Harnagea, and A. Pignolet, Phys. Rev. B 77, 104111 (2008). H. Feng and F. Liu, Phys. Lett. A 372, 1904 (2008). H. Feng and F. Liu, Chin. Phys. Lett. 25, 671 (2008). Alexander G. Gavriliuk, Viktor V. Struzhkin, Igor S. Lyubutin, Sergey G. Ovchinnikov, Michael Y. Hu, and Paul Chow, Phys. Rev. B 77, 155112 (2008). R. Haumont, P. Bouvier, A. Pashkin, K. Rabia, S. Frank, B. Dkhil, W. A. Crichton, C. A.
Kuntscher, and J. Kreisel, Phys. Rev. B 79, 184110 (2009). O. E. Gonzalez Vazquez and Jorge Iniguez, Phys. Rev. B 79, 064102 (2009). Alison J. Hatt, Nicola A. Spaldin, and Claude Ederer, Phys. Rev. B 81, 054109 (2010). The ABINIT code is a common project of the Universite Catholique de Louvain, Corning, Inc., and other contributors (URL http://www.abinit.org). X. Gonze, J.-M. Beuken, R. Caracas, F. Detraux, M. Fuchs, G.-M. Rignanese, F. Jollet, M. Torrent, A. Roy, M. Mikami, P. Ghosez, J.-Y. Raty, and D. C. Allan, Comp. Mater. Sci. 25, 478 (2002). P. E. Blochl, Phys. Rev. B 50, 17953 (1994). G. Kresse and D. Joubert, Phys. Rev. B 59, 1758 (1999). V. I. Anisimov, J. Zaanen, and O. K. Andersen, Phys. Rev. B 44, 943 (1991). V. I. Anisimov, I. V. Solovyev, and M. A. Korotin, Phys. Rev. B 48, 16929 (1993). V. I. Anisimov, F. Aryasetiawan, and A. I. Lichtenstein, J. Phys.: Condens. Matter 9, 767 (1997). P. Ravindran, R. Vidya, A. Kjekshus, and H. Fjellvag, Phys. Rev. B 74, 224412 (2006). H. Feng, J. Magn. Magn. Mater. 322, 1765 (2010). P. W. Anderson, Phys. Rev. 115, 2 (1959). T. Moriya, Phys. Rev. Lett. 4, 228 (1960).

                 $R3c$                        $Cm$                         $Pnma$
  -------------- ---------------------------- ---------------------------- ----------------------------
  a([Å]{})       5.459                        5.7900                       5.5849
  b([Å]{})                                    5.6899                       7.6597
  c([Å]{})                                    4.1739                       5.3497
  $\alpha$(°)    60.36
  $\beta$(°)                                  91.99
  V([Å$^3$]{})   115.98                       137.42                       113.12
  Bi             (2a): 0,0,0                  (2a): 0.9376,0,0.0685        (4c): 0.0536,0.2500,0.9886
  Fe             (2a): 0.2308,0.2308,0.2308   (2a): 0.5110,0,0.4961        (4b): 0,0,0.5
  O              (6b): 0.5423,0.9428,0.3980   (2a): 0.5626,0,0.9489        (4c): 0.9750,0.2500,0.4060;
                                              (4b): 0.7958,0.7603,0.4231   (8d): 0.2000,0.9540,0.1945
  -------------- ---------------------------- ---------------------------- ----------------------------

  : Lattice parameters used in the present calculations for the rhombohedral, monoclinic, and orthorhombic structures

**Figure captions:**

Fig. 1 FM magnetization for three phases of BiFeO$_3$ under pressure.

Fig. 2 The total energy as a function of pressure.

Fig. 3 The exchange integrals for different spin structures as functions of pressure.

Fig. 4 The phase diagram of BiFeO$_3$ under pressure.

Fig. 5 Variation of the G-type AFM vectors with pressure.

Fig. 6 Total DOS under ambient and transition pressure.

Fig. 7 ODOS for the Fe $d_{xy},d_{yz},d_{z^2},d_{xz}$, and $d_{x^2-y^2}$ orbitals under ambient pressure.

Fig. 8 ODOS for the Fe $d_{xy},d_{yz},d_{z^2},d_{xz}$, and $d_{x^2-y^2}$ orbitals under transition pressure.

![FM magnetization for three phases of BiFeO$_3$ under pressure.](figure1.eps)

![The total energy as a function of pressure.](figure2)

![The exchange integrals for different spin structures as functions of pressure.](figure3){width="8cm"}

![The phase diagram of BiFeO$_3$ under pressure.](figure4)

![Variation of the G-type AFM vectors with pressure.](figure5)

![Total DOS under ambient and transition pressure.](figure6)

![ODOS for the Fe $d_{xy},d_{yz},d_{z^2},d_{xz}$, and $d_{x^2-y^2}$ orbitals under ambient pressure.](figure7)

![ODOS for the Fe $d_{xy},d_{yz},d_{z^2},d_{xz}$, and $d_{x^2-y^2}$ orbitals under transition pressure.](figure8)
---
abstract: 'We formulate and analyze a graphical model selection method for inferring the conditional independence graph of a high-dimensional nonstationary Gaussian random process (time series) from a finite-length observation. The observed process samples are assumed uncorrelated over time and having a time-varying marginal distribution. The selection method is based on testing conditional variances obtained for small subsets of process components. This allows us to cope with the high-dimensional regime, where the sample size can be (drastically) smaller than the process dimension. We characterize the required sample size such that the proposed selection method is successful with high probability.'
address: 'Dept. of Computer Science, Aalto University, Finland; [email protected]\'
bibliography:
- '/Users/ajung/work/LitAJ\_ITC.bib'
- '/Users/ajung/work/tf-zentral.bib'
title: 'Learning Conditional Independence Structure for High-dimensional Uncorrelated Vector Processes'
---

Sparsity, graphical model selection, conditional variance testing, high-dimensional statistics.

Introduction {#sec_intro}
============

Consider a zero-mean, ${d}$-dimensional Gaussian discrete-time random process (time series) $${{\mathbf x}}[{n}] \!{:=}\! \big(x_{1}[{n}],\ldots,x_{{d}}[{n}]\big)^{T} \!\in\! \mathbb{R}^{{d}}\mbox{, for } {n}= 1,\ldots,{N}. \vspace*{-2mm}$$ Based on the observation of a single process realization of length ${N}$, we are interested in learning the conditional independence graph (CIG) [@Dahlhaus2000; @DahlhausEichler2003; @BachJordan04; @PHDEichler] of ${{\mathbf x}}[{n}]$. The learning method shall cope with the *high-dimensional regime*, where the number ${d}$ of process components is (much) larger than the number ${N}$ of observed vector samples [@ElKaroui08; @Santhanam2012; @RavWainLaff2010; @Nowak2011; @Bento2010; @MeinBuhl2006; @FriedHastieTibsh2008].
In this regime, accurate estimation of the CIG is only possible under structural assumptions on the process ${{\mathbf x}}[{{n}}]$. In this work, we will consider processes whose CIGs are *sparse* in the sense of containing relatively few edges. This problem is relevant, e.g., in the analysis of medical diagnostic data (EEG) [@Nowak2011], climatology [@EbertUphoff2012] and genetics [@DavidsonLevin2005]. Most of the existing approaches to graphical model selection (GMS) for Gaussian vector processes are based on modelling the observed data either as i.i.d. samples of a single random vector, or as samples of a stationary random process. For nonstationary processes, the problem of inferring time-varying graphical models has been considered [@KolarXing; @ZhouLafferty2008]. By contrast, we assume one single CIG representing the correlation structure for all samples ${{\mathbf x}}[{n}]$, which are assumed uncorrelated but having different marginal distributions, determined by the covariance matrices $\mathbf{C}[{n}]$.

#### Contributions: {#contributions .unnumbered}

Our main conceptual contribution resides in the formulation of a simple GMS method for uncorrelated nonstationary Gaussian processes, which is based on conditional variance tests. For processes having a sparse CIG, these tests involve only small subsets of process components. We provide a lower bound on the sample size which guarantees that the correct CIG is selected by our GMS method with high probability. This lower bound depends only logarithmically on the process dimension and polynomially on the maximum degree of the true CIG. Moreover, our analysis reveals that the crucial parameter determining the required sample size is the minimum partial correlation of the process.

#### Outline: {#outline .unnumbered}

The remainder of this paper is organized as follows. In Section \[SecProblemFormulation\], we formalize the considered process model and the notion of a CIG.
In particular, we will state four assumptions on the class of processes that will be considered in the following. Section \[sec\_GMS\_via\_cond\_var\_testing\] presents a GMS method based on conditional variance testing. There, we also state and discuss a lower bound on the sample size guaranteeing success of our GMS method with high probability.

#### Notation: {#notation .unnumbered}

Given a ${d}$-dimensional process ${{\mathbf x}}[1],\ldots,{{\mathbf x}}[{N}]$ of length ${N}$, we denote a scalar component process as $\mathbf{x}_{i}[\cdot] {:=}\big(x_{i}[1],\ldots,x_{i}[{N}]\big)^{T} \in \mathbb{R}^{{N}}$ for $i\in \{1,\ldots,d\}$. The Kronecker delta is denoted $\delta_{n,n'}$, with $\delta_{n,n'}=1$ if $n=n'$ and $\delta_{n,n'}=0$ else. By ${\mathfrak{S}^{r}_{{s_{\rm max}}}}$, we denote all subsets of $\{1,\ldots,d\}$ of size at most ${s_{\rm max}}$ which do not contain $r$. We denote by ${{{\mathbf A}}_{\{\mathcal{A}, \mathcal{B}\}}}$ the submatrix with rows indexed by $\mathcal{A}$ and columns indexed by $\mathcal{B}$. Given a matrix ${{\mathbf A}}$, we define its infinity norm as ${\ensuremath{{\|{{\mathbf A}}\|}_{\infty}}} {:=}\max_{i} \sum_{j} {|A_{i,j}|}$. The minimum and maximum eigenvalues of a positive semidefinite (psd) matrix $\mathbf{C}$ are denoted $\lambda_{\rm min}({{\mathbf C}})$ and $\lambda_{\rm max}({{\mathbf C}})$, respectively.

Problem Formulation {#SecProblemFormulation}
===================

Let ${{\mathbf x}}[{n}]$, for ${n}\in \{1, \ldots, {N}\}$, be a zero-mean ${d}$-dimensional, real-valued Gaussian random process of length ${N}$. We model the time samples ${{\mathbf x}}[{n}]$ as uncorrelated, and therefore independent due to Gaussianity. The probability distribution of the Gaussian process ${{\mathbf x}}[{n}]$ is fully specified by the covariance matrices ${{{\mathbf C}}[{{n}}]}$, which might vary with ${n}$.
To summarize, in what follows we only consider processes conforming to the model $$\begin{aligned} \label{equ_proc_model} & \{ {{\mathbf x}}[{{n}}] \}_{{{n}}=1}^{{N}} \mbox{ jointly Gaussian zero-mean with } \nonumber \\ & \expect \{ {{\mathbf x}}[{n}] {{\mathbf x}}^{T} [{n}'] \} = \delta_{{n},{n}'} {{{\mathbf C}}[{{n}}]}. \vspace*{-3mm}\end{aligned}$$ This process model is relevant for applications facing weakly dependent time series, where samples that are sufficiently separated in time can effectively be considered uncorrelated [@Hwang2014]. Moreover, the process model can be used as an approximation for the discrete Fourier transform of stationary processes with limited correlation width or fast decay of the autocovariance function [@JuHeck2014; @HannakJung2014conf; @JungGaphLassoSPL; @CSGraphSelJournal]. Another setting where the model is useful is that of vector-valued locally stationary processes, where a suitable local cosine basis yields approximately uncorrelated vector processes [@Don96]. For our analysis we assume a known range within which the eigenvalues of the covariance matrices ${{{\mathbf C}}[{n}]}$ are guaranteed to fall.

\[aspt\_eig\_val\] The eigenvalues of the psd covariance matrices ${{{\mathbf C}}[{{n}}]}$ are bounded as $$\label{equ_unif_bound_eig_val_CMX} 0 < \alpha[{n}] \leq \lambda_{\rm min}({{{\mathbf C}}[{{n}}]}) \leq \lambda_{\rm max}({{{\mathbf C}}[{{n}}]}) \leq \beta[{n}] \vspace*{-3mm}$$ with known bounds $\beta[{n}] \geq \alpha[{n}] > 0$.

It will be notationally convenient to associate with the observed samples ${{\mathbf x}}[1],\ldots,{{\mathbf x}}[{N}]$ the “time-wise” stacked vector $${{\mathbf x}}= ({{{\mathbf x}}[1]}^T,\ldots, {{{\mathbf x}}[{N}]}^T)^T \in \mathbb{R}^{{N}{d}} \nonumber \vspace*{-1mm}$$ and the “component-wise” stacked vector $${\tilde{{{\mathbf x}}}}{:=}({{{\mathbf x}}_{1}[\cdot]}^T,\ldots, {{{\mathbf x}}_{{d}}[\cdot]}^T)^T \in \mathbb{R}^{{N}{d}}.
\nonumber \vspace*{-2mm}$$ We have, for some permutation matrix ${{\mathbf P}}\in \{0,1\}^{{N}{d}\times {N}{d}}$, $$\label{equ_permutation_relation} \tilde{{{\mathbf x}}} = \mathbf{P} {{\mathbf x}}. \vspace*{-2mm}$$ For data samples ${{\mathbf x}}[{n}]$ conforming to , the associated vectors ${{\mathbf x}}$ and $\tilde{{{\mathbf x}}}$ are zero-mean Gaussian random vectors, with covariance matrices $${{\mathbf C}}_{x}\! = \!\expect\{ {{\mathbf x}}{{\mathbf x}}^{T} \}\\ \! = \! \begin{pmatrix} {{{\mathbf C}}[1]} & \cdots & \mathbf{0} \\ \vdots &\ddots &\vdots \\ \mathbf{0}& \cdots & {{{\mathbf C}}[{N}]} \\ \end{pmatrix}, \label{equ_big_cov_matrix_x}$$ and $${{\mathbf C}}_{\tilde{x}}\!=\!\expect \{ {\tilde{{{\mathbf x}}}}{\tilde{{{\mathbf x}}}}^{T} \}\!=\! \begin{pmatrix} {{{\mathbf C}}_{\tilde{x}}[1,1]} & \cdots & {{{\mathbf C}}_{\tilde{x}}[1,{d}]} \\ \vdots &\ddots &\vdots \\ {{{\mathbf C}}_{\tilde{x}}[{d},1]}&\cdots & {{{\mathbf C}}_{\tilde{x}}[{d},{d}]} \\ \end{pmatrix}, \label{equ_m_C_tilde_x}$$ respectively. Due to , we have $${{\mathbf C}}_{\tilde{x}} = {{\mathbf P}}{{\mathbf C}}_{x} {{\mathbf P}}^{T}. \label{equ_rel_permuted_vecs_cov} \vspace*{-3mm}$$ Since the permutation matrix ${{\mathbf P}}$ is orthogonal ($\mathbf{P}^{T} = \mathbf{P}^{-1}$), the precision matrix ${{\mathbf K}}_{x} {:=}{{\mathbf C}}_{x}^{-1}$ is also block diagonal with diagonal blocks ${{{\mathbf K}}[{n}]} = ({{{\mathbf C}}[{n}]})^{-1}$. As can be verified easily, the $(a,b)$th ${N}\times {N}$ block ${{{\mathbf K}}_{\tilde{x}}[a,b]}$ of the matrix ${{\mathbf K}}_{\tilde{x}}= {{\mathbf P}}{{\mathbf K}}_{x} {{\mathbf P}}^{T}$ is diagonal: $${{{\mathbf K}}_{\tilde{x}}[a,b]} = \begin{pmatrix} {\big({{\mathbf K}}[1]\big)_{\{a, b\}}}& \cdots & \mathbf{0} \\ \vdots & \ddots &\vdots \\ \mathbf{0} &\cdots & {\big({{\mathbf K}}[{N}]\big)_{\{a, b\}}} \\ \end{pmatrix}. 
\label{equ_submatrix_L_a_b}$$

Conditional Independence Graph
------------------------------

We now define the CIG of a ${d}$-dimensional Gaussian process ${{\mathbf x}}[{{n}}] \in \mathbb{R}^{{d}}$ as an undirected simple graph ${\mathcal{G}}=({{\mathcal{V}}},{{\mathcal{E}}})$ with node set ${{\mathcal{V}}}= \{1, 2, \ldots, {d}\}$. Node $j \in {{\mathcal{V}}}$ represents the process component ${{{\mathbf x}}_{j}[\cdot]}=(x_{j}[1],\ldots,x_{j}[{N}])^{T}$. An edge is absent between nodes $a$ and $b$, i.e., $(a,b) \notin {{\mathcal{E}}}$, if the corresponding process components ${{\mathbf x}}_{a}[\cdot]$ and ${{\mathbf x}}_{b}[\cdot]$ are conditionally independent, given the remaining components $\{ {{{\mathbf x}}_{r}[\cdot]} \}_{r \in \mathcal{V} \setminus \{a,b\}}$. Since we model the process ${{\mathbf x}}[{n}]$ as Gaussian (cf. ), this conditional independence can be read off conveniently from the inverse covariance (precision) matrix $\mathbf{K}_{\tilde{x}} {:=}\mathbf{C}_{\tilde{x}}^{-1}$. In particular, ${{\mathbf x}}_{a}[\cdot]$ and ${{\mathbf x}}_{b}[\cdot]$ are conditionally independent, given $\{ {{{\mathbf x}}_{r}[\cdot]} \}_{r \in \mathcal{V} \setminus \{a,b\}}$, if and only if ${{{\mathbf K}}_{\tilde{x}}[a,b]} =\mathbf{0}$ [@Brockwell91 Prop. 1.6.6.]. Thus, we have the following characterization of the CIG ${\mathcal{G}}$ associated with the process ${{\mathbf x}}[{{n}}]$: $$\label{equ_edge_absent_corr_operator} (a,b) \notin {{\mathcal{E}}}\mbox{ if and only if } {{{\mathbf K}}_{\tilde{x}}[a,b]} =\mathbf{0}.$$ Inserting into yields, in turn, $$\label{equ_charac_CIG_indpendent_not_identical_case_12} \hspace*{-2mm}(a,b) \!\notin\! {{\mathcal{E}}}\mbox{ if and only if } {\big({{{\mathbf K}}[{n}]}\big)_{\{a, b\}}} \!=\! 0 \mbox{ for all } {{n}}\!\in\!
[{N}].$$ We highlight the coupling in the CIG characterization : An edge is absent, i.e., $(a,b) \notin {{\mathcal{E}}}$, only if the precision matrix entry ${\big({{\mathbf K}}[{{n}}]\big)_{\{a, b\}}}$ is zero *for all* ${{n}}\in \{1,\ldots,{N}\}$. We will also need a measure for the strength of a connection between process components ${{\mathbf x}}_{a}[\cdot]$ and ${{\mathbf x}}_{b}[\cdot]$ for $(a,b) \in {{\mathcal{E}}}$. To this end, we define the *partial correlation* between ${{\mathbf x}}_{a}[\cdot]$ and ${{\mathbf x}}_{b}[\cdot]$ as $$\begin{aligned} \rho_{a,b} & {:=}(1/{N}) \sum_{n=1}^{{N}} \alpha[{{n}}] \big[ \big( {{\mathbf K}}[{{n}}] \big)_{a,b} / \big( {{\mathbf K}}[{{n}}] \big)_{a,a} \big]^{2}. \label{equ_partial_correlation_def} \vspace*{-4mm}\end{aligned}$$ Inserting into shows that $(a,b) \!\notin\! \mathcal{E}$ implies $\rho_{a,b}\!=\!0$. Accurate estimation of the CIG for finite sample size ${N}$ (incurring unavoidable sampling noise) is only possible for sufficiently large partial correlations $\rho_{a,b}$ for $(a,b) \in \mathcal{E}$.

\[aspt\_minimum\_par\_cor\] For any edge $(a,b) \in \mathcal{E}$, the partial correlation $\rho_{a,b}$ (cf. ) is lower bounded by a constant $\rho_{\rm min}$, i.e., $$(a,b) \in {{\mathcal{E}}}\Rightarrow \rho_{a,b} \geq \rho_{\rm min}. \vspace*{-2mm}$$

The CIG ${\mathcal{G}}$ of a vector process ${{\mathbf x}}[{n}]$ is fully characterized by the neighborhoods $\mathcal{N}(r) = \{ t \in \mathcal{V}: (r,t) \in \mathcal{E} \}$ of all nodes $r \in \mathcal{V}$. Many applications involve processes with these neighborhoods being small compared to the overall process dimension ${d}$. The CIG is then called *sparse*, since it contains few edges compared to the complete graph.

\[aspt\_cig\_sparse\] The size of any neighborhood $\mathcal{N}(r)$, i.e., the degree of node $r$, is upper bounded as $$\label{equ_sparsity_max_degree} |\mathcal{N}(r)| \leq {s_{\rm max}}, \vspace*{-3mm}$$ where typically ${s_{\rm max}}\ll {d}$.
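The partial correlation defined above averages, over time, the squared precision entries $\big({{\mathbf K}}[{{n}}]\big)_{a,b}$, normalized by $\big({{\mathbf K}}[{{n}}]\big)_{a,a}$ and weighted by the eigenvalue lower bounds $\alpha[{n}]$. A minimal NumPy sketch of this computation (illustrative only, not from the paper):

```python
import numpy as np

def partial_correlation(K_list, alpha, a, b):
    """rho_{a,b} = (1/N) * sum_n alpha[n] * (K[n]_{a,b} / K[n]_{a,a})^2,
    given the per-sample precision matrices K[n] (list of d x d arrays)
    and the eigenvalue lower bounds alpha[n]."""
    N = len(K_list)
    return sum(alpha[n] * (K[a, b] / K[a, a]) ** 2
               for n, K in enumerate(K_list)) / N
```

If $(a,b)\notin\mathcal{E}$, every $\big({{\mathbf K}}[{{n}}]\big)_{a,b}$ vanishes and the function returns zero, matching the implication stated in the text.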
Slowly Varying Covariance
-------------------------

For several practically relevant settings, such as stationary processes with limited correlation width [@JuHeck2014; @HannakJung2014conf; @JungGaphLassoSPL; @CSGraphSelJournal] or underspread nonstationary processes [@GM_spectra02], the observed processes can be well approximated by the model with the additional property of a *slowly varying* covariance matrix ${{\mathbf C}}[{{n}}]$ [@KolarXing; @ZhouLafferty2008].

\[aspt\_slow\_change\] For a (small) positive constant $\kappa$, $$\label{equ_covariance_difference} {\ensuremath{{\|{{{\mathbf C}}[{n}_{1}]} - {{{\mathbf C}}[{n}_{2}]}\|}_{\infty}}} \leq {\kappa}(|{n}_{2} - {n}_{1}| / {N}).$$

In view of , for some $n_{0} \!\in\! \{1,\ldots,{N}-{L}\}$ and blocklength ${L}$ such that $\kappa ({L}/{N}) \ll 1$, we may approximate ${L}$ consecutive samples ${{{\mathbf x}}[{n}_{0}]}, {{{\mathbf x}}[{n}_{0}+1]}, \ldots, {{{\mathbf x}}[{n}_{0}+ {L}-1]}$ as being i.i.d. zero-mean Gaussian vectors with covariance matrix $\mathbf{C}=(1/{L}) \sum_{{n}={n}_{0}}^{{n}_{0}+{L}-1} {{{\mathbf C}}[{n}]}$. This suggests partitioning the observed samples evenly into length-${L}$ blocks ${\mathcal{B}_{{b}}}= \{({b}\!-\!1){L}\!+\!1, \ldots, {b}{L}\}$ for $b={1,\ldots,{B}={N}/{L}}$.[^1] We can approximate the covariance matrix of the samples within block $\mathcal{B}_{{b}}$ using the sample covariance matrix $$\label{equ_sample_cov_matrix_block} \widehat{{{\mathbf C}}}[{b}] = (1/{L}) \sum_{{{n}}\in {\mathcal{B}_{{b}}}} {{{\mathbf x}}[{{n}}]} {{\mathbf x}}^{T}[{n}]. \vspace*{-3mm}$$

GMS via Conditional Variance Testing {#sec_GMS_via_cond_var_testing}
====================================

We will now formulate and analyze a GMS method for a nonstationary process ${{\mathbf x}}[{n}]$ conforming to the model . To this end, we will first show how the CIG of ${{\mathbf x}}[{n}]$ can be characterized in terms of conditional variance tests.
The GMS method implements these conditional variance tests using the covariance matrix estimate $\widehat{{{\mathbf C}}}[{b}]$ (cf. ).

Conditional Variance Testing
----------------------------

The characterization of the CIG ${\mathcal{G}}$ of the process ${{\mathbf x}}[{{n}}]$ seems convenient: we just have to determine the non-zero pattern of the precision matrices ${{{\mathbf K}}[{n}]}$ and can immediately estimate the edge set of the CIG ${\mathcal{G}}$. The problem, however, lies in estimating the precision matrix ${{{\mathbf K}}[{n}]}$ in the high-dimensional regime where typically ${N}\ll {d}$. In particular, in this regime any reasonable estimator $\widehat{{{\mathbf C}}}_{\tilde{x}}$ for the covariance matrix ${{{\mathbf C}}[{n}]}$ is singular, preventing the use of the inverse $\widehat{{{\mathbf C}}}_{\tilde{x}}^{-1}[{n}]$ as an estimate for ${{{\mathbf K}}[{n}]}$. In order to cope with the high-dimensional regime, we will now present an approach to GMS that determines the neighborhoods $\mathcal{N}(r)$ for all nodes $r$ and exploits the sparsity of the CIG (cf. Assumption \[aspt\_cig\_sparse\]). Our strategy for determining the neighborhoods $\mathcal{N}(r)$ will be based on evaluating the conditional variance $$\label{equ_def_cond_variance} {V_{x}^{(r, \mathcal{T})}} {:=}(1/{N}) {\rm Tr} \{ {\mathbf{V}_{x}^{(r, \mathcal{T})}} \},$$ with the conditional covariance matrix $$\label{equ_cond_cov_matrix_V_r_T} {\mathbf{V}_{x}^{(r, \mathcal{T})}} {:=}{ \rm cov} \big\{ {{{\mathbf x}}_{r}[\cdot]} \big| \{ {{{\mathbf x}}_{t}[\cdot]}\}_{t \in \mathcal{T}} \big\}.$$ Here, $\mathcal{T} \subseteq \mathcal{V} \setminus \{r\}$ is a subset of at most ${s_{\rm max}}$ nodes, i.e., $|\mathcal{T}| \leq {s_{\rm max}}$. We can express the conditional covariance matrix ${\mathbf{V}_{x}^{(r, \mathcal{T})}}$ in terms of the covariance matrix ${{\mathbf C}}_{{\tilde{{{\mathbf x}}}}}$ (cf. ) as [@Lapidoth09 Thm. 23.7.4.]
$$\label{equ_cond_var_expression_111} {\mathbf{V}_{x}^{(r, \mathcal{T})}} = {{{\mathbf C}}_{\tilde{x}}[r,r]} - {{{\mathbf C}}_{\tilde{x}}[r,\mathcal{T}]} {{\big ( {{\mathbf C}}_{\tilde{x}}[\mathcal{T},\mathcal{T}] \big)^{-1}}} {{{\mathbf C}}_{\tilde{x}}[\mathcal{T},r]}.$$ Note that the conditional covariance matrix ${\mathbf{V}_{x}^{(r, \mathcal{T})}}$ depends only on a (small) submatrix of ${{\mathbf C}}_{{\tilde{{{\mathbf x}}}}}$ constituted by the ${N}\times {N}$ blocks ${{{\mathbf C}}_{\tilde{x}}[i,j]}$ for $i,j \in \mathcal{T} \cup \{r\}$. Using the block diagonal structure of ${{\mathbf C}}_{x}$ (cf. ), we can simplify to obtain the following representation for the conditional variance: The conditional variance ${V_{x}^{(r, \mathcal{T})}}$ satisfies $${V_{x}^{(r, \mathcal{T})}} = (1/{N}) \sum_{{{n}}= 1}^{{N}} \frac{1}{{({({{\mathbf C}}[n])_{\{{\mathcal{T}}', {\mathcal{T}}'\}}})^{-1}}_{\{r,r\}}} \label{equ_conditional_covariance_matrix}, \vspace*{-2mm}$$ with $\mathcal{T}' {:=}\{ r \} \cup \mathcal{T}$. Consider the subset ${\mathcal{T}}= \{t_1, t_2, \ldots, t_k\}$, let ${{\mathbf x}}_{{\mathcal{T}}}[{{n}}] = (x_{t_1}[{{n}}], \ldots, x_{t_k}[{{n}}])^T$ and ${{\mathbf P}}_{{\mathcal{T}}}$ be the permutation matrix transforming ${{\mathbf x}}_{{\mathcal{T}}} {:=}\big ( ({{\mathbf x}}_{{\mathcal{T}}}[1])^T,\ldots, {{\mathbf x}}_{{\mathcal{T}}}[{N}])^T \big )^T$ into ${\tilde{{{\mathbf x}}}}_{{\mathcal{T}}} {:=}({{{\mathbf x}}_{t_1}[\cdot]}^T,\ldots, {{{\mathbf x}}_{t_k}[\cdot]}^T)^T$, i.e., ${\tilde{{{\mathbf x}}}}_{{\mathcal{T}}} = {{\mathbf P}}_{{\mathcal{T}}}{{\mathbf x}}_{{\mathcal{T}}}$. 
The covariance matrix for ${\tilde{{{\mathbf x}}}}_{{\mathcal{T}}}$ is obtained as ${{{\mathbf C}}_{\tilde{x}}[{\mathcal{T}},{\mathcal{T}}]}= {{\mathbf P}}_{{\mathcal{T}}} {{\mathbf C}}_{{\mathcal{T}}} {{\mathbf P}}_{{\mathcal{T}}}^T$, and, in turn since ${{\mathbf P}}_{{\mathcal{T}}}^{-1} = {{\mathbf P}}_{{\mathcal{T}}}^{T}$, ${{\big ( {{\mathbf C}}_{\tilde{x}}[\mathcal{T},\mathcal{T}] \big)^{-1}}} = {{\mathbf P}}_{{\mathcal{T}}} ({{\mathbf C}}_{{\mathcal{T}}})^{-1} {{\mathbf P}}_{{\mathcal{T}}}^T$. The conditional variance ${V_{x}^{(r, {\mathcal{T}})}}$ is then given as $$\begin{aligned} \label{equ_condtional_variance_inid} & \hspace*{-3mm}(1/{N}) \operatorname*{Tr}\big \{ {{{\mathbf C}}_{\tilde{x}}[r,r]} \!-\! {{{\mathbf C}}_{\tilde{x}}[r,{\mathcal{T}}]} {{\big ( {{\mathbf C}}_{\tilde{x}}[{\mathcal{T}},{\mathcal{T}}] \big)^{-1}}} {{{\mathbf C}}_{\tilde{x}}[{\mathcal{T}},r]} \big \} \\ & \!=\! (1/{N}) \operatorname*{Tr}\big \{ {{{\mathbf C}}_{\tilde{x}}[r,r]} \!-\! {{{\mathbf C}}_{\tilde{x}}[r,{\mathcal{T}}]} {{\mathbf P}}_{{\mathcal{T}}} ({{\mathbf C}}_{{\mathcal{T}}})^{-1} {{\mathbf P}}_{{\mathcal{T}}}^T {{{\mathbf C}}_{\tilde{x}}[{\mathcal{T}},r]} \big \}. \nonumber \vspace*{-3mm}\end{aligned}$$ Due to the block-diagonal structure of ${{\mathbf C}}_{\tilde{x}}$ (cf. ), $$\begin{aligned} \hspace*{-5mm}{{{\mathbf C}}_{\tilde{x}}[r,{\mathcal{T}}]} {{\mathbf P}}_{{\mathcal{T}}} \!=\! \begin{pmatrix} {\big({{\mathbf C}}[1]\big)_{\{r, {\mathcal{T}}\}}} & \cdots & \mathbf{0} \\ \vdots &\ddots &\vdots \\ \mathbf{0} & \cdots & {\big({{\mathbf C}}[{N}]\big)_{\{r, {\mathcal{T}}\}}} \\ \end{pmatrix}. 
\label{equ_block_diagonal_product_C_P_T}\end{aligned}$$ Inserting into yields further $$\label{equ_condtional_variance_inid_final} \begin{split} {V_{x}^{(r, \mathcal{T})}} & = (1/{N}) \sum_{{{n}}= 1}^{{N}} \bigg ( {({{\mathbf C}}[n])_{\{r, r\}}} -\\[-3mm] &{\big({{\mathbf C}}[{{n}}]\big)_{\{r, {\mathcal{T}}\}}} \big ({\big({{\mathbf C}}[{{n}}]\big)_{\{{\mathcal{T}}, {\mathcal{T}}\}}} \big )^{-1} {\big({{\mathbf C}}[{{n}}]\big)_{\{{\mathcal{T}}, r\}}} \bigg ). \end{split}$$ The expression for the conditional variance then follows by applying the matrix inversion lemma for block matrices [@BishopBook Ex. 2.2.4.].

Using the conditional variance ${V_{x}^{(r, \mathcal{T})}}$, we can characterize the neighborhoods $\mathcal{N}(r)$ in the CIG ${\mathcal{G}}$ as follows:

\[thm\_cond\_variance\_properties\] For any set $\mathcal{T} \in {\mathfrak{S}^{r}_{{s_{\rm max}}}}$:

- If $\mathcal{N}(r) \setminus \mathcal{T} \neq \emptyset$, we have $$\label{equ_relation_cond_v_N_r_case1} {V_{x}^{(r, \mathcal{T})}} \geq \rho_{\rm min} + (1/{N}) {\rm Tr} \big \{ {{\big ( {{\mathbf K}}_{\tilde{x}}[r,r] \big)^{-1}}} \big \}. \vspace*{-3mm}$$

- For $\mathcal{N}(r) \subseteq \mathcal{T}$, we obtain $$\label{equ_relation_cond_v_N_r} {V_{x}^{(r, \mathcal{T})}} = (1/{N}) {\rm Tr} \big \{ {{\big ( {{\mathbf K}}_{\tilde{x}}[r,r] \big)^{-1}}} \big \}. \vspace*{-2mm}$$

see Appendix.
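The representation of ${V_{x}^{(r, \mathcal{T})}}$ as an average of reciprocal $(r,r)$ entries of inverted small submatrices can be checked numerically against the Schur-complement expression above. A minimal NumPy sketch (illustrative, not from the paper):

```python
import numpy as np

def conditional_variance(cov_list, r, T):
    """V_x^{(r,T)}: for each time index n, invert the small submatrix
    C[n]_{T',T'} with T' = {r} ∪ T (node r placed in row/column 0) and
    average the reciprocal of the (r,r) entry of the inverse."""
    Tp = [r] + sorted(T)   # T' = {r} ∪ T
    vals = [1.0 / np.linalg.inv(C[np.ix_(Tp, Tp)])[0, 0] for C in cov_list]
    return float(np.mean(vals))
```

For each $n$, the reciprocal of the $(r,r)$ entry of the inverted submatrix equals the Schur complement $C_{rr}-C_{r\mathcal{T}}C_{\mathcal{T}\mathcal{T}}^{-1}C_{\mathcal{T}r}$, which is what the test below verifies on a random positive definite matrix.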
As an immediate consequence of Theorem \[thm\_cond\_variance\_properties\], we can determine the neighborhood $\mathcal{N}(r)$ by a simple conditional variance test procedure: $$\label{equ_char_neigborhood_penalized_cond_var_one} \mathcal{N}(r) = \operatorname*{arg\;min}_{\mathcal{T} \in {\mathfrak{S}^{r}_{{s_{\rm max}}}}} {V_{x}^{(r, \mathcal{T})}} + \rho_{\rm min} |\mathcal{T}|.$$

The GMS method
--------------

We now turn the procedure into a practical GMS method by replacing ${V_{x}^{(r, \mathcal{T})}}$ in with the estimate $$\label{equ_con_var_emp_test_neighbor} \widehat{{V_{x}^{(r, \mathcal{T})}}} = (1/{B}) \sum_{{b}=1}^{{B}} \frac{1}{{({(\widehat{{{\mathbf C}}}[{b}])_{\{{\mathcal{T}}', {\mathcal{T}}'\}}})^{-1}}_{\{r,r\}}} \vspace{-3mm}$$ using the sample covariance matrix $\widehat{{{\mathbf C}}}[{b}]$ (cf. ) and $\mathcal{T}' {:=}\{ r \} \cup \mathcal{T}$.

**Input:** ${{{\mathbf x}}[1]},\ldots, {{{\mathbf x}}[{N}]}$, $\rho_{\rm min}$, ${s_{\rm max}}$, blocklength $L$

For each node $r$: $\widehat{\mathcal{N}}(r) {:=}\operatorname*{arg\;min}_{\mathcal{T} \in {\mathfrak{S}^{r}_{{s_{\rm max}}}}} \hspace*{-3mm}\widehat{{V_{x}^{(r, \mathcal{T})}}} + |\mathcal{T}| \rho_{\rm min}$ (cf. )

Combine the estimates $\widehat{\mathcal{N}}(r)$ by the “OR” or “AND” rule:

- OR: $(i,j) \!\in\! \widehat{\mathcal{E}}$ if either $j \!\in\! \widehat{\mathcal{N}}(i)$ or $i \!\in\! \widehat{\mathcal{N}}(j)$

- AND: $(i,j) \!\in\! \widehat{\mathcal{E}}$ if $j \!\in\! \widehat{\mathcal{N}}(i)$ and $i \!\in\! \widehat{\mathcal{N}}(j)$

**Output:** CIG estimate $\widehat{\mathcal{G}} = (\mathcal{V},\widehat{\mathcal{E}})$

For a sufficiently large sample size ${N}$, the CIG estimate $\widehat{\mathcal{G}}$ delivered by Alg. \[alg:main\_alg\] coincides with the true CIG ${\mathcal{G}}$ with high probability.
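The method above (blockwise sample covariances, penalized conditional variance tests per node, and combination of the neighborhoods) can be sketched end to end. The following is a hypothetical NumPy implementation using the OR rule; all function and variable names are our own, not the authors':

```python
import itertools
import numpy as np

def gms_cond_var(X, rho_min, s_max, L):
    """Sketch of the GMS method: X is an (N, d) array of samples.
    Split the N samples into length-L blocks, form the blockwise
    sample covariances, estimate conditional variances, and select
    each neighborhood by the penalized arg-min test; edges are
    combined with the OR rule."""
    N, d = X.shape
    B = N // L
    C_hat = [X[b * L:(b + 1) * L].T @ X[b * L:(b + 1) * L] / L
             for b in range(B)]

    def v_hat(r, T):
        # blockwise estimate of the conditional variance V_x^{(r,T)}
        Tp = [r] + sorted(T)
        return float(np.mean([1.0 / np.linalg.inv(C[np.ix_(Tp, Tp)])[0, 0]
                              for C in C_hat]))

    nbhd = {}
    for r in range(d):
        others = [t for t in range(d) if t != r]
        cands = [list(T) for k in range(s_max + 1)
                 for T in itertools.combinations(others, k)]
        nbhd[r] = min(cands, key=lambda T: v_hat(r, T) + rho_min * len(T))
    # OR rule: keep the edge if either endpoint selected the other
    return {(min(a, b), max(a, b)) for a in range(d) for b in nbhd[a]}
```

Note that the search enumerates all $O(d^{{s_{\rm max}}})$ candidate subsets per node, which is the price paid for the simplicity of the test; it is feasible only for small ${s_{\rm max}}$, in line with Assumption \[aspt\_cig\_sparse\].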
\[thm\_sample\_complexity\] There are constants $c_{1}, c_{2}$ depending only on $\{ \alpha[{n}],\beta[{n}]\}_{{n}\in \{1,\ldots,{N}\}}$ such that for a sample size $$\label{equ_thm_samplesize_bound} {N}\geq c_{1} \frac{{s_{\rm max}}^{5/2}}{\rho_{\rm min}^{3}}(\log\frac{\kappa {s_{\rm max}}^{7/2}}{\delta \rho^3_{\rm min}} + {s_{\rm max}}\log d) \vspace*{-3mm} $$ Alg. \[alg:main\_alg\] used with blocklength $${L}= c_{2} \frac{{s_{\rm max}}^2}{\rho_{\rm min}^{2}}(\log\frac{\kappa {s_{\rm max}}^{7/2}}{\delta \rho^3_{\rm min}} + {s_{\rm max}}\log d), \nonumber \vspace*{-3mm}$$ delivers the correct CIG with prob. at least $1\!-\!\delta$, i.e., $\prob\{ \widehat{\mathcal{G}}={\mathcal{G}}\} \!\geq\! 1\!-\!\delta$. A detailed proof of Theorem \[thm\_sample\_complexity\] is omitted due to space restrictions and will be provided in a follow-up journal publication. However, the high-level idea is straightforward: If the maximum deviation $$E=\max_{r \in \mathcal{V}, \mathcal{T} \in {\mathfrak{S}^{r}_{{s_{\rm max}}}}}|\widehat{{V_{x}^{(r, \mathcal{T})}}}-{V_{x}^{(r, \mathcal{T})}}| \nonumber \vspace*{-2mm}$$ is less than $\rho_{\rm min}/2$, Alg. \[alg:main\_alg\] is guaranteed to select the correct neighborhoods, i.e., $\widehat{\mathcal{N}}(r) = \mathcal{N}(r)$ for all nodes $r \in \mathcal{V}$, implying the selection of the correct CIG, i.e., ${\mathcal{G}}= \widehat{{\mathcal{G}}}$. To control the probability of the event $E \geq \rho_{\rm min}/2$, we apply a large deviation characterization for Gaussian quadratic forms [@CSGraphSelJournal Lemma F.1]. The lower bound on the sample size ${N}$ stated by Theorem \[thm\_sample\_complexity\] depends only logarithmically on the process dimension ${d}$ and polynomially on the maximum node degree ${s_{\rm max}}$. Thus, for processes having a sufficiently sparse CIG (small ${s_{\rm max}}$), the GMS method in Alg.
\[alg:main\_alg\] delivers the correct CIG even in scenarios where the process dimension is exponentially larger than the available sample size. Moreover, the bound depends inversely on the minimum partial correlation $\rho_{\rm min}$, which is reasonable, as a smaller partial correlation is more difficult to detect. Note that the quantity $\rho_{\rm min}$ occurring in represents the average (over ${n}$) of the marginal conditional correlations between the process components. Appendix: Proof of Theorem \[thm\_cond\_variance\_properties\] {#appendix-proof-of-theorem-thm_cond_variance_properties .unnumbered} ============================================================== We detail the proof only for the neighborhood $\mathcal{N}(1)$ of the particular node $1$. The generalization to an arbitrary node is then straightforward. Let us introduce the weight matrices $\mathbf{L}_{1,r} {:=}{{\big ( {{\mathbf K}}_{\tilde{x}}[1,1] \big)^{-1}}} {{{\mathbf K}}_{\tilde{x}}[1,r]}$. According to we have $\mathbf{L}_{1,r} = 0$ for $ r \notin \mathcal{N}(1)$. Using elementary properties of multivariate normal distributions (cf. [@Brockwell91 Prop. 1.6.6.]), we have the decomposition $$\label{equ_innov_repr_comp_1} {{{\mathbf x}}_{1}[\cdot]} = \sum_{r \in \mathcal{N}(1)} \mathbf{L}_{1, r} {{{\mathbf x}}_{r}[\cdot]} + {\bm \varepsilon}_{1} \vspace*{-5mm}$$ with the zero-mean “error term” ${\bm \varepsilon}_{1} \sim \mathcal{N}(\mathbf{0},{\mathbf{V}_{x}^{(1, \mathcal{N}(1))}})$ whose covariance matrix is ${\mathbf{V}_{x}^{(1, \mathcal{N}(1))}}= {{\big ( {{\mathbf K}}_{\tilde{x}}[1,1] \big)^{-1}}}$. The identity is then obtained as $${\mathbf{V}_{x}^{(1, \mathcal{T})}} \stackrel{\eqref{equ_innov_repr_comp_1},\mathcal{N}(1)\subseteq \mathcal{T}}{=} {\mathbf{V}_{x}^{(1, \mathcal{N}(1))}} = {{\big ( {{\mathbf K}}_{\tilde{x}}[1,1] \big)^{-1}}}. \vspace*{-2mm}$$ Moreover, by the projection property of conditional expectations (cf. [@Brockwell91 Sec.
2.7]), the error term ${\bm \varepsilon}_{1}$ in is uncorrelated with (and hence, by Gaussianity, independent of) the process components $\{ {{{\mathbf x}}_{r}[\cdot]} \}_{r \in \{2,\ldots,{d}\}}$, i.e., $$\label{equ_error_uncorr_T} \expect \{ {{{\mathbf x}}_{r}[\cdot]} {\bm \varepsilon}_{1}^{T} \} = \mathbf{0} \mbox{ for all } r \in \{2,\ldots,{d}\}. \vspace*{-5mm}$$ Let us now focus on the conditional variance ${V_{x}^{(1, \mathcal{T})}}$ for a subset $\mathcal{T} \in {\mathfrak{S}^{1}_{{s_{\rm max}}}}$ with $\mathcal{N}(1) \setminus \mathcal{T} \neq \emptyset$, i.e., there is an index $j \in \mathcal{N}(1) \setminus \mathcal{T}$. We use the shorthands $\mathcal{P} {:=}\mathcal{T} \cup \mathcal{N}(1)$ and $\mathcal{Q} {:=}\mathcal{P} \setminus \{j\}$. Note that $\mathcal{T} \subseteq \mathcal{Q}$. For the conditional mean $\widehat{{{{\mathbf x}}_{j}[\cdot]}}{:=}\expect \big \{ {{{\mathbf x}}_{j}[\cdot]} \big | \{ {{{\mathbf x}}_{r}[\cdot]} \}_{r \in \mathcal{Q}} \big \}$, we have the decomposition $$\label{equ_repr_comp_j} {{{\mathbf x}}_{j}[\cdot]} = \widehat{{{{\mathbf x}}_{j}[\cdot]}} + {\bm \varepsilon}_{j} \vspace*{-2mm}$$ with the zero-mean “error term” ${\bm \varepsilon}_{j} \sim \mathcal{N}(\mathbf{0},\mathbf{C}_{e,j})$ being uncorrelated with the components $\{ {{{\mathbf x}}_{r}[\cdot]} \}_{r \in \mathcal{Q}}$, i.e., $$\label{equ_error_uncorr_Q} \expect \{ {{{\mathbf x}}_{r}[\cdot]} {\bm \varepsilon}_{j}^{T} \} = \mathbf{0} \mbox{ for all } r \in \mathcal{Q}. \vspace*{-3mm}$$ Moreover, the inverse covariance of ${\bm \varepsilon}_{j}$ satisfies $$\label{equ_error_cov_j_e} {{\mathbf C}}^{-1}_{e,j} = {{\mathbf K}}[j,j], \vspace*{-3mm}$$ with ${{\mathbf K}}= {\big ( {{\mathbf C}}_{\tilde{x}}[{\mathcal{T}}',{\mathcal{T}}'] \big)^{-1}}$, where ${\mathcal{T}}' = {\mathcal{T}}\cup \{j\}$. Since the blocks ${{\mathbf C}}_{\tilde{x}}[a,b]$ of the matrix ${{\mathbf C}}_{\tilde{x}}$ are diagonal (cf.
), the matrix ${{\mathbf K}}[j,j]$ is diagonal with main-diagonal given by the values $\frac{1}{{({({{\mathbf C}}[n])_{\{{\mathcal{T}}', {\mathcal{T}}'\}}})^{-1}}_{\{1,1\}}}$ which, together with Assumption \[equ\_unif\_bound\_eig\_val\_CMX\], yields $$\label{equ_bound_C_e_j_alpha} {{\mathbf C}}_{e,j} \succeq \operatorname*{diag}\{ \alpha[{n}] \}_{{n}=1,\ldots,{N}}. \vspace*{-3mm}$$ Inserting into yields $$\begin{aligned} \label{equ_innov_repr_comp_2} {{{\mathbf x}}_{1}[\cdot]} & = \hspace*{-2mm}\sum_{r \in \mathcal{N}(1) \setminus \{j\}} \hspace*{-2mm}\mathbf{L}_{1, r} {{{\mathbf x}}_{r}[\cdot]} + \mathbf{L}_{1, j} \widehat{{{{\mathbf x}}_{j}[\cdot]}} + \mathbf{L}_{1, j}{\bm \varepsilon}_{j} + {\bm \varepsilon}_{1} \nonumber \\ & = \sum_{r \in \mathcal{Q}} \mathbf{M}_{r} {{{\mathbf x}}_{r}[\cdot]}+ \mathbf{L}_{1, j}{\bm \varepsilon}_{j} + {\bm \varepsilon}_{1} . \\[-8mm]\nonumber\end{aligned}$$ Due to and , the terms $\mathbf{L}_{1, j}{\bm \varepsilon}_{j}$ and $ {\bm \varepsilon}_{1}$ are both uncorrelated with (and therefore, by Gaussianity, independent of) all the components $\{ {{{\mathbf x}}_{r}[\cdot]} \}_{r \in \mathcal{Q}}$ and moreover are also mutually uncorrelated, i.e., $\expect \{ {{{\mathbf x}}_{r}[\cdot]} \big({\bm \varepsilon}^{T}_{1},{\bm \varepsilon}^{T}_{j}\big)\} = \mathbf{0}$ for all $r \in \mathcal{Q}$ and $\expect \{ {\bm \varepsilon}_{j} {\bm \varepsilon}^{T}_{1}\} = \mathbf{0}$. According to the law of total variance [@BillingsleyProbMeasure] and since $\mathcal{T} \subseteq \mathcal{Q}$, we have ${V_{x}^{(1, \mathcal{T})}} \geq {V_{x}^{(1, \mathcal{Q})}}$.
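The monotonicity step $V_{x}^{(1,\mathcal{T})} \geq V_{x}^{(1,\mathcal{Q})}$ for $\mathcal{T} \subseteq \mathcal{Q}$ (conditioning on more components never increases a Gaussian conditional variance) can be checked numerically via Schur complements. A small illustrative sketch, not from the paper:

```python
import numpy as np

def cond_var(C, r, T):
    # Conditional variance of Gaussian component r given the
    # components in T, via the Schur complement of the covariance C.
    if not T:
        return C[r, r]
    T = list(T)
    c = C[r, T]
    return C[r, r] - c @ np.linalg.inv(C[np.ix_(T, T)]) @ c

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
C = A @ A.T + np.eye(6)      # random positive-definite covariance

# Conditioning on a superset Q ⊇ T never increases the variance.
T, Q = [2, 4], [2, 3, 4, 5]
assert cond_var(C, 0, Q) <= cond_var(C, 0, T) + 1e-12
```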
Therefore, we obtain the lower bound: $$\begin{aligned} \label{equ_proof_cond_v_second_case} {V_{x}^{(1, \mathcal{T})}} &\geq {V_{x}^{(1, \mathcal{Q})}} \stackrel{\eqref{equ_def_cond_variance}}{=} (1/{N}) {\rm Tr} \{ {\rm cov}\{ {{{\mathbf x}}_{1}[\cdot]} | \{ {{{\mathbf x}}_{r}[\cdot]}\}_{r\in \mathcal{Q}} \} \} \nonumber \\ & \hspace*{-10mm}\stackrel{\eqref{equ_innov_repr_comp_2}}{=} (1/{N}) {\rm Tr} \{ \mathbf{L}_{1, j} {{\mathbf C}}_{e,j} \mathbf{L}^{T}_{1, j} +{\mathbf{V}_{x}^{(1, \mathcal{N}(1))}} \} \\ & \hspace*{-10mm}\stackrel{\eqref{equ_bound_C_e_j_alpha},\eqref{equ_submatrix_L_a_b}}\geq (1/{N}) \sum_{n=1}^{{N}} \alpha[{{n}}] \big[ \big( {{\mathbf K}}[{{n}}] \big)_{a,b} / \big( {{\mathbf K}}[{{n}}] \big)_{a,a} \big]^{2}+ {V_{x}^{(1, \mathcal{N}(1))}} \nonumber \vspace*{-3mm}\end{aligned}$$ valid for any $\mathcal{T} \in {\mathfrak{S}^{1}_{{s_{\rm max}}}}$ with $\mathcal{N}(1) \setminus \mathcal{T} \neq \emptyset$. We obtain by combining with Assumption \[aspt\_minimum\_par\_cor\]. [^1]: For ease of notation and without essential loss of generality, we assume the sample size ${N}$ to be a multiple of the blocklength ${L}$.
--- abstract: 'We show a lower bound on expected communication cost of interactive entanglement assisted quantum state redistribution protocols and a slightly better lower bound for its special case, quantum state transfer. Our bound implies that the expected communication cost of interactive protocols is not significantly better than worst case communication cost, in terms of scaling of error. Furthermore, the bound is independent of the number of rounds. This is in contrast with the classical case, where protocols with expected communication cost significantly better than worst case communication cost are known.' author: - bibliography: - 'references.bib' title: '**[A lower bound on expected communication cost of quantum state redistribution]{}**\' --- Introduction {#sec:intro} ============ A fundamental task in quantum information theory is that of quantum state redistribution (various quantities appearing in this section have been described in Section \[sec:preliminaries\]): **Quantum state-redistribution** : A pure state $\Psi_{RBCA}$ is shared between Alice (A,C), Bob(B) and Referee(R). For a given ${\varepsilon}> 0$, which we shall henceforth identify as ‘error’, Alice needs to transfer the system $C$ to Bob, such that the final state $\Psi'_{RBC_0A}$ (where register $C_0\equiv C$ is with Bob), satisfies $\P(\Psi'_{RBC_0A},\Psi_{RBC_0A})\leq {\varepsilon}$. Here, $\P(.,.)$ is the purified distance. This task has been well studied in literature in asymptotic setting ([@Devatakyard; @oppenheim08; @YeBW08; @YardD09]) giving an operational interpretation to the quantum conditional mutual information. Recent results have obtained one-shot versions of this task ( [@Oppenheim14; @Berta14; @jain14]), with application to bounded-round entanglement assisted quantum communication complexity ([@Dave14]). 
The following upper bound has been obtained in [@Dave14], building upon the work in [@Berta14], on the worst case communication cost of quantum state redistribution with error ${\varepsilon}$: $$\frac{50\cdot{{{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}C {\: \!}\middle\vert {\: \!}B \right) }} }}}}_{\Psi_{RABC}}}{2{\varepsilon}^2} + \frac{100}{{\varepsilon}^2} + 15.$$ An important application of this bound is a direct sum theorem for communication cost of bounded-round entanglement assisted quantum communication complexity, which is the main result of [@Dave14]: \[dave\] Let $C$ be the quantum communication complexity of the best entanglement assisted protocol for computing a relation $f$ with error $\rho$ on inputs drawn from a distribution $\mu$. Then any $r$ round protocol computing $f^{\otimes n}$ on the distribution $\mu^{\otimes n}$ with error $\rho-{\varepsilon}$ must involve at least $\Omega(n((\frac{{\varepsilon}}{r})^2\cdot C - r))$ quantum communication. Direct sum results for single-round entanglement assisted quantum communication complexity had earlier been obtained in [@Jain:2005; @Jain:2008; @AnshuJMSY2014]. A special case of quantum state redistribution is **quantum state merging**, in which the register $A$ is absent. It was introduced in [@horodecki07] as a quantum counterpart to the classical Slepian-Wolf protocol [@slepianwolf]. A one-shot version of quantum state merging was introduced by Berta [@Berta09]. A one-shot version of the classical Slepian-Wolf protocol was obtained by Braverman and Rao [@bravermanrao11], in the form of the following task: Alice is given a probability distribution $P$, Bob is given a probability distribution $Q$. Bob must output a distribution $P'$, with the property that $\|P-P'\|_1\leq {\varepsilon}$. They exhibited an interactive communication protocol achieving this task with *expected communication cost* $${{\ensuremath{ \mathrm{D} {\: \!
\!}{\ensuremath{ \left( P \middle\| Q \right) }} }}} + \mathcal{O}(\sqrt{{{\ensuremath{ \mathrm{D} {\: \! \!}{\ensuremath{ \left( P \middle\| Q \right) }} }}}})+2\log(\frac{1}{{\varepsilon}}).$$ Considering expected communication cost instead of worst case communication cost allowed them to obtain the following direct sum result for bounded round classical communication complexity: \[brarao\] Let $C$ be the communication complexity of the best protocol for computing a relation $f$ with error $\rho$ on inputs drawn from a distribution $\mu$. Then any $r$ round protocol computing $f^{\otimes n}$ on the distribution $\mu^{\otimes n}$ with error $\rho-{\varepsilon}$ must involve at least $\Omega(n(C - r\cdot\log(\frac{1}{{\varepsilon}}) - O(\sqrt{C\cdot r})))$ communication. This result has a better dependence on the number of rounds $r$ than theorem \[dave\]. Thus, in order to obtain a stronger direct sum result for bounded-round quantum communication complexity, a possible approach would be to bound the expected communication cost of quantum state redistribution by $\approx {{{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}C {\: \!}\middle\vert {\: \!}B \right) }} }}}}_{\Psi_{RABC}}+\mathcal{O}(\log(\frac{1}{{\varepsilon}}))$. A special case of quantum state merging is **quantum state transfer**, in which register $B$ is trivial. The asymptotic version of quantum state transfer is Schumacher compression [@Schumacher95]. In the corresponding classical setting, when $\Psi_{RA}$ is a classical probability distribution, Alice can send register $A$ to Bob with expected communication cost $S(\Psi_A) + \mathcal{O}(1)$, using a one-way protocol based on Huffman coding [@CoverT91]. In fact, one can make the error arbitrarily small, at the cost of arbitrarily large worst case communication.
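The classical claim above, that a Huffman-coding-based protocol attains expected length within one bit of the entropy, can be illustrated directly. A minimal Python sketch (the heap-based construction and the example distribution are ours, for illustration only):

```python
import heapq
from math import log2

def huffman_lengths(p):
    """Code lengths of a Huffman code for distribution p."""
    heap = [(pi, [i]) for i, pi in enumerate(p)]
    heapq.heapify(heap)
    lengths = [0] * len(p)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:          # symbols in the merged subtree gain one bit
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

p = [0.5, 0.2, 0.15, 0.1, 0.05]
L = huffman_lengths(p)
expected = sum(pi * li for pi, li in zip(p, L))
H = -sum(pi * log2(pi) for pi in p)

# Kraft equality (complete code tree) and near-optimality:
assert abs(sum(2.0 ** -li for li in L) - 1.0) < 1e-12
assert H <= expected < H + 1
```

The worst case length here grows with $\log(1/p_{\min})$ even though the expected length stays near $S(\Psi_A)$, which is the classical phenomenon the quantum lower bound below rules out.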
Our results {#our-results .unnumbered} ----------- In this work, we show that expected communication cost for entanglement assisted quantum protocols (which we formally define in section \[sec:cohtrans\]) is not significantly better than the worst case communication cost. Our main theorem is the following. \[thm:main\] Fix a $p<1$ and an ${\varepsilon}\in [0,(\frac{1}{70})^{\frac{4}{1-p}}]$. There exists a pure state $\Psi_{RBCA}$ (that depends on ${\varepsilon}$) such that, any interactive entanglement assisted communication protocol for its quantum state redistribution with error ${\varepsilon}$ requires expected communication cost at least ${{{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}C {\: \!}\middle\vert {\: \!}B \right) }} }}}}_{\Psi}\cdot(\frac{1}{{\varepsilon}})^{p}$. For quantum state transfer, we obtain a similar result with slightly better constants. \[thm:main2\] Fix a $p<1$ and any ${\varepsilon}\in [0, (\frac{1}{2})^{\frac{15}{1-p}}]$. There exists a pure state $\Psi_{RC}$ (that depends on ${\varepsilon}$) such that, any interactive entanglement assisted communication protocol for its quantum state transfer with error ${\varepsilon}$ requires expected communication cost at least $S(\Psi_R)\cdot(\frac{1}{{\varepsilon}})^p$. Notice that theorem \[thm:main2\] does imply theorem \[thm:main\], as quantum state transfer is a special case of quantum state redistribution. But the quantum state $\Psi_{RBCA}$ that we consider in theorem \[thm:main\] has all registers $R,A,B,C$ non-trivial and correlated with each other. Thus, a quantum state redistribution of $\Psi_{RBCA}$ cannot be reduced to the sub-cases of quantum state merging or quantum state transfer by any local operation, giving robustness to the bound. Our technique and organization {#our-technique-and-organization .unnumbered} ------------------------------ We discuss our technique for the case of quantum state transfer. 
For some $\beta>1$, we choose the pure state $\Psi_{RC}$ in such a way that its smallest eigenvalue is $\frac{1}{d\beta}$ and the entropy of $\Psi_R$ is at most $\frac{2\log(d)}{\beta}$ ($d$ being the dimension of register $R$). Let $\omega_{RC}$ be a maximally entangled state defined as ${\ensuremath{ \left| \omega \right\rangle }}_{RC} = \frac{\Psi_R^{-\frac{1}{2}}}{\sqrt{d}}{\ensuremath{ \left| \Psi \right\rangle }}_{RC}$. For any interactive protocol $\mathcal{P}$ for quantum state transfer of $\Psi_{RC}$ with error ${\varepsilon}$ and expected communication cost $C$, we obtain an expression that serves as a *transcript* of the protocol, encoding the unitaries applied by Alice and Bob and the probabilities of measurement outcomes (Corollary \[cohequation\]). This expression is obtained by employing a technique of *convex-split*, introduced in [@jain14] for one-way quantum state redistribution protocols. Then, crucially relying on the fact that $\Psi_{RC}$ is a pure state, we construct a new interactive protocol $\mathcal{P'}$ which achieves quantum state transfer of the state $\omega_{RC}$ with error $\sqrt{\beta{\varepsilon}}+\sqrt{\mu}$ (for any $\mu<1$) and worst case quantum communication cost at most $\frac{C}{\mu}$. Suitably choosing the parameters ${\varepsilon},\beta$ and $\mu$ and using a known lower bound on the worst case communication cost for state transfer of $\omega_{RC}$, we obtain the desired result. The same technique also extends to quantum state redistribution. Details appear in section \[sec:lowerbound\]. In section \[sec:preliminaries\] we present some notions and facts that are needed for our proofs. In section \[sec:cohtrans\] we give a description of interactive protocols for quantum state redistribution and obtain the aforementioned expression that serves as a *transcript* of a given protocol. Section \[sec:lowerbound\] is devoted to refinement of this expression and the proof of the main theorem.
We present some discussion related to our approach and conclude in Section \[sec:conclusion\]. Preliminaries {#sec:preliminaries} ============= In this section we present some notations, definitions, facts and lemmas that we will use in our proofs. Information theory {#information-theory .unnumbered} ------------------ For a natural number $n$, let $[n]$ represent the set $\{1,2, \dots, n\}$. For a set $S$, let $|S|$ be the size of $S$. A *tuple* is a finite collection of positive integers, such as $(i_1,i_2\ldots i_r)$ for some finite $r$. We let $\log$ represent logarithm to the base $2$ and $\ln$ represent logarithm to the base $\mathrm{e}$. The $\ell_1$ norm of an operator $X$ is ${{\ensuremath{ {\ensuremath{ \left\| X \right\| }}_{1} }}}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}{\ensuremath{ \mathrm{Tr} }}\sqrt{X^{\dag}X}$ and $\ell_2$ norm is ${\ensuremath{ \left\| X \right\| }}_2{\ensuremath{ \stackrel{\mathrm{def}}{=} }}\sqrt{{\ensuremath{ \mathrm{Tr} }}XX^{\dag}}$. A quantum state (or just a state) is a positive semi-definite matrix with trace equal to $1$. It is called [*pure*]{} if and only if the rank is $1$. Let ${\ensuremath{ \left| \psi \right\rangle }}$ be a unit vector. We use $\psi$ to represent the state and also the density matrix ${{\ensuremath{ \left| \psi \middle\rangle \middle\langle \psi \right| }}}$, associated with ${\ensuremath{ \left| \psi \right\rangle }}$. A sub-normalized state is a positive semidefinite matrix with trace less than or equal to $1$. A [*quantum register*]{} $A$ is associated with some Hilbert space $\H_A$. Define $|A| {\ensuremath{ \stackrel{\mathrm{def}}{=} }}\dim(\H_A)$. We denote by $\mathcal{D}(A)$, the set of quantum states in the Hilbert space $\H_A$ and by $\mathcal{D}_{\leq}(A)$, the set of all subnormalized states on register $A$. State $\rho$ with subscript $A$ indicates $\rho_A \in \mathcal{D}(A)$. 
For two quantum states $\rho$ and $\sigma$, $\rho\otimes\sigma$ represents the tensor product (Kronecker product) of $\rho$ and $\sigma$. Composition of two registers $A$ and $B$, denoted $AB$, is associated with Hilbert space $\H_A \otimes \H_B$. If two registers $A,B$ are associated with the same Hilbert space, we shall denote it by $A\equiv B$. Let $\rho_{AB}$ be a bipartite quantum state in registers $AB$. We define $$\rho_{{\ensuremath{ \mathnormal{B} }}} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}{{\ensuremath{ {\ensuremath{ \mathrm{Tr} }}_{{\ensuremath{ \mathnormal{A} }}} {\: \! \!}{\ensuremath{ \left( \rho_{AB} \right) }} }}} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}\sum_i ({\ensuremath{ \left\langle i \right| }} \otimes {\ensuremath{\mathds{1}}}_{{\ensuremath{\mathnormal{B}}}}) \rho_{AB} ({\ensuremath{ \left| i \right\rangle }} \otimes {\ensuremath{\mathds{1}}}_{{\ensuremath{\mathnormal{B}}}}) ,$$ where ${\ensuremath{ \left\lbrace {\ensuremath{ \left| i \right\rangle }} \right\rbrace }}_i$ is an orthonormal basis for the Hilbert space ${\ensuremath{\mathnormal{A}}}$ and ${\ensuremath{\mathds{1}}}_{{\ensuremath{\mathnormal{B}}}}$ is the identity matrix in space ${\ensuremath{\mathnormal{B}}}$. The state $\rho_B$ is referred to as the marginal state of $\rho_{AB}$ in register $B$. Unless otherwise stated, a missing register from subscript in a state will represent partial trace over that register. A quantum map $\E: A\rightarrow B$ is a completely positive and trace preserving (CPTP) linear map (mapping states from $\mathcal{D}(A)$ to states in $\mathcal{D}(B)$). A completely positive and trace non-increasing linear map $\tilde{\E}:A\rightarrow B$ maps quantum states to sub-normalised states. The identity operator in Hilbert space $\H_A$ (and associated register $A$) is denoted $I_A$. A [*unitary*]{} operator $U_A:\H_A \rightarrow \H_A$ is such that $U_A^{\dagger}U_A = U_A U_A^{\dagger} = I_A$. 
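The partial trace formula above translates directly into code. A minimal numpy sketch (illustrative; the helper name is ours), verified on a product state, where tracing out $A$ must return the second factor:

```python
import numpy as np

def partial_trace_A(rho_AB, dA, dB):
    """Tr_A via rho_B = sum_i (<i| ⊗ I_B) rho_AB (|i> ⊗ I_B)."""
    rho_B = np.zeros((dB, dB), dtype=complex)
    I_B = np.eye(dB)
    for i in range(dA):
        bra = np.zeros((1, dA))
        bra[0, i] = 1.0
        P = np.kron(bra, I_B)        # (<i| ⊗ I_B), shape (dB, dA*dB)
        rho_B += P @ rho_AB @ P.conj().T
    return rho_B

# On a product state rho ⊗ sigma, tracing out A returns sigma.
rho = np.array([[0.75, 0.25], [0.25, 0.25]])
sigma = np.array([[0.5, 0.1j], [-0.1j, 0.5]])
rho_AB = np.kron(rho, sigma)
assert np.allclose(partial_trace_A(rho_AB, 2, 2), sigma)
```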
An [*isometry*]{} $V:\H_A \rightarrow \H_B$ is such that $V^{\dagger}V = I_A$; then $VV^{\dagger}$ is the projector onto the image of $V$ (and equals $I_B$ only when $|A| = |B|$). The set of all unitary operations on register $A$ is denoted by $\mathcal{U}(A)$. We shall consider the following information theoretic quantities. Let $\varepsilon \geq 0$. 1. [**generalized fidelity**]{} For $\rho,\sigma \in \mathcal{D}_{\leq}(A)$, $$\F(\rho,\sigma){\ensuremath{ \stackrel{\mathrm{def}}{=} }}{{\ensuremath{ {\ensuremath{ \left\| \sqrt{\rho}\sqrt{\sigma} \right\| }}_{1} }}} + \sqrt{(1-{\ensuremath{ \mathrm{Tr} }}(\rho))(1-{\ensuremath{ \mathrm{Tr} }}(\sigma))}.$$ 2. [**purified distance**]{} For $\rho,\sigma \in \mathcal{D}_{\leq}(A)$, $$\P(\rho,\sigma) = \sqrt{1-\F^2(\rho,\sigma)}.$$ 3. [**$\varepsilon$-ball**]{} For $\rho_A\in \mathcal{D}(A)$, $${{\ensuremath{ \mathcal{B}^{{\varepsilon}} {\: \! \!}{\ensuremath{ \left( \rho_A \right) }} }}} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}\{\rho'_A\in \mathcal{D}(A)|~\P(\rho_A,\rho'_A) \leq \varepsilon\}.$$ 4. [**entropy**]{} For $\rho_A\in \mathcal{D}(A)$, $${{\ensuremath{ \mathrm{H} {\: \! \!}{\ensuremath{ \left( A \right) }} }}}_{\rho} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}- {\ensuremath{ \mathrm{Tr} }}(\rho_A\log\rho_A) .$$ 5. [**relative entropy**]{} For $\rho_A,\sigma_A\in \mathcal{D}(A)$, $${{\ensuremath{ \mathrm{D} {\: \! \!}{\ensuremath{ \left( \rho_A \middle\| \sigma_A \right) }} }}} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}{\ensuremath{ \mathrm{Tr} }}(\rho_A\log\rho_A) - {\ensuremath{ \mathrm{Tr} }}(\rho_A\log\sigma_A) .$$ 6. [**max-relative entropy**]{} For $\rho_A,\sigma_A\in \mathcal{D}(A)$, $${{\ensuremath{ \mathrm{D}_{\max} {\: \! \!}{\ensuremath{ \left( \rho_A \middle\| \sigma_A \right) }} }}} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}\inf \{ \lambda \in \mathbb{R} : 2^{\lambda} \sigma_A \geq \rho_A \} .$$ 7. [**mutual information**]{} For $\rho_{AB}\in \mathcal{D}(AB)$,$${{\ensuremath{ \mathrm{I} {\: \!
\!}{\ensuremath{ \left( A {\: \!}: {\: \!}B \right) }} }}}_{\rho} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}{{\ensuremath{ \mathrm{D} {\: \! \!}{\ensuremath{ \left( \rho_{AB} \middle\| \rho_A\otimes\rho_B \right) }} }}}= {{\ensuremath{ \mathrm{H} {\: \! \!}{\ensuremath{ \left( A \right) }} }}}_{\rho} + {{\ensuremath{ \mathrm{H} {\: \! \!}{\ensuremath{ \left( B \right) }} }}}_{\rho} - {{\ensuremath{ \mathrm{H} {\: \! \!}{\ensuremath{ \left( AB \right) }} }}}_{\rho}.$$ 8. [**conditional mutual information**]{} For $\rho_{ABC}\in \mathcal{D}(ABC)$, $${{{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}B {\: \!}\middle\vert {\: \!}C \right) }} }}}}_{\rho} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}{{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}BC \right) }} }}}_{\rho} - {{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}C \right) }} }}}_{\rho} = {{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( B {\: \!}: {\: \!}AC \right) }} }}}_{\rho} - {{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( B {\: \!}: {\: \!}C \right) }} }}}_{\rho} .$$ 9. [**max-information**]{} For $\rho_{AB}\in \mathcal{D}(AB)$, $${{\ensuremath{ \mathrm{I}_{\max} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}B \right) }} }}}_{\rho} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}\inf_{\sigma_{B}\in \mathcal{D}(B)}{{\ensuremath{ \mathrm{D}_{\max} {\: \! \!}{\ensuremath{ \left( \rho_{AB} \middle\| \rho_{A}\otimes\sigma_{B} \right) }} }}} .$$ 10. [**smooth max-information**]{} For $\rho_{AB}\in \mathcal{D}(AB)$, $${{\ensuremath{ \mathrm{I}^{\varepsilon}_{\max} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}B \right) }} }}}_{\rho} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}\inf_{\rho'\in {{\ensuremath{ \mathcal{B}^{{\varepsilon}} {\: \! \!}{\ensuremath{ \left( \rho \right) }} }}}} {{\ensuremath{ \mathrm{I}_{\max} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}B \right) }} }}}_{\rho'} .$$ 11. 
[**conditional min-entropy**]{} For $\rho_{AB}\in \mathcal{D}(AB)$, $${{\ensuremath{ \mathrm{H}_{\min} {\: \! \!}{\ensuremath{ \left( A \middle | B \right) }} }}}_{\rho} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}- \inf_{\sigma_B\in \mathcal{D}(B)}{{\ensuremath{ \mathrm{D}_{\max} {\: \! \!}{\ensuremath{ \left( \rho_{AB} \middle\| I_{A}\otimes\sigma_{B} \right) }} }}} .$$ 12. [**conditional max-entropy**]{} For $\rho_{AB}\in \mathcal{D}(AB)$, $${{\ensuremath{ \mathrm{H}_{\max} {\: \! \!}{\ensuremath{ \left( A \middle | B \right) }} }}}_{\rho_{AB}} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}- {{\ensuremath{ \mathrm{H}_{\min} {\: \! \!}{\ensuremath{ \left( A \middle | R \right) }} }}}_{\rho_{AR}},$$ where $\rho_{ABR}$ is a purification of $\rho_{AB}$ for some system $R$. 13. [**smooth conditional min-entropy**]{} For $\rho_{AB}\in \mathcal{D}(AB)$, $${{\ensuremath{ \mathrm{H}^{\varepsilon}_{\min} {\: \! \!}{\ensuremath{ \left( A \middle | B \right) }} }}}_{\rho} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}\sup_{\rho^{'} \in {{\ensuremath{ \mathcal{B}^{{\varepsilon}} {\: \! \!}{\ensuremath{ \left( \rho \right) }} }}}} {{\ensuremath{ \mathrm{H}_{\min} {\: \! \!}{\ensuremath{ \left( A \middle | B \right) }} }}}_{\rho^{'}} .$$ 14. [**smooth conditional max-entropy**]{} For $\rho_{AB}\in \mathcal{D}(AB)$, $${{\ensuremath{ \mathrm{H}^{\varepsilon}_{\max} {\: \! \!}{\ensuremath{ \left( A \middle | B \right) }} }}}_{\rho} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}\inf_{\rho^{'} \in {{\ensuremath{ \mathcal{B}^{{\varepsilon}} {\: \! \!}{\ensuremath{ \left( \rho \right) }} }}}} {{\ensuremath{ \mathrm{H}_{\max} {\: \! \!}{\ensuremath{ \left( A \middle | B \right) }} }}}_{\rho^{'}} .$$ \[def:infquant\] We will use the following facts. 
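Several of the quantities just defined can be computed explicitly for small states. The following hedged numpy sketch (helper names are ours) checks that $\mathrm{D}_{\max}$ upper bounds $\mathrm{D}$ and that the mutual information of a product state vanishes:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy H(rho) = -Tr(rho log2 rho)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def dmax(rho, sigma):
    """Max-relative entropy: least lambda with 2^lambda sigma >= rho.

    For invertible sigma this is log2 of the largest eigenvalue of
    sigma^{-1/2} rho sigma^{-1/2}.
    """
    w, V = np.linalg.eigh(sigma)
    s_inv_half = V @ np.diag(w ** -0.5) @ V.conj().T
    return float(np.log2(np.linalg.eigvalsh(s_inv_half @ rho @ s_inv_half).max()))

rho = np.diag([0.7, 0.3])
sigma = np.diag([0.5, 0.5])

# Relative entropy D(rho || sigma) for commuting (diagonal) states.
D = 0.7 * np.log2(0.7 / 0.5) + 0.3 * np.log2(0.3 / 0.5)
assert dmax(rho, sigma) >= D - 1e-12   # D_max upper bounds D

# Mutual information of a product state vanishes.
rho_AB = np.kron(rho, sigma)
I = entropy(rho) + entropy(sigma) - entropy(rho_AB)
assert abs(I) < 1e-9
```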
\[fact:trianglepurified\] For states $\rho^1_A, \rho^2_A, \rho^3_A \in \mathcal{D}(A)$, $$\P(\rho^1_A,\rho^3_A) \leq \P(\rho^1_A,\rho^2_A) + \P(\rho^2_A,\rho^3_A) .$$ \[fact:purifiedtrace\] For subnormalized states $\rho_1,\rho_2$ $$\frac{1}{2}\|\rho_1-\rho_2\|_1\leq \P(\rho_1,\rho_2) \leq \sqrt{\|\rho_1-\rho_2\|_1}.$$ \[uhlmann\] Let $\rho_A,\sigma_A\in \mathcal{D}(A)$. Let ${\ensuremath{ \left| \rho \right\rangle }}_{AB}$ be a purification of $\rho_A$ and ${\ensuremath{ \left| \sigma \right\rangle }}_{AC}$ be a purification of $\sigma_A$. There exists an isometry $V: \H_C \rightarrow \H_B$ such that, $$\F({{\ensuremath{ \left| \theta \middle\rangle \middle\langle \theta \right| }}}_{AB}, {{\ensuremath{ \left| \rho \middle\rangle \middle\langle \rho \right| }}}_{AB}) = \F(\rho_A,\sigma_A) ,$$ where ${\ensuremath{ \left| \theta \right\rangle }}_{AB} = (I_A \otimes V) {\ensuremath{ \left| \sigma \right\rangle }}_{AC}$. \[fact:monotonequantumoperation\] For states $\rho$, $\sigma$, and quantum operation $\E(\cdot)$, $${{\ensuremath{ {\ensuremath{ \left\| \E(\rho) - \E(\sigma) \right\| }}_{1} }}} \leq {{\ensuremath{ {\ensuremath{ \left\| \rho - \sigma \right\| }}_{1} }}} , \P(\E(\rho),\E(\sigma))\leq \P(\rho,\sigma) \text{ and } \F(\rho,\sigma) \leq \F(\E(\rho),\E(\sigma)) .$$ In particular, for a trace non-increasing completely positive map $\tilde{\E}(\cdot)$, $$\P(\tilde{\E}(\rho),\tilde{\E}(\sigma))\leq \P(\rho,\sigma).$$ \[fact:fidelityconcave\] Given quantum states $\rho_1,\rho_2\ldots\rho_k,\sigma_1,\sigma_2\ldots\sigma_k \in \mathcal{D}(A)$ and positive numbers $p_1,p_2\ldots p_k$ such that $\sum_ip_i=1$. Then $$\F(\sum_ip_i\rho_i,\sum_ip_i\sigma_i)\geq \sum_ip_i\F(\rho_i,\sigma_i).$$ \[scalarpurified\] Let $\rho,\sigma \in \mathcal{D}(A)$ be quantum states. Let $\alpha<1$ be a positive real number.
If $\P(\alpha\rho,\alpha\sigma)\leq {\varepsilon}$, then $$\P(\rho,\sigma)\leq {\varepsilon}\sqrt{\frac{2}{\alpha}}.$$ $\P(\alpha\rho,\alpha\sigma)\leq {\varepsilon}$ implies $\F(\alpha\rho,\alpha\sigma)\geq \sqrt{1-{\varepsilon}^2}\geq 1-{\varepsilon}^2$. But, $\F(\alpha\rho,\alpha\sigma)= \alpha\|\sqrt{\rho}\sqrt{\sigma}\|_1+(1-\alpha)$. Thus, $$\F(\rho,\sigma)=\|\sqrt{\rho}\sqrt{\sigma}\|_1 \geq 1-\frac{{\varepsilon}^2}{\alpha}.$$ Thus, $\P(\rho,\sigma)\leq \sqrt{1-(1-\frac{{\varepsilon}^2}{\alpha})^2}\leq \sqrt{\frac{2{\varepsilon}^2}{\alpha}}$. \[fact:fannes\] Given quantum states $\rho_1,\rho_2\in \mathcal{D}(A)$, such that $|A|=d$ and $\P(\rho_1,\rho_2)= {\varepsilon}\leq \frac{1}{2\mathrm{e}}$, $$|S(\rho_1)-S(\rho_2)|\leq {\varepsilon}\log(d)+1.$$ \[subadditive\] For a quantum state $\rho_{AB}\in \mathcal{D}(AB)$, $|S(\rho_A)-S(\rho_B)|\leq S(\rho_{AB})\leq S(\rho_A)+S(\rho_B)$. \[entropyconcave\] For quantum states $\rho_1,\rho_2\ldots \rho_n$, and positive real numbers $\lambda_1,\lambda_2\ldots \lambda_n$ satisfying $\sum_i \lambda_i=1$, $$S(\sum_i \lambda_i\rho_i)\geq \sum_i\lambda_iS(\rho_i).$$ \[informationbound\] For a quantum state $\rho_{ABC}$, it holds that $${{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}C \right) }} }}}_{\rho}\leq 2S(\rho_C),$$ $${{{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}C {\: \!}\middle\vert {\: \!}B \right) }} }}}}_{\rho}\leq {{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( AB {\: \!}: {\: \!}C \right) }} }}}_{\rho}\leq 2S(\rho_C).$$ From Fact \[subadditive\], ${{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}C \right) }} }}}_{\rho} = S(\rho_A)+S(\rho_C)-S(\rho_{AC}) \leq 2S(\rho_{C})$. \[fact:imaxhmin\] For a bipartite quantum state $\rho_{AB}$, ${{\ensuremath{ \mathrm{I}^{\varepsilon}_{\max} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}B \right) }} }}}_{\rho}\geq -{{\ensuremath{ \mathrm{H}^{\varepsilon}_{\min} {\: \! 
\!}{\ensuremath{ \left( A \middle | B \right) }} }}}_{\rho}$. Let $\sigma_B$ be the state achieving the infimum in the definition of ${{\ensuremath{ \mathrm{I}_{\max} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}B \right) }} }}}_{\rho}$. Let $\lambda{\ensuremath{ \stackrel{\mathrm{def}}{=} }}{{\ensuremath{ \mathrm{I}_{\max} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}B \right) }} }}}_{\rho}$. Consider, $$\rho_{AB}\leq 2^{\lambda}\rho_A\otimes\sigma_B\leq 2^{\lambda}I_A\otimes\sigma_B.$$ Thus, we have $$-{{\ensuremath{ \mathrm{H}_{\min} {\: \! \!}{\ensuremath{ \left( A \middle | B \right) }} }}}_{\rho}=\inf_{\sigma'_B\in \mathcal{D}(B)}{{\ensuremath{ \mathrm{D}_{\max} {\: \! \!}{\ensuremath{ \left( \rho_{AB} \middle\| I_A\otimes\sigma'_B \right) }} }}} \leq {{\ensuremath{ \mathrm{D}_{\max} {\: \! \!}{\ensuremath{ \left( \rho_{AB} \middle\| I_A\otimes\sigma_B \right) }} }}} \leq \lambda = {{\ensuremath{ \mathrm{I}_{\max} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}B \right) }} }}}_{\rho}.$$ This gives, $$\inf_{\rho'_{AB}\in{{\ensuremath{ \mathcal{B}^{{\varepsilon}} {\: \! \!}{\ensuremath{ \left( \rho_{AB} \right) }} }}}}-{{\ensuremath{ \mathrm{H}_{\min} {\: \! \!}{\ensuremath{ \left( A \middle | B \right) }} }}}_{\rho'} \leq {{\ensuremath{ \mathrm{I}^{\varepsilon}_{\max} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}B \right) }} }}}_{\rho}.$$ \[fact:cqimax\] For a *classical-quantum* state $\rho_{AB}$ of the form $\rho_{AB}=\sum_j p(j){{\ensuremath{ \left| j \middle\rangle \middle\langle j \right| }}}_A\otimes \sigma^j_B$, it holds that ${{\ensuremath{ \mathrm{I}_{\max} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}B \right) }} }}}_{\rho}\leq \log(|B|)$. By definition, ${{\ensuremath{ \mathrm{I}_{\max} {\: \! \!}{\ensuremath{ \left( A {\: \!}: {\: \!}B \right) }} }}}_{\rho}\leq {{\ensuremath{ \mathrm{D}_{\max} {\: \! \!}{\ensuremath{ \left( \rho_{AB} \middle\| \rho_A\otimes\frac{\text{I}_B}{|B|} \right) }} }}}$.
Also, $$\rho_{AB}=\sum_j p(j){{\ensuremath{ \left| j \middle\rangle \middle\langle j \right| }}}_A\otimes \sigma^j_B \leq |B|\sum_j p(j){{\ensuremath{ \left| j \middle\rangle \middle\langle j \right| }}}_A\otimes \frac{\text{I}_B}{|B|} = |B| \rho_A\otimes \frac{\text{I}_B}{|B|}.$$ Thus, the fact follows. \[cqmutinf\] For a *classical-quantum* state $\rho_{ABC}=\sum_j p(j){{\ensuremath{ \left| j \middle\rangle \middle\langle j \right| }}}_A\otimes \rho^j_{BC}$, it holds that ${{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( AB {\: \!}: {\: \!}C \right) }} }}}_{\rho}\geq \sum_j p(j){{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( B {\: \!}: {\: \!}C \right) }} }}}_{\rho^j}$ Consider, $$\begin{aligned} {{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( AB {\: \!}: {\: \!}C \right) }} }}}_{\rho}&=& S(\rho_{AB}) + S(\rho_C) - S(\rho_{ABC})\\ &=& S(\sum_j p(j){{\ensuremath{ \left| j \middle\rangle \middle\langle j \right| }}}_A\otimes\rho^j_B) + S(\sum_j p(j)\rho^j_C) - S(\sum_j p(j){{\ensuremath{ \left| j \middle\rangle \middle\langle j \right| }}}_A\otimes\rho^j_{BC}) \\&=& \sum_j p(j)S(\rho^j_B)+S(\sum_j p(j)\rho^j_C) - \sum_j p(j)S(\rho^j_{BC}) \\&\geq& \sum_j p(j)S(\rho^j_B)+\sum_j p(j)S(\rho^j_C) - \sum_j p(j)S(\rho^j_{BC}) \quad (\text{Fact \ref{entropyconcave}})\\ &=& \sum_j p(j){{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( B {\: \!}: {\: \!}C \right) }} }}}_{\rho^j}\end{aligned}$$ \[lowentropy\] Fix a $\beta \geq 1$ and an integer $d>1$. There exists a probability distribution $\mu=\{e_1,e_2\ldots e_d\}$, with $e_1\geq e_2 \ldots \geq e_d$, such that $e_d = \frac{1}{d\beta}$ and entropy $S(\mu)\leq 2\frac{\log(d)}{\beta}$ Set $e_2=e_3=\ldots e_d = \frac{1}{d\beta}$. Then $e_1=1-\frac{d-1}{d\beta}$. 
Using $x\log(\frac{1}{x})\leq \frac{\log(e)}{e} < 1$ for all $x>0$, we can upper bound the entropy of the distribution as $$\sum_i e_i\log(\frac{1}{e_i}) = (1-\frac{d-1}{d\beta})\log(\frac{1}{1-\frac{d-1}{d\beta}}) + \frac{d-1}{d\beta}\log(d\beta) < 2 + \frac{\log(d)}{\beta}\leq 2\frac{\log(d)}{\beta}.$$ Interactive protocol for quantum state redistribution {#sec:cohtrans} ===================================================== In this section, we describe the general structure of an interactive protocol for quantum state redistribution and its *expected communication cost*. Let the quantum state ${\ensuremath{ \left| \Psi \right\rangle }}_{RBCA}$ be shared between Alice $(A,C)$, Bob $(B)$ and Referee $(R)$. Alice and Bob have access to shared entanglement $\theta_{E_AE_B}$ in registers $E_A$ (with Alice) and $E_B$ (with Bob). Using quantum teleportation, we can assume without loss of generality that Alice and Bob communicate classical messages; each message is produced by performing a projective measurement on the registers the sender holds and sending the outcome of the measurement to the other party. This makes the notion of *expected communication cost* well defined. An $r$-round interactive protocol $\mathcal{P}$ (where $r$ is an odd number) with error ${\varepsilon}$ and expected communication cost $C$ proceeds as follows. **Input:** A quantum state ${\ensuremath{ \left| \Psi \right\rangle }}_{RBCA}$, error parameter ${\varepsilon}<1$. **Shared entanglement:** ${\ensuremath{ \left| \theta \right\rangle }}_{E_AE_B}$. - Alice performs a projective measurement $\M=\{M^1_{ACE_A},M^2_{ACE_A}\ldots \}$. The probability of outcome $i_1$ is $p_{i_1}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}{\ensuremath{ \mathrm{Tr} }}(M^{i_1}_{ACE_A}\Psi_{CA}\otimes\theta_{E_A})$. Let $\phi^{i_1}_{RBACE_AE_B}$ be the global normalized quantum state, conditioned on this outcome. She sends message $i_1$ to Bob.
- Upon receiving the message $i_1$ from Alice, Bob performs a projective measurement $$\M^{i_1}=\{M^{1,i_1}_{BE_B},M^{2,i_1}_{BE_B}\ldots\}.$$ The probability of outcome $i_2$ is $p_{i_2|i_1}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}{\ensuremath{ \mathrm{Tr} }}(M^{i_2,i_1}_{BE_B}\phi^{i_1}_{BE_B})$. Let $\phi^{i_2,i_1}_{RBACE_AE_B}$ be the global normalized quantum state conditioned on this outcome $i_2$ and the previous outcome $i_1$. Bob sends message $i_2$ to Alice. - Consider any odd round $1<k\leq r$. Let the measurement outcomes in previous rounds be $i_1,i_2\ldots i_{k-1}$ and the global normalized state be $\phi^{i_{k-1},i_{k-2}\ldots i_1}_{RBACE_AE_B}$. Alice performs the projective measurement $\M^{i_{k-1},i_{k-2}\ldots i_2,i_1}=\{M^{1,i_{k-1},i_{k-2}\ldots i_2,i_1}_{ACE_A},M^{2,i_{k-1},i_{k-2}\ldots i_2,i_1}_{ACE_A}\ldots\}$ and obtains outcome $i_k$ with probability $p_{i_k|i_{k-1},i_{k-2}\ldots i_2,i_1}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}{\ensuremath{ \mathrm{Tr} }}(M^{i_k,i_{k-1},i_{k-2}\ldots i_2,i_1}_{ACE_A}\phi^{i_{k-1},i_{k-2}\ldots i_1}_{ACE_A})$. Let the global normalized state after outcome $i_k$ be $\phi^{i_k,i_{k-1},i_{k-2}\ldots i_1}_{RBACE_BE_A}$. Alice sends the outcome $i_k$ to Bob. - Consider an even round $2<k\leq r$. Let the measurement outcomes in previous rounds be $i_1,i_2\ldots i_{k-1}$ and the global normalized state be $\phi^{i_{k-1},i_{k-2}\ldots i_1}_{RBACE_AE_B}$. Bob performs the measurement $$\M^{i_{k-1},i_{k-2}\ldots i_2,i_1}=\{M^{1,i_{k-1},i_{k-2}\ldots i_2,i_1}_{BE_B},M^{2,i_{k-1},i_{k-2}\ldots i_2,i_1}_{BE_B}\ldots\}$$ and obtains outcome $i_k$ with probability $$p_{i_k|i_{k-1},i_{k-2}\ldots i_2,i_1}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}{\ensuremath{ \mathrm{Tr} }}(M^{i_k,i_{k-1},i_{k-2}\ldots i_2,i_1}_{BE_B}\phi^{i_{k-1},i_{k-2}\ldots i_1}_{BE_B}).$$ Let the global normalized state after outcome $i_k$ be $\phi^{i_k,i_{k-1},i_{k-2}\ldots i_1}_{RBACE_BE_A}$. Bob sends the outcome $i_k$ to Alice.
- After receiving message $i_r$ from Alice at the end of round $r$, Bob applies a unitary $U^b_{i_r,i_{r-1}\ldots i_1}:BE_B\rightarrow BC_0T_B$ such that $E_B\equiv C_0T_B$ and $C_0\equiv C$. Alice applies a unitary $U^a_{i_r,i_{r-1}\ldots i_1}:ACE_A\rightarrow ACE_A$. Let $U_{i_r,i_{r-1}\ldots i_1}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}U^a_{i_r,i_{r-1}\ldots i_1}\otimes U^b_{i_r,i_{r-1}\ldots i_1}$. Define $${\ensuremath{ \left| \tau^{i_r,i_{r-1}\ldots i_1} \right\rangle }}_{RBACC_0T_BE_A}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}U_{i_r,i_{r-1}\ldots i_1}{\ensuremath{ \left| \phi^{i_r,i_{r-1}\ldots i_1} \right\rangle }}_{RBACE_BE_A}.$$ - For every $k\leq r$, define $$p_{i_1,i_2\ldots i_k}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}p_{i_1}\cdot p_{i_2|i_1}\cdot p_{i_3|i_2,i_1}\ldots p_{i_k|i_{k-1},i_{k-2}\ldots i_1}.$$ The joint state in registers $RBC_0A$, after Alice and Bob’s final unitaries and averaged over all messages, is $\Psi'_{RBC_0A}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}\sum_{i_r,i_{r-1}\ldots i_1}p_{i_1,i_2\ldots i_r}\tau^{i_r,i_{r-1}\ldots i_1}_{RBC_0A}$. It satisfies $\P(\Psi'_{RBC_0A},\Psi_{RBC_0A})\leq {\varepsilon}$. The expected communication cost is as follows. \[expcost\] The expected communication cost of $\mathcal{P}$ is $$\sum_{i_1,i_2\ldots i_r}p_{i_1,i_2\ldots i_r}\log(i_1\cdot i_2\ldots i_r)$$ The expected communication cost is the expected length of the messages over all measurement outcomes. It can be evaluated as $$\sum_{i_1}p_{i_1}\log(i_1) + \sum_{i_1,i_2}p_{i_1}p_{i_2|i_1}\log(i_2)+\ldots \sum_{i_1,i_2\ldots i_r}p_{i_1,i_2\ldots i_{r-1}}p_{i_r|i_{r-1},i_{r-2}\ldots i_1}\log(i_r)$$$$= \sum_{i_1,i_2\ldots i_r}p_{i_1,i_2\ldots i_r}(\log(i_1)+\log(i_2)+\ldots \log(i_r)).$$ This allows us to define \[def:commweight\] The **communication weight** of a probability distribution $\{p_1,p_2\ldots p_m\}$ is $\sum_{i=1}^m p_i\log(i)$. The following lemma gives a coherent representation of the above protocol.
\[cohlemma\] For every $k\leq r$, let $\O_k$ represent the set of all tuples $(i_1,i_2\ldots i_k)$ which satisfy: $\{i_1,i_2\ldots i_k\}$ is a sequence of measurement outcomes that occurs with non-zero probability up to the $k$-th round of $\mathcal{P}$. There exist registers $M_1,M_2\ldots M_r$ and isometries $$\{U_{i_{k-1},i_{k-2}\ldots i_2,i_1}: ACE_A\rightarrow ACE_AM_k| k >1, k \text{ odd }, (i_1,i_2\ldots i_{k-1})\in \O_{k-1}\},$$ $$\{U_{i_{k-1},i_{k-2}\ldots i_2,i_1}: BE_B\rightarrow BE_BM_k| k \text{ even }, (i_1,i_2\ldots i_{k-1})\in \O_{k-1}\}$$ and $U: ACE_A\rightarrow ACE_AM_1$, such that $${\ensuremath{ \left| \Psi \right\rangle }}_{RBCA}{\ensuremath{ \left| \theta \right\rangle }}_{E_AE_B} = U^{\dagger}\sum_{i_1,i_2\ldots i_r}\sqrt{p_{i_1,i_2\ldots i_r}}U^{\dagger}_{ i_1}U^{\dagger}_{ i_2,i_1}\ldots U^{\dagger}_{i_r,i_{r-1}\ldots i_1}{\ensuremath{ \left| \tau^{i_r,i_{r-1}\ldots i_1} \right\rangle }}_{RBCAC_0T_BE_A}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1}.$$ Fix an odd $k>1$. Let the messages prior to the $k$-th round be $(i_1,i_2\ldots i_{k-1})$. As defined in protocol $\mathcal{P}$, the global quantum state before the $k$-th round is $\phi^{i_{k-1},i_{k-2}\ldots i_1}_{RBCAE_AE_B}$.
Alice performs the measurement $$\{M^{1,i_{k-1},i_{k-2}\ldots i_2,i_1}_{ACE_A},M^{2,i_{k-1},i_{k-2}\ldots i_2,i_1}_{ACE_A}\ldots\}.$$ This leads to a *convex-split* (introduced in [@jain14]): $$\begin{aligned} \label{roundconvsplit} \phi^{i_{k-1},i_{k-2}\ldots i_1}_{RBE_B} &=& \sum_{i_k} {\ensuremath{ \mathrm{Tr} }}_{ACE_A}(M^{i_k,i_{k-1},i_{k-2}\ldots i_2,i_1}_{ACE_A}\phi^{i_{k-1},i_{k-2}\ldots i_1}_{RBCAE_BE_A}) \nonumber\\&=& \sum_{i_k} p_{i_k|i_{k-1},i_{k-2}\ldots i_2,i_1}\frac{{\ensuremath{ \mathrm{Tr} }}_{ACE_A}(M^{i_k,i_{k-1},i_{k-2}\ldots i_2,i_1}_{ACE_A}\phi^{i_{k-1},i_{k-2}\ldots i_1}_{RBCAE_BE_A}M^{i_k,i_{k-1},i_{k-2}\ldots i_2,i_1}_{ACE_A})}{p_{i_k|i_{k-1},i_{k-2}\ldots i_2,i_1}}\nonumber\\&=& \sum_{i_k} p_{i_k|i_{k-1},i_{k-2}\ldots i_2,i_1}\phi^{i_k,i_{k-1},i_{k-2}\ldots i_2,i_1}_{RBE_B}\end{aligned}$$ A purification of $\phi^{i_{k-1},i_{k-2}\ldots i_1}_{RBE_B}$ on registers $RBCAE_BE_A$ is $\phi^{i_{k-1},i_{k-2}\ldots i_1}_{RBCAE_BE_A}$. Introduce a register $M_{k}$ (of sufficiently large dimension) and consider the following purification of $$\sum_{i_k} p_{i_k|i_{k-1},i_{k-2}\ldots i_2,i_1}\phi^{i_k,i_{k-1},i_{k-2}\ldots i_2,i_1}_{RBE_B}$$ on registers $RBCAE_BE_AM_k$: $$\sum_{i_k}\sqrt{p_{i_k|i_{k-1},i_{k-2}\ldots i_2,i_1}}{\ensuremath{ \left| \phi^{i_k,i_{k-1},i_{k-2}\ldots i_2,i_1} \right\rangle }}_{RBCAE_BE_A}{\ensuremath{ \left| i_k \right\rangle }}_{M_k}.$$ By Uhlmann’s theorem \[uhlmann\], there exists an isometry $U_{i_{k-1},i_{k-2}\ldots i_2,i_1}: ACE_A\rightarrow ACE_AM_k$ such that $$\label{aliceunitary} U_{i_{k-1},i_{k-2}\ldots i_2,i_1}{\ensuremath{ \left| \phi^{i_{k-1},i_{k-2}\ldots i_1} \right\rangle }}_{RBCAE_BE_A} = \sum_{i_k}\sqrt{p_{i_k|i_{k-1},i_{k-2}\ldots i_2,i_1}}{\ensuremath{ \left| \phi^{i_k,i_{k-1},i_{k-2}\ldots i_2,i_1} \right\rangle }}_{RBCAE_BE_A}{\ensuremath{ \left| i_k \right\rangle }}_{M_k}$$ For $k=1$, introduce register $M_1$ of sufficiently large dimension.
A similar argument implies that there exists an isometry $U: ACE_A\rightarrow ACE_AM_1$ such that $$\label{aliceunitary1} U{\ensuremath{ \left| \Psi \right\rangle }}_{RBACE_BE_A} = \sum_{i_1}\sqrt{p_{i_1}}{\ensuremath{ \left| \phi^{i_1} \right\rangle }}_{RBACE_BE_A}{\ensuremath{ \left| i_1 \right\rangle }}_{M_1}$$ For $k$ even, introduce a register $M_k$ of sufficiently large dimension. Again, by a similar argument, there exists an isometry $U_{i_{k-1},i_{k-2}\ldots i_2,i_1}: BE_B\rightarrow BE_BM_k$ such that $$\label{bobunitary} U_{i_{k-1},i_{k-2}\ldots i_2,i_1}{\ensuremath{ \left| \phi^{i_{k-1},i_{k-2}\ldots i_1} \right\rangle }}_{RBCAE_BE_A} = \sum_{i_k}\sqrt{p_{i_k|i_{k-1},i_{k-2}\ldots i_2,i_1}}{\ensuremath{ \left| \phi^{i_k,i_{k-1},i_{k-2}\ldots i_2,i_1} \right\rangle }}_{RBCAE_BE_A}{\ensuremath{ \left| i_k \right\rangle }}_{M_k}$$ Now, we recursively use equations \[aliceunitary\], \[aliceunitary1\] and \[bobunitary\]. Consider, $$\begin{aligned} &&{\ensuremath{ \left| \Psi \right\rangle }}_{RBCA}{\ensuremath{ \left| \theta \right\rangle }}_{E_AE_B} = U^{\dagger}\sum_{i_1}\sqrt{p_{i_1}}{\ensuremath{ \left| \phi^{i_1} \right\rangle }}_{RBCAE_BE_A}{\ensuremath{ \left| i_1 \right\rangle }}_{M_1} \\ &=& U^{\dagger}\sum_{i_1}\sqrt{p_{i_1}}U^{\dagger}_{i_1}\sum_{i_2}\sqrt{p_{i_2|i_1}}{\ensuremath{ \left| \phi^{i_2,i_1} \right\rangle }}_{RBCAE_BE_A}{\ensuremath{ \left| i_2 \right\rangle }}_{M_2}{\ensuremath{ \left| i_1 \right\rangle }}_{M_1}\\ &=& U^{\dagger}\sum_{i_1,i_2}\sqrt{p_{i_1,i_2}}U_{i_1}^{\dagger}{\ensuremath{ \left| \phi^{i_2,i_1} \right\rangle }}_{RBCAE_BE_A}{\ensuremath{ \left| i_2 \right\rangle }}_{M_2}{\ensuremath{ \left| i_1 \right\rangle }}_{M_1} \\&=& U^{\dagger}\sum_{i_1,i_2\ldots i_r}\sqrt{p_{i_1,i_2\ldots i_r}}U^{\dagger}_{i_1}U^{\dagger}_{i_2,i_1}\ldots U^{\dagger}_{i_r,i_{r-1}\ldots i_1}{\ensuremath{ \left| \tau^{i_r,i_{r-1}\ldots i_1} \right\rangle }}_{RBCAC_0T_BE_A}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1
\right\rangle }}_{M_1}\end{aligned}$$ Last equality follows by recursion. This completes the proof. \[shortunitaries\] We introduce the following useful definitions. - Let $k>1$ be odd. Isometry $U_k: ACE_AM_1M_2\ldots M_{k-1}\rightarrow ACE_AM_1M_2\ldots M_{k-1}M_k$, $$U_k {\ensuremath{ \stackrel{\mathrm{def}}{=} }}\sum_{i_1,i_2\ldots i_{k-1}} {{\ensuremath{ \left| i_1 \middle\rangle \middle\langle i_1 \right| }}}_{M_1}\otimes {{\ensuremath{ \left| i_2 \middle\rangle \middle\langle i_2 \right| }}}_{M_2}\otimes\ldots{{\ensuremath{ \left| i_{k-1} \middle\rangle \middle\langle i_{k-1} \right| }}}_{M_{k-1}}\otimes U_{i_{k-1},i_{k-2}\ldots i_2,i_1}.$$ - For $k$ even, Isometry $U_k: BE_BM_1M_2\ldots M_{k-1}\rightarrow BE_BM_1M_2\ldots M_{k-1}M_k$, $$U_k {\ensuremath{ \stackrel{\mathrm{def}}{=} }}\sum_{i_1,i_2\ldots i_{k-1}} {{\ensuremath{ \left| i_1 \middle\rangle \middle\langle i_1 \right| }}}_{M_1}\otimes {{\ensuremath{ \left| i_2 \middle\rangle \middle\langle i_2 \right| }}}_{M_2}\otimes\ldots{{\ensuremath{ \left| i_{k-1} \middle\rangle \middle\langle i_{k-1} \right| }}}_{M_{k-1}}\otimes U_{i_{k-1},i_{k-2}\ldots i_2,i_1}.$$ - Unitary $U^a_{r+1}: ACE_AM_1M_2\ldots M_r \rightarrow ACE_AM_1M_2\ldots M_r$, $$U^a_{r+1} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}\sum_{i_1,i_2\ldots i_r} {{\ensuremath{ \left| i_1 \middle\rangle \middle\langle i_1 \right| }}}_{M_1}\otimes {{\ensuremath{ \left| i_2 \middle\rangle \middle\langle i_2 \right| }}}_{M_2}\otimes\ldots{{\ensuremath{ \left| i_r \middle\rangle \middle\langle i_r \right| }}}_{M_r}\otimes U^a_{i_r,i_{r-1}\ldots i_1}.$$ - Unitary $U^b_{r+1}: BE_BM_1M_2\ldots M_r \rightarrow BC_0T_BM_1M_2\ldots M_r$, $$U^b_{r+1} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}\sum_{i_1,i_2\ldots i_r} {{\ensuremath{ \left| i_1 \middle\rangle \middle\langle i_1 \right| }}}_{M_1}\otimes {{\ensuremath{ \left| i_2 \middle\rangle \middle\langle i_2 \right| }}}_{M_2}\otimes\ldots{{\ensuremath{ \left| i_r \middle\rangle \middle\langle i_r \right| 
}}}_{M_r}\otimes U^b_{i_r,i_{r-1}\ldots i_1}.$$ - Unitary $U_{r+1}:ACE_ABE_BM_1M_2\ldots M_r \rightarrow ACE_ABC_0T_BM_1M_2\ldots M_r$, $$U_{r+1} {\ensuremath{ \stackrel{\mathrm{def}}{=} }}\sum_{i_1,i_2\ldots i_r} {{\ensuremath{ \left| i_1 \middle\rangle \middle\langle i_1 \right| }}}_{M_1}\otimes {{\ensuremath{ \left| i_2 \middle\rangle \middle\langle i_2 \right| }}}_{M_2}\otimes\ldots{{\ensuremath{ \left| i_r \middle\rangle \middle\langle i_r \right| }}}_{M_r}\otimes U_{i_r,i_{r-1}\ldots i_1}.$$ This leads to a more convenient representation of lemma \[cohlemma\]. \[cohequation\] It holds that $${\ensuremath{ \left| \Psi \right\rangle }}_{RBCA}{\ensuremath{ \left| \theta \right\rangle }}_{E_AE_B}=U^{\dagger}U_2^{\dagger}\ldots U_{r+1}^{\dagger} \sum_{i_1,i_2\ldots i_r}\sqrt{p_{i_1,i_2\ldots i_r}}{\ensuremath{ \left| \tau^{i_r,i_{r-1}\ldots i_1} \right\rangle }}_{RBCAC_0T_BE_A}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1}.$$ and $$\P(\Psi_{RBC_0A},\sum_{i_1,i_2\ldots i_r}p_{i_1,i_2\ldots i_r}\tau^{i_r,i_{r-1}\ldots i_1}_{RBC_0A})\leq {\varepsilon}.$$ The corollary follows immediately using Definition \[shortunitaries\] and lemma \[cohlemma\]. Lower bound on expected communication cost {#sec:lowerbound} ========================================== In this section, we obtain a lower bound on expected communication cost of quantum state redistribution and quantum state transfer, by considering a class of states defined below. Let register $R$ be composed of two registers $R_A,R'$, such that $R\equiv R_AR'$. Let $d_a$ be the dimension of registers $R_A$ and $A$. Let $d$ be the dimension of registers $R',C$ and $B$. Consider, \[staterediststate\] ${\ensuremath{ \left| \Psi \right\rangle }}_{RBCA}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}\frac{1}{\sqrt{d_a}}\sum_{a=1}^{d_a}{\ensuremath{ \left| a \right\rangle }}_{R_A}{\ensuremath{ \left| a \right\rangle }}_A{\ensuremath{ \left| \psi^a \right\rangle }}_{R'BC}$. 
Let ${\ensuremath{ \left| \psi^a \right\rangle }}_{R'BC}=\sum_{j=1}^d\sqrt{e_j}{\ensuremath{ \left| u_j \right\rangle }}_{R'}{\ensuremath{ \left| v_j(a) \right\rangle }}_B{\ensuremath{ \left| w_j(a) \right\rangle }}_C$ where $e_1\geq e_2\geq \ldots e_d>0$, $\sum_{i=1}^d e_i = 1$ and $\{{\ensuremath{ \left| u_1 \right\rangle }},\ldots{\ensuremath{ \left| u_d \right\rangle }}\}$, $\{{\ensuremath{ \left| v_1(a) \right\rangle }},\ldots{\ensuremath{ \left| v_d(a) \right\rangle }}\}$, $\{{\ensuremath{ \left| w_1(a) \right\rangle }},\ldots{\ensuremath{ \left| w_d(a) \right\rangle }}\}$ form orthonormal bases (the second and third bases may depend arbitrarily on $a$) in their respective Hilbert spaces. For quantum state transfer, we consider a pure state $\tilde{\Psi}_{RC}$ with Schmidt decomposition $\sum_{j=1}^d \sqrt{e_j}{\ensuremath{ \left| u_j \right\rangle }}_R{\ensuremath{ \left| w_j \right\rangle }}_C$. Given the state $\psi^a_{R'BC}$ from definition \[staterediststate\], we define a ‘GHZ state’ corresponding to it: ${\ensuremath{ \left| \omega^a \right\rangle }}_{R'BC}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}\frac{1}{\sqrt{d}}\sum_{j=1}^d{\ensuremath{ \left| u_j \right\rangle }}_{R'}{\ensuremath{ \left| v_j(a) \right\rangle }}_B{\ensuremath{ \left| w_j(a) \right\rangle }}_C$. Using this, we define $\omega_{RBCA}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}\frac{1}{\sqrt{d_a}}\sum_{a=1}^{d_a}{\ensuremath{ \left| a \right\rangle }}_{R_A}{\ensuremath{ \left| a \right\rangle }}_A{\ensuremath{ \left| \omega^a \right\rangle }}_{R'BC}$. Similarly, given the bipartite state $\tilde{\Psi}_{RC}$, we define a maximally entangled state $\omega'_{RC}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}\frac{1}{\sqrt{d}}\sum_{j=1}^d{\ensuremath{ \left| u_j \right\rangle }}_R{\ensuremath{ \left| w_j \right\rangle }}_C$. The following two relations are easy to verify.
$$\label{psiandomega} {\ensuremath{ \left| \omega \right\rangle }}_{RBCA} = \frac{1}{\sqrt{d_a\cdot d}}\Psi_R^{-\frac{1}{2}}{\ensuremath{ \left| \Psi \right\rangle }}_{RBCA} \text{ and } {\ensuremath{ \left| \omega' \right\rangle }}_{RC} = \frac{1}{\sqrt{d}}(\tilde{\Psi}_R)^{-\frac{1}{2}}{\ensuremath{ \left| \tilde{\Psi} \right\rangle }}_{RC}$$ As noted in section \[sec:cohtrans\], the protocol $\mathcal{P}$ achieves quantum state redistribution of $\Psi_{RBCA}$ with error ${\varepsilon}$ and expected communication cost $C$. The following lemma is a refined form of corollary \[cohequation\], and is also applicable to states $\Psi_{RBCA}$ not of the form given in definition \[staterediststate\]. \[goodcoh\] There exists a probability distribution $\{p'_{i_1,i_2\ldots i_r}\}$ and pure states $\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}$ such that $$\P(\Psi_{RBCA}\otimes\theta_{E_AE_B},U^{\dagger}U_2^{\dagger}\ldots U_{r+1}^{\dagger}\sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}}\Psi_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1})\leq 2\sqrt{{\varepsilon}},$$ and the communication weight of $p'_{i_1,i_2\ldots i_r}$ is at most $\frac{C}{1-{\varepsilon}}$. Let $\B$ be the set of tuples $(i_1,i_2\ldots i_r)$ for which $\F^2(\Psi_{RBC_0A},\tau^{i_r,i_{r-1}\ldots i_1}_{RBC_0A})\leq 1-{\varepsilon}$. Let $\G$ be the remaining set of tuples. From corollary \[cohequation\] and the purity of $\Psi_{RBC_0A}$, it holds that $$\sum_{i_1,i_2\ldots i_r}p_{i_1,i_2\ldots i_r}\F^2(\Psi_{RBC_0A},\tau^{i_r,i_{r-1}\ldots i_1}_{RBC_0A})\geq 1-{\varepsilon}^2.$$ Thus, $$(1-{\varepsilon})\sum_{(i_1,i_2\ldots i_r)\in\B}p_{i_1,i_2\ldots i_r}+ \sum_{(i_1,i_2\ldots i_r)\in\G}p_{i_1,i_2\ldots i_r} \geq 1-{\varepsilon}^2,$$ which implies $\sum_{(i_1,i_2\ldots i_r)\in\B}p_{i_1,i_2\ldots i_r}\leq {\varepsilon}$. Thus we have $\sum_{(i_1,i_2\ldots i_r)\in \G}p_{i_1,i_2\ldots i_r}\geq 1-{\varepsilon}$.
Define $p'_{i_1,i_2\ldots i_r}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}\frac{p_{i_1,i_2\ldots i_r}}{\sum_{{i_1,i_2\ldots i_r}\in \G} p_{i_1,i_2\ldots i_r}}$, if $(i_1,i_2\ldots i_r) \in \G$ and $p'_{i_1,i_2\ldots i_r}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}0$ if $(i_1,i_2\ldots i_r)\in \B $. For all $(i_1,i_2\ldots i_r)\in \G$, $\F^2(\Psi_{RBC_0A},\tau^{i_r,i_{r-1}\ldots i_1}_{RBC_0A})\geq 1-{\varepsilon}$. Thus by Fact \[uhlmann\], there exists a pure state $\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}$ such that $$\label{goodproperty} \F^2(\Psi_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B},\tau^{i_r,i_{r-1}\ldots i_1}_{RBCAC_0T_BE_A})\geq 1-{\varepsilon}$$ Consider, $$\begin{aligned} &&\P(\sum_{i_1,i_2\ldots i_r}\sqrt{p_{i_1,i_2\ldots i_r}}\tau^{i_r,i_{r-1}\ldots i_1}_{RBCAC_0T_BE_A}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1},\sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}}\tau^{i_r,i_{r-1}\ldots i_1}_{RBCAC_0T_BE_A}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1})\nonumber \\&=& \sqrt{1-(\sum_{i_1,i_2\ldots i_r}\sqrt{p_{i_1,i_2\ldots i_r}p'_{i_1,i_2\ldots i_r}})^2} = \sqrt{1-(\sum_{i_1,i_2\ldots i_r\in \G}p_{i_1,i_2\ldots i_r})}\leq \sqrt{{\varepsilon}} \end{aligned}$$ and $$\begin{aligned} &&\P(\sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}}\tau^{i_r,i_{r-1}\ldots i_1}_{RBCAC_0T_BE_A}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1}, \sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}}\Psi_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1}) \\&=& \sqrt{1-(\sum_{i_1,i_2\ldots i_r}p'_{i_1,i_2\ldots i_r}\F(\tau^{i_r,i_{r-1}\ldots i_1}_{RBCAC_0T_BE_A},\Psi_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}))^2} \leq \sqrt{{\varepsilon}} \quad (\text{Equation 
\ref{goodproperty}})\end{aligned}$$ These together imply, using triangle inequality for purified distance (Fact \[fact:trianglepurified\]), $$\begin{aligned} &&\P(\sum_{i_1,i_2\ldots i_r}\sqrt{p_{i_1,i_2\ldots i_r}}\tau^{i_r,i_{r-1}\ldots i_1}_{RBCAC_0T_BE_A}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1},\sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}}\Psi_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1})\\&\leq& 2\sqrt{{\varepsilon}} \end{aligned}$$ Thus, from corollary \[cohequation\], we have $$\P(\Psi_{RBCA}\otimes\theta_{E_AE_B},U^{\dagger}U_2^{\dagger}\ldots U_{r+1}^{\dagger}\sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}}\Psi_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1})\leq 2\sqrt{{\varepsilon}}.$$ The communication weight of $p'_{i_1,i_2\ldots i_r}$ is $$\begin{aligned} \sum_{i_1,i_2\ldots i_r} p'_{i_1,i_2\ldots i_r}\log(i_1\cdot i_2\ldots i_r) &\leq& \frac{1}{1-{\varepsilon}}\sum_{{i_1,i_2\ldots i_r}\in \G}p_{i_1,i_2\ldots i_r}\log(i_1\cdot i_2\ldots i_r) \\ &\leq& \frac{1}{1-{\varepsilon}}\sum_{i_1,i_2\ldots i_r}p_{i_1,i_2\ldots i_r}\log(i_1\cdot i_2\ldots i_r) =\frac{C}{1-{\varepsilon}}. \end{aligned}$$ This completes the proof. We now use Lemma \[goodcoh\] to prove the following for the state $\omega_{RBCA}$. Recall that $e_d$ is the smallest eigenvalue of $\psi^a_{R'}$, independent of $a$. 
\[convepr\] It holds that $$\P(\omega_{RBCA}\otimes\theta_{E_AE_B},U^{\dagger}U_2^{\dagger}\ldots U_{r+1}^{\dagger}\sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}}\omega_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1})\leq \sqrt{\frac{8{\varepsilon}}{e_d\cdot d}}.$$ The communication weight of the distribution $p'_{i_1,i_2\ldots i_r}$ is at most $\frac{C}{1-{\varepsilon}}$. Define a completely positive map $\tilde{\E}:R\rightarrow R$ as $ \tilde{\E}(\rho){\ensuremath{ \stackrel{\mathrm{def}}{=} }}\frac{e_d}{d_a}(\Psi^{-\frac{1}{2}}_R\rho\Psi^{-\frac{1}{2}}_R)$, which is trace non-increasing since $\Psi^{-1}_R \leq \frac{d_a}{e_d}\text{I}_R$. Using equation \[psiandomega\], observe that $$\tilde{\E}(\Psi_{RBCA}) = e_d\cdot d\cdot\omega_{RBCA}.$$ Consider, $$\begin{aligned} 2\sqrt{{\varepsilon}} &\geq& \P(\Psi_{RBCA}\otimes\theta_{E_AE_B},U^{\dagger}U_2^{\dagger}\ldots U_{r+1}^{\dagger}\sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}}\Psi_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1})\\ &&\text{(Lemma \ref{goodcoh})}\\&\geq& \P(\tilde{\E}(\Psi_{RBCA})\otimes\theta_{E_AE_B},U^{\dagger}U_2^{\dagger}\ldots U_{r+1}^{\dagger}\sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}}\tilde{\E}(\Psi_{RBC_0A})\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1})\\ && (\text{Fact \ref{fact:monotonequantumoperation}}) \\ &=& \P(d\cdot e_d\cdot\omega_{RBCA}\otimes\theta_{E_AE_B},d\cdot e_d\cdot U^{\dagger}U_2^{\dagger}\ldots U_{r+1}^{\dagger}\sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}}\omega_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1})\end{aligned}$$ Using Fact
\[scalarpurified\], we thus obtain $$\P(\omega_{RBCA}\otimes\theta_{E_AE_B}, U^{\dagger}U_2^{\dagger}\ldots U_{r+1}^{\dagger}\sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}}\omega_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1})\leq \sqrt{\frac{8{\varepsilon}}{d\cdot e_d}}.$$ Furthermore, there is no change in communication weight. This completes the proof. Similarly, for quantum state transfer, we have the following corollary. \[conveprmerge\] It holds that $$\P(\omega'_{RC}\otimes\theta_{E_AE_B},U^{\dagger}U_2^{\dagger}\ldots U_{r+1}^{\dagger}\sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}}\omega'_{RC_0}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1})\leq \sqrt{\frac{8{\varepsilon}}{e_d\cdot d}}.$$ The communication weight of the distribution $p'_{i_1,i_2\ldots i_r}$ is at most $\frac{C}{1-{\varepsilon}}$. Now we exhibit an interactive entanglement-assisted communication protocol for state redistribution of $\omega_{RBCA}$ with a suitably upper-bounded worst-case communication cost. \[exptoworst\] Fix an error parameter $\mu>0$. There exists an entanglement-assisted $r$-round quantum communication protocol for state redistribution of $\omega_{RBCA}$ with worst-case quantum communication cost at most $\frac{2C}{\mu(1-{\varepsilon})}$ and error at most $\sqrt{\frac{8{\varepsilon}}{e_d\cdot d}}+\sqrt{\mu}$.
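The proof below rests on a Markov-type truncation: when the communication weight (the expected total message length) of $p'$ is at most $\frac{C}{1-{\varepsilon}}$, the tuples whose total length exceeds $\frac{C}{(1-{\varepsilon})\mu}$ carry probability mass less than $\mu$. A minimal numerical sketch of this step, with a purely illustrative random distribution (the numbers are not derived from any actual protocol):

```python
import random

random.seed(0)
n = 1000
# Total message length L = log(i_1 * i_2 ... i_r), in bits, for each tuple.
lengths = [random.uniform(1.0, 40.0) for _ in range(n)]
weights = [random.random() for _ in range(n)]
total = sum(weights)
probs = [w / total for w in weights]

# Communication weight (expected communication cost) of the distribution.
C = sum(p * L for p, L in zip(probs, lengths))

mu = 0.25
threshold = C / mu  # tuples longer than C/mu form the "bad" set B'
bad_mass = sum(p for p, L in zip(probs, lengths) if L > threshold)

# Markov's inequality: the bad set carries probability mass less than mu.
assert bad_mass < mu
print(f"expected cost C = {C:.2f}, bad mass = {bad_mass:.4f} < mu = {mu}")
```

Discarding these long tuples is what turns the expected-cost guarantee into a worst-case one, at the price of the extra error term $\sqrt{\mu}$.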
From lemma \[convepr\], we have that $$\P(\omega_{RBCA}\otimes\theta_{E_AE_B},U^{\dagger}U_2^{\dagger}\ldots U_{r+1}^{\dagger}\sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}}\omega_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1})\leq \sqrt{\frac{8{\varepsilon}}{e_d\cdot d}},$$ and $$\sum_{i_1,i_2\ldots i_r}p'_{i_1,i_2\ldots i_r}\log(i_1\cdot i_2\ldots i_r)\leq \frac{C}{1-{\varepsilon}}.$$ Consider the set of tuples $(i_1,i_2\ldots i_r)$ which satisfy $i_1\cdot i_2\ldots i_r>2^{\frac{C}{(1-{\varepsilon})\mu}}$. Let this set be $\B'$, and let $\G'$ be the set of the remaining tuples. Then $$\frac{C}{(1-{\varepsilon})} > \sum_{i_1,i_2\ldots i_r\in \B'}p'_{i_1,i_2\ldots i_r}\log(i_1\cdot i_2\ldots i_r) > \frac{C}{(1-{\varepsilon})\mu}\sum_{i_1,i_2\ldots i_r\in \B'}p'_{i_1,i_2\ldots i_r}.$$ This implies $\sum_{i_1,i_2\ldots i_r\in \B'}p'_{i_1,i_2\ldots i_r} < \mu$. Define a new probability distribution $q_{i_1,i_2\ldots i_r}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}\frac{p'_{i_1,i_2\ldots i_r}}{\sum_{(i_1,i_2\ldots i_r)\in \G'}p'_{i_1,i_2\ldots i_r}}$ for all $(i_1,i_2\ldots i_r)\in \G'$ and $q_{i_1,i_2\ldots i_r}=0$ for all $(i_1,i_2\ldots i_r)\in \B'$.
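The computation that follows can be sanity-checked numerically: renormalizing a distribution onto a set of mass at least $1-\mu$ moves the corresponding square-root superposition by purified distance at most $\sqrt{\mu}$, since $\sum_{i}\sqrt{p'_i q_i}=\sqrt{\sum_{i\in\G'}p'_i}$. A toy check with an illustrative random distribution (not from any actual protocol):

```python
import math
import random

random.seed(1)
n = 50
p = [random.random() for _ in range(n)]
s = sum(p)
p = [x / s for x in p]  # toy distribution playing the role of p'

mu = 0.2
# Build a "good" set G' greedily until its mass is at least 1 - mu.
order = sorted(range(n), key=lambda i: -p[i])
good, mass = set(), 0.0
for i in order:
    good.add(i)
    mass += p[i]
    if mass >= 1 - mu:
        break

# q renormalizes p onto the good set and vanishes outside it.
q = [p[i] / mass if i in good else 0.0 for i in range(n)]

# Fidelity between sum_i sqrt(p_i)|i> and sum_i sqrt(q_i)|i> equals sqrt(mass),
# so the purified distance is sqrt(1 - mass) <= sqrt(mu).
fidelity = sum(math.sqrt(p[i] * q[i]) for i in range(n))
P = math.sqrt(max(0.0, 1.0 - fidelity ** 2))

assert abs(fidelity - math.sqrt(mass)) < 1e-9
assert P <= math.sqrt(mu) + 1e-9
print(f"mass(G') = {mass:.4f}, P = {P:.4f} <= sqrt(mu) = {math.sqrt(mu):.4f}")
```

The same identity, applied to the message registers $M_1\ldots M_r$, is exactly the bound derived analytically in the proof.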
Consider, $$\P(\sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}}\omega_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1},\sum_{i_1,i_2\ldots i_r}\sqrt{q_{i_1,i_2\ldots i_r}}\omega_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1})$$ $$= \sqrt{1-(\sum_{i_1,i_2\ldots i_r}\sqrt{p'_{i_1,i_2\ldots i_r}q_{i_1,i_2\ldots i_r}})^2} = \sqrt{1-\sum_{(i_1,i_2\ldots i_r) \in \G'}p'_{i_1,i_2\ldots i_r}}\leq \sqrt{\mu}.$$ Thus, the triangle inequality for purified distance (Fact \[fact:trianglepurified\]) implies $$\begin{aligned} &&\P(\omega_{RBCA}\otimes\theta_{E_AE_B},U^{\dagger}U_2^{\dagger}\ldots U_{r+1}^{\dagger}\sum_{i_1,i_2\ldots i_r}\sqrt{q_{i_1,i_2\ldots i_r}}\omega_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1})\\&\leq& \sqrt{\frac{8{\varepsilon}}{e_d\cdot d}}+\sqrt{\mu}\end{aligned}$$ Defining $\pi_{RBCAE_AE_B}{\ensuremath{ \stackrel{\mathrm{def}}{=} }}U^{\dagger}U_2^{\dagger}\ldots U_{r+1}^{\dagger}\sum_{i_1,i_2\ldots i_r\in \G'}\sqrt{q_{i_1,i_2\ldots i_r}}\omega_{RBC_0A}\otimes\kappa^{i_r,i_{r-1}\ldots i_1}_{CE_AT_B}{\ensuremath{ \left| i_r \right\rangle }}_{M_r}\ldots{\ensuremath{ \left| i_1 \right\rangle }}_{M_1}$, we have $$\label{eq:closegoodstate} \P(\omega_{RBCA}\otimes\theta_{E_AE_B},\pi_{RBCAE_AE_B})\leq \sqrt{\frac{8{\varepsilon}}{e_d\cdot d}}+\sqrt{\mu}$$ Let $\T$ be the set of all tuples $(i_1,i_2\ldots i_k)$ (with $k\leq r$) that satisfy the following property: there exists a set of positive integers $\{i_{k+1},i_{k+2}\ldots i_r\}$ such that $(i_1,i_2\ldots i_k,i_{k+1}\ldots i_r)\in \G'$. Consider the following protocol $\mathcal{P'}$. **Input:** A quantum state in registers $RBCAE_AE_B$.
- Alice applies the isometry $U:ACE_A\rightarrow ACE_AM_1$ (definition \[shortunitaries\]). She introduces a register $M'_1\equiv M_1$ in the state ${\ensuremath{ \left| 0 \right\rangle }}_{M'_1}$ and performs the following unitary $W_1: M_1M'_1\rightarrow M_1M'_1$: $$W_1{\ensuremath{ \left| i \right\rangle }}_{M_1}{\ensuremath{ \left| 0 \right\rangle }}_{M'_1}={\ensuremath{ \left| i \right\rangle }}_{M_1}{\ensuremath{ \left| i \right\rangle }}_{M'_1} \quad \text{if } (i)\in \T \quad \text{and}\quad W_1{\ensuremath{ \left| i \right\rangle }}_{M_1}{\ensuremath{ \left| 0 \right\rangle }}_{M'_1}={\ensuremath{ \left| i \right\rangle }}_{M_1}{\ensuremath{ \left| 0 \right\rangle }}_{M'_1} \quad \text{if } (i)\notin \T.$$ She sends $M'_1$ to Bob. - Bob introduces a register $M'_2\equiv M_2$ in the state ${\ensuremath{ \left| 0 \right\rangle }}_{M'_2}$. If he receives ${\ensuremath{ \left| 0 \right\rangle }}_{M'_1}$ from Alice, he performs no operation. Else he applies the isometry $U_2: BE_BM'_1\rightarrow BE_BM'_1M_2$ and then performs the following unitary $W_2: M'_1M_2M'_2\rightarrow M'_1M_2M'_2$: $$W_2{\ensuremath{ \left| i \right\rangle }}_{M'_1}{\ensuremath{ \left| j \right\rangle }}_{M_2}{\ensuremath{ \left| 0 \right\rangle }}_{M'_2}={\ensuremath{ \left| i \right\rangle }}_{M'_1}{\ensuremath{ \left| j \right\rangle }}_{M_2}{\ensuremath{ \left| j \right\rangle }}_{M'_2} \quad \text{if } (i,j)\in \T$$ and $$W_2{\ensuremath{ \left| i \right\rangle }}_{M'_1}{\ensuremath{ \left| j \right\rangle }}_{M_2}{\ensuremath{ \left| 0 \right\rangle }}_{M'_2}={\ensuremath{ \left| i \right\rangle }}_{M'_1}{\ensuremath{ \left| j \right\rangle }}_{M_2}{\ensuremath{ \left| 0 \right\rangle }}_{M'_2} \quad \text{if }(i,j)\notin \T.$$ He sends $M'_2$ to Alice. - For every odd round $k>1$, Alice introduces a register $M'_k\equiv M_k$ in the state ${\ensuremath{ \left| 0 \right\rangle }}_{M'_k}$.
If she receives ${\ensuremath{ \left| 0 \right\rangle }}_{M'_{k-1}}$ from Bob, she performs no further operation. Else, she applies the isometry $$U_k: ACE_AM_1M'_2M_3\ldots M'_{k-1}\rightarrow ACE_AM_1M'_2M_3\ldots M'_{k-1}M_k$$ and performs the following unitary $W_k: M_1M'_2\ldots M'_{k-1}M_kM'_k\rightarrow M_1M'_2\ldots M'_{k-1}M_kM'_k$: $$W_k{\ensuremath{ \left| i_1 \right\rangle }}_{M_1}{\ensuremath{ \left| i_2 \right\rangle }}_{M'_2}\ldots{\ensuremath{ \left| i_k \right\rangle }}_{M_k}{\ensuremath{ \left| 0 \right\rangle }}_{M'_k}= {\ensuremath{ \left| i_1 \right\rangle }}_{M_1}{\ensuremath{ \left| i_2 \right\rangle }}_{M'_2}\ldots{\ensuremath{ \left| i_k \right\rangle }}_{M_k}{\ensuremath{ \left| i_k \right\rangle }}_{M'_k} \quad \text{if } (i_1,i_2\ldots i_k)\in \T$$ and $$W_k{\ensuremath{ \left| i_1 \right\rangle }}_{M_1}{\ensuremath{ \left| i_2 \right\rangle }}_{M'_2}\ldots{\ensuremath{ \left| i_k \right\rangle }}_{M_k}{\ensuremath{ \left| 0 \right\rangle }}_{M'_k}= {\ensuremath{ \left| i_1 \right\rangle }}_{M_1}{\ensuremath{ \left| i_2 \right\rangle }}_{M'_2}\ldots{\ensuremath{ \left| i_k \right\rangle }}_{M_k}{\ensuremath{ \left| 0 \right\rangle }}_{M'_k} \quad \text{if }(i_1,i_2\ldots i_k)\notin \T.$$ She sends $M'_k$ to Bob. - For every even round $k>2$, Bob introduces a register $M'_k\equiv M_k$ in the state ${\ensuremath{ \left| 0 \right\rangle }}_{M'_k}$. If he receives ${\ensuremath{ \left| 0 \right\rangle }}_{M'_{k-1}}$ from Alice, he performs no further operation.
Else, he applies the isometry $U_k: BE_BM'_1M_2M'_3\ldots M'_{k-1}\rightarrow BE_BM'_1M_2M'_3\ldots M'_{k-1}M_k$ and performs the following unitary $W_k: M'_1M_2\ldots M'_{k-1}M_kM'_k\rightarrow M'_1M_2\ldots M'_{k-1}M_kM'_k$: $$W_k{\ensuremath{ \left| i_1 \right\rangle }}_{M'_1}{\ensuremath{ \left| i_2 \right\rangle }}_{M_2}\ldots{\ensuremath{ \left| i_k \right\rangle }}_{M_k}{\ensuremath{ \left| 0 \right\rangle }}_{M'_k}= {\ensuremath{ \left| i_1 \right\rangle }}_{M'_1}{\ensuremath{ \left| i_2 \right\rangle }}_{M_2}\ldots{\ensuremath{ \left| i_k \right\rangle }}_{M_k}{\ensuremath{ \left| i_k \right\rangle }}_{M'_k} \quad \text{if } (i_1,i_2\ldots i_k)\in \T$$ and $$W_k{\ensuremath{ \left| i_1 \right\rangle }}_{M'_1}{\ensuremath{ \left| i_2 \right\rangle }}_{M_2}\ldots{\ensuremath{ \left| i_k \right\rangle }}_{M_k}{\ensuremath{ \left| 0 \right\rangle }}_{M'_k}= {\ensuremath{ \left| i_1 \right\rangle }}_{M'_1}{\ensuremath{ \left| i_2 \right\rangle }}_{M_2}\ldots{\ensuremath{ \left| i_k \right\rangle }}_{M_k}{\ensuremath{ \left| 0 \right\rangle }}_{M'_k} \quad \text{if }(i_1,i_2\ldots i_k)\notin \T.$$ He sends $M'_k$ to Alice. - After round $r$, if Bob receives ${\ensuremath{ \left| 0 \right\rangle }}_{M'_r}$ from Alice, he performs no further operation. Else he applies the unitary $U^b_{r+1}: BE_BM'_1M_2M'_3\ldots M'_r\rightarrow BC_0T_BM'_1M_2M'_3\ldots M'_r$. Alice applies the unitary $U^a_{r+1}: ACE_AM_1M'_2M_3\ldots M_r\rightarrow ACE_AM_1M'_2M_3\ldots M_r$. They trace out all of their registers except $A,B,C_0$. Let $\E: RBCAE_AE_B\rightarrow RBC_0A$ be the quantum map generated by $\mathcal{P'}$. For any $k$, if any of the parties receives the state ${\ensuremath{ \left| 0 \right\rangle }}_{M'_k}$, let this event be called *abort*. We show the following claim. It holds that $\E(\pi_{RBCAE_AE_B})=\omega_{RBC_0A}$. We argue that the protocol never aborts when acting on $\pi_{RBCAE_AE_B}$. Consider the first round of the protocol.
Define the projector $\Pi{\ensuremath{ \stackrel{\mathrm{def}}{=} }}\sum_{i: (i)\notin \T}{{\ensuremath{ \left| i \middle\rangle \middle\langle i \right| }}}_{M_1}$. From definition \[shortunitaries\], it is clear that the isometry $U^{\dagger}_2U^{\dagger}_3\ldots U^{\dagger}_{r+1}$ is of the form $\sum_i {{\ensuremath{ \left| i \middle\rangle \middle\langle i \right| }}}_{M_1}\otimes V_i$, for some set of isometries $\{V_i\}$. Thus, from the definition of $\pi_{RBCAE_AE_B}$ (in which the summation is only over the tuples $(i_1,i_2\ldots i_r)\in \G'$), it holds that $$\Pi U\pi_{RBCAE_AE_B}=0.$$ This implies that Bob does not receive the state ${\ensuremath{ \left| 0 \right\rangle }}_{M'_1}$ and hence he does not abort. The same argument applies to the other rounds, which implies that the protocol never aborts. Thus, the state at the end of the protocol is $${\ensuremath{ \mathrm{Tr} }}_{CE_AT_B}(U_{r+1}U_{r}\ldots U_2U\pi_{RBCAE_AE_B}) = \omega_{RBC_0A}.$$ Thus, from equation \[eq:closegoodstate\], it holds that $$\P(\E(\omega_{RBCA}\otimes\theta_{E_AE_B}),\omega_{RBC_0A})\leq \sqrt{\frac{8{\varepsilon}}{e_d\cdot d}}+\sqrt{\mu}.$$ The quantum communication cost of the protocol is at most $$\text{max}_{(i_1,i_2\ldots i_r)\in \G'}(\log((i_1+1)\cdot (i_2+1)\ldots (i_r+1)) \leq 2\cdot\text{max}_{(i_1,i_2\ldots i_r)\in \G'}(\log(i_1\cdot i_2\ldots i_r)\leq \frac{2C}{(1-{\varepsilon})\mu}.$$ This completes the proof. Similarly, we have the corresponding corollary for quantum state transfer. \[exptoworstmerge\] Fix an error parameter $\mu>0$. There exists an $r$-round communication protocol for state transfer of $\omega'_{RC}$ with worst case quantum communication cost at most $\frac{2C}{\mu(1-{\varepsilon})}$ and error at most $\sqrt{\frac{8{\varepsilon}}{e_d\cdot d}}+\sqrt{\mu}$. The next two lemmas obtain lower bounds on the worst case quantum communication cost of quantum state redistribution of $\omega_{RBCA}$ and quantum state transfer of $\omega'_{RC}$.
\[redistworstcase\] Let $d$, the local dimension of register $B$, be such that $d>2^{18}$. Then worst case quantum communication cost of any interactive entanglement assisted quantum state redistribution protocol of the state $\omega_{RBCA}$, with error $\delta < \frac{1}{6}$, is at least $\frac{1}{6}\log(d)$. Following lower bound on worst case quantum communication cost for interactive quantum state redistribution of the state $\omega_{RBCA}$, with error $\delta$, has been shown ([@Berta14], Section $5$, Proposition $2$): $$\frac{1}{2}({{\ensuremath{ \mathrm{I}^{\delta}_{\max} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}BC \right) }} }}}_{\omega}-{{\ensuremath{ \mathrm{I}_{\max} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}B \right) }} }}}_{\omega}).$$ Recall, from definition \[staterediststate\], that $\omega_{RBC}=\frac{1}{d_a}\sum_{a=1}^{d_a} {{\ensuremath{ \left| a \middle\rangle \middle\langle a \right| }}}_{R_A}\otimes\omega^a_{R'BC}$ is a *classical-quantum* state. Consider, $$\begin{aligned} {{\ensuremath{ \mathrm{I}^{\delta}_{\max} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}BC \right) }} }}}_{\omega} &\geq& \inf_{\rho_{RBC}\in {{\ensuremath{ \mathcal{B}^{\delta} {\: \! \!}{\ensuremath{ \left( \omega_{RBC} \right) }} }}}}{{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}BC \right) }} }}}_{\rho}\\ &\geq& \inf_{\rho_{R}\in {{\ensuremath{ \mathcal{B}^{\delta} {\: \! \!}{\ensuremath{ \left( \omega_{R} \right) }} }}}}S(\rho_R) + \inf_{\rho_{BC}\in {{\ensuremath{ \mathcal{B}^{\delta} {\: \! \!}{\ensuremath{ \left( \omega_{BC} \right) }} }}}}S(\rho'_{BC}) - \sup_{\rho_{RBC}\in {{\ensuremath{ \mathcal{B}^{\delta} {\: \! \!}{\ensuremath{ \left( \omega_{RBC} \right) }} }}}}S(\rho_{RBC}) \\ &\geq& {{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}BC \right) }} }}}_{\omega} - 3\delta\log(d) - 3 \quad (\text{Fact \ref{fact:fannes}})\\ &\geq& \frac{1}{d_a}\sum_{a}{{\ensuremath{ \mathrm{I} {\: \! 
\!}{\ensuremath{ \left( R' {\: \!}: {\: \!}BC \right) }} }}}_{\omega^a} - 3\delta\log(d)-3 \quad (\text{Fact \ref{cqmutinf}}) \\ &=& 2\log(d)-3\delta\log(d)-3.\end{aligned}$$ To bound ${{\ensuremath{ \mathrm{I}_{\max} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}B \right) }} }}}_{\omega}$, notice that $\omega_{RB}=\frac{1}{d\cdot d_a}\sum_{a=1}^{d_a}\sum_{j=1}^d{{\ensuremath{ \left| a \middle\rangle \middle\langle a \right| }}}_{R_A}\otimes{{\ensuremath{ \left| u_j \middle\rangle \middle\langle u_j \right| }}}_{R'}\otimes{{\ensuremath{ \left| v_j(a) \middle\rangle \middle\langle v_j(a) \right| }}}_{B}$ is also a *classical-quantum* state. Using Fact \[fact:cqimax\], we obtain ${{\ensuremath{ \mathrm{I}_{\max} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}B \right) }} }}}_{\omega} \leq \log(|B|) = \log(d)$. Thus, communication cost is lower bounded by $$\frac{1}{2}({{\ensuremath{ \mathrm{I}^{\delta}_{\max} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}BC \right) }} }}}_{\omega}-{{\ensuremath{ \mathrm{I}_{\max} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}B \right) }} }}}_{\omega})\geq \frac{\log(d)-3\delta\log(d)-3}{2}=\frac{1-3\delta}{2}\log(d) - 1.5 > \frac{1}{6}\log(d),$$ for $d>2^{18}$. For quantum state transfer, we have following bound. \[eprworstcase\] Worst case quantum communication cost for state transfer of the state $\omega'_{RC}$, with error $\delta<\frac{1}{2}$, is at least $\frac{1}{2}\log(d) + \frac{1}{2}\log(1-\delta^2)$. The following lower bound on worst case interactive quantum communication cost of state transfer of $\omega'_{RC}$ has been shown ([@Berta14], Section $5$, Proposition $2$): $$\frac{1}{2}{{\ensuremath{ \mathrm{I}^{\delta}_{\max} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}C \right) }} }}}_{\omega'}.$$ Consider, $$\begin{aligned} {{\ensuremath{ \mathrm{I}^{\delta}_{\max} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}C \right) }} }}}_{\omega'} &\geq& - {{\ensuremath{ \mathrm{H}^{\delta}_{\min} {\: \! 
\!}{\ensuremath{ \left( R \middle | C \right) }} }}}_{\omega'} \quad (\text{Fact \ref{fact:imaxhmin}}) \\ &\geq& - {{\ensuremath{ \mathrm{H}_{\max} {\: \! \!}{\ensuremath{ \left( R \middle | C \right) }} }}}_{\omega'}+\log(1-\delta^2) \quad (\text{Proposition 6.3, \cite{tomamichel15}}) \\ &=& \log(d) + \log(1-\delta^2) \end{aligned}$$ Now we proceed to the proof of Theorem \[thm:main\]. Suppose there exists an $r$-round communication protocol $\mathcal{P}$ for entanglement assisted quantum state redistribution of the pure state $\Psi_{RBCA}$ with error ${\varepsilon}$ and expected communication cost at most ${{{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}C {\: \!}\middle\vert {\: \!}B \right) }} }}}}_{\Psi}\cdot (\frac{1}{{\varepsilon}})^p$. Then we show a contradiction for $p< 1$. For a $\beta \geq 1$ to be chosen later, and $d>2^{18}$, we choose $\{e_1,e_2\ldots e_d\}$ (Definition \[staterediststate\]) as constructed in lemma \[lowentropy\]. Thus, $${{{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}C {\: \!}\middle\vert {\: \!}B \right) }} }}}}_{\Psi}\leq 2S(\Psi_C)\leq 4\frac{\log(d)}{\beta} \quad \text{(Fact \ref{informationbound})}.$$ Fix an error parameter $\mu$. From lemma \[exptoworst\], there exists a communication protocol $\mathcal{P}'$ for quantum state redistribution of $\omega_{RBCA}$, with error at most $\sqrt{\mu}+ \sqrt{8\beta{\varepsilon}}$ and worst case quantum communication cost at most $$\frac{2\cdot{{{\ensuremath{ \mathrm{I} {\: \! \!}{\ensuremath{ \left( R {\: \!}: {\: \!}C {\: \!}\middle\vert {\: \!}B \right) }} }}}}_{\Psi}}{\mu(1-{\varepsilon})}\cdot (\frac{1}{{\varepsilon}})^p\leq 8\frac{\log(d)}{\beta\mu(1-{\varepsilon})}\cdot (\frac{1}{{\varepsilon}})^p \leq 16\frac{\log(d)}{\beta\mu}\cdot (\frac{1}{{\varepsilon}})^p.$$ The last inequality holds since ${\varepsilon}<1/2$. Let $\beta\mu{\varepsilon}^p=128$.
Then $\sqrt{\mu}+ \sqrt{8\beta{\varepsilon}} = \sqrt{\mu}+ \frac{32}{\sqrt{\mu}}{\varepsilon}^{\frac{1-p}{2}}$, which is minimized at $\mu = 32\cdot{\varepsilon}^{\frac{1-p}{2}}$. This gives $\sqrt{\mu}+ \frac{32}{\sqrt{\mu}}{\varepsilon}^{\frac{1-p}{2}} = 8\sqrt{2}\cdot{\varepsilon}^{\frac{1-p}{4}}$ and $\beta=4/{\varepsilon}^{\frac{1+p}{2}} > 1$. As in the theorem, let ${\varepsilon}\in [0, (\frac{1}{70})^{\frac{4}{1-p}}]$. Thus, we have a protocol for state redistribution of $\omega_{RBCA}$, with error at most $8\sqrt{2}\cdot{\varepsilon}^{\frac{1-p}{4}} < \frac{1}{6}$ and worst case communication at most $\frac{1}{8}\log(d)$, in contradiction with lemma \[redistworstcase\]. The above argument does not hold for any $p\geq 1$, since we need to simultaneously satisfy $\beta\geq 1$, $8\beta{\varepsilon}<1$ and $\mu<1$. Along similar lines, we prove Theorem \[thm:main2\] below. Suppose there exists a communication protocol for state transfer of the pure state $\tilde{\Psi}_{RC}$ with error ${\varepsilon}<\frac{1}{2}$ and expected communication cost at most $S(\tilde{\Psi}_R)\cdot (\frac{1}{{\varepsilon}})^p$. Then we show a contradiction for $p< 1$. For a $\beta \geq 1$ to be chosen later, choose the $a_i$ as constructed in lemma \[lowentropy\]. Then $S(\tilde{\Psi}_R)\leq 2\frac{\log(d)}{\beta}$. Fix an error parameter $\mu$. From corollary \[exptoworstmerge\], there exists a communication protocol for state transfer of $\omega'_{RC}$, with error at most $\sqrt{\mu}+ \sqrt{8\beta{\varepsilon}}$ and worst case quantum communication cost at most $$\frac{2S(\Psi'_R)}{\mu(1-{\varepsilon})}\cdot (\frac{1}{{\varepsilon}})^p\leq \frac{4\log(d)}{\beta\mu(1-{\varepsilon})}\cdot (\frac{1}{{\varepsilon}})^p \leq \frac{8\log(d)}{\beta\mu}\cdot (\frac{1}{{\varepsilon}})^p.$$ Let $\beta\mu{\varepsilon}^p=16$.
Then $\sqrt{\mu}+ \sqrt{8\beta{\varepsilon}} = \sqrt{\mu}+ \frac{8\sqrt{2}}{\sqrt{\mu}}{\varepsilon}^{\frac{1-p}{2}}$, which is minimized at $\mu = 8\sqrt{2}{\varepsilon}^{\frac{1-p}{2}}$. This gives $\sqrt{\mu}+ \sqrt{8\beta{\varepsilon}} = \sqrt{32\sqrt{2}}{\varepsilon}^{\frac{1-p}{4}}$ and $\beta=\sqrt{2}/{\varepsilon}^{\frac{1+p}{2}} > 1$. As in the theorem, let ${\varepsilon}\in [0, (\frac{1}{2})^{\frac{15}{1-p}}]$. Thus, we have a protocol for state transfer of $\omega'_{RC}$, with error at most $\sqrt{32}{\varepsilon}^{\frac{1-p}{4}} < \frac{1}{2}$ and worst case communication at most $\frac{1}{2}\log(d)$, in contradiction with lemma \[eprworstcase\].

Conclusion {#sec:conclusion}
==========

We have shown a lower bound on the expected communication cost of interactive quantum state redistribution and quantum state transfer. The main technique we use is to construct an interactive protocol for quantum state redistribution of $\omega_{RBCA}$, using any interactive protocol for quantum state redistribution of the state $\Psi_{RBCA}$. To justify why this seems to be a necessary step, consider the sub-case of quantum state transfer. Suppose there exists a protocol for quantum state transfer of $\tilde{\Psi}_{RC}$ with expected communication cost $S(\tilde{\Psi}_R)$ and error ${\varepsilon}$. We can use lemma \[exptoworst\] to obtain another protocol with error ${\varepsilon}+\sqrt{\mu}$ and worst case communication cost at most $S(\tilde{\Psi}_R)/\mu$. But this does not lead to any contradiction, since it is straightforward to exhibit a protocol for state transfer of $\tilde{\Psi}_{RC}$ with error ${\varepsilon}+\sqrt{\mu}$ and worst case communication cost $S(\tilde{\Psi}_{R})/({\varepsilon}+\sqrt{\mu})^2 < S(\tilde{\Psi}_{R})/\mu$. Furthermore, our argument does not apply to the classical setting.
This follows from the fact that we are considering a pure state $\Psi_{RBCA}$ and this allows us to obtain lemma \[convepr\] without changing the probability distribution $p'_{i_1,i_2\ldots i_r}$ (and hence the corresponding communication weight), when we apply the map $\tilde{\E}$. Some questions related to our work are as follows. 1. Can the bounds obtained in theorems \[thm:main\] and \[thm:main2\] be improved, or shown to be tight? 2. What are some applications of theorems \[thm:main\] and \[thm:main2\] in quantum information theory? An immediate application is that we obtain a lower bound on worst case communication cost of quantum state redistribution, since worst case communication cost is always larger than expected communication cost of a protocol. 3. Is it possible to improve the direct sum result for entanglement assisted quantum information complexity obtained in [@Dave14]? Acknowledgment {#acknowledgment .unnumbered} ============== I thank Rahul Jain for many valuable discussions and comments on arguments in the manuscript. I also thank Penghui Yao and Venkatesh Srinivasan for helpful discussions. This work is supported by the Core Grants of the Center for Quantum Technologies (CQT), Singapore.
---
abstract: 'Prevailing proposals for the first generation of quantum computers make use of 2-level systems, or *qubits*, as the fundamental unit of quantum information. However, recent innovations in quantum error correction and magic state distillation protocols demonstrate that there are advantages of using $d$-level quantum systems, known as *qudits*, over their qubit analogues. When designing a quantum architecture, it is crucial to consider protocols for compilation, the optimal conversion of high-level instructions used by programmers into low-level instructions interpreted by the machine. In this work, we present a general purpose automated compiler for multiqudit exact synthesis based on previous work on qubits that uses an algebraic representation of quantum circuits called phase polynomials. We assume Clifford gates are low-cost and aim to minimise the number of $M$ gates in a Clifford+$M$ circuit, where $M$ is the qudit analogue of the qubit $T$ or $\pi/8$ phase gate. A surprising result that showcases our compiler’s capabilities is that we found a unitary implementation of the CCZ or Toffoli gate that uses 4 $M$ gates, compared to 7 $T$ gates for the qubit analogue.'
author:
- 'Luke E. Heyfron'
- Earl Campbell
title: A quantum compiler for qudits of prime dimension greater than 3
---

Introduction
============

Despite its ubiquity in computing, the choice to use binary instead of ternary or some other numeral system is almost arbitrary. From a purely information theoretic perspective, there is no reason to prefer bits over $d$-value analogues, known as *dits*. In fact, successful experiments into 3-value logic were realised in the form of the *Setun*, a ternary computer built in 1958 by Sergei Sobolev and Nikolay Brusentsov at Moscow State University [@Brusentsov_2011]. The near universal adoption of binary can be explained from an engineering perspective in that it is much simpler to manufacture binary components.
However, since as early as the 1940s with the biquinary Colossus computer, it has been widely understood that there are intrinsic efficiency benefits of using higher dimensional logic components, in that fewer are required. In the standard paradigm, there are three components required for a fault tolerant quantum computing architecture: quantum error correction (QEC) codes; magic state distillation (MSD) protocols; and finally, quantum compilers. For qudits, there has been progress showing that both qudit QEC [@Duclos-Cianci_2013; @Anwar_2014; @Hutter_2015; @Watson_2015_a; @Watson_2015_b] and qudit MSD [@anwar2012qutrit; @Campbell_2012; @Campbell_2014; @haah2017magic; @krishna2018towards] offer a resource advantage in shifting from qubits to qudits. However, surprisingly little work has been done on qudit compiling, except for the special case of qutrits where $d=3$ [@khan2005synthesis; @bocharov2017factoring]. Therefore, compiling is the crucial missing piece in understanding quantum computing with qudit logic beyond $d=3$. A standard metric for quantum compilers to minimize is the number of expensive gates that require magic state distillation. In the qubit case, the $T$ gate is typically the designated magic gate in the low-level instruction set and much progress has been made on gate synthesis in this context. For single qubits, the Matsumoto-Amano normal form [@Matsumoto_2008; @Giles_2013; @Kliuchnikov_2013] leads to decompositions of single qubit unitaries as sequences of gates from the Clifford + $T$ gate set that are optimal with regards to $T$ count for a given approximation error. So for single qubits, the problem is essentially “solved”. For multi-qubit operators, methods for $T$-optimal exact compilation have been developed but suffer exponential runtime [@Gosset_2014].
More recently, efficient optimizers have been developed that successfully reduce $T$ count, some of which are based on a correspondence between unitaries on a restricted gate set and so called phase-polynomials [@Amy_2014; @Amy_2016; @Campbell_2017; @Heyfron_2019], and others that are based on local rewrite rules [@Nam_2017]. For qudits, there has been some work on single qutrit (three level systems) synthesis [@Glaudell_2018] that can be considered a qutrit generalisation of the Matsumoto-Amano normal form. In this work, we borrow ideas from the phase-polynomial style $T$ count optimization protocols and apply these insights to qudits. We provide a general purpose compiler for exact synthesis of multiqudit unitaries generated by $M$, $P_l$ and $SUM$ gates, where the $M$ gate is the canonical “expensive” magic gate (i.e. the qudit analogue of the $T$ gate). We present an example of an $M$ count reduction only possible for odd prime $d>3$. This is the CCZ gate, which is known to have an optimal $T$ count of 7 when synthesised unitarily using qubit based quantum computers, whereas our decomposition has an $M$ count of 4. Until now, this reduced cost has only been achieved in the qubit setting using non-unitary gadgets that exploit ancillas [@Jones_2013]. ![image](ccz){width="\linewidth"}

Preliminaries {#s_prelim}
=============

Let $d>3$ be a prime integer. We define a Hilbert space on $n$ qudits spanned by the computational basis vectors $\{\ket{\mathbf{x}} \ \mid \ \forall \mathbf{x} \in \mathbb{Z}_d^n\}$. We define the single qudit Pauli operators $X := \sum_{x \in \mathbb{Z}_d}\ket{x+1}\bra{x}$, where addition is performed modulo $d$, and $Z := \sum_{x \in \mathbb{Z}_d}\omega^x\ket{x}\bra{x}$, where $\omega := e^{i\frac{2\pi}{d}}$ is a primitive $d$^th^ root of unity. The set of all $n$ qudit unitaries generated by $X$ and $Z$ forms the Pauli group, $\mathcal{P}$.
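These operators are easy to realise numerically. The following sketch (our own illustration of the definitions above; $d=5$ is an arbitrary choice among primes greater than 3) checks the defining qudit commutation relation $ZX = \omega XZ$ and that both generators have order $d$.

```python
import numpy as np

d = 5  # any prime d > 3 works; d = 5 is chosen purely for illustration
omega = np.exp(2j * np.pi / d)

# X: cyclic shift |x> -> |x+1 mod d>;  Z: phase operator |x> -> omega^x |x>
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag(omega ** np.arange(d))

# Defining commutation relation of the qudit Pauli operators: Z X = omega X Z
assert np.allclose(Z @ X, omega * (X @ Z))

# Both generators have order d, so Pauli group phases are d-th roots of unity
assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))
assert np.allclose(np.linalg.matrix_power(Z, d), np.eye(d))
```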
The Clifford group, $\mathcal{C}$, is the normalizer of the Pauli group and is generated by: $$\begin{aligned} H & := \frac{1}{\sqrt{d}}\sum_{x,y \in \mathbb{Z}_d} \omega^{xy}\ket{y}\bra{x} \\ SUM & := \sum_{c,t \in \mathbb{Z}_d}\ket{c,t+c}\bra{c,t} \\ S & := \sum_{x \in \mathbb{Z}_d} \omega^{x^2}\ket{x}\bra{x}\end{aligned}$$ We refer to $H$ as the *Hadamard gate*; $SUM$ is the two-qudit *SUM gate*; and $S$ is the *phase gate*. Note that the $Z$ and $S$ gates are both diagonal and correspond to linear and quadratic terms, respectively, appearing in the exponent of the phase. We further define the Clifford unitaries $$\label{Pl_gate} P_l := \sum_{x \in \mathbb{Z}_d}\ket{l x}\bra{x}$$ for all integers $l \neq 0$, which we call *product operators* as they perform field multiplication between the input basis states and a non-zero field element, $l$. It can be shown that all product operators are in the Clifford group. As in previous works [@Campbell_2012; @Howard_2012; @Cui_2017], we define the canonical non-Clifford gate to be $$M := \sum_{x \in \mathbb{Z}_d}\omega^{x^3}\ket{x}\bra{x},$$ which lies in the third level of the Clifford hierarchy and, in standard fault tolerant architectures, is much more costly than Clifford gates due to the need for MSD.

The Compiling Problem
=====================

A compiler converts high-level instructions into low-level ones. In this paper, we concern ourselves with high-level instructions that take the form of $n$-qudit unitaries which can be exactly synthesised by a discrete gate set, $\mathcal{G}$. By low-level instructions, we specifically refer to quantum circuits, which are represented as *netlists*, or time-ordered lists of gates taken from $\mathcal{G}$, where the qudits to which they apply (as well as any other gate parameters) are specified for each gate.
The unitary that a particular quantum circuit implements is simply the right-to-left matrix product of each gate in the netlist extracted in time-order. \[prob\_comp\] (Compiling Problem). Given a unitary $U \in \langle \mathcal{G} \rangle$, find a quantum circuit that implements $U$ with the lowest cost. Note that the compiling problem is ill-defined and depends on the definition of cost. The most accurate metric of quantum circuit cost is the full space-time volume, which is the number of machine level operations multiplied by the number of physical qubits. The calculation required to determine the full space-time volume is lengthy and is highly sensitive to the choice of architecture [@fowler2012towards; @o2017quantum; @babbush2018encoding]. The *$M$ count*, or the number of $M^k$ gates in a quantum circuit, is an alternative cost metric that gives a good approximation to the full cost and can be easily read off compiler-level quantum circuits. Using the $M$ count in problem \[prob\_comp\], we obtain a well defined compiling problem. ($M$-Minimization). Given a unitary $U \in \langle \mathcal{G} \rangle$, find a quantum circuit that implements $U$ with the fewest $M^k$ gates. We choose our gate set to be $\mathcal{G} = \{ Z, S, M^k, P_l, SUM \}$ for all available choices of $k$ and $l$. While we would ideally work with a universal gate set such as Clifford + $M$, the compiling problem is known to be intractable in the universal case, so we focus on this simpler sub-problem. For the selection of gates in $\mathcal{G}$, we have taken inspiration from previous work [@Amy_2014; @Amy_2016; @Campbell_2017; @Heyfron_2019], where it was demonstrated that such a restriction leads to an algebraic reformulation of the compiler problem that is more amenable to computational methods, including efficient heuristics.
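To make the netlist picture concrete, here is a minimal numerical sketch (our own; the variable names and the example netlist are not from the paper) that builds single-qudit gates from the definitions in the Preliminaries, multiplies a netlist right-to-left to obtain the implemented unitary, and reads off the $M$ count.

```python
import numpy as np
from functools import reduce

d = 5
omega = np.exp(2j * np.pi / d)
x = np.arange(d)

# Diagonal gates Z, S, M and the Pauli X, as defined in the Preliminaries
Z = np.diag(omega ** x)
S = np.diag(omega ** (x ** 2))
M = np.diag(omega ** (x ** 3))
X = np.roll(np.eye(d), 1, axis=0)

# Sanity check that S is Clifford: it maps the Pauli X to omega * X Z^2,
# which is again a Pauli operator up to a global phase
assert np.allclose(S @ X @ S.conj().T,
                   omega * (X @ np.linalg.matrix_power(Z, 2)))

# A netlist: a time-ordered list of (gate matrix, gate name) pairs
netlist = [(Z, "Z"), (M, "M"), (S, "S"), (M, "M")]

# The implemented unitary is the right-to-left matrix product of the netlist
U = reduce(lambda acc, gate: gate[0] @ acc, netlist, np.eye(d))

# The M count is simply the number of M gates appearing in the netlist
m_count = sum(1 for _, name in netlist if name == "M")
assert m_count == 2

# Every gate above is diagonal, so U applies the phase polynomial
# f(x) = x + x^2 + 2*x^3, with coefficients read off the netlist
assert np.allclose(U, np.diag(omega ** ((x + x ** 2 + 2 * x ** 3) % d)))
```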
Phase Polynomial Formalism {#s_phasepoly} ========================== The formalism described in this section allows us to reframe the $M$-minimization problem as a computationally-friendly problem on integer matrices. It applies strictly to unitaries\ $U \in \langle Z, S, M^k, P_l, SUM \rangle$ and is a straightforward generalisation of previous work [@Amy_2014; @Amy_2016; @Campbell_2017; @Heyfron_2019]. We proceed with a lemma that establishes a correspondence between unitaries generated by $\mathcal{G}$ and cubic polynomials that we call *phase polynomials*. \[thm\_pf2cub\] Any $n$ qudit unitary $U_f \in \langle \mathcal{G} \rangle$ can be expressed as follows: $$\label{eq_pf} U_f = \sum_{\mathbf{x} \in \mathbb{Z}_d^n} \omega^{f(\mathbf{x})}\ket{E\mathbf{x}}\bra{\mathbf{x}},$$ where $E$ is an invertible matrix implementable with $SUM$ gates, and $f: \mathbb{Z}_d^n \mapsto \mathbb{Z}_d$ is a polynomial of order less than or equal to 3. To prove the first part, we first show that each gate in the generating set can be written in the above form, then show that the set generated by these operators form a group. From the definitions provided in section \[s\_prelim\], we have that $Z$, $S$ and $M$ gate applied to the $t$^th^ qudit can be written in the form of equation  with $f(\mathbf{x}) = x_t, x_t^2, x_t^3$, respectively, and with $E = \mathbb{I}$. $P_l$ applied to the $t$^th^ qudit has $f(\mathbf{x}) = 0$ (as does the $SUM$ gate) and $E=\mathbb{I} + ((l-1)\delta_{i,t}\delta_{j,t})$ with inverse $E^{-1}=\mathbb{I} + ((\frac{1}{l}-1)\delta_{i,t}\delta_{j,t})$. Finally, the $SUM$ gate whose control and target are the $c$^th^ and $t$^th^ qudits, respectively, has $E=\mathbb{I} + (\delta_{i,t}\delta_{j,c})$, which has inverse $E^{-1}=\mathbb{I} - (\delta_{i,t}\delta_{j,c})$. By definition, the set generated by $\mathcal{G}$ is closed under multiplication and as each generator is a unitary matrix, the associative property holds. 
Finally, $\mathbb{I}, Z^\dagger, S^\dagger, M^\dagger \in \langle \mathcal{G} \rangle$, so the identity and inverse group axioms are satisfied. To prove the second part, that $f(\mathbf{x})$ is cubic, we note that the only gates which contribute to $f(\mathbf{x})$ are $Z$, $S$ and $M$, which add a term equal to the state of the acted-upon qudit raised to the first, second and third power, respectively. Because the $Z$, $S$ and $M$ gates are diagonal, the state of any qudit at any point in the circuit can only change due to the $P_l$ and $SUM$ gates, which together map the state of each qudit to linear functions of the input states with coefficients in $\mathbb{Z}_d$. The linear functions can, at most, be raised to the $3$^rd^ power (due to the $M$ gate), before contributing a term to $f(\mathbf{x})$. Therefore, the order of $f(\mathbf{x})$ is at most cubic. The linear and quadratic terms of any $f(\mathbf{x})$ can be implemented using just Clifford operations, which cost considerably less than the cubic terms that require $M$ gates. Therefore, we assume that $f(\mathbf{x})$ is a homogeneous cubic polynomial. It follows that $f(\mathbf{x})$ can be decomposed in the monomial basis as follows: $$\label{eq_mon} f(\mathbf{x}) = \sum_{\alpha,\beta,\gamma=1}^nS_{\alpha,\beta,\gamma}x_\alpha x_\beta x_\gamma,$$ where $S \in \mathbb{Z}_d^{(n,n,n)}$, and the number of distinct orderings of an index triple $(\alpha,\beta,\gamma)$ is $$c_{\alpha,\beta,\gamma} = \begin{cases} 1 & \alpha = \beta \text{ AND } \beta = \gamma \\ 3 & \alpha = \beta \text{ XOR } \beta = \gamma \\ 6 & \text{otherwise}.\end{cases}$$ Since every choice of $(\alpha,\beta,\gamma)$ for $\alpha \leq \beta \leq \gamma$ corresponds to a different linearly independent monomial, if we enforce that $S$ is symmetric, it follows that the elements of $S$ uniquely determine the function $f(\mathbf{x})$. For this reason, we call it the *signature tensor*.
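As a sanity check of this definition, the following sketch (our own, with helper names of our choosing) builds the symmetric signature tensor of $f(\mathbf{x}) = x_1 x_2 x_3$, namely $1/6$ on every permutation of the index triple $(1,2,3)$, and confirms that the monomial expansion reproduces $f$ over all of $\mathbb{Z}_d^3$ for $d=5$.

```python
import itertools
import numpy as np

d = 5
n = 3
inv6 = pow(6, -1, d)  # the element 1/6 of Z_d (modular inverse, Python >= 3.8)

# Symmetric signature tensor of f(x) = x1 x2 x3:
# 1/6 on every permutation of (1,2,3), zero elsewhere
S = np.zeros((n, n, n), dtype=np.int64)
for perm in itertools.permutations(range(n)):
    S[perm] = inv6

def f_from_signature(S, xs, d):
    """Evaluate f(x) = sum_{a,b,g} S_{a,b,g} x_a x_b x_g  (mod d)."""
    xs = np.asarray(xs, dtype=np.int64)
    return int(np.einsum('abg,a,b,g->', S, xs, xs, xs)) % d

# The six entries of 1/6 sum to 1, so the expansion reproduces x1*x2*x3 mod d
for xs in itertools.product(range(d), repeat=n):
    assert f_from_signature(S, xs, d) == (xs[0] * xs[1] * xs[2]) % d
```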
The phase polynomial $f(\mathbf{x})$ can also be decomposed as a sum over linear forms raised to the third power, as in the following: $$\label{eq_imp} f(\mathbf{x}) = \sum_{j=1}^{m}\lambda_j\left(\sum_{i=1}^{n}A_{i,j}x_i\right)^3,$$ where $\mathbf{\lambda} \in (\mathbb{Z}_d \setminus \{0\})^m$ and $A \in \mathbb{Z}_d^{(n,m)}$ such that for each column in $A$, there is at least one non-zero element. It is straightforward to calculate the signature tensor from the elements of $A$ and $\lambda$ using the following relation, $$\label{eq_st} S_{\alpha,\beta,\gamma} = \sum_{j=1}^m \lambda_j A_{\alpha,j} A_{\beta,j} A_{\gamma,j}.$$ **Implementation.** Let $U_f$ be a unitary with signature tensor $S \in \mathbb{Z}_d^{(n,n,n)}$. Let $A \in \mathbb{Z}_d^{(n,m)}$ and $\mathbf{\lambda} \in (\mathbb{Z}_d \setminus \{0\})^m$. We say that the tuple $(A, \lambda)$ is an *implementation* of $S$ if it satisfies equation . We refer to the tuple $(A, \lambda)$ as an implementation because it reveals information sufficient to construct a quantum circuit that implements $U_f$ with known $M$ count, as stated in the following lemma. \[lem\_imp\] Let $U_f$ be a unitary with an implementation $(A , \lambda)$ that has $m$ columns. It follows that a quantum circuit can be efficiently generated which implements $U_f$ using no more than $m$ $M$ gates. As proof of lemma \[lem\_imp\], we provide in appendix \[ap\_proof\] an explicit algorithm for efficiently converting an implementation with $m$ columns into a quantum circuit with $m$ $M^k$ gates. The connection between column count of implementations and $M$ count of quantum circuits is central to the understanding of this work and leads to a restatement of the compiler problem that is more amenable to computational solvers. \[pr\_col\] (Column-minimization). Let $S$ be a signature tensor. Find an implementation $( A , \lambda )$ that implements $S$ with minimal columns. 
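Equation \[eq\_st\] is easy to check numerically. As an illustration (a sketch of our own, not part of the compiler), the following verifies for $d=5$ that the four-column implementation of the $CCZ$ gate given in section \[s\_CCZ\] reproduces the signature tensor of $f(\mathbf{x}) = x_1 x_2 x_3$, i.e. $1/6$ on every permutation of the index triple.

```python
import itertools
import numpy as np

d = 5
inv6 = pow(6, -1, d)    # 1/6 in Z_d
inv24 = pow(24, -1, d)  # 1/24 in Z_d

# Four-column implementation (A, lambda) of CCZ from section [s_CCZ], in Z_d
A = np.array([[1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1]], dtype=np.int64) % d
lam = np.full(4, inv24, dtype=np.int64)

# Equation (eq_st): S_{a,b,g} = sum_j lam_j A_{a,j} A_{b,j} A_{g,j}  (mod d)
S = np.einsum('j,aj,bj,gj->abg', lam, A, A, A) % d

# Expected signature tensor of f(x) = x1 x2 x3
S_ccz = np.zeros((3, 3, 3), dtype=np.int64)
for perm in itertools.permutations(range(3)):
    S_ccz[perm] = inv6

# The 4-column tuple is a valid implementation of the CCZ signature tensor
assert np.array_equal(S, S_ccz)
```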
Example: CCZ Gate {#s_CCZ} ================= Take the $CCZ$ gate as an example, which acts upon the computational basis as follows. $$CCZ\ket{x_1, x_2, x_3} = \omega^{x_1 x_2 x_3}\ket{x_1, x_2, x_3}.$$ In the monomial basis, the phase polynomial can be read off directly as $f(\mathbf{x}) = x_1 x_2 x_3$, which corresponds to a signature tensor with $S_{\sigma(1,2,3)} = \frac{1}{6}$ for all permutations $\sigma$ and $S_{\alpha,\beta,\gamma} = 0$ for all other elements. However, to generate a quantum circuit for $U_f=CCZ$, we first need to find an implementation for $S$. By applying knowledge of the qubit version of the $CCZ$ gate to qudits [@Amy_2014; @Amy_2016; @Campbell_2017; @Heyfron_2019], we arrive at the following implementation[^1] that has an $M$ count of 7: $$\begin{pmatrix} A \\ \hline \lambda \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \\ \hline \frac{1}{6} & \frac{1}{6} & \frac{1}{6} & -\frac{1}{6} & -\frac{1}{6} & -\frac{1}{6} & \frac{1}{6}\end{pmatrix}, \\$$ which corresponds to the phase polynomial, $$\begin{split} \label{eq_ccz_imp1} f(\mathbf{x}) &= \frac{1}{6} x_1^3 + \frac{1}{6} x_2^3 + \frac{1}{6} x_3^3 - \frac{1}{6} (x_1 + x_2)^3 \\&- \frac{1}{6} (x_1 + x_3)^3 - \frac{1}{6} (x_2 + x_3)^3 + \frac{1}{6} (x_1 + x_2 + x_3)^3. \end{split}$$ We remind the reader that all elements of an implementation are in $\mathbb{Z}_d$, so the fraction $\frac{1}{6} = x \in \mathbb{Z}_d$ where $x$ solves $6x = 1 \pmod{d}$. One can easily verify that the above implementation, $(A,\lambda)$, satisfies equation  for every element of the signature tensor, $S_{a,b,c}$, confirming that it implements the $CCZ$ gate. Using a computer aided discovery method described in section \[s\_opt\], we have found an implementation with $M$ count 4 that works for all choices of $d$. This is a key result of the present work and is provided below. 
$$\label{eq_ccz_imp} \begin{pmatrix} A \\ \hline \lambda \end{pmatrix} = \begin{pmatrix} 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \\ \hline \frac{1}{24} & \frac{1}{24} & \frac{1}{24} & \frac{1}{24} \end{pmatrix}.$$ This corresponds to the phase polynomial $$\begin{split} \label{eq_ccz_imp2} f(\mathbf{x}) &= \frac{1}{24}(x_1 + x_2 + x_3)^3 + \frac{1}{24}(x_1 - x_2 - x_3)^3 \\&+\frac{1}{24} (x_2 - x_1 - x_3)^3 +\frac{1}{24} (x_3 - x_1 - x_2)^3. \end{split}$$ An explicit quantum circuit for the above implementation of the $CCZ$ gate is provided in figure \[fig\_ccz\].

Compilers {#s_opt}
=========

Brute-Force
-----------

In order to construct an $M$-optimal implementation for a given phase polynomial, one can perform a brute-force search over all possible implementations, checking in polynomial time in each case that it corresponds to the correct signature tensor using equation . However, the size of the search space scales as $O(d^{(n+1)m})$, which makes execution times impractical, even for modest-sized inputs. Nevertheless, by searching in order of increasing $m$, where $m$ is the candidate number of $M$ gates, we can optimally compile unitaries on $n=3$ ququints ($d=5$) with $M$ count of up to $4$ (and lower-bound unitaries with an implicitly higher optimal $M$ count). It was through this brute-force method that we were able to discover the implementation of $CCZ$ with $M$ count of 4 presented in equation .

Monomial Substitution
---------------------

It is critically important that a general-purpose compiler is efficient. Fortunately, there is a simple method to map a phase polynomial in the monomial basis to an implementation. There are three kinds of monomial that may appear in a phase polynomial, which are distinguished by the number of variables they take. These are $x_a^3$, $x_a x_b^2$ and $x_a x_b x_c$.
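As a numerical sanity check, the $M$-count-4 $CCZ$ implementation of section \[s\_CCZ\] can be verified directly. The sketch below is our own (it assumes $\gcd(24, d)=1$, so that $\frac{1}{24}$ exists in $\mathbb{Z}_d$) and confirms that the phase polynomial reduces to $x_1 x_2 x_3$ for every input:

```python
from itertools import product

def phase_poly(A, lam, d, x):
    """f(x) = sum_j lam[j] * (sum_i A[i][j]*x[i])**3  (mod d)."""
    return sum(l * sum(A[i][j] * x[i] for i in range(len(x))) ** 3
               for j, l in enumerate(lam)) % d

for d in (5, 7, 11):                 # any d with gcd(24, d) = 1 works
    inv24 = pow(24, -1, d)           # the element 1/24 of Z_d (Python 3.8+)
    A = [[1, 1, -1, -1],
         [1, -1, 1, -1],
         [1, -1, -1, 1]]
    lam = [inv24] * 4
    assert all(phase_poly(A, lam, d, x) == (x[0] * x[1] * x[2]) % d
               for x in product(range(d), repeat=3))
```

The check passes for $d = 5, 7, 11$; over the integers the four cubes sum to $24\,x_1 x_2 x_3$, which is why the single coefficient $\frac{1}{24}$ suffices.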
As the monomials are linearly independent, if we can find a prototypical implementation for each kind of monomial, then it follows that we can compile an implementation for a general phase polynomial by substituting instances of the prototypes for each monomial. Again using [@Heyfron_2019] as inspiration, we provide prototype implementations for the three kinds of monomial below. $$\begin{aligned} \label{eq_mon1}x_a^3 &\rightarrow x_a^3 \\ \label{eq_mon2} x_a x_b^2 &\rightarrow \frac{1}{6}(x_a + x_b)^3 + \frac{1}{6}(x_a - x_b)^3 - \frac{1}{3}x_a^3 \\ \label{eq_mon3} x_a x_b x_c &\rightarrow \frac{1}{24}(x_a + x_b + x_c)^3 + \frac{1}{24} (x_a - x_b - x_c)^3 + \frac{1}{24} (x_b - x_a - x_c)^3 + \frac{1}{24}(x_c - x_a - x_b)^3,\end{aligned}$$ where we have used the implementation from equation  for the $x_a x_b x_c$ prototype. Of course, we can also use the “legacy” $M$-count 7 implementation from equation , which for certain input unitaries (e.g. ones that contain many gates on the same qudit lines) leads to lower $M$-count implementations due to column merging (see section \[sec\_opt\]). We call the above method *monomial substitution*, which executes in time that scales as $O(n^3)$ in the worst case, making it efficient. However, the output $M$ count should be considered a crude initial guess at the optimal $M$ count and can be significantly improved by the optimization methods described in the remainder of this section.

$M$-Optimization {#sec_opt}
----------------

One approach to solving problem \[pr\_col\] is to try to ‘merge’ columns of an existing implementation. A pair of columns can be merged if they are duplicates of one another. This is because we can collect like terms in the phase polynomial, where the coefficients combine linearly. An illustrative example is the following.
Let $f$ be a phase polynomial with two terms, and hence an implementation matrix $A$ with two columns, $$\begin{aligned} f(\mathbf{x}) &= \lambda_1 (A_{1,1}x_1 + A_{2,1}x_2 + \dots + A_{n,1}x_n)^3 \\ &+ \lambda_2 (A_{1,2}x_1 + A_{2,2}x_2 + \dots + A_{n,2}x_n)^3. \end{aligned}$$ If the two columns of $A$ are duplicates, then we have $A_{i,1}=A_{i,2} \ \forall \ i \in [1,n]$, and so $$f(\mathbf{x}) = (\lambda_1 + \lambda_2) (A_{1,1}x_1 + A_{2,1}x_2 + \dots + A_{n,1}x_n)^3,$$ which needs only a single column to represent it, and therefore only a single magic state to implement it. Of course, it is often the case that an $A$ matrix does not contain any duplicates. In this case, we wish to transform $A$ in some way in order to make it contain duplicates, and in such a way that it does not alter the unitary it implements. In appendix \[ap\_dam\], we describe an $M$-optimizer that systematically searches for and performs such “duplication transformations”, and subsequently merges the duplicated columns. For this reason, we call it the Duplicate And Merge (DAM) algorithm. The algorithm runs in time that scales as $O(m n^3 d^m)$ and so is inefficient. However, in practice, it executes much faster than the brute-force compiler and often outputs $M$-optimal implementations, albeit non-deterministically, and is useful for raising the practical limit on input circuits.

Benchmarks
==========

In order to determine the speed benefits of using DAM over a brute force search (BFS) and to assess the inevitable drop in $M$-optimality, we performed a benchmark on randomly generated implementations with an $M$-count of 3 for $d=5$ and $n=3$. These parameters were chosen as they are the largest parameters that are feasible for BFS where many repetitions are required. Each of the $100$ random implementations was first compiled by BFS, then the legacy monomial substitution compiler was run using the signature tensor as input, which was subsequently optimized using DAM $1000$ times.
The distribution of $M$-counts after optimization with DAM was recorded and an example of a single random instance is shown in figure \[fig\_hist\]. ![The distribution of $M$-counts for DAM run on a single random implementation with $d=5$, $n=3$ and known optimal $M$-count of 3 performed 1000 times.[]{data-label="fig_hist"}](DAM_rep_hist){width="0.4\linewidth"} We have performed a number of benchmarks on larger circuits in order to compare the monomial substitution, legacy monomial substitution and DAM compilers, which are shown in table \[tab\_1\]. From the data we see that MS is the preferred compiler for low-depth circuits such as the $CCZ^{\otimes k}$ family, which is to be expected as it is likely that the optimal $M$-count in this case is $4k$. In contrast, the DAM compiler consistently outperforms MS and Legacy for random circuits. Unfortunately, DAM is too inefficient to be practically useful for scalable quantum computers, as we see from the large execution times. The computational bottleneck for DAM is due to the hardness of solving the multivariate cubic system of equations in equation . Our implementation uses a search over the space of all length-$m$ vectors over $\mathbb{Z}_d$ in the worst case (for details see appendix \[ap\_dam\]). Therefore, the search consists of $O(d^m)$ iterations and, as all other parts of the DAM algorithm execute in polynomial time, this is the sole source of inefficiency. It follows that if one had access to an efficient heuristic that solves the system of equations in , then DAM would immediately become efficient. However, as a consequence of using a heuristic, it would likely not be possible to discover all column mergings. As DAM is a non-deterministic heuristic, it is important to quantify the stability of its output.
We used the results to estimate the probability that DAM produces an $M$-optimal implementation of an unknown quantum circuit for the given circuit parameters and found it to be $p_{\text{opt}}:=p(\text{optimal} \ | \ d=5, n=3, m=3) = 0.47\pm0.02$. This probability should in no way be interpreted as representative of DAM’s performance in general (especially not for larger circuits) but instead suggests that a best-of-$N$ approach (i.e. where DAM is run $N$ times and the implementation with the minimum $M$-count is returned) would be effective and may return $M$-optimal solutions. The probability of optimality can be used to estimate the minimum number of DAM repetitions, $N$, that one should perform in order to reach a desired confidence threshold, $p_{\text{conf}}$, using the following: $$N = \ceil[\bigg]{\frac{\ln(1-p_{\text{conf}})}{\ln(1-p_{\text{opt}})}}.$$ Using our results for $p_{\text{opt}}$ and a confidence threshold of $p_{\text{conf}}=0.95$, we arrive at $N=5$. The mean execution time[^2] for a single run of DAM was $0.119 \pm 0.006$ s compared to $91 \pm 6$ s for BFS. Therefore, the best-of-5 DAM compiler remains faster than BFS by two orders of magnitude.
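The repetition count follows from requiring $1-(1-p_{\text{opt}})^N \geq p_{\text{conf}}$; a one-line helper (our own sketch) reproduces the quoted $N=5$:

```python
import math

def dam_repetitions(p_opt, p_conf):
    """Smallest N such that 1 - (1 - p_opt)**N >= p_conf."""
    return math.ceil(math.log(1 - p_conf) / math.log(1 - p_opt))

# p_opt = 0.47 measured for (d, n, m) = (5, 3, 3); 95% confidence target
assert dam_repetitions(0.47, 0.95) == 5
```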
| **Circuit** | $\mathbf{d}$ | $\mathbf{n}$ | $\mathbf{M_{\text{Leg.}}}$ | $\mathbf{M_{\text{MS}}}$ | $\mathbf{M_{\text{DAM}}}$ | $\mathbf{t_{\text{Leg.}}}$ **(s)** | $\mathbf{t_{\text{MS}}}$ **(s)** | $\mathbf{t_{\text{DAM}}}$ **(s)** |
|---|---|---|---|---|---|---|---|---|
| CCZ | 5 | 3 | 7 | 4 | 5 | 0.10 | 0.02 | 0.98 |
| CCZ$^{\otimes 2}$ | 5 | 6 | 14 | 8 | 10 | 0.03 | 0.01 | 20.87 |
| CCZ$^{\otimes 3}$ | 5 | 9 | 21 | 12 | 16 | 0.02 | $<$0.01 | 813.02 |
| CCZ | 7 | 3 | 7 | 4 | 7 | 0.08 | 0.03 | 6.99 |
| CCZ$^{\otimes 2}$ | 7 | 6 | 14 | 8 | 10 | 0.04 | 0.01 | 182.69 |
| CCZ | 11 | 3 | 7 | 4 | 7 | 0.09 | 0.04 | 38.81 |
| CCZ$^{\otimes 2}$ | 11 | 6 | 14 | 8 | 12 | 0.04 | 0.01 | 11489.15 |
| CCZ$_{\# 2}$ | 5 | 5 | 13 | 8 | 8 | 0.09 | 0.02 | 34.9 |
| CCZ$_{\# 3}$ | 5 | 7 | 19 | 12 | 12 | 0.04 | 0.01 | 2219.68 |
| CCZ$_{\# 2}$ | 7 | 5 | 13 | 8 | 8 | 0.08 | 0.02 | 214.09 |
| CCZ$_{\# 3}$ | 7 | 7 | 19 | 12 | 12 | 0.04 | 0.01 | 23950.64 |
| CCZ$_{\# 2}$ | 11 | 5 | 13 | 8 | 8 | 0.08 | 0.02 | 7976.43 |
| Random | 5 | 3 | 8.26 | 11.38 | 4.52 | $<$0.01 | $<$0.01 | 1.40 |
| Random | 5 | 4 | 16.93 | 28.86 | 7.21 | $<$0.01 | $<$0.01 | 825.4959 |
| Random | 7 | 3 | 8.68 | 11.59 | 4.38 | $<$0.01 | $<$0.01 | 11.76 |
| Random | 7 | 4 | 16.88 | 28.75 | 7.13 | $<$0.01 | $<$0.01 | 10416.00 |
| Random | 11 | 3 | 9.08 | 12.05 | 4.38 | $<$0.01 | $<$0.01 | 185.46 |

Conclusions and Acknowledgements
================================

In this work we have generalised phase polynomial optimizers to qudit-based quantum computers and have used them to demonstrate cost savings that are only possible in the qudit picture. This motivates serious discussion of fundamental questions regarding the nature of first-generation fault-tolerant architectures, namely whether they use qubits or qudits. We acknowledge support by the Engineering and Physical Sciences Research Council (EPSRC) through grant EP/M024261/1. We thank Mark Howard for discussions throughout the project.

Proof of Lemma \[lem\_imp\] {#ap_proof}
===========================

Let $U_f\in\langle \mathcal{G} \rangle$ be a unitary and $(A, \lambda)$ be an implementation for $f$ with $m$ columns.
We can efficiently generate a circuit, $C$, on $\mathcal{G}$ that implements $U_f$ from $(A, \lambda)$ using $m$ $M$ gates with the following algorithm:

1. Initialize an empty circuit, $C$.
2. For each $j \in [1,m]$:
    1. Initialize an empty circuit, $D$.
    2. Let $H := \{i \ \mid A_{i,j} \neq 0\}$.
    3. Arbitrarily choose a $t \in H$.
    4. \[step\_lin1\] Append $P_{A_{t,j}}$ on qudit line $t$ to $D$.
    5. For each $c \in H \setminus \{t\}$:
        1. Append $P_{A_{c,j}}$ on qudit line $c$ to $D$.
        2. Append $SUM_{c,t}$ to $D$.
        3. Append $P_{\frac{1}{A_{c,j}}}$ on qudit line $c$ to $D$.
    6. \[step\_lin2\] Append $P_{\frac{1}{A_{t,j}}}$ on qudit line $t$ to $D$.
    7. Append $D$ to $C$.
    8. \[step\_m\] Append $M^{\lambda_j}$ on qudit line $t$ to $C$.
    9. \[step\_uncomput\] Append $D^\dagger$ to $C$.

First, observe steps \[step\_lin1\] to \[step\_lin2\], which create a subcircuit $D$ using only $P_l$ and $SUM$ gates that maps the state of the $t$^th^ qudit to a linear function of the $n$ input qudits $x_1, x_2, \dots, x_n$ that has coefficients given by the $j$^th^ column of $A$. After $D$ is appended to the output circuit, $C$, step \[step\_m\] applies an $M^k$ gate with $k=\lambda_j$, which adds a term to the phase polynomial $f(\mathbf{x})$ equal to the aforementioned linear function cubed, multiplied by $\lambda_j$, as required. Finally, in step \[step\_uncomput\], the linear function is uncomputed by $D^\dagger$. The whole process is repeated for each of the $m$ columns of $A$. Each iteration requires only one $M^k$ gate, so the total number of $M$ gates required is $m$. The algorithm executes in $O(mn)$ steps and so is efficient. We end this appendix with the disclaimer that the above algorithm is not intended to be optimal with respect to the number of Clifford gates (i.e. $P_l$ and $SUM$) used.

The Duplicate And Merge Optimizer {#ap_dam}
=================================

The following is a direct generalisation of the TODD compiler from reference [@Heyfron_2019] to qudits.
The key difference is that the null space step from TODD is replaced with a multivariate cubic system, for which a common root must be found. We refer the reader to Section 3.4 and Algorithm 1 of [@Heyfron_2019] for an overview of the TODD compiler, which may aid in understanding DAM. \[def\_duptra\] **Duplication Transformation.** Let $A\in \mathbb{Z}_d^{(n,m)}$ be an implementation and $\mathbf{y} \in \mathbb{Z}_d^m$ be a vector. We define the *duplication transformation* as follows: $$A \rightarrow A + (\mathbf{c}_{b} - \mathbf{c}_{a})\mathbf{y}^T,$$ where $\mathbf{c}_j$ is the $j$^th^ column of $A$. We can use this transformation to ‘create’ duplicates, as the following lemma shows. \[lem\_duptra\] Let $A^\prime = A + (\mathbf{c}_{b} - \mathbf{c}_{a})\mathbf{y}^T$ and assume that $y_a - y_b = 1$. It follows that $\mathbf{c}^\prime_a=\mathbf{c}^\prime_b$. From the definition of $A^\prime$, $$A^\prime_{i,j} = A_{i,j} + z_iy_j;$$ now substitute in $z_i \equiv A_{i,b} - A_{i,a}$, $$\label{eq_l2_1} A^\prime_{i,j} = A_{i,j} + (A_{i,b} - A_{i,a})y_j.$$ Apply equation to both $(\mathbf{c}^\prime_a)_i \equiv A^\prime_{i,a}$ and $(\mathbf{c}^\prime_b)_i \equiv A^\prime_{i,b}$, $$A^\prime_{i,b} = A_{i,b} + (A_{i,b} - A_{i,a})y_b,$$ $$\label{eq_l2_2} A^\prime_{i,a} = A_{i,a} + (A_{i,b} - A_{i,a})y_a.$$ Substitute $y_a = y_b + 1$ into equation and rearrange, $$\begin{aligned} A^\prime_{i,a} &= A_{i,a} + (A_{i,b} - A_{i,a})(y_b + 1) \\ &= A_{i,a} + (A_{i,b} - A_{i,a})y_b + A_{i,b} - A_{i,a} \\ &= A_{i,b} + (A_{i,b} - A_{i,a})y_b = A^\prime_{i,b}. \end{aligned}$$ This holds $\forall \ i \in [1,n]$, so $\mathbf{c}^\prime_{a} = \mathbf{c}^\prime_{b}$. The duplication transformation must not alter $f$. This leads to the condition $S^\prime_{\alpha,\beta,\gamma} = S_{\alpha,\beta,\gamma} \ \forall \ \alpha,\beta,\gamma \in [1,n]$, where $S^\prime$ and $S$ are the signature tensors for $A^\prime$ and $A$, respectively.
So, $$\begin{aligned} S^\prime_{\alpha,\beta,\gamma} &= \sum_{j=1}^m \lambda_j A^\prime_{\alpha,j} A^\prime_{\beta,j} A^\prime_{\gamma,j}, \\ &= \sum_{j=1}^m \lambda_j (A_{\alpha,j} + z_\alpha y_j) (A_{\beta,j} + z_\beta y_j) (A_{\gamma,j} + z_\gamma y_j), \\ &= \sum_{j=1}^m \lambda_j A_{\alpha,j} A_{\beta,j} A_{\gamma,j} + \Delta_{\alpha,\beta,\gamma} = S_{\alpha,\beta,\gamma} + \Delta_{\alpha,\beta,\gamma}, \end{aligned}$$ where we define $$\begin{split} \label{eq_delta_1} \Delta_{\alpha,\beta,\gamma} :=& \sum_{j=1}^m \lambda_j (A_{\alpha,j} A_{\beta,j} z_\gamma y_j + A_{\beta,j} A_{\gamma,j} z_\alpha y_j + A_{\gamma,j} A_{\alpha,j} z_\beta y_j \\ &+ A_{\alpha,j}z_\beta z_\gamma y_j^2 + A_{\beta,j}z_\gamma z_\alpha y_j^2 + A_{\gamma,j}z_\alpha z_\beta y_j^2 + z_\alpha z_\beta z_\gamma y_j^3), \end{split}$$ and $\mathbf{z} := \mathbf{c}_b - \mathbf{c}_a$. In order for $S^\prime = S$, we require that $$\label{eq_sig_cons} \Delta_{\alpha,\beta,\gamma}=0 \quad \forall \quad \alpha,\beta,\gamma \in [1,n].$$ This leads to a system of $\sum_{i=1}^3\binom{n}{i}$ cubic polynomials in $m$ variables ($y_1, y_2, \dots, y_m$) that can be rewritten as follows: $$\begin{aligned} \label{eq_dodd1} \sum_{j=1}^{m}\left(l_{\alpha,\beta,\gamma,j}y_j + q_{\alpha,\beta,\gamma,j}y_j^2 + c_{\alpha,\beta,\gamma,j}y_j^3\right) &= 0, \\ \label{eq_dodd2} y_a - y_b - 1 = 0 \end{aligned}$$ where the linear, quadratic and cubic coefficients for the $j$^th^ variable are given by, $$\begin{aligned} l_{\alpha,\beta,\gamma,j} &= \lambda_j (A_{\alpha,j} A_{\beta,j} z_\gamma + A_{\beta,j} A_{\gamma,j} z_\alpha + A_{\gamma,j} A_{\alpha,j} z_\beta) \\ q_{\alpha,\beta,\gamma,j} &= \lambda_j (A_{\alpha,j} z_\beta z_\gamma + A_{\beta,j} z_\gamma z_\alpha + A_{\gamma,j} z_\alpha z_\beta)\\ c_{\alpha,\beta,\gamma,j} &= \lambda_j z_\alpha z_\beta z_\gamma, \end{aligned}$$ respectively.
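Both lemma \[lem\_duptra\] and the expression for $\Delta_{\alpha,\beta,\gamma}$ can be checked numerically on a random instance. The following sketch is our own (a small random example, not the DAM code itself):

```python
import itertools
import random

def dup_transform(A, y, a, b, d):
    """A' = A + (c_b - c_a) y^T over Z_d (the duplication transformation)."""
    n, m = len(A), len(A[0])
    z = [(A[i][b] - A[i][a]) % d for i in range(n)]
    return [[(A[i][j] + z[i] * y[j]) % d for j in range(m)] for i in range(n)]

def sig(A, lam, al, be, ga, d):
    """One entry of the signature tensor, mod d."""
    return sum(l * A[al][j] * A[be][j] * A[ga][j] for j, l in enumerate(lam)) % d

random.seed(1)
d, n, m, a, b = 5, 3, 4, 0, 1
A = [[random.randrange(d) for _ in range(m)] for _ in range(n)]
lam = [random.randrange(1, d) for _ in range(m)]
y = [random.randrange(d) for _ in range(m)]
y[a] = (y[b] + 1) % d                       # enforce y_a - y_b = 1
Ap = dup_transform(A, y, a, b, d)

# Lemma: columns a and b of A' are now duplicates.
assert all(Ap[i][a] == Ap[i][b] for i in range(n))

# S' - S agrees with Delta, entry by entry.
z = [(A[i][b] - A[i][a]) % d for i in range(n)]
for al, be, ga in itertools.product(range(n), repeat=3):
    delta = sum(lam[j] * (A[al][j] * A[be][j] * z[ga] * y[j]
                          + A[be][j] * A[ga][j] * z[al] * y[j]
                          + A[ga][j] * A[al][j] * z[be] * y[j]
                          + A[al][j] * z[be] * z[ga] * y[j] ** 2
                          + A[be][j] * z[ga] * z[al] * y[j] ** 2
                          + A[ga][j] * z[al] * z[be] * y[j] ** 2
                          + z[al] * z[be] * z[ga] * y[j] ** 3)
                for j in range(m)) % d
    assert sig(Ap, lam, al, be, ga, d) == (sig(A, lam, al, be, ga, d) + delta) % d
```

Since both identities hold over the integers, they hold modulo any $d$; the check above is merely a concrete instance.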
Any $\mathbf{y}$ that is a simultaneous solution to equations and equation allows us to reduce the number of columns of $A$ using the duplication transformation from definition . Unfortunately, the problem of solving a general multivariate cubic system such as this is known to be NP-complete. A brute-force solver that searches through every possible $\mathbf{y}$ runs in $O(d^m)$ time. However, we can significantly speed up the search using the following relinearisation technique. First, we introduce new variables, $y_{m+1}, y_{m+2}, \dots, y_{3m}$, such that $$\begin{aligned} \label{eq_relin2} y_{m+j} &= y_j^2, \\ y_{2m+j} &= y_j^3, \\ \end{aligned}$$ for all $j \in [1,m]$. The system of equations from  becomes: $$\label{eq_relin} \sum_{j=1}^m l_{\alpha,\beta,\gamma,j}y_j + \sum_{j=1}^{m} q_{\alpha,\beta,\gamma,j}y_{m+j} + \sum_{j=1}^{m} c_{\alpha,\beta,\gamma,j}y_{2m+j} = 0,$$ which is linear in the $\{y_j\}$. Let $D$ be the coefficient matrix defined as follows: For each triple $(\alpha, \beta, \gamma)$ with $\alpha \leq \beta \leq \gamma$ and $\alpha,\beta,\gamma \in [1,n]$, there exists a row in $D$ of the following form: $$\textsc{Row}_{\alpha,\beta,\gamma}(D) = \begin{pmatrix} (l_{\alpha,\beta,\gamma,j}) & (q_{\alpha,\beta,\gamma,j}) & (c_{\alpha,\beta,\gamma,j}) \end{pmatrix}.$$ Now we can calculate a complete basis for the solutions to equation  by calculating the right null space of $D$, which we denote $N_D$. We can think of the columns of $N_D$ as a basis for the ‘partial’ solutions of the system of equations . In order to promote them to ‘full’ solutions, we need to enforce conditions from equations  and , which we do using the following algorithm.

1. Form $N_D^\prime$ by erasing all but the first $m$ rows of $N_D$.
2. Form $N_D^{\prime\prime}$ by column-reducing $N_D^\prime$ and subsequently removing every all-zero column.
3. Let $\mu := \textsc{Cols}(N_D^{\prime\prime})$.
4. For each $\mathbf{x} \in \mathbb{Z}_d^{\mu}$:
    1. Construct $\mathbf{y}_{\mathbf{x}} = N_D^{\prime\prime}\mathbf{x}$.
    2. Construct $\mathbf{y}_{\mathbf{x}}^\prime = \begin{pmatrix} \mathbf{y}_{\mathbf{x}} \\ \mathbf{y}_{\mathbf{x}}^2 \\ \mathbf{y}_{\mathbf{x}}^3\end{pmatrix}$.
    3. If $D\mathbf{y}_{\mathbf{x}}^\prime = \mathbf{0}$ and  holds, then return $\mathbf{y}_{\mathbf{x}}$.
5. Return “No Solution”.

In effect, the relinearisation method replaces a search over every $\mathbf{y}\in \mathbb{Z}_d^m$ with a search over every $\mathbf{x}\in\mathbb{Z}_d^\mu$, so it only runs faster if $\mu < m$. It is certainly the case that $\mu \leq m$, as $\mu$ is the column rank of $N_D^{\prime}$, which has $m$ rows. Whether or not the strict inequality holds depends on the input circuit but, in practice, we often find that it does hold and leads to significant speed-up over the naive brute-force approach.

References
==========

- doi:10.1007/978-3-642-22816-2_10
- doi:10.1103/PhysRevA.87.062338
- <http://stacks.iop.org/1367-2630/16/i=6/a=063038>
- <http://stacks.iop.org/1367-2630/17/i=3/a=035017>
- doi:10.1103/PhysRevA.92.022312
- doi:10.1103/PhysRevA.92.032309
- doi:10.1103/PhysRevX.2.041021
- doi:10.1103/PhysRevLett.113.230501
- arXiv:0806.3834
- arXiv:1312.6584
- <http://dl.acm.org/citation.cfm?id=2535649.2535653>
- <https://www.microsoft.com/en-us/research/publication/an-algorithm-for-the-t-count/>
- doi:10.1109/TCAD.2014.2341953
- arXiv:1601.07363
- doi:10.1103/PhysRevA.95.022316
- <http://stacks.iop.org/2058-9565/4/i=1/a=015004>
- doi:10.1038/s41534-018-0072-4
- arXiv:1803.05047
- doi:10.1103/PhysRevA.95.012329

[^1]: Note that for notational convenience, we often write an implementation as a single matrix where $\lambda$ is the final row and the rest is the $A$ matrix with a separating horizontal line between them.

[^2]: Execution times were obtained on a laptop with an Intel Core i7 2.40GHz processor, 12GB of RAM running Microsoft Windows 10 Home edition.
--- abstract: 'We report detection of the binary companion to the millisecond pulsar PSR J1741+1351 with the Gran Telescopio Canarias. The optical source position coincides with the pulsar coordinates and its magnitudes are $g''=24.84(5)$, $r''=24.38(4)$ and $i''=24.17(4)$. Comparison of the data with the WD evolutionary models shows that the source can be a He-core WD with a temperature of $\approx 6000$ K and a mass of $\approx 0.2\,M_{\odot}$. The latter is in excellent agreement with the companion mass obtained from the radio timing solution for PSR J1741+1351.' address: | $^1$Ioffe Institute, Politekhnicheskaya 26, St. Petersburg, 194021, Russia\ $^2$Instituto de Astronomía, Universidad Nacional Autónoma de México, Apdo. Postal 877, Baja California, México, 22800\ $^3$Department of Physics & McGill Space Institute, McGill University, 3600 University Street, Montreal, QC, H3A 2T8, Canada\ $^4$Instituto de Astrofísica de Canarias, Vía Láctea s/n, E38200, La Laguna, Tenerife, Spain\ $^5$GRANTECAN, Cuesta de San José s/n, E-38712, Breña Baja, La Palma, Spain author: - | D A Zyuzin$^1$, A Yu Kirichenko$^2$, A V Karpova$^1$, Yu A Shibanov$^1$,\ S V Zharikov$^2$, E Fonseca$^3$ and A Cabrera-Lavers$^{4,5}$ bibliography: - 'refmsp.bib' title: Detection of the PSR J1741+1351 white dwarf companion with the Gran Telescopio Canarias ---

Introduction
============

Millisecond pulsars (MSPs) are neutron stars (NSs) that rotate particularly fast, having periods $P<30$ ms. The canonical ‘recycling’ scenario assumes that MSPs are formed in binary systems, where they are spun up by the transfer of mass and angular momentum from the secondary star [@Bisnovatyi-Kogan1974; @alpar1982]. Optical observations of MSPs allow one to determine the nature and properties of their companions [@vankerkwijk2005]. This provides additional constraints on binary systems’ parameters, which is important for understanding their formation and evolution. In most cases, companions are low-mass white dwarfs (WDs) [@tauris2011].
These objects are faint and thus hardly detectable. Fortunately, the number of identifications gradually increases thanks to the world’s largest sky surveys and ground-based telescopes [@bassa2016; @kirichenko2018; @karpova2018]. The binary MSP PSR J1741+1351 (hereafter J1741) was discovered with the Parkes radio telescope [@freire2006] and then detected in $\gamma$-rays by *Fermi* [@espinoza2013]. It is among the best-timed MSPs. High-precision timing using the 11-yr data set from the North American Nanohertz Observatory for Gravitational Waves provided measurements of the system inclination and masses of both the pulsar and its companion [@arzoumanian2018]. The system parameters are presented in table \[t:msp-par\]. To study the properties of the J1741 companion, we performed deep optical observations with the Gran Telescopio Canarias (GTC). Here we report the results of the analysis of these data.

| Parameter | Value |
|---|---|
| Right ascension $\alpha$ (J2000) | 17:41:31.144731(2) |
| Declination $\delta$ (J2000) | +13:51:44.12188(4) |
| Epoch (MJD) | 56209 |
| Proper motion $\mu_\alpha =\dot{\alpha}{\rm cos}\delta$ (mas yr$^{-1}$) | $-$8.98(2) |
| Proper motion $\mu_\delta$ (mas yr$^{-1}$) | $-$7.42(2) |
| Spin period $P$ (ms) | 3.747154500259940030(7) |
| Period derivative $\dot{P}$ ($10^{-20}$ s s$^{-1}$) | 3.021648(14) |
| Characteristic age $\tau$ (Gyr) | 2 |
| Orbital period $P_b$ (days) | 16.335347828(4) |
| Dispersion measure (DM, pc cm$^{-3}$) | 24.2 |
| Distance $D_{\rm NE2001}$ (kpc) | 0.9 |
| Distance $D_{\rm YMW}$ (kpc) | 1.36 |
| Distance $D_{p}$ (kpc) | $1.8^{+0.5}_{-0.3}$ |
| Pulsar mass $M_p$ ($M_\odot$) | $1.14^{+0.43}_{-0.25}$ |
| Companion mass $M_c$ ($M_\odot$) | $0.22^{+0.05}_{-0.04}$ |
| System inclination $i$ (deg) | $73^{+3}_{-4}$ |

Observations and data reduction {#sec:data}
===============================

The observations of the J1741 field[^1] were carried out in June 2018 in the Sloan $g'$, $r'$ and $i'$ bands with the GTC/OSIRIS[^2] instrument.
Dithered science frames were taken, with total exposure times of $3$, $3.5$ and $2.76$ ks for the $g'$, $r'$ and $i'$ filters, respectively. A short 20 s exposure was obtained in the $r'$ band to avoid saturation of bright stars that were further used for precise astrometry. The formal rms uncertainties of the astrometric fit are 0.05 arcsec in RA and Dec for all bands. The data reduction was performed with the Image Reduction and Analysis Facility (IRAF) package. For the photometric calibration, we used the standard stars SA 112\_805 and SA 104\_428 observed during the same night as the target. The derived photometric zeropoints are $z_{g'}=28.76(2)$, $z_{r'}=28.99(1)$ and $z_{i'}=28.55(1)$. The values were obtained by comparing the standard star magnitudes from [@smith2002] with their instrumental magnitudes, corrected for the finite aperture and atmospheric extinction. We used the atmospheric extinction coefficients[^3] $k_{g'}=0.15(2)$, $k_{r'}=0.07(1)$ and $k_{i'}=0.04(1)$.

Results and discussion {#sec:results}
======================

In figure \[fig:images\] we show sections of the J1741 field in the $g'$, $r'$ and $i'$ bands. The pulsar coordinates at the epoch of our observations (MJD 58274, $i'$ filter) corrected for its proper motion (table \[t:msp-par\]) are $\alpha$ = 17:41:31.141245(8) and $\delta$ = +13:51:44.0799(1). Its position uncertainty is shown in the images by the circle, which also accounts for the astrometric referencing uncertainty. In all bands, we detect a starlike source overlapping with the pulsar position. We obtained the source magnitudes $g'=24.84(5)$, $r' = 24.38(4)$ and $i'=24.17(4)$. To correct these values for the interstellar extinction, we used the dust map [@green2018] and distances to J1741 (table \[t:msp-par\]). We obtained the reddening $E(B-V)$ of $0.10(3)$ and $0.13(2)$ for the minimum and maximum distance estimates. $E(B-V)$ was then transformed to the extinction correction values using coefficients from [@schlafly2011].
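The dereddening arithmetic can be reproduced with a short script. This is our own sketch; the band extinction coefficients $R_{g'}=3.303$, $R_{r'}=2.285$ and $R_{i'}=1.698$ are the [@schlafly2011] values for SDSS-like filters and are an assumption here:

```python
import math

# Assumed SDSS-band coefficients A_band = R_band * E(B-V) (Schlafly & Finkbeiner 2011)
R = {'g': 3.303, 'r': 2.285, 'i': 1.698}
mag = {'g': 24.84, 'r': 24.38, 'i': 24.17}   # measured GTC/OSIRIS magnitudes

def dereddened(ebv, d_pc):
    """Intrinsic colours and absolute r'-band magnitude for reddening ebv, distance d_pc."""
    gr0 = (mag['g'] - mag['r']) - (R['g'] - R['r']) * ebv
    ri0 = (mag['r'] - mag['i']) - (R['r'] - R['i']) * ebv
    mr0 = mag['r'] - 5 * math.log10(d_pc / 10) - R['r'] * ebv   # distance modulus + A_r'
    return gr0, ri0, mr0

gr0, ri0, mr0 = dereddened(ebv=0.10, d_pc=900)          # D_NE2001 = 0.9 kpc
assert (round(gr0, 2), round(ri0, 2), round(mr0, 2)) == (0.36, 0.15, 14.38)
```

With $E(B-V)=0.13$ and $D=1.8$ kpc the same function reproduces the second set of values quoted below to within rounding.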
For $D_{\rm NE2001}=0.9$ kpc, the resulting intrinsic colours and absolute magnitude are $(g'-r')_0 = 0.36(14)$, $(r'-i')_0 = 0.15(10)$ and $M_{r'} = 14.38(8)$, while for $D_p=1.8^{+0.5}_{-0.3}$ kpc they are $(g'-r')_0 = 0.33(11)$, $(r'-i')_0 = 0.13(8)$ and $M_{r'} = 12.80^{+0.46}_{-0.59}$. To check whether the source is a WD, we compared its colours and magnitudes with the WD cooling tracks from [@panei2007; @holberg2006; @kowalski2006; @tremblay2011; @bergeron2011][^4] which are shown in figure \[fig:col-mag\]. Indeed, according to the diagrams, the source is likely a WD with a temperature of about 6000 K. This estimate is appropriate for the whole range of distance estimates to J1741 (table \[t:msp-par\]) since the reddening does not vary significantly. However, various distances imply different estimates of other parameters of the source. The minimum distance corresponds to a WD with a mass of $>0.4\,M_\odot$ and an age of 2–5 Gyr, and the maximum distance to a WD with a mass of about 0.2–0.3 $M_\odot$ and an age of 1–2 Gyr. The former case is not compatible with the radio timing measurements of the J1741 companion mass $M_c=0.22(5)\,M_\odot$, while the latter is in good agreement with it. This supports the larger estimate of the distance to J1741 and implies that its companion is a DA He-core WD.

References {#references .unnumbered}
==========

[^1]: Proposal GTC4-18AMEX, PI A. Kirichenko

[^2]: <http://www.gtc.iac.es/instruments/osiris>

[^3]: <http://www.iac.es/adjuntos/cups/CUps2014-3.pdf>

[^4]: <http://www.astro.umontreal.ca/~bergeron/CoolingModels/>
--- abstract: 'We present in this paper electron impact broadening for six Ar XV lines using our quantum mechanical formalism and the semiclassical perturbation one. Additionally, our calculations of the corresponding atomic structure data (energy levels and oscillator strengths) and collision strengths are given as well. The lines considered here are divided into two sets: a first set of four lines involving the ground level: 1s$^{2}$2s$^{2}$ $^{1}$S$_{0}-$ 1s$^{2}$2s$n$p $^{1}$P$^{\rm o}_{1}$ where $2\leq n \leq5$, and a second set of two lines involving excited levels: 1s$^{2}$2s2p $^{1}$P$^{\rm o}_{1}-$1s$^{2}$2s3s $^{1}$S$_{0}$ and 1s$^{2}$2s2p $^{3}$P$^{\rm o}_{0}-$1s$^{2}$2s3s $^{3}$S$_{1}$. An extensive comparison between the quantum and the semiclassical results was performed in order to analyze the reasons for differences of up to a factor of two between the quantum and semiclassical results. It has been shown that the difference between the two results may be due to the evaluation of strong collision contributions by the semiclassical formalism. Except for a few semiclassical results, the present results are the first to be published. After the recent discovery of the far UV lines of Ar VII in the spectra of very hot central stars of planetary nebulae and white dwarfs, the present (and possibly further) results can also be used for the corresponding future spectral analysis.' address: - 'Deanship of the Foundation Year, Umm Al-Qura University, Makkah, KSA' - 'GRePAA, Faculté des Sciences de Bizerte, Université de Carthage, Tunisia' - 'LERMA, Obs. Paris, UMR CNRS 8112, UPMC, Bâtiment Evry Schatzman, France' - 'Astronomical Observatory, Volgina 7, 11060 Belgrade 38, Serbia' author: - 'H. Elabidi' - 'S. Sahal-Bréchot' - 'M. S. Dimitrijević' title: 'Quantum Stark broadening of Ar XV lines.
Strong collision and quadrupolar potential contributions' ---

*Keywords*: line profiles; stellar atmospheres; white dwarfs; quantum formalism

Introduction
============

The Stark broadening mechanism is important in stellar spectroscopy and in the analysis of astrophysical and laboratory plasmas. Its influence should be considered for opacity calculations, the modelling of stellar interiors, the estimation of radiative transfer through stellar plasmas and for the determination of chemical abundances of elements [@Dimitrijevic03]. The need for spectral line broadening calculations is stimulated by the development of computers. Moreover, the development of instruments and space astronomy, such as the new X$-$ray space telescope [*Chandra*]{}, stimulated the calculations of line broadening of trace elements in the X$-$ray wavelength range. @Barstow98 have shown that the analysis of white dwarf atmospheres, where Stark broadening is dominant compared to the thermal Doppler broadening, needs models taking into account heavy element opacity. Consequently, atomic and line broadening data for many elements are needed for stellar plasma research. The recent discovery of the far UV lines of Ar VII in the spectra of very hot central stars of planetary nebulae and white dwarfs [@Werner07] showed the astrophysical interest in atomic and line broadening data for this element in various ionization stages. Ar XV is one of these important ions. The only Ar XV line broadening calculations existing in the literature are the semiclassical ones [@Dimitrijevic10], where the authors claimed that there are no experimental or other theoretical results for a comparison. The calculations performed in the present paper are based on the quantum mechanical approach and the semiclassical perturbation one. The quantum mechanical expression for electron impact broadening calculations for intermediate coupling was obtained in @Elabidi04.
We performed the first calculations for the $2s3s-2s3p$ transitions in Be-like ions from nitrogen to neon [@Elabidi07; @Elabidi08a] and for the $3s-3p$ transitions in Li-like ions from carbon to phosphorus [@Elabidi08b; @Elabidi09]. This approach was also used in @Elabidi11a to check the dependence of electron impact widths on the upper level ionization potential, and in @Elabidi11b to provide some missing line broadening data for the C IV, N VI, O VI and F VII resonance lines. In our quantum approach, all the parameters required for the calculations of the line broadening, such as radiative atomic data (energy levels, oscillator strengths...) or collisional data (collision strengths or cross sections, scattering matrices...), are evaluated during the calculation and not taken from other data sources. We used the sequence of the UCL atomic codes SUPERSTRUCTURE/DW/JAJOM that have been used for many years to provide fine structure wavefunctions, energy levels, wavelengths, radiative probability rates and electron impact collision strengths. Recently they have been adapted to perform line broadening calculations [@Elabidi08a]. The semiclassical perturbation formalism is described in @SSB69a [@SSB69b; @SSB74; @Fleurier77] and updated by @DSB84 [@DSB95]. The atomic structure data (energy levels and oscillator strengths) used by the semiclassical formalism for the evaluation of line broadening are taken from the code SUPERSTRUCTURE [@Eissner74]. We will analyze here as well the reasons for discrepancies (up to a factor of 2) between results for electron broadening of isolated non-hydrogenic ion lines obtained with semiclassical and quantum methods, as were used by @Ralchenko01 [@Ralchenko03; @Alexiou06]. For example, @Ralchenko01 obtained, using a quantum-mechanical method, electron-impact widths of the 2s3s$-$2s3p singlet and triplet lines of the beryllium-like ions from B II to O V, and found that their results are generally smaller than most semiclassical widths.
In @Ralchenko03, a similar conclusion was obtained for electron-impact widths of the 3s$-$3p transitions in Li-like ions from B III to Ne VIII. It was also found that the difference between experimental and quantum results monotonically increases with the spectroscopic charge of the ion. @Alexiou06 investigated the reasons for the discrepancies in electron-impact widths of isolated ion lines obtained with semiclassical non-perturbative and fully quantum close-coupling and convergent close-coupling calculations, and they concluded that the major reason is the neglect of penetration in the semiclassical calculations. They also obtained and analyzed data for Li-like 3s$-$3p from Be III to Ne VIII, Be-like 2s3s $^{3}$S$-$2s3p $^{3}$P from C III to Ne VII and Be-like 2s3s $^{1}$S$-$2s3p $^{1}$P from N IV to N VII. In order to contribute to the clarification of this problem, it is of interest to compare quantum and semiclassical results for a more highly charged ion such as Ar XV, which is one of the objectives of the present work. In the present paper, Stark widths for six Ar XV lines will be calculated using the two described formalisms, and an extensive comparison between the two sets of results will be performed, in order to contribute to the explanation of the discrepancies found in some cases for line widths of ions in lower ionization stages than Ar XV. It is also of interest to compare the two methods at such a high ionization stage. Besides the Stark broadening data, we will present the results of our calculations of the corresponding atomic structure data (energy levels and oscillator strengths) and collision strengths.

Outline of the quantum approach and computational procedure
===========================================================

We present here an outline of the quantum formalism of electron impact broadening. More details have been given elsewhere [@Elabidi04; @Elabidi08a].
The calculations are made within the framework of the impact approximation, which means that the time interval between collisions is much longer than the duration of a collision. The expression of the Full Width at Half Maximum $W$ obtained in @Elabidi08a is: $$\begin{aligned} W&=&2N_{e}\left( \frac{\hbar }{m}\right) ^{2}\left( \frac{2m\pi }{k_{B}T}\right) ^{ \frac{1}{2}} \nonumber \\ &&\times\int\limits_{0}^{\infty }\Gamma _{w}\left( \varepsilon\right) \exp \left( -\frac{\varepsilon}{k_{B}T}\right) d\left( \frac{\varepsilon}{k_{B}T}\right), \label{integw}\end{aligned}$$ where $k_{B}$ is the Boltzmann constant, $N_{e}$ is the electron density, $T$ is the electron temperature and $$\begin{aligned} \Gamma _{w}(\varepsilon) &=&\sum_{J_{i}^{T}J_{f}^{T}lK_{i}K_{f}} \frac{\left[ K_{i},K_{f},J_{i}^{T},J_{f}^{T}\right] }{2} \nonumber \\ &&\times\left\{ \begin{array}{ccc} J_{i} & K_{i} & l \\ K_{f} & J_{f} & 1 \end{array} \right\} ^{2}\left\{ \begin{array}{ccc} K_{i} & J_{i}^{T} & s \\ J_{f}^{T} & K_{f} & 1 \end{array} \right\} ^{2} \nonumber \\ &&\times \left[ 1-\left( {\rm Re}\,(S_{i}){\rm Re}\,(S_{f})+{\rm Im}\,(S_{i}){\rm Im}\,(S_{f})\right) \right], \nonumber \\ \label{w2}\end{aligned}$$ where **$L_{i}$** + **$S_{i}$** = **$J_{i}$**, **$J_{i}$** + **$l$** = **$K_{i}$** and **$K_{i}$** + **$s$** = **$J_{i}^{T}$**. **$L$** and **$S$** represent the atomic orbital angular momentum and spin of the target, **$l$** is the electron orbital momentum, and the superscript $T$ denotes the quantum numbers of the total electron+ion system. $S_{i}$ ($S_{f}$) are the scattering matrix elements for the initial (final) levels, expressed in the intermediate coupling approximation, ${\rm Re}\,(S)$ and ${\rm Im}\,(S)$ are respectively the real and the imaginary parts of the S-matrix element, $\left\{ \begin{array}{ccc} a & b & c \\ d & e & f \end{array} \right\}$ represent 6–j symbols, and we adopt the notation $[x, y, ...] = (2x + 1)(2y + 1)$...
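After the substitution $x=\varepsilon/(k_{B}T)$, the energy integral in Eq. (\[integw\]) is a Maxwell-weighted average of $\Gamma_{w}$. The following minimal sketch (with a toy $\Gamma_{w}$, not the actual Ar XV scattering data) shows how such an average can be evaluated with Gauss-Laguerre quadrature, which treats the $e^{-x}$ weight exactly:

```python
import numpy as np

def thermal_average(gamma_w, n_points=40):
    """Evaluate the dimensionless energy average
        I = int_0^inf Gamma_w(x) exp(-x) dx,  with x = epsilon/(k_B T),
    i.e. Eq. (integw) stripped of the prefactor
    2 N_e (hbar/m)^2 (2 m pi/(k_B T))^(1/2).
    Gauss-Laguerre quadrature absorbs the exp(-x) weight exactly."""
    x, w = np.polynomial.laguerre.laggauss(n_points)
    return float(np.sum(w * gamma_w(x)))

# Toy checks: a constant width function averages to itself,
# and Gamma_w(x) = x averages to <x> = 1.
print(thermal_average(lambda x: np.full_like(x, 0.5)))  # ~0.5
print(thermal_average(lambda x: x))                     # ~1.0
```

In the actual calculation, $\Gamma_{w}(\varepsilon)$ would be built from the S-matrix elements of Eq. (\[w2\]) at each quadrature energy.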
Both $S_{i}$ and $S_{f}$ are calculated at the same incident electron energy $\varepsilon=mv^{2}/2$. Equation (\[integw\]) takes into account fine structure effects and relativistic corrections resulting from the breakdown of the $LS$ coupling approximation for the target. Both atomic structure and collisional data are needed for the line broadening evaluation. The atomic structure calculation in intermediate coupling is performed with the SUPERSTRUCTURE code (SST) [@Eissner74]. The scattering problem in $LS$ coupling is treated by the DISTORTED WAVE (DW) code [@Eissner98], as in @Elabidi08a. The weak coupling approximation assumed in DW for the collision part is adequate for highly charged ions colliding with electrons, since close collisions are of small importance. The JAJOM code [@Saraph78] is used for the scattering problem in intermediate coupling. [**R**]{}-matrices in intermediate coupling and the real (${\rm Re}\, \mathbf{S}$) and imaginary (${\rm Im}\, \mathbf{S}$) parts of the scattering matrix [**S**]{} have been calculated using the transformed version of JAJOM (Elabidi & Dubau, unpublished results) and the program RtoS (Dubau, unpublished results) respectively. The evaluation of ${\rm Re}\, \mathbf{S}$ and ${\rm Im}\, \mathbf{S}$ is done according to: $$\begin{aligned} {\rm Re}\, \mathbf{S}=\left( 1-\mathbf{R}^{2}\right) \left( 1+\mathbf{R} ^{2}\right) ^{-1}, \quad {\rm Im}\, \mathbf{S}=2\mathbf{R}\left( 1+\mathbf{R}^{2}\right) ^{-1}. \nonumber\end{aligned}$$ The relation $\bf{S}=(1+i\bf{R})(1-i\bf{R})^{-1}$ guarantees the unitarity of the [**S**]{}-matrix.

Results and discussion
======================

We present in the following subsections some atomic data and line broadening data for Ar XV. Energy levels are compared to the available theoretical [@Bhatia08; @NIST12] and experimental [@Edlen83; @Edlen85; @Khardi94; @Lepson03] results. Oscillator strengths and collision strengths for the Ar XV lines are compared to the available theoretical results [@Bhatia08].
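The passage from the reactance matrix $\mathbf{R}$ to ${\rm Re}\,\mathbf{S}$ and ${\rm Im}\,\mathbf{S}$ can be checked numerically in a few lines. The sketch below uses an arbitrary symmetric $2\times 2$ $\mathbf{R}$ (toy values, not Ar XV scattering data) and verifies that the Cayley form $\mathbf{S}=(1+i\mathbf{R})(1-i\mathbf{R})^{-1}$ reproduces the two quoted expressions and yields a unitary $\mathbf{S}$:

```python
import numpy as np

# Toy symmetric reactance matrix (illustrative values only).
R = np.array([[0.20, 0.05],
              [0.05, -0.10]])
I2 = np.eye(2)

# Cayley transform: S = (1 + iR)(1 - iR)^{-1}, unitary for real symmetric R.
S = (I2 + 1j * R) @ np.linalg.inv(I2 - 1j * R)

# The quoted expressions Re S = (1 - R^2)(1 + R^2)^{-1} and
# Im S = 2R(1 + R^2)^{-1} follow because R commutes with itself.
re_S = (I2 - R @ R) @ np.linalg.inv(I2 + R @ R)
im_S = 2.0 * R @ np.linalg.inv(I2 + R @ R)

print(np.allclose(S.real, re_S), np.allclose(S.imag, im_S))  # True True
print(np.allclose(S @ S.conj().T, I2))                       # True
```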
Electron impact full widths at half maximum (FWHM) in Å ($W=2w$) for the considered Ar XV lines are calculated for a range of electron temperatures from $5\times 10^{5}$ K to $2\times 10^{6}$ K and for an electron density of 10$^{20}$ cm$^{-3}$. We choose four lines involving the ground level (1s$^{2}$2s$^{2}$ $^{1}$S$_{0}-$1s$^{2}$2s$n$p $^{1}$P$^{\rm o}_{1}$ where $2\leq n \leq5$) and two others involving excited levels (1s$^{2}$2s2p $^{1}$P$^{\rm o}_{1}-$1s$^{2}$2s3s $^{1}$S$_{0}$ and 1s$^{2}$2s2p $^{3}$P$^{\rm o}_{0}-$1s$^{2}$2s3s $^{3}$S$_{1}$). Calculations are based on the quantum mechanical and the semiclassical perturbation formalisms.

Structure and electron scattering data
--------------------------------------

The configurations used in the atomic structure description are 1s$^{2}$(2s$^{2}$, 2s2p, 2p$^{2}$, 2s$nl$) where $3\leq n \leq 5$ and $l=$ s, p, d. This set of configurations gives rise to 118 fine structure levels. In the code SST, the wave functions are determined by diagonalization of the non-relativistic Hamiltonian using orbitals calculated in a scaled Thomas-Fermi-Dirac-Amaldi (TFDA) potential. The scaling parameters of this potential ($\lambda_{l}$) have been obtained by a self-consistent energy minimization procedure, in our case on all term energies of the 21 configurations. Relativistic corrections (spin-orbit, mass, Darwin and one-body) are also introduced in SST. We compare our energy levels and oscillator strengths to those published by @Bhatia08 and in the NIST database [@NIST12]. This preliminary comparison is important since the accuracy of the atomic structure (especially of the oscillator strengths) is a prerequisite for the accuracy of the line broadening results. We present in Table \[tab1\] energy levels for the lowest 20 levels, belonging to the configurations 1s$^{2}$(2s$^{2}$, 2s2p, 2p$^{2}$, 2s3s, 2s3p, 2s3d).
Our energies are compared to the experimental ones [@Edlen83; @Edlen85; @Khardi94; @Lepson03], to the 27-configuration model of @Bhatia08 and to the NIST [@NIST12] values, and an excellent agreement (the difference is less than 1 %) has been found with the three sets of results, showing that our 21-configuration model provides acceptable atomic structure data. Oscillator strengths for some transitions from the first five levels to the lowest ten levels (belonging to the configurations 2s$^{2}$, 2s2p and 2p$^{2}$) are presented in Table \[tab2\] and compared to the 27-configuration model of @Bhatia08. The relative difference between the two results is about 10 %. We can conclude from the preceding comparisons that our atomic structure study is sufficiently accurate to be adopted in the scattering problem and thus in the line broadening calculations. Collision strengths for the same transitions as for the oscillator strengths are presented in Table \[tab2\]. A comparison has been made with the 27-configuration results of @Bhatia08 and an overall reasonable agreement has been found between the two results. In some cases, notable differences appear, especially at the energy 180 Ry. We can note the case of the transition $1-5$ (2s$^{2}$ $^{1}$S$_{0}-$2s2p $^{1}$P$^{\rm o}_{1}$), which is an optically allowed transition; it is shown in @Elabidi12 that for such transitions, whose energy difference $\Delta E$ is very small, collision strengths cannot converge at low total angular momentum $J^{T}$, especially at high electron energies. In our line broadening calculations (Eq. \[integw\]), we use the imaginary and the real parts of the scattering matrices, and these parameters are related to the corresponding collision strengths. Consequently, the accuracy of the collision strengths presented in Table \[tab2\] is very important for the accuracy of our line broadening data.
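As a simple illustration of this comparison, the relative differences between the present energies and the NIST values can be computed directly from a few entries of Table \[tab1\] (levels 2, 12 and 20, transcribed from the table):

```python
# Energies (cm^-1) transcribed from Table [tab1] for three
# representative levels; "present" = this work, "nist" = NIST values.
present = {2: 228727.0, 12: 3983232.0, 20: 4158547.0}
nist    = {2: 228684.0, 12: 3980760.0, 20: 4149860.0}

for i in sorted(present):
    rel = abs(present[i] - nist[i]) / nist[i]
    print(f"level {i:2d}: relative difference {100 * rel:.3f} %")
```

For these three levels the differences are well below 1 %; the same kind of check can be repeated for any row of the table.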
Line broadening data
--------------------

We present in Table \[tab3\] widths of the four lines 2s$^{2}$ $^{1}$S$_{0}-$2s$n$p $^{1}$P$^{\rm o}_{1}$ with $2\leq n \leq 5$ involving the ground level, and in Table \[tab4\] widths of the two lines 2s2p $^{1}$P$^{\rm o}_{1}-$2s3s $^{1}$S$_{0}$ and 2s2p $^{3}$P$^{\rm o}_{0}-$2s3s $^{3}$S$_{1}$ involving excited levels. Calculations are based on the quantum mechanical (Q) and the semiclassical perturbation (SCP) formalisms. We note that for all these transitions the SCP results are higher than the quantum ones, with an average relative difference of about 70 %. We note also that, except for the resonance line, the Q and SCP results become closer to each other as the principal quantum number $n$ increases. Table \[tab4\] shows that for transitions that do not involve the ground level, the SCP results are no longer higher than the quantum ones. We found also that the disagreement between the two results is smaller for these transitions: the quantum results are about 33 % higher than the SCP ones. To explain at least a part of this behaviour of the line widths with the principal quantum number $n$, we present in Table \[tab5\] the contributions of strong collisions and those of the quadrupolar potential for all the considered transitions. We note that, in general (except for the resonance line), the contributions of strong collisions and those of the quadrupolar potential are important for transitions involving levels with low principal quantum number $n$. Contributions of inelastic collisions due to more distant collisions are quite negligible. We found also that when the contributions of strong and close collisions, and thus the contributions of the elastic collisions due to the quadrupolar potential, are dominant, the disagreement between the SCP and the quantum results is important.
For example, for transitions between excited levels (the last two in Table \[tab5\]), the relative difference between the SCP and the quantum results is about 25 %. In these cases, the strong collisions and the quadrupolar potential have the lowest contributions (about 35 % and 56 % respectively) compared to the four other transitions. This behaviour can be explained by the use of the hydrogenic model for the atomic structure in the SCP formalism to evaluate the quadrupolar potential. It is known that this approach overestimates the corresponding contributions to line widths.

Conclusion
==========

In this work, we have calculated atomic structure data (energy levels and oscillator strengths), collision strengths and electron impact broadening for Ar XV. To check their accuracy, comparisons of our level energies with the experimental [@Edlen83; @Edlen85; @Khardi94; @Lepson03] and theoretical [@Bhatia08; @NIST12] results have been performed, and a relative difference of about 1 % has been found. Our oscillator strengths have been compared to those of @Bhatia08, and we found that the two results agree within 10 %. For collision strengths, an overall agreement has been found between our results and those of @Bhatia08. This shows that we can trust our preliminary data and that they can be used with confidence in our line broadening calculations. For line broadening, several important results can be derived from our study. Firstly, we find that the disagreement between the semiclassical and the quantum results is important when the contributions of strong and close collisions, and thus the contributions of the elastic collisions due to the quadrupolar potential, are dominant. In these cases, the semiclassical results are always higher than the quantum ones. Secondly, we remark that the contributions of such elastic collisions are important for transitions involving levels with low principal quantum numbers $n$ (except for the resonance line).
Another point is that for transitions that do not involve the ground level (for which the contributions of strong collisions are the smallest), the SCP results are no longer higher than the quantum ones. Finally, we can explain the overestimation of the semiclassical line widths compared to the quantum ones by the fact that the semiclassical formalism uses the hydrogenic approximation to evaluate the quadrupolar potential. Further work on the strong collision contributions to line widths and their behaviour with ionization stage along some isoelectronic sequences would be welcome, in order to investigate their effects on line broadening and to study their evaluation in the semiclassical formalism.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work has been supported by the Tunisian research unit 05/UR/12-04. It is also a part of the project 176002 “Influence of collisional processes on astrophysical plasma line shapes” supported by the Ministry of Education, Science and Technological Development of Serbia.

Alexiou, S. & Lee, R.W., Semiclassical calculations of line broadening in plasmas: Comparison with quantal results, J. Quant. Spectrosc. Radiat. Transfer, 99, 10-20, 2006.

Barstow, M.A., Hubeny, I. & Holberg, J.B., The effect of photospheric heavy elements on the hot DA white dwarf temperature scale, Mon. Not. R. Astron. Soc., 299, 520-534, 1998.

Bhatia, A.K. & Landi, E., Atomic data and spectral line intensities for Ar XV, At. Data Nucl. Data Tables, 94, 223-256, 2008.

Dimitrijević, M.S., Stark broadening in astrophysics (Applications of Belgrade school results and collaboration with former soviet republics), Astronomical and Astrophysical Transactions, 22, 389-412, 2003.

Dimitrijević, M.S. & Sahal-Bréchot, S., Stark broadening of neutral helium lines, J. Quant. Spectrosc. Radiat. Transfer, 31, 301-313, 1984.

Dimitrijević, M.S. & Sahal-Bréchot, S., Stark broadening of Mg I spectral lines, Physica Scripta, 52, 41-51, 1995.

Dimitrijević, M.S.
et al., On the Stark broadening of Ar XV spectral lines, Proceedings of the VII Bulgarian-Serbian Astronomical Conference (VII BSAC), 1-4 June 2010, Chepelare, Bulgaria, Publ. Astron. Soc. “Rudjer Bošković”, vol. 11, pp. 243-246, 2012.

Edlen, B., Comparison of Theoretical and Experimental Level Values of the n = 2 Complex in Ions Isoelectronic with Li, Be, O and F, Physica Scripta, 28, 51-67, 1983.

Edlen, B., A Note on the 2p$^{2}$ Configuration in the Beryllium Isoelectronic Sequence, Physica Scripta, 32, 86-88, 1985.

Eissner, W., The UCL distorted wave code, Comput. Phys. Commun., 114, 295-341, 1998.

Eissner, W., Jones, M. & Nussbaumer, H., Techniques for the calculation of atomic structures and radiative data including relativistic corrections, Comput. Phys. Commun., 8, 270-306, 1974.

Elabidi, H., Ben Nessib, N., Cornille, M., Dubau, J. & Sahal-Bréchot, S., Quantum-mechanical Calculations of Ne VII Spectral Line Widths, Spectral Line Shapes in Astrophysics (Sremski Karlovci, Serbia), edited by L. Č. Popović & M. S. Dimitrijević, AIP Conf. Proc., No. 938, pp. 272-275, 2007.

Elabidi, H., Ben Nessib, N., Cornille, M., Dubau, J. & Sahal-Bréchot, S., Electron impact broadening of spectral lines in Be-like ions: quantum calculations, J. Phys. B: At. Mol. Opt. Phys., 41, n$^{\rm o}$ 025702, 2008a.

Elabidi, H., Ben Nessib, N. & Sahal-Bréchot, S., Quantum mechanical calculations of the electron-impact broadening of spectral lines for intermediate coupling, J. Phys. B: At. Mol. Opt. Phys., 37, 63-71, 2004.

Elabidi, H., Ben Nessib, N. & Sahal-Bréchot, S., Quantum calculations of Stark broadening of Li-like ions; T- and Z-scaling, Spectral Line Shapes (Valladolid, Spain), edited by M. A. González & M. A. Gigosos, AIP Conf. Proc. No. 1058, pp. 146-148, 2008b.

Elabidi, H. & Sahal-Bréchot, S., Checking the dependence on the upper level ionization potential of electron impact widths together with corresponding quantum calculations, Eur. Phys. J. D, 61, 285-290, 2011.
Elabidi, H., Sahal-Bréchot, S. & Ben Nessib, N., Quantum Stark broadening of 3s–3p spectral lines in Li-like ions; Z-scaling and comparison with semi-classical perturbation theory, Eur. Phys. J. D, 54, 51-64, 2009.

Elabidi, H., Sahal-Bréchot, S. & Ben Nessib, N., Fine structure collision strengths for S VII lines, Phys. Scr., 85, 065302, 2012.

Elabidi, H., Sahal-Bréchot, S., Dimitrijević, M.S. & Ben Nessib, N., Quantum Stark broadening data for the C IV, N V, O VI, F VII and Ne VIII resonance doublets, Mon. Not. R. Astron. Soc., 417, 2624-2630, 2011.

Fleurier, C., Sahal-Bréchot, S. & Chapelle, J., Stark profiles of some ion lines of alkaline earth elements, J. Quant. Spectrosc. Radiat. Transfer, 17, 595-604, 1977.

Khardi, S. et al., Beam-foil spectroscopy in the extreme UV of highly ionized silicon Si XI and the isoelectronic ions Al X, S XIII and Ar XV, Physica Scripta, 49, 571-577, 1994.

Kramida, A., Ralchenko, Yu.V., Reader, J., and NIST ASD Team (2012). NIST Atomic Spectra Database (ver. 5.0), \[Online\]. Available: http://physics.nist.gov/asd.

Lepson, J.K., Beiersdorfer, P., Behar, E. & Kahn, S.M., Emission-Line Spectra of Ar IX-Ar XVI in the Soft X-Ray Region 20-50 Å, Astrophys. J., 590, 604-607, 2003.

Ralchenko, Yu.V., Griem, H.R. & Bray, I., Electron-impact broadening of the 3s-3p lines in low-Z Li-like ions, J. Quant. Spectrosc. Radiat. Transfer, 81, 371-384, 2003.

Ralchenko, Yu.V., Griem, H.R., Bray, I. & Fursa, D.V., Electron collisional broadening of 2s3s-2s3p lines in Be-like ions, J. Quant. Spectrosc. Radiat. Transfer, 71, 595-607, 2001.

Sahal-Bréchot, S., Impact Theory of the Broadening and Shift of Spectral Lines due to Electrons and Ions in a Plasma, A&A, 1, 91-123, 1969a.

Sahal-Bréchot, S., Impact Theory of the Broadening and Shift of Spectral Lines due to Electrons and Ions in a Plasma (Continued), A&A, 2, 322-354, 1969b.

Sahal-Bréchot, S., Stark Broadening of Isolated Lines in the Impact Approximation, A&A, 35, 319-321, 1974.
Saraph, H.E., Fine structure cross sections from reactance matrices - a more versatile development of the program JAJOM, Comput. Phys. Commun., 15, 247-258, 1978.

Werner, K., Rauch, T. & Kruk, J.W., Discovery of photospheric argon in very hot central stars of planetary nebulae and white dwarfs, A&A, 466, 317-322, 2007.

----- ------------------ ---------------------------- --------- --------- ----------- -----------------
$i$ Conf. Level Present Exp. NIST Bhatia08
1 1s$^{2}$2s$^{2}$ $^{1}$S$_{0}$ 0 0 0
2 1s$^{2}$2s2p $^{3}$P$_{0}^{\mathrm{o}}$ 228727 228674 228684 229202
3 1s$^{2}$2s2p $^{3}$P$_{1}^{\mathrm{o}}$ 236470 235863 235860.2 236662
4 1s$^{2}$2s2p $^{3}$P$_{2}^{\mathrm{o}}$ 253842 252683 252679.6 254115
5 1s$^{2}$2s2p $^{1}$P$_{1}^{\mathrm{o}}$ 459530 452212 452182 459911
6 1s$^{2}$2p$^{2}$ $^{3}$P$_{0}$ 608399 604961 604917 609224
7 1s$^{2}$2p$^{2}$ $^{3}$P$_{1}$ 618807 615128 615140 619718
8 1s$^{2}$2p$^{2}$ $^{3}$P$_{2}$ 633295 628292 628308 633409
9 1s$^{2}$2p$^{2}$ $^{1}$D$_{2}$ 698851 689621 699392
10 1s$^{2}$2p$^{2}$ $^{1}$S$_{0}$ 854805 840612 840620 855441
11 1s$^{2}$2s3s $^{3}$S$_{1}$ 3938369 3935000 3938375
12 1s$^{2}$2s3s $^{1}$S$_{0}$ 3983232 3980000 3980760 3981941
13 1s$^{2}$2s3p $^{3}$P$_{1}^{\mathrm{o}}$ 4044723 4042037 4044306
14 1s$^{2}$2s3p $^{3}$P$_{0}^{\mathrm{o}}$ 4046486 0 4045888
15 1s$^{2}$2s3p $^{1}$P$_{1}^{\mathrm{o}}$ 4051014 4042600 4042040 4050223
16 1s$^{2}$2s3p $^{3}$P$_{2}^{\mathrm{o}}$ 4053511 4050500 4052584
17 1s$^{2}$2s3d $^{3}$D$_{1}$ 4111931 0 4106160 4110053
18 1s$^{2}$2s3d $^{3}$D$_{2}$ 4112940 0 4113330\* 4111049
19 1s$^{2}$2s3d $^{3}$D$_{3}$ 4114464 4110000 4109660\* 4112559
20 1s$^{2}$2s3d $^{1}$D$_{2}$ 4158547 4150000 4149860 4155932
----- ------------------ ---------------------------- --------- --------- ----------- -----------------

\[tab1\]

: Our energies in cm$^{-1}$ (Present) for the lowest 20 levels of Ar XV compared to other results.
Exp: experimental energies in @Edlen83 [@Edlen85; @Khardi94; @Lepson03], taken from @Bhatia08. NIST: energies from the database NIST [@NIST12]. Bhatia08: energies calculated with a 27-configuration model [@Bhatia08]. $i$ labels the 20 levels. The NIST energies of the two levels 18 and 19 (marked by asterisks) are inverted compared to all the other results.

------------ ------------- ------------- ------------- ------------- -------------- ----------------------
Transition $i-j$ Present Bhatia08 Present Bhatia08 Present Bhatia08
$1-2$ 2.898E$-$03 3.064E$-$03 3.530E$-$04 3.596E$-$04
$1-3$ 2.290E$-$04 1.203E$-$04 8.693E$-$03 1.146E$-$02 4.256E$-$03 4.305E$-$03
$1-4$ 1.449E$-$02 1.507E$-$02 1.735E$-$03 1.765E$-$03
$1-5$ 2.090E$-$01 2.093E$-$01 6.753E$-$01 6.682E$-$01 7.623E$-$01 1.197E$+$00
$1-6$ 1.510E$-$04 1.595E$-$04 2.600E$-$05 2.766E$-$05
$1-7$ 4.520E$-$04 3.878E$-$04 2.100E$-$05 2.207E$-$05
$1-8$ 7.540E$-$04 8.283E$-$04 2.150E$-$04 2.274E$-$04
$1-9$ 3.374E$-$03 4.802E$-$03 4.585E$-$03 4.848E$-$03
$1-10$ 1.317E$-$03 1.310E$-$03 1.021E$-$03 1.192E$-$03
$2-3$ 3.131E$-$02 3.289E$-$02 3.066E$-$03 3.085E$-$03
$2-4$ 2.220E$-$02 2.172E$-$02 1.776E$-$02 1.764E$-$02
$2-5$ 7.731E$-$03 7.963E$-$03 5.970E$-$04 6.016E$-$04
$2-6$ 1.771E$-$03 1.988E$-$03 2.140E$-$04 2.170E$-$04
$2-7$ 8.158E$-$02 8.167E$-$02 3.286E$-$01 3.192E$-$01 3.536E$-$01 5.789E$-$01
$2-8$ 2.214E$-$03 3.664E$-$03 3.900E$-$04 4.094E$-$04
$2-9$ 4.695E$-$03 3.662E$-$03 3.850E$-$04 3.929E$-$04
$2-10$ 5.490E$-$04 4.368E$-$04 3.600E$-$05 3.616E$-$05
$3-4$ 8.910E$-$02 8.868E$-$02 4.333E$-$02 4.340E$-$02
$3-5$ 2.319E$-$02 2.417E$-$02 2.074E$-$03 2.091E$-$03
$3-6$ 7.775E$-$02 7.792E$-$02 3.286E$-$01 3.242E$-$01 5.866E$-$01 5.880E$-$01
$3-7$ 5.972E$-$02 5.984E$-$02 2.534E$-$01 2.466E$-$01 2.655E$-$01 4.361E$-$01
$3-8$ 1.041E$-$01 1.041E$-$01 4.158E$-$01 4.025E$-$01 4.409E$-$01 7.225E$-$01
$3-9$ 7.344E$-$04 7.387E$-$04 1.409E$-$02 1.574E$-$02 4.123E$-$03 5.680E$-$03
$3-10$ 6.809E$-$05 6.917E$-$05 1.646E$-$03 1.655E$-$03 2.950E$-$04 3.828E$-$04
$4-5$ 3.866E$-$02 4.095E$-$02 3.162E$-$03 3.181E$-$03
$4-6$ 2.214E$-$03 2.015E$-$03 2.150E$-$04 2.326E$-$04
$4-7$ 9.494E$-$02 8.517E$-$02 4.158E$-$01 4.095E$-$01 4.432E$-$01 7.369E$-$01
$4-8$ 2.871E$-$01 2.871E$-$02 1.245E$+$00 1.164E$+$00 1.281E$+$00 2.116E$+$00
$4-9$ 1.170E$-$02 1.168E$-$02 2.348E$-$03 6.612E$-$02 4.662E$-$02 7.232E$-$02
$4-10$ 2.743E$-$03 3.441E$-$03 3.060E$-$04 3.103E$-$04
$5-6$ 2.182E$-$04 2.214E$-$04 2.790E$-$03 5.106E$-$03 2.934E$-$03 5.598E$-$03
$5-7$ 6.686E$-$05 6.663E$-$05 8.371E$-$03 9.420E$-$03 1.693E$-$03 2.430E$-$03
$5-8$ 4.230E$-$03 4.198E$-$03 1.395E$-$02 6.147E$-$02 4.492E$-$02 8.579E$-$02
$5-9$ 2.363E$-$01 2.356E$-$01 1.776E$+$00 1.660E$+$00 1.697E$+$00 3.127E$+$00
$5-10$ 1.485E$-$01 1.490E$-$01 5.802E$-$01 5.506E$-$01 6.374E$-$01 1.036E$+$00
------------ ------------- ------------- ------------- ------------- -------------- ----------------------

\[tab2\]

: Weighted oscillator strengths $gf$ and collision strengths $\Omega$ for transitions from the lowest five levels to the lowest ten ones. Present: the present results, Bhatia08: calculated values from @Bhatia08 with the 27-configuration model.
Transition   T($10^{5}$ K)   $Q$ ($10^{-3}$Å)   $SCP$ ($10^{-3}$Å)   $\frac{SCP}{Q}$
---------------------------------------------------------------------------- --------------- ------------------ -------------------- -----------------
$1s^{2}2s^{2}$ $^{1}\mathrm{S}_{0}-2s2p$ $^{1}\mathrm{P}_{1}^{\mathrm{o}}$   5     8.550   14.3    1.67
$\lambda =221.15$ Å   7.5   7.660   11.7    1.53
                      10    7.040   10.2    1.45
                      20    5.620   7.46    1.33
$1s^{2}2s^{2}$ $^{1}\mathrm{S}_{0}-2s3p$ $^{1}\mathrm{P}_{1}^{\mathrm{o}}$   5     0.242   0.455   1.88
$\lambda =24.7$ Å     7.5   0.209   0.373   1.78
                      10    0.188   0.325   1.73
                      20    0.141   0.235   1.67
$1s^{2}2s^{2}$ $^{1}\mathrm{S}_{0}-2s4p$ $^{1}\mathrm{P}_{1}^{\mathrm{o}}$   5     0.498   0.758   1.52
$\lambda =18.8$ Å     7.5   0.398   0.634   1.59
                      10    0.338   0.560   1.66
                      20    0.224   0.422   1.88
$1s^{2}2s^{2}$ $^{1}\mathrm{S}_{0}-2s5p$ $^{1}\mathrm{P}_{1}^{\mathrm{o}}$   5     0.884   1.37    1.55
$\lambda =16.95$ Å    7.5   0.693   1.17    1.69
                      10    0.580   1.04    1.79
                      20    0.370   0.805   2.18

\[tab3\]

: Present quantum (Q) and semiclassical (SCP) line widths of some Ar XV transitions involving the ground level.

Transition   T($10^{5}$ K)   $Q$ ($10^{-3}$Å)   $SCP$ ($10^{-3}$Å)   $\frac{SCP}{Q}$
------------------------------------------------------------------------- --------------- ------------------ -------------------- -----------------
$1s^{2}2s2p$ $^{3}\mathrm{P}_{0}^{\mathrm{o}}-2s3s$ $^{3}\mathrm{S}_{1}$   5     0.331   0.242   0.73
$\lambda =27.0$ Å     7.5   0.273   0.202   0.74
                      10    0.236   0.177   0.75
                      20    0.164   0.133   0.81
$1s^{2}2s2p$ $^{1}\mathrm{P}_{1}^{\mathrm{o}}-2s3s$ $^{1}\mathrm{S}_{0}$   5     0.363   0.280   0.77
$\lambda =28.4$ Å     7.5   0.309   0.232   0.75
                      10    0.273   0.204   0.75
                      20    0.197   0.152   0.77

\[tab4\]

: Present quantum (Q) and semiclassical (SCP) line widths of two Ar XV transitions between excited levels.
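As a small consistency check, the average overestimate of the SCP widths for the ground-level transitions quoted in the text (about 70 %) can be recovered by averaging the $SCP/Q$ column of Table \[tab3\]:

```python
# SCP/Q ratios transcribed from Table [tab3] (four temperatures for
# each of the four ground-level transitions).
ratios = [1.67, 1.53, 1.45, 1.33,   # 2s^2 1S0 - 2s2p 1P1 (resonance)
          1.88, 1.78, 1.73, 1.67,   # 2s^2 1S0 - 2s3p 1P1
          1.52, 1.59, 1.66, 1.88,   # 2s^2 1S0 - 2s4p 1P1
          1.55, 1.69, 1.79, 2.18]   # 2s^2 1S0 - 2s5p 1P1
mean_excess = sum(ratios) / len(ratios) - 1.0
print(f"mean SCP excess over Q: {100 * mean_excess:.0f} %")  # prints "68 %"
```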
Transition   strong/total$(\%)$   quad/total$(\%)$   $\frac{\left\vert SCP-Q\right\vert }{Q}(\%)$
--------------------------------------------------------------------- -------------------- ------------------ ----------------------------------------------
$2s^{2}$ $^{1}\mathrm{S}_{0}-2s2p$ $^{1}\mathrm{P}_{1}^{\mathrm{o}}$   $42$   $65$   $67$
$2s^{2}$ $^{1}\mathrm{S}_{0}-2s3p$ $^{1}\mathrm{P}_{1}^{\mathrm{o}}$   $56$   $88$   $88$
$2s^{2}$ $^{1}\mathrm{S}_{0}-2s4p$ $^{1}\mathrm{P}_{1}^{\mathrm{o}}$   $45$   $71$   $52$
$2s^{2}$ $^{1}\mathrm{S}_{0}-2s5p$ $^{1}\mathrm{P}_{1}^{\mathrm{o}}$   $38$   $60$   $55$
$2s2p$ $^{3}\mathrm{P}_{0}^{\mathrm{o}}-2s3s$ $^{3}\mathrm{S}_{1}$     $35$   $56$   $27$
$2s2p$ $^{1}\mathrm{P}_{1}^{\mathrm{o}}-2s3s$ $^{1}\mathrm{S}_{0}$     $36$   $56$   $23$

\[tab5\]

: Strong collisions (strong) and quadrupolar potential (quad) contributions to line widths.
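The trend discussed in the text can also be read off Table \[tab5\] numerically; the split at 38 % below is simply an ad-hoc way of separating the four ground-level transitions from the two transitions between excited levels:

```python
# Percentages transcribed from Table [tab5]: strong-collision
# contribution and relative SCP-quantum difference per transition.
strong = [42, 56, 45, 38, 35, 36]   # strong/total (%)
diff   = [67, 88, 52, 55, 27, 23]   # |SCP - Q|/Q (%)

# Ad-hoc split: the four ground-level lines all have strong >= 38 %.
ground  = [d for s, d in zip(strong, diff) if s >= 38]
excited = [d for s, d in zip(strong, diff) if s < 38]
print(sum(ground) / len(ground))    # 65.5
print(sum(excited) / len(excited))  # 25.0
```

The mean disagreement is much larger for the group with large strong-collision contributions, consistent with the discussion above.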
---
abstract: 'A chain of metallic particles, of sufficiently small diameter and spacing, allows linearly polarized plasmonic waves to propagate along the chain. In this paper, we consider how these waves are altered by an anisotropic host (such as a nematic liquid crystal) or an applied magnetic field. In a liquid crystalline host, with principal axis (director) oriented either parallel or perpendicular to the chain, we find that the dispersion relations of both the longitudinal ($L$) and transverse ($T$) modes are significantly altered relative to those of an isotropic host. Furthermore, when the director is perpendicular to the chain, the doubly degenerate $T$ branch is split by the anisotropy of the host material. With an applied magnetic field ${\bf B}$ parallel to the chain, the propagating transverse modes are circularly polarized, and the left and right circularly polarized branches have slightly different dispersion relations. As a result, if a linearly polarized transverse wave is launched along the chain, it undergoes Faraday rotation. For parameters approximating those of a typical metal and for a field of 2 T, the Faraday rotation is of order 1$^o$ per ten interparticle spacings, even taking into account single-particle damping. If ${\bf B}$ is perpendicular to the chain, one of the $T$ branches mixes with the $L$ branch to form two elliptically polarized branches. Our calculations include single-particle damping and can, in principle, be generalized to include radiation damping. The present work suggests that the dispersion relations of plasmonic waves on a chain of nanoparticles can be controlled by immersing the chain in a nematic liquid crystal and varying the director axis, or by applying a magnetic field.'
author:
- 'N. A. Pike and D.
Stroud'
title: 'Plasmonic Waves on a Chain of Metallic Nanoparticles: Effects of a Liquid Crystalline Host or an Applied Magnetic Field'
---

Introduction
============

The optical properties of small metal particles have been of interest to physicists since the time of Maxwell[@maxwell]. Such particles, if subjected to light of wavelength much larger than their linear dimensions, exhibit optical resonances due to localized electronic excitations known as “particle” or “surface” plasmons. These plasmons can give rise to characteristic absorption peaks, typically in the near-infrared or the visible, which may play an important role in the optical response of dilute suspensions of metal particles in a dielectric host[@pelton; @maier3; @solymar]. Because of recent advances in sample preparation, it has become possible to study [*ordered*]{} arrays of metal particles in a dielectric host[@meltzer; @maier2; @tang; @park]. In one-dimensional ordered arrays of such closely spaced particles, waves of plasmonic excitations can propagate along the chains, provided that the interparticle spacing is small compared to the wavelength of light[@koederink; @brong; @maier03; @park04; @plasmonchain; @alu06; @halas; @weber04; @simovski05; @abajo; @jain; @crozier]. In this limit, the electric field produced by the dipole moment of one nanoparticle induces dipole moments on the neighboring nanoparticles. The dispersion relations for both transverse ($T$) and longitudinal ($L$) plasmonic waves can then be calculated in the so-called quasistatic approximation[@brong; @maier03; @park04], in which the curl of the electric field is neglected. While this approximation neglects some significant coupling between the plasmonic waves and free photons[@weber04], it gives reasonable results over most of the Brillouin zone. Interest in such plasmonic waves has grown greatly in recent years[@park04; @plasmonchain]. In this paper, we extend the study of propagating plasmonic waves in two ways.
First, we calculate the dispersion relations for such plasmonic waves when the metallic chain is immersed in an anisotropic host, such as a nematic liquid crystal (NLC). Using a simple approximation, we show that both the $L$ and $T$ waves have modified dispersion relations when the director is parallel to the chain axis. If the director is perpendicular to that axis, we show that the previously degenerate $T$ branches are split into two separate branches. Second, we consider the effects of a static magnetic field applied either parallel or perpendicular to the chain. For the parallel case, we show that a linearly polarized $T$ wave is rotated as it propagates along the chain. For a field of 2 tesla and reasonable parameters for the metal, this Faraday rotation may be as large as 1-2$^o$ over ten interparticle spacings. A perpendicular field mixes together the $L$ branch and one of the $T$ branches, leading to two elliptically polarized branches. These results suggest that either an NLC host or an applied magnetic field could be used as an additional “control knob” to manipulate the properties of the propagating waves in some desired way. The remainder of this paper is organized as follows. In the next section, we present the formalism which allows one to calculate the dispersion relations for $L$ and $T$ waves in the presence of either an anisotropic host or an applied dc magnetic field. In Section III, we give simple numerical examples, and we follow this with a brief concluding discussion in Section IV.

Formalism
=========

Overview
--------

We consider a chain of identical metal nanoparticles, each a sphere of radius $a$, arranged in a one-dimensional periodic lattice along the $z$ axis. The n$^{th}$ particle is assumed to be centered at $(0, 0, nd)$ ($-\infty < n < + \infty$).
The propagation of plasmonic waves along such a chain of nanoparticles has already been considered extensively for the case of isotropic metal particles embedded in a homogeneous, isotropic medium[@brong]. Various works have considered the quasistatic case in which the electric field is assumed to be curl-free; this is roughly applicable when both the radius of the particles and the distance between them are small compared to the wavelength of light [@brong; @maier03; @park04]. The extension of such studies to include radiative corrections, i. e., to the case when the electric fields cannot be approximated as curl-free, has also been carried out; these corrections can be very important even in some long-wavelength regimes [@weber04]. Here we consider how the plasmon dispersion relations are modified when the particle chain is immersed in an anisotropic dielectric, such as an NLC, or subjected to an applied dc magnetic field. For the case of metallic particles immersed in an NLC, we assume that the host medium is a uniaxial dielectric, with principal dielectric constants $\epsilon_\perp$, $\epsilon_\perp$, and $\epsilon_\|$. For metal particles in the presence of an applied magnetic field, we take the host medium to be vacuum, with dielectric constant unity. In the absence of a magnetic field, the medium inside the particles is assumed to have a scalar dielectric function $\epsilon(\omega)$. If there is a magnetic field along the $z$ axis, the dielectric function of the particles becomes a tensor, whose components may be written $$\begin{aligned} \label{eq:dielectric_matrix} \epsilon_{xx}(\omega) = \epsilon_{yy}(\omega) = \epsilon_{zz}(\omega) = \epsilon(\omega) \nonumber \\ \epsilon_{xy}(\omega) = -\epsilon_{yx}(\omega) = iA(\omega),\end{aligned}$$ with all other components vanishing [@hui]. In the calculations below, we will assume that the nanoparticles are adequately described by a Drude dielectric function.
In this case, the components of the dielectric tensor take the form [@hui] $$\epsilon(\omega) = 1 - \frac{\omega_p^2}{\omega(\omega+ i/\tau )} \label{eq:epsw}$$ and $$A(\omega) = -\frac{\omega_p^2\tau}{\omega}\frac{(\omega_c\tau)}{(1-i\omega\tau)^2}. \label{eq:aw}$$ Here $\omega_p$ is the plasma frequency, $\tau$ is a relaxation time, and $\omega_c = eB/(mc)$ is the cyclotron frequency, where ${\bf B} = B\hat{z}$ is the magnetic field, $m$ is the electron mass, and $e$ is its charge. We will use Gaussian units throughout. In the limit $\omega\tau \rightarrow \infty$, we may write $$\begin{aligned} \label{eq:elements_dielectric} \epsilon(\omega) = 1 - \frac{\omega_p^2}{\omega^2}; \nonumber \\ A(\omega) = \frac{\omega_p^2\omega_c}{\omega^3}.\end{aligned}$$ Uniaxially Anisotropic Host --------------------------- We first assume that the host has a dielectric tensor $\epsilon_h$ with principal components $\epsilon_\perp$, $\epsilon_\perp$, and $\epsilon_\|$. Such a form is appropriate, for example, in a nematic liquid crystal below its nematic-to-isotropic transition. We begin by writing down the electric field at ${\bf x}$ due to a sphere with a polarization ${\bf P}({\bf x}^\prime)$. In component form, this field takes the form (see, e. g., Ref. [@stroud75]) $$E_i({\bf x}) = -\int{\cal G}_{ji}({\bf x} - {\bf x}^\prime)P_j({\bf x}^\prime)d^3x^\prime, \label{eq:polfield}$$ where repeated indices are summed over, and we use the fact that ${\cal G}_{ji} = {\cal G}_{ij}$. In eq. (\[eq:polfield\]), ${\bf P}({\bf x}^\prime) = (\epsilon - \epsilon_h){\bf E}({\bf x}^\prime)$ is the polarization of the metallic particle, $\epsilon$ is the dielectric function of the metal particle, and ${\bf{\cal G}}$ denotes a 3$\times$3 matrix whose elements are $${\cal G}_{ij} = \partial_i^\prime\partial_jG({\bf x}- {\bf x}^\prime), \label{eq:gfgrad2}$$ where $G({\bf x} - {\bf x}^\prime)$ is a Green’s function which satisfies the differential equation (see, e. g., Ref.
[@Stroud; @stroud75]) $${\bf \nabla}\cdot\epsilon_h{\bf \nabla}G({\bf x}- {\bf x}^\prime) =-\delta({\bf x} - {\bf x}^\prime).$$ If the host dielectric tensor is diagonal and uniaxial with diagonal components $\epsilon_\perp$, $\epsilon_\perp$ and $\epsilon_\|$, which we take for the moment to be parallel to the $x$, $y$, and $z$ axes respectively, this Green’s function is given by [@stroud75] $$G({\bf x}- {\bf x}^\prime) = \frac{1}{4\pi\epsilon_{\perp}\epsilon_{\|}^{1/2}} \left[\frac{(x-x^\prime)^2+(y-y^\prime)^2}{\epsilon_\perp} + \frac{(z-z^\prime)^2}{\epsilon_\|}\right]^{-1/2}. \label{eq:gfaniso}$$ Physically, $-{\cal G}_{ij}({\bf x} - {\bf x}^\prime)$ represents the i$^{th}$ component of electric field at ${\bf x}$ due to a unit point dipole oriented in the j$^{th}$ direction at ${\bf x}^\prime$, in the presence of the anisotropic host. The next step is to use this result to obtain a self-consistent equation for plasmonic waves along a chain immersed in an anisotropic host. To do this, we consider the polarization of the n$^{th}$ particle, which we write as ${\bf P}_n({\bf x}) = \delta\epsilon{\bf E}_{in,n}({\bf x})$, where ${\bf E}_{in,n}({\bf x})$ is the electric field within the n$^{th}$ particle and $\delta\epsilon=\epsilon-\epsilon_h$. This field, in turn, is related to the external field acting on the n$^{th}$ particle and arising from the dipole moments of all the other particles. We approximate this external field as uniform over the volume of the particle, and denote it ${\bf E}_{ext,n}$. This approximation should be reasonable if the particle radius is not too large compared to the separation between particles (in practice, an adequate condition is probably $a/d \leq 1/3$, where $a$ is the particle radius and $d$ the nearest neighbor separation). 
Then ${\bf E}_{in, n}$ and ${\bf E}_{ext,n}$ are related by [@Stroud] $${\bf E}_{in,n} = ({\bf 1} - {\bf \Gamma}\delta\epsilon)^{-1}{\bf E}_{ext,n}, \label{eq:einext}$$ where ${\bf \Gamma}$ is a “depolarization matrix” defined, for example, in Ref. [@stroud75]. ${\bf E}_{ext,n}$ is the field acting on the n$^{th}$ particle due to the dipoles produced by all the other particles, as given by eq. (\[eq:polfield\]). Hence, the dipole moment of the n$^{th}$ particle may be written $${\bf p}_n = \frac{4\pi}{3}a^3{\bf P}_{in,n} = \frac{4\pi}{3}a^3{\bf t}{\bf E}_{ext,n} \label{eq:pinn}$$ where $${\bf t} = \delta\epsilon\left({\bf 1}-{\bf \Gamma}\delta\epsilon\right)^{-1} \label{eq:tmatrix}$$ is a “t-matrix” describing the scattering properties of the metallic sphere in the surrounding material. Finally, we make the assumption that the portion of ${\bf E}_{ext,n}$ which comes from particle n$^\prime$ is obtained from eq. (\[eq:polfield\]) as if the spherical particle $n^\prime$ were a point particle located at the center of the sphere (this approximation should again be reasonable if $a/d \leq 1/3$). With this approximation, and combining eqs.  (\[eq:polfield\]), (\[eq:pinn\]), and (\[eq:tmatrix\]), we obtain the following self-consistent equation for coupled dipole moments: $${\bf p}_n = -\frac{4\pi a^3}{3}{\bf t}\sum_{n^\prime \neq n}{\cal G}({\bf x}_n - {\bf x}_{n^\prime}){\bf p}_{n^\prime}. \label{eq:selfconsist}$$ Let us first assume that the principal axis of the anisotropic host coincides with the chain direction, which we take as the $z$ axis. In this case, the $L$ and $T$ waves decouple and can be treated independently, because one of the principal axes of the ${\bf \Gamma}$ tensor coincides with the chain axis. First, we consider the longitudinally polarized waves. To find their dispersion relation, we need to calculate ${\cal G}_{zz}({\bf x}_n - {\bf x}_{n^\prime})$.
From the definition of this quantity, and from the fact that ${\bf x}_n = nd\hat{z} \equiv z_n\hat{z}$, we can readily show that ${\cal G}_{zz}({\bf x}_n - {\bf x}_{n^\prime})= -\frac{1}{2\pi\epsilon_\perp}\frac{1}{|z_n-z_{n^\prime}|^3}$. Hence, we obtain the following equations for the $p_{nz}$’s: $$p_{nz} =\frac{2}{3\epsilon_{\perp}}a^3\frac{\delta\epsilon_{\|}}{1-\Gamma_\|\delta\epsilon_{\|}}\sum_{n^\prime \neq n} \frac{p_{n^\prime z}}{|z_n-z_n^\prime|^3}. \label{eq:coupled}$$ For transverse modes, the relevant Green’s function takes the form ${\cal G}_{xx}({\bf x}_n - {\bf x}_{n^\prime}) = \frac{\epsilon_{\|}}{4\pi\epsilon_\perp^2}\frac{1}{|z_n-z_{n^\prime}|^3}$. The resulting equation for the dipole moments takes the form $$p_{nx} =-\frac{1}{3}\frac{\epsilon_{\|}}{\epsilon_\perp^2} a^3\frac{\delta\epsilon_\perp}{1-\Gamma_\perp\delta\epsilon_\perp} \sum_{n^\prime \neq n}\frac{p_{n^\prime,x}}{|z_n- z_{n^\prime}|^3}.$$ In the isotropic case with a vacuum host, $\epsilon_\|=\epsilon_\perp=1$ and $\Gamma_{xx}=\Gamma_{yy} = \Gamma_{zz} = -1/3$, and the equations for both the parallel and perpendicular cases reduce to the results obtained in Ref. [@brong] for both $L$ and $T$ modes, as expected. For an anisotropic host and only nearest neighbor interactions, the dispersion relation for the $T$ waves is implicitly given by $$1 = -\frac{2}{3}\frac{a^3}{d^3}\frac{\delta\epsilon_{\perp}}{1-\Gamma_{\perp}\delta\epsilon_{\perp}}\frac{\epsilon_{\|}}{\epsilon_{\perp}^2}\cos kd \label{eq:twave1}$$ and for the $L$ waves by $$1 = \frac{4}{3}\frac{a^3}{d^3}\frac{\delta\epsilon_{\|}}{1-\Gamma_{\|}\delta\epsilon_{\|}}\frac{1}{\epsilon_{\perp}}\cos kd. \label{eq:lwave1}$$ The forms of $\Gamma_{\perp}$ and $\Gamma_{\|}$ are well known (see, e. g., Ref. [@stroud75], where they are denoted $\Gamma_{xx}$ and $\Gamma_{zz}$).
We rewrite them here for convenience: $$\begin{aligned} \Gamma_{\|} & = & -\frac{1}{\epsilon_{\|}\lambda}\left[1 - \sqrt{1-\lambda}\frac{\sin^{-1}\sqrt{\lambda}}{\sqrt{\lambda}}\right] \nonumber \\ \Gamma_{\perp} & = & -\frac{1}{2} \left[\Gamma_{\|} + \frac{1}{\sqrt{\epsilon_{\perp}\epsilon_{\|}}}\frac{\sin^{-1}\sqrt{\lambda}}{\sqrt{\lambda}}\right] \label{eq:gammaxz}\end{aligned}$$ where $\lambda = 1 - \epsilon_{\perp}/\epsilon_\|$. If we assume that the metallic particle has a Drude dielectric function of the form $\epsilon(\omega) = 1-\omega_p^2/\omega^2$, then the dispersion relation for $T$ waves becomes $$\frac{\omega_t^2(k)}{\omega_p^2} = \frac{T(k)}{(1-\epsilon_{\perp})T(k)-1}, \label{eq:tdisp1}$$ where $$T(k)= \Gamma_{\perp} - \frac{2}{3}\frac{a^3}{d^3}\frac{\epsilon_{\|}}{\epsilon_{\perp}^2}\cos(kd),$$ and that of the $L$ waves is $$\frac{\omega_\ell^2(k)}{\omega_p^2} = \frac{L(k)}{(1-\epsilon_{\|})L(k) - 1}, \label{eq:ldisp1}$$ where $$L(k) = \Gamma_{\|} + \frac{4}{3}\frac{a^3}{d^3}\frac{1}{\epsilon_{\perp}}\cos(kd).$$ Eqs.  (\[eq:tdisp1\]) and (\[eq:ldisp1\]) neglect damping of the waves due to dissipation within the metallic particles. To include this effect, one can simply solve eq.  (\[eq:twave1\]) or (\[eq:lwave1\]) for $k(\omega)$, using the Drude function with a finite $\tau$. The resulting $k(\omega)$ will be complex in both cases; the inverse of the imaginary part of $k(\omega)$ will give the exponential decay length of the $T$ or $L$ wave along the chain. Now, let us repeat this calculation but with the principal axis of the liquid crystalline host parallel to the $x$ axis, while the chain itself again lies along the $z$ axis. 
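Before turning to the perpendicular-director case, we note that eqs. (\[eq:tdisp1\]) and (\[eq:ldisp1\]), together with the depolarization factors (\[eq:gammaxz\]), can be evaluated in a few lines. The following sketch (in Python; the function names are ours, not from any library) assumes a lossless Drude metal and $\lambda = 1 - \epsilon_\perp/\epsilon_\| \geq 0$:

```python
import math

def gammas(eps_perp, eps_par):
    """Depolarization factors (Gamma_par, Gamma_perp) of a sphere in a uniaxial
    host; assumes lam = 1 - eps_perp/eps_par >= 0 (as for E7)."""
    lam = 1.0 - eps_perp / eps_par
    if abs(lam) < 1e-12:  # isotropic limit: Gamma -> -1/(3*eps_h)
        return -1.0 / (3.0 * eps_par), -1.0 / (3.0 * eps_par)
    f = math.asin(math.sqrt(lam)) / math.sqrt(lam)
    g_par = -(1.0 / (eps_par * lam)) * (1.0 - math.sqrt(1.0 - lam) * f)
    g_perp = -0.5 * (g_par + f / math.sqrt(eps_perp * eps_par))
    return g_par, g_perp

def omega_t(kd, eps_perp, eps_par, a_over_d):
    """omega/omega_p of the T branch, director parallel to the chain;
    valid where the square-root argument is positive (inside the band)."""
    _, g_perp = gammas(eps_perp, eps_par)
    T = g_perp - (2.0 / 3.0) * a_over_d**3 * (eps_par / eps_perp**2) * math.cos(kd)
    return math.sqrt(T / ((1.0 - eps_perp) * T - 1.0))

def omega_l(kd, eps_perp, eps_par, a_over_d):
    """omega/omega_p of the L branch, director parallel to the chain."""
    g_par, _ = gammas(eps_perp, eps_par)
    L = g_par + (4.0 / 3.0) * a_over_d**3 * math.cos(kd) / eps_perp
    return math.sqrt(L / ((1.0 - eps_par) * L - 1.0))
```

In the vacuum limit $\epsilon_\perp = \epsilon_\| = 1$ these reduce to the nearest-neighbor results of Ref. [@brong], $\omega_t^2/\omega_p^2 = \frac{1}{3} + \frac{2}{3}(a/d)^3\cos kd$ and $\omega_\ell^2/\omega_p^2 = \frac{1}{3} - \frac{4}{3}(a/d)^3\cos kd$.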
The self-consistent equation for the dipole moments again takes the form (\[eq:selfconsist\]), but the diagonal elements of ${\cal G}$ are given by ${\cal G}_{ii}({\bf x}, {\bf x}^\prime) = \partial_i^\prime\partial_iG({\bf x}- {\bf x}^\prime)$, where now $$G({\bf x}- {\bf x}^\prime) = \frac{1}{4\pi\epsilon_\perp\epsilon_\|^{1/2}}\left[\frac{(x-x^\prime)^2}{\epsilon_\|} + \frac{(y-y^\prime)^2 + (z-z^\prime)^2}{\epsilon_\perp}\right]^{-1/2}.$$ For the case of interest, ${\bf x} = nd{\bf \hat{z}} \equiv z_n{\bf \hat{z}}$, ${\bf x}^\prime = n^\prime d{\bf \hat{z}} \equiv z_{n^\prime}{\bf \hat{z}}$, and one finds that ${\cal G}_{xx}({\bf x} - {\bf x}^\prime) = \frac{1}{4\pi}\frac{\epsilon_\perp^{1/2}}{\epsilon_\|^{3/2}}\frac{1}{|z_n-z_{n^\prime}|^3}$, ${\cal G}_{yy}({\bf x} - {\bf x}^\prime) = \frac{1}{4\pi}\frac{1}{\epsilon_\perp^{1/2}\epsilon_\|^{1/2}}\frac{1}{|z_n-z_{n^\prime}|^3}$ and ${\cal G}_{zz}({\bf x}-{\bf x}^\prime) = -\frac{1}{2\pi}\frac{1}{\epsilon_\perp^{1/2}\epsilon_\|^{1/2}}\frac{1}{|z_n-z_{n^\prime}|^3}$. The self-consistency condition determining the relation between $\omega$ and $k$ can be written out, for all three polarizations, including only nearest neighbor dipole-dipole interactions, in the form $1 = -\frac{8\pi}{3} a^3\frac{\delta\epsilon_{ii}}{1-\Gamma_{ii}\delta\epsilon_{ii}}{\cal G}_{ii}(d)\cos kd$.
Substituting in the values of ${\cal G}_{ii}$ for the three cases, we obtain $$\begin{aligned} 1 & = & -\frac{2a^3}{3d^3}\frac{\delta\epsilon_{xx}}{1-\Gamma_{xx}\delta\epsilon_{xx}}\frac{\epsilon_\perp^{1/2}}{\epsilon_\|^{3/2}}\cos kd \nonumber \\ 1 & = & -\frac{2a^3}{3d^3}\frac{\delta\epsilon_{yy}}{1-\Gamma_{yy}\delta\epsilon_{yy}}\frac{1}{\epsilon_\|^{1/2}\epsilon_\perp^{1/2}}\cos kd \nonumber \\ 1 & = & \frac{4a^3}{3d^3}\frac{\delta\epsilon_{zz}}{1-\Gamma_{zz}\delta\epsilon_{zz}}\frac{1}{\epsilon_\perp^{1/2}\epsilon_\|^{1/2}} \cos kd.\end{aligned}$$ Here $\Gamma_{xx} = \Gamma_\|$, $\Gamma_{yy} = \Gamma_{zz} = \Gamma_\perp$, where $\Gamma_\|$ and $\Gamma_\perp$ are given by eqs. (\[eq:gammaxz\]). Similarly, $\delta\epsilon_{xx} = \epsilon(\omega) - \epsilon_\|$, while $\delta\epsilon_{yy}=\delta\epsilon_{zz}=\epsilon(\omega)-\epsilon_\perp$. These equations can again be solved for $k(\omega)$ in the three cases, with or without a finite $\tau$, leading to dispersion relations with or without single-particle damping. Chain of Metallic Nanospheres in an External Magnetic Field ----------------------------------------------------------- Next, we turn to a chain of metallic nanoparticles in an external magnetic field, which we initially take to be parallel to the $z$ axis. For such a system, we assume that the metal dielectric tensor is of the form (\[eq:dielectric\_matrix\]), (\[eq:epsw\]) and (\[eq:aw\]). The cases of a dilute suspension of metallic particles[@hui], or of a random composite of ferromagnetic and non-ferromagnetic particles[@xia], have been treated previously. Once again, we take the chain of particles to lie along the $z$ axis, with the n$^{th}$ particle centered at $z_n= nd$. The self-consistent equation for the dipole moments is still eq.  (\[eq:selfconsist\]), but now the elements of both ${\cal G}$ and ${\bf \Gamma}$ are different from the case of an NLC host. 
For a chain of particles parallel to the $z$ axis, ${\cal G}$ is diagonal, with non-zero elements ${\cal G}_{xx}({\bf x} -{\bf x}^\prime) = {\cal G}_{yy}({\bf x}- {\bf x}^\prime) = \frac{1}{4\pi|z_n-z_{n^\prime}|^3}$, ${\cal G}_{zz}({\bf x} -{\bf x}^\prime) = -\frac{1}{2\pi|z_n-z_{n^\prime}|^3}$, where $z_n = nd$ and we have assumed that the host has a dielectric constant equal to unity. The tensor ${\bf \Gamma}$ is also diagonal, with nonzero elements $\Gamma_{ii} = -1/3$, i = 1, 2, 3. The quantity $\delta\epsilon = \epsilon(\omega) - 1$, where $\epsilon(\omega)$ is now the dielectric tensor of the metallic particle. Using eq. (\[eq:dielectric\_matrix\]) to evaluate this tensor, we obtain the following result for the tensor ${\bf t} \equiv \delta\epsilon[1-{\bf \Gamma}\delta\epsilon]^{-1}$, to first order in the quantity $A(\omega)$, which is assumed to be small: $$\begin{aligned} t_{zz} & = & \delta\epsilon_{zz}(1 - \Gamma_{zz}\delta\epsilon_{zz})^{-1} \nonumber \\ t_{xx}=t_{yy} & = & \delta\epsilon_{xx}(1-\Gamma_{xx}\delta\epsilon_{xx})^{-1} \nonumber \\ t_{xy} = -t_{yx} & = & \delta\epsilon_{xy}(1-\Gamma_{xx}\delta\epsilon_{xx})^{-2}.\end{aligned}$$ Using these expressions, we can now write out the self-consistent linear equations for the oscillating dipole moments and obtain dispersion relations for the modes. Once again, the longitudinal and transverse modes decouple. For the longitudinal modes, the self-consistent equation simplifies to $$p_{nz} = 2a^3\frac{\epsilon(\omega)-1}{\epsilon(\omega)+2}\sum_{n^\prime \neq n}\frac{p_{n^\prime z}}{|z_n-z_n^\prime|^3}. \label{eq:pzmag}$$ This is the same as the equation for the longitudinal modes in the absence of a magnetic field and gives the same dispersion relation.
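For a vacuum host and nearest-neighbor coupling, this longitudinal relation can be written in closed form: substituting $\epsilon(\omega) = 1 - \omega_p^2/\omega^2$ into eq. (\[eq:pzmag\]) gives $\omega^2/\omega_p^2 = [1 - 4(a/d)^3\cos kd]/3$. A minimal check (Python; the function name is ours):

```python
import math

def omega_L_mag(kd, a_over_d):
    """omega/omega_p of the longitudinal mode for a vacuum host with
    nearest-neighbor coupling; independent of a field along the chain."""
    return math.sqrt((1.0 - 4.0 * a_over_d**3 * math.cos(kd)) / 3.0)
```

At $kd = \pi/2$ this reduces to the single-sphere resonance $\omega/\omega_p = 1/\sqrt{3}$, as it should.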
For the transverse modes, the $x$ and $y$ components of the polarization are now coupled, and satisfy the equations \[writing ${\cal G}_{ij}({\bf x} - {\bf x}^\prime) = {\cal G}_{ij}(z_n - z_{n^\prime})$\] $$\begin{aligned} p_{nx} & = & -\frac{4\pi}{3}a^3 \sum_{n^\prime \neq n} \left[t_{xx}{\cal G}_{xx}(z_n-z_{n^\prime})p_{n^\prime, x} + t_{xy}{\cal G}_{yy}(z_n-z_{n^\prime})p_{n^\prime,y}\right] \nonumber \\ p_{ny} & = & -\frac{4\pi}{3}a^3 \sum_{n^\prime \neq n} \left[t_{yx}{\cal G}_{xx}(z_n-z_{n^\prime})p_{n^\prime, x} + t_{yy}{\cal G}_{yy}(z_n-z_{n^\prime})p_{n^\prime,y}\right] \label{eq:pxymag}\end{aligned}$$ We can simplify these equations using the fact that ${\cal G}_{xx}(z_n-z_{n^\prime})={\cal G}_{yy}(z_n-z_{n^\prime})$, $t_{xx}=t_{yy}$, and $t_{yx} = -t_{xy}$ to obtain $$p_{n,+} = -\frac{4\pi}{3}a^3t_{\perp,-}\sum_{n^\prime \neq n}{\cal G}(z_n-z_{n^\prime})p_{n^\prime,+} \label{eq:pxymag1}$$ and $$p_{n,-} = -\frac{4\pi}{3}a^3t_{\perp,+}\sum_{n^\prime \neq n}{\cal G}(z_n-z_{n^\prime})p_{n^\prime,-}, \label{eq:pxymag2}$$ where $t_{\perp,\pm} = t_{xx} \pm it_{xy}$ and $p_{n,\pm} = p_{nx} \pm ip_{ny}$. Thus, the equations for left- and right-circularly polarized waves are decoupled. To obtain explicit dispersion relations for left- and right-circularly polarized waves, we assume, as before, that $p_{n,\pm} = p_\pm\exp(iknd-i\omega t)$, and substitute the known forms for the quantities $t_{xx}$, $t_{xy}$, and ${\cal G}_{xx}(z_n-z_{n^\prime})$, with the following result: $$1 = -2\frac{a^3}{d^3}\left[\frac{\epsilon(\omega)-1}{\epsilon(\omega)+2} \pm \frac{3A(\omega)}{\left[\epsilon(\omega)+2\right]^2}\right]\sum_{n=1}^\infty\frac{\cos(nk_\pm d)}{n^3}. \label{eq:dispers1}$$ In the special case where we include dipolar interactions only between nearest neighbors, this relation becomes $$1 = -2\frac{a^3}{d^3}\left[\frac{\epsilon(\omega)-1}{\epsilon(\omega)+2}\pm \frac{3A(\omega)}{\left[\epsilon(\omega)+2\right]^2}\right]\cos(k_\pm d).
\label{eq:dispers2}$$ Since the frequency-dependence of both $\epsilon(\omega)$ and $A(\omega)$ is assumed known, these equations represent implicit relations between $\omega$ and $k_\pm$ for these transverse waves. By solving for $k_\pm(\omega)$ in eq. (\[eq:dispers1\]) or (\[eq:dispers2\]), one finds that left and right circularly polarized transverse waves propagating along the nanoparticle chain have slightly different wave vectors $k_+$ and $k_-$ for the same frequency $\omega$. Since a linearly polarized wave is composed of an equal fraction of right and left circularly polarized waves, this behavior corresponds to a [*rotation*]{} of the plane of polarization of a linearly polarized wave, as it propagates down the chain, and is analogous to the usual Faraday effect in a [*bulk*]{} dielectric. The angle of rotation per unit chain length may be written $$\label{eq:angle_single} \theta(\omega) = \frac{1}{2} \left[k_+(\omega) - k_-(\omega)\right].$$ In the absence of damping, $\theta$ is real. If $\tau$ is finite, the electrons in each metal particle will experience damping within each particle, leading to an exponential decay of the plasmons propagating along the chain. This damping is automatically included in the above formalism, and can be seen most easily if only nearest neighbor coupling is included. The quantity $$\theta(\omega) = \theta_1(\omega) + i\theta_2(\omega) \label{eq:thetatau}$$ is then the [*complex*]{} angle of rotation per unit length of a linearly polarized wave propagating along the chain of metal particles. By analogy with the interpretation of a complex $\theta$ in a homogeneous bulk material, Re$\theta(\omega)$ represents the angle of rotation of a linearly polarized wave (per unit length of chain), and Im$\theta(\omega)$ the corresponding Faraday ellipticity, i. e., the degree to which the initially linearly polarized wave becomes elliptically polarized as it propagates along the chain.
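With nearest-neighbor coupling, eq. (\[eq:dispers2\]) can be inverted for $k_\pm(\omega)$ using a complex arccosine, from which $\theta(\omega)$ follows directly. A sketch (Python; the function name is ours, and the principal branch of the arccosine is assumed):

```python
import cmath

def faraday_theta(omega, omega_c, a_over_d, omega_p=1.0, tau=None):
    """Complex rotation angle theta*d per interparticle spacing, from the
    nearest-neighbor relation for k_+ and k_-; tau=None is the lossless limit."""
    if tau is None:  # lossless Drude forms of eps(omega) and A(omega)
        eps = 1.0 - (omega_p / omega)**2
        A = omega_p**2 * omega_c / omega**3
    else:            # finite relaxation time
        eps = 1.0 - omega_p**2 / (omega * (omega + 1j / tau))
        A = -(omega_p**2 * tau / omega) * (omega_c * tau) / (1.0 - 1j * omega * tau)**2
    kd = []
    for sign in (+1.0, -1.0):
        bracket = (eps - 1.0) / (eps + 2.0) + sign * 3.0 * A / (eps + 2.0)**2
        kd.append(cmath.acos(-1.0 / (2.0 * a_over_d**3 * bracket)))
    return 0.5 * (kd[0] - kd[1])  # theta*d = (k_+ - k_-) d / 2
```

For $\omega/\omega_p = 0.57$, $\omega_c/\omega_p = 3.5\times 10^{-5}$, and $a/d = 1/3$ this yields $|{\rm Re}\,\theta(\omega) d|$ of order $10^{-3}$ rad, the scale found in the numerical examples below.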
For a magnetic field perpendicular to the chain (let us say, along the $x$ axis), the elements of the matrix ${\bf t}$ become $$\begin{aligned} t_{xx} =t_{yy}=t_{zz} & = & 3[\epsilon(\omega)-1]/[\epsilon(\omega)+2] \nonumber \\ t_{yz} = -t_{zy} &= &-3iA(\omega)[\epsilon(\omega)-1]/[\epsilon(\omega)+2]^2, \end{aligned}$$ with other elements equal to zero. The transverse waves polarized parallel to the $x$ axis are unaffected by the magnetic field, and are described by the equations $$p_{nx} = -\frac{4\pi}{3}a^3t_{xx}\sum_{n^\prime\neq n}{\cal G}_{xx}(z_n-z_{n^\prime})p_{n^\prime,x}.$$ The $y$ and $z$ polarized waves, with components $p_{ny}$ and $p_{nz}$, are coupled, however, and satisfy $$\begin{aligned} p_{ny} & = & -\frac{4\pi}{3}a^3\sum_{n^\prime\neq n}\left[t_{yy}{\cal G}_{yy}(z_n- z_{n^\prime})p_{n^\prime, y} + t_{yz}{\cal G}_{zz}(z_n-z_{n^\prime})p_{n^\prime, z}\right] \nonumber \\ p_{nz} & = & -\frac{4\pi}{3}a^3\sum_{n^\prime\neq n}\left[t_{zy}{\cal G}_{yy}(z_n-z_{n^\prime})p_{n^\prime, y} + t_{zz}{\cal G}_{zz}(z_n-z_{n^\prime})p_{n^\prime, z}\right]\end{aligned}$$ The tensor ${\cal G}$ is still diagonal, with the same nonzero elements as in the case of ${\bf B}$ parallel to the chain. Assuming propagating waves of the form $p_{ny} = p_{0y}\exp(inkd-i\omega t)$, $p_{nz} = p_{0z}\exp(inkd-i\omega t)$, we find that the amplitudes $p_{0y}$ and $p_{0z}$ satisfy the equations $$\begin{aligned} p_{0y} & = & -\frac{4\pi}{3}a^3\sum_{n^\prime \neq 0}\left[t_{yy}{\cal G}_{yy}(-z_{n^\prime})p_{0y} + t_{yz}{\cal G}_{zz}(-z_{n^\prime})p_{0z}\right]\exp(ikn^\prime d) \nonumber \\ p_{0z} & = & -\frac{4\pi}{3}a^3\sum_{n^\prime \neq 0}\left[-t_{yz}{\cal G}_{yy}(-z_{n^\prime})p_{0y}+t_{zz}{\cal G}_{zz}(-z_{n^\prime})p_{0z}\right]\exp(ikn^\prime d).
\label{eq:rotyz}\end{aligned}$$ In the special case where only nearest neighbor interactions are included, these equations simplify to $$\begin{aligned} p_{0y} & = &-\frac{8\pi}{3}a^3\left[t_{yy}{\cal G}_{yy}(d)p_{0y}+t_{yz}{\cal G}_{zz}(d)p_{0z}\right]\cos(kd) \nonumber \\ p_{0z} &= & -\frac{8\pi}{3}a^3\left[-t_{yz}{\cal G}_{yy}(d)p_{0y} + t_{zz}{\cal G}_{zz}(d)p_{0z}\right]\cos(kd). \label{eq:rotyz1}\end{aligned}$$ Substituting in the explicit forms of ${\cal G}_{yy}$ and ${\cal G}_{zz}$, we find that these equations take the form $$\begin{aligned} p_{0y} & = & \frac{a^3}{d^3}\left[-\frac{2}{3}t_{yy}p_{0y} + \frac{4}{3}t_{yz}p_{0z}\right]\cos kd \nonumber \\ p_{0z} & = & \frac{a^3}{d^3}\left[\frac{2}{3}t_{yz}p_{0y} + \frac{4}{3}t_{zz}p_{0z}\right]\cos kd. \label{eq:rotyz2}\end{aligned}$$ If we solve the pair of equations (\[eq:rotyz\]) or (\[eq:rotyz2\]) for $p_{0y}$ and $p_{0z}$ for a given value of $k$, we obtain nonzero solutions only if the determinant of the matrix of coefficients vanishes. For a given real frequency $\omega$, there will, in general, be two solutions for $k(\omega)$ which decay in the $+z$ direction. These correspond to two branches of propagating plasmon (or plasmon polariton) waves, with dispersion relations which we may write $k_1(\omega)$ and $k_2(\omega)$. The frequency dependence appears because both $t_{yz}$ and $t_{yy}$ depend on $\omega$ \[through $\epsilon(\omega)$ and $A(\omega)$\]. The corresponding solutions $(p_{0y}$, $p_{0z})$ are no longer linearly polarized but will instead be elliptically polarized. However, unlike the case where the magnetic field is parallel to the $z$ axis, the waves are not circularly polarized, and the two solutions are non-degenerate. Because $A(\omega)$ is usually small, the ellipse has a high eccentricity, and the change in propagation characteristics due to the magnetic field will usually be small for this magnetic field direction.
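Eliminating $p_{0y}$ and $p_{0z}$ from eqs. (\[eq:rotyz1\]) yields a vanishing-determinant condition which is quadratic in $(a/d)^3\cos kd$, so the two branches follow from the quadratic formula. A sketch for the lossless case (Python; the function name is ours, using the elements of ${\bf t}$ quoted above):

```python
import cmath

def perp_field_kd(omega, omega_c, a_over_d, omega_p=1.0):
    """k*d (complex) of the two mixed (y,z)-polarized branches for B along x,
    perpendicular to the chain; vacuum host, nearest neighbors, lossless Drude."""
    eps = 1.0 - (omega_p / omega)**2
    A = omega_p**2 * omega_c / omega**3
    t_d = 3.0 * (eps - 1.0) / (eps + 2.0)             # t_yy = t_zz
    t_od = -3.0j * A * (eps - 1.0) / (eps + 2.0)**2   # t_yz = -t_zy, as quoted in the text
    # Setting the 2x2 determinant to zero gives a quadratic in c = (a/d)^3 cos(kd):
    #   (8/9)(t_d^2 + t_od^2) c^2 + (2/3) t_d c - 1 = 0
    a2 = (8.0 / 9.0) * (t_d**2 + t_od**2)
    a1 = (2.0 / 3.0) * t_d
    disc = cmath.sqrt(a1**2 + 4.0 * a2)
    return [cmath.acos((-a1 + s * disc) / (2.0 * a2) / a_over_d**3) for s in (1.0, -1.0)]
```

At $B = 0$ the two roots reduce to the uncoupled conditions for the $L$ and $T$ branches, and a small field shifts both slightly while making the eigenvectors elliptical.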
Numerical Illustrations ======================= As a first numerical example, we calculate the plasmon dispersion relations for a chain of spherical Drude metal particles immersed in an NLC. We consider two cases: liquid crystal director parallel and perpendicular to the chain axis, which we take as the $z$ axis. For $\epsilon_\|$ and $\epsilon_\perp$, we take the values used in Ref. [@Park]. These are taken from experiments described in Ref. [@muller], which were carried out on the NLC known as E7. For comparison, we also show the corresponding dispersion relations for an isotropic host of dielectric constant which is arbitrarily taken as $\frac{1}{3}\epsilon_\| + \frac{2}{3}\epsilon_\perp = 2.5611$. The results of these calculations are shown in Figs. 1 and 2 in the absence of damping ($\tau \rightarrow \infty$ in the Drude expression). As can be seen, both the $L$ and $T$ dispersion relations are significantly altered when the host is a nematic liquid crystal rather than an isotropic dielectric; in particular, the widths of the $L$ and $T$ bands are changed. When the director is perpendicular to the chain axis, the two $T$ branches are split when the host is an NLC, whereas they are degenerate for an isotropic host, or an NLC host with director parallel to the chain. Next, we turn to the effects of an applied magnetic field on these dispersion relations for the case ${\bf B} \| {\bf \hat{z}}$. The $L$ waves are unaffected by a magnetic field, but the $T$ waves are split into left- and right-circularly polarized waves. To illustrate the predictions of our simple expressions, we again take $a/d = 1/3$, and we assume a magnetic field such that the ratio $\omega_c/\omega_p = 3.5\times 10^{-5}$. For a typical metallic plasma frequency of $\sim 10^{16}$ sec$^{-1}$, this ratio would correspond to a magnetic induction $B \sim 2$ T. We consider both the undamped case ($\tau \rightarrow \infty$) and the damped case ($\omega_p\tau = 100$). 
Using these parameters, the dispersion relations for the two circular polarizations are shown in Fig. 3 both with and without single-particle damping. The splitting between the two circularly polarized $T$ waves is not visible on the scale of the figure. We also plot the corresponding rotation angles $\theta d$ for a distance equal to one interparticle spacing in Fig. 4. When there is no damping, $\theta$ diverges near the edge of the $T$ bands, but this divergence disappears for a finite $\tau$ (e. g. $\omega_p\tau = 100$, as shown in Fig. 4). In this case, Re$\theta(\omega) d$ never exceeds about $0.001$ rad per interparticle separation, corresponding to a rotation of less than 0.1$^o$ over this distance. Im$\theta(\omega)d$ is also small, showing that a linearly polarized incident wave acquires little ellipticity over such distances. Over a distance of thirty or so interparticle separations, a linearly polarized transverse wave would typically rotate by only 1-3$^o$. Since theory and experiment suggest that the wave [*intensity*]{} typically has an exponential decay length of no more than around 20 interparticle spacings [@book], the likely Faraday rotation of such a wave in practice will probably not exceed a degree or two, at most, even for a field as large as 2 T. Thus, while the rotation found here is likely to be measurable, it may not be large, at least for this simple chain geometry with one particle per unit cell. The present expressions also indicate that $\theta$ is very nearly linear in B, so a larger rotation could be attained by increasing B. As can be seen, $\theta$ depends strongly on frequency in the case of zero damping, but less so at finite damping. In the (unrealistic) no-damping case, at the very top and the very bottom of the plasmonic band, only one of the two circularly polarized waves can propagate down the chain.
Because this filtering occurs only over a very narrow frequency range (of order $\omega_c/\omega_p$), and because this calculation assumes no damping, it would be quite difficult to detect a region where only one of the two circularly polarized waves can propagate. Finally, we mention the case where ${\bf B} \perp {\bf \hat{z}}$, for the same parameters as in the parallel case. In this geometry, a finite B mixes two branches of the dispersion relation (one an $L$ and the other a $T$ wave) which are non-degenerate at B = 0. We have not computed the rotation angles for this perpendicular case, but we expect that they would be similar in magnitude to the values shown in Fig. 4. Discussion ========== The calculations and formalism presented in the previous section leave out several effects which may be at least quantitatively important. First, in our numerical calculations, but not in the formalism, we have included only nearest neighbor dipolar coupling. Inclusion of further neighbors will quantitatively alter the dispersion relations in all cases considered, as well as the Faraday rotation angle when there is an applied magnetic field, but these effects should not be very large, as is already suggested by the early calculations in Ref. [@brong] for an isotropic host. Another possible effect will appear when $a/d$ is significantly greater than $1/3$, namely, the emergence of quadrupolar and higher quasistatic bands[@park04]. These will mix with the dipolar band and alter its shape. For the separations we consider, this multipolar effect should be small. Also, even if $a > d/3$, the plasmon dispersion relations will still be altered by an NLC host or by an applied magnetic field in the manner described here. The present treatment also omits radiative damping.
In the absence of a magnetic field, such damping is known to be small but non-zero in the long-wavelength limit, but it becomes more significant when the particle radius is a substantial fraction of a wavelength. Even at long wavelengths, radiative damping can be very important at certain characteristic values of the wave vector[@weber04]. We have not, as yet, extended the present approach to include such radiative effects. We expect that, just as for an isotropic host in the absence of a magnetic field, radiative effects will further damp the propagating plasmons in the geometries we consider, but will not qualitatively change the effects we have described. For the case when the host is an NLC, the present work oversimplifies the treatment of the NLC host by assuming that the director field is [*uniform*]{}, i. e., position-independent. In reality, the director is almost certain to be modified close to the metal nanoparticle surface, i. e., to become nonuniform, as has been pointed out by many authors[@lubensky]. The effects of such complications on the optical properties of a single metallic particle immersed in an NLC have been treated, for example, in Ref. [@park05], and similar approaches might be possible for the present problem also. To summarize, we have shown that the dispersion relations for plasmonic waves propagating along a chain of closely spaced nanoparticles of Drude metal are strongly affected by external perturbations. First, if the host is a uniaxially anisotropic dielectric (such as a nematic liquid crystal), the dispersion relations of both $L$ and $T$ modes are significantly modified, compared to those of an isotropic host, and if the director axis of the NLC is perpendicular to the chain, the two degenerate transverse branches are split. Secondly, if the chain is subjected to an applied magnetic field parallel to the axis, the $T$ waves undergo a small but measurable Faraday rotation, and also acquire a slight ellipticity. 
A similar ellipticity develops if the magnetic field is perpendicular to the chain, but its effect will likely be more difficult to observe, because the magnetic field couples two non-degenerate branches of the dispersion relation. All these effects show that the propagation of such plasmonic waves can be tuned, by either a liquid crystalline host or a magnetic field, so as to change the frequency band where wave propagation can occur, or the polarization of these waves. This control may be valuable in developing devices using plasmonic waves in future optical circuit design. Acknowledgments =============== This work was supported by the Center for Emerging Materials at The Ohio State University, an NSF MRSEC (Grant No. DMR0820414). [99]{} J. C. Maxwell, Phil. Trans. R. Soc. Lond. [**155**]{}, 459 (1865). For reviews, see, e. g., M. Pelton, J. Aizpurua, and G. Bryant, Laser and Photonics Reviews [**2**]{}, 136 (2008), or the following two references. S. A. Maier, [*Plasmonics: Fundamentals and Applications*]{} (Springer, New York, 2007). L. Solymar and E. Shamonina, [*Waves in Metamaterials*]{} (Oxford University Press, Oxford, 2009). S. A. Maier, M. L. Brongersma, P. G. Kik, S. Meltzer, A. A. G. Requicha, and H. A. Atwater, Adv. Mater. [**13**]{}, 1501 (2001). S. A. Maier, P. G. Kik, H. A. Atwater, S. Meltzer, E. Harel, B. E. Koel, and A. A. G. Requicha, Nature Mater. [**2**]{}, 229 (2003). Z. Y. Tang and N. A. Kotov, Adv. Mater. [**17**]{}, 951 (2005). S. Y. Park, A. K. R. Lytton-Jean, B. Lee, S. Weigand, G. C. Schatz, and C. A. Mirkin, Nature [**451**]{}, 7178 (2008). A. F. Koenderink and A. Polman, Phys. Rev. B [**74**]{}, 033402 (2006). M. L. Brongersma, J. W. Hartman, and H. A. Atwater, Phys. Rev. B [**62**]{}, R16356 (2000). S. A. Maier, P. G. Kik, and H. A. Atwater, Phys. Rev. B [**67**]{}, 205402 (2003). S. Y. Park and D. Stroud, Phys. Rev. B [**69**]{}, 125418(R) (2004). W. M. Saj, T. J. Antosiewicz, J. Pniewski, and T. Szoplik, Opto-Electron. Rev. [**14**]{}, 243 (2006); P. Ghenuche, R.
Quidant, and G. Badenas, Opt. Lett. [**30**]{}, 1882 (2005). A. Alú and N. Engheta, Phys. Rev. B [**74**]{}, 205436 (2006). N. Halas, S. Lal, W. S. Chang, S. Link, and P. Nordlander, Chem. Rev. [**111**]{}, 3913 (2011). W. H. Weber and G. W. Ford, Phys. Rev. B [**70**]{}, 125429 (2004). C. R. Simovski, A. J. Viitanen, and S. A. Tretyakov, Phys. Rev., 066606 (2005). F. J. García de Abajo, Rev. Mod. Phys. [**79**]{}, 1267 (2007). P. K. Jain, S. Eustis, and M. A. El-Sayed, J. Phys. Chem. B [**110**]{}, 18243 (2006). K. B. Crozier, E. Togan, E. Simsek, and T. Yang, Opt. Express [**15**]{}, 17482 (2007). P. M. Hui and D. Stroud, Appl. Phys. Lett. [**50**]{}, 950-952 (1987). D. Stroud, Phys. Rev. B [**12**]{}, 3368 (1975). D. Stroud and F. P. Pan, Phys. Rev. B [**13**]{}, 1434 (1976). T. K. Xia, P. M. Hui, and D. Stroud, J. Appl. Phys. [**67**]{}, 2736 (1990). S. Y. Park and D. Stroud, Appl. Phys. Lett. [**85**]{}, 2920 (2004). J. Müller, C. Sönnichsen, H. von Poschinger, G. von Plessen, T. A. Klar, and J. Feldmann, Appl. Phys. Lett. [**81**]{}, 171 (2002). J. Homola, [*Surface Plasmon Based Sensors*]{}, 1st Ed. (Springer, New York, 2006). See, e.g., T. C. Lubensky, D. Pettey, N. Currier, and H. Stark, Phys. Rev. E [**57**]{}, 610 (1998); P. Poulin and D. A. Weitz, Phys. Rev. E [**57**]{}, 626 (1998); H. Stark, Phys. Rep. [**351**]{}, 387 (2001); R. D. Kamien and T. D. Powers, Liq. Cryst. [**23**]{}, 213 (1997); D. W. Allender, G. P. Crawford, and J. W. Doane, Phys. Rev. Lett. [**67**]{}, 1442 (1991). S. Y. Park and D. Stroud, Phys. Rev. Lett. [**94**]{}, 217401 (2005). ![Calculated dispersion relations $\omega(k)$ for plasmon waves along a chain of metallic nanoparticles, in the presence of an NLC host. We plot $\omega/\omega_p$, where $\omega_p$ is the plasma frequency, as a function of $kd$, where $d$ is the distance between sphere centers. 
Green and blue (x’s and +’s): $L$ and $T$ modes for a chain embedded in an NLC with director parallel to the chain. The NLC is assumed to have principal dielectric tensor elements $\epsilon_\| = 3.0625$ and $\epsilon_\perp = 2.3104$ parallel and perpendicular to the director, corresponding to the material known as E7. In this and all subsequent plots $a/d = 1/3$, where $a$ is the metallic sphere radius. Also shown are the corresponding $L$ and $T$ dispersion relations (black and red solid lines, respectively) when the host is isotropic with dielectric constant $\epsilon_h = 2.5611 = \frac{1}{3}\epsilon_\| + \frac{2}{3}\epsilon_\perp$.[]{data-label="figure1"}](comp_disp_LQ_ISO.eps) ![Same as Fig. 1 except that the director of the NLC is perpendicular to the chain of metal nanoparticles. The frequencies of the $L$ modes (asterisks, in green) and $T$ modes (+’s and x’s, shown in dark and light blue), divided by the plasma frequency $\omega_p$, are plotted versus $kd$. The NLC has the same dielectric tensor elements as in Fig. 1. Also shown are the corresponding $L$ (solid black) and $T$ (solid red) branches for an isotropic host with $\epsilon_h = 2.5611$. Note that the $T$ branches which were degenerate in Fig. 1 are split into two branches in this NLC geometry.[]{data-label="figure2"}](disp_chainx_LQz.eps)
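As a quick arithmetic check of the isotropic reference value quoted in the captions, $\epsilon_h$ is the orientational average $\frac{1}{3}\epsilon_\| + \frac{2}{3}\epsilon_\perp$ of the E7 dielectric tensor elements. A minimal sketch, using only the numbers stated above:

```python
# Orientational average of the NLC dielectric tensor
# (E7 values taken from the figure captions above).
eps_par = 3.0625   # parallel to the director
eps_perp = 2.3104  # perpendicular to the director

eps_h = (eps_par + 2.0 * eps_perp) / 3.0
print(round(eps_h, 4))  # -> 2.5611
```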
--- abstract: 'A general continuous-time mean-variance problem is considered in which the cost functional has an integral and a terminal-time component. The problem is transformed into a superposition of a static and a dynamic optimization problem. The value function of the latter can be considered as the solution to a degenerate HJB equation, either in the viscosity or in the Sobolev sense (after regularization), under suitable assumptions and with implications for the optimality of strategies.' title: | An HJB Approach to a General Continuous-Time Mean-Variance Stochastic Control Problem [^1].\  \ G. Aivaliotis[^2] & A. Yu. Veretennikov[^3] --- [**keywords:**]{} stochastic control; mean-variance control; Hamilton-Jacobi-Bellman equation; Sobolev solutions; viscosity solutions.\ Introduction {#Se1} ============ Mean-variance optimisation problems have been established as a dominant methodology for portfolio optimisation. Markowitz [@Mar] introduced the single-period formulation of the problem in 1952. It was not until the beginning of the new century, however, that dynamic mean-variance optimisation by means of dynamic programming received much attention, mainly due to the difficulties that the non-Markovian nature of the variance introduces into the problem. As an alternative to dynamic programming, the problem was solved using martingale methods (see, e.g., Bielecki et al. [@bielecki2005]) or risk-sensitive functionals (see, e.g., Bielecki et al. [@bielecki2000]), whose second-order Taylor expansion has the form of a mean-variance functional. A major advance in the theory for mean-variance functionals came by embedding the original problem into a class of auxiliary stochastic control problems in linear-quadratic form. 
This approach was introduced by Li and Ng [@li2000] in a discrete-time setting, while an extension of this method to a continuous-time framework is presented in Zhou and Li [@Zhou2000], and further employed by Lim [@lim2004]. This approach leads to explicit solutions for the efficient frontier under some constraints imposed on the optimisation problem (the authors assume that the cost function is a linear function of the controlled process). Wang and Forsyth [@wang2010] design numerical schemes for the auxiliary linear-quadratic problems formulated in [@Zhou2000] and construct an efficient frontier. In [@Tse2013], Tse et al. show that the numerical schemes designed in [@wang2010] indeed provide all the Pareto-optimal points for the efficient frontier. Aivaliotis and Veretennikov [@A-Veretennikov2010] propose an alternative methodology that embeds the mean-variance problem into a superposition of a static and a dynamic optimisation problem, where the latter is suitable for dynamic programming methods. Solutions in the spaces of functions with generalised derivatives (henceforth called Sobolev spaces) are obtained through regularisation. A further extension of this method is presented in [@A-Palczewski2014], where the viscosity solutions approach is followed. In the latter, functionals depending either on the terminal value of the controlled process or on the integral from time $0$ to time $T$ of the controlled process are considered, but separately. This approach does not in general provide any explicit solutions, but is geared towards numerical approximations that are proven to work efficiently. One advantage of the proposed methodology is that the problem can be solved for a pre-determined coefficient of risk aversion. For the LQ approach, the whole efficient frontier has to be traced before optimal strategies can be assigned to different coefficients of risk aversion. 
Let us consider a $d$-dimensional SDE driven by a $d$-dimensional Wiener process $(W_t,{\cal F}_t, \, t\ge 0)$ $$\label{diff} dX_t= b(\alpha_t,t,X_t)\,dt+ \sigma(\alpha_t,t,X_t)\,dW_t, \quad t\ge t_0, \qquad X_{t_0}=x.$$ We will specify the properties of the coefficients $b$ and $\sigma$ later, depending on the approach we use. The strategy $(\alpha_t, \, t_0\le t\le T)$ may be chosen from the class ${\cal A}$ of all progressively measurable processes with values in a nonempty, closed and bounded set $A \subset \mathbb{R}^\ell$. We will use the standard shorthand in which the dependence of $X$ on the strategy and on the initial data $(t_0,x)$ is indicated on the expectation sign, $E^\alpha_{t_0,x}$; the full notation would be $X^{\alpha,t_0,x}_{t}$. Consider a cost function $f:\mathbb{R}^\ell\times[0,T]\times \mathbb{R}^d\rightarrow\mathbb{R}$. The cost from time $t_0$ to $T$ for a given path of the process (\[diff\]) and a strategy $\alpha\in\mathcal{A}$ will be $\int_{t_0}^Tf(\alpha_s,s,X^{\alpha,t_0,x}_{s})\,ds$. At the terminal time $T$, we will consider a “final payment” $\Phi(X_T)$. Thus the expected cost from $t_0$ to $T$ for a control strategy $\alpha\in\mathcal{A}$ will be $$E_{t_0,x}^\alpha\left(\int_{t_0}^Tf(\alpha_s,s,X_s)\,ds+\Phi(X_T)\right) .$$ In financial applications the standard terminology of “cost” often has a positive meaning, representing some function of portfolio returns or cashflows. Thus in the mean-variance control problem, one aims at maximising the expected cost while penalizing for variance (which represents risk). The value function in this case will be $$\begin{aligned} \label{valuefunction} v(t_0,x)&:=\sup_{\alpha \in {\cal A}} \left\{E^\alpha_{t_0,x} \left(\int_{t_0}^T f(\alpha_s,s,X_s)\,ds+\Phi(X_T)\right)\right.\nonumber\\ &\left.-\theta\, \mathop{\mbox{Var}^\alpha_{t_0,x}} \left(\int_{t_0}^T f(\alpha_s,s,X_s)\,ds+\Phi(X_T)\right)\right\}. 
$$ In this paper we are going to consider solutions of this problem both in the Sobolev and in the viscosity sense, and to discuss the optimal strategies. The contribution of this paper is threefold: we consider a general integral plus terminal-time payment functional, we relax the assumptions on the boundedness of the coefficients and cost functions, and we discuss the optimality of strategies in the different settings. With regards to the functional, the problem without a “final payment” $\Phi$ has been discussed in [@A-Veretennikov2010] with solutions in Sobolev spaces, and the problems with either only an integral functional or only a “final payment” were discussed in [@A-Palczewski2014] using viscosity solutions. It is clear, however, that the solution to problem (\[valuefunction\]) cannot be derived as a combination of the previous two cases, due to the supremum involved. When looking for solutions in Sobolev spaces, we need to use regularisation (similar to [@A-Veretennikov2010]), as we cannot relax the non-degeneracy assumption (which is not needed for viscosity solutions). We do, however, relax the assumptions regarding the boundedness of the drift and diffusion coefficients, as well as of the cost function $f$, in comparison to the assumptions used in [@A-Veretennikov2010]. Finally, we show that regularisation results in $\varepsilon-$optimal strategies, whereas a verification theorem for viscosity solutions is only attainable under strict boundedness assumptions, which are not fulfilled in our context. Mean-Variance Control {#sec-mean-var} ===================== The goal of this paper is to maximize a linear combination of the mean and variance of a payoff function that involves both an integral and a final payment. The value function (\[valuefunction\]) presents a non-Markovian optimisation problem. This is due to the time-inconsistency of the variance term, which involves the square of the expectation and the square of an integral of the process. 
In detail $$\begin{aligned} v(t_0,x)&:=\sup_{\alpha \in {\cal A}} \Bigg\{E^\alpha_{t_0,x} \Big(\int_{t_0}^T f(\alpha_s,s,X_s)\,ds+\Phi(X_T)\Big)\nonumber\\ &\hspace{45pt}-\theta\,\Big[E^\alpha_{t_0,x}\big(\int_{t_0}^T f(\alpha_s,s,X_s)\,ds+\Phi(X_T)\big)^2\nonumber\\ &\hspace{45pt}-\Big(E^\alpha_{t_0,x}\big(\int_{t_0}^T f(\alpha_s,s,X_s)\,ds+\Phi(X_T)\big)\Big)^2\Big]\Bigg\}. \end{aligned}$$ In order to deal with the square of the integral, we define the extended state process $(X_t, Y_t)$ by the following stochastic differential equation (as in Aivaliotis and Veretennikov [@A-Veretennikov2010] or Aivaliotis and Palczewski [@A-Palczewski2014]): $$\label{Yt} \begin{aligned} dX_t&= b(\alpha_t, t,X_t)\,dt + \sigma(\alpha_t, t,X_t)\,dW_t,~~X_0=x \\ dY_t&= f(\alpha_t, t,X_t)\,dt,~~Y_0=y. \end{aligned}$$ We will ensure existence and uniqueness of solutions to the above SDE. The different sets of assumptions, depending on the approach we follow, will result in different types of solutions of the above SDE; we will comment on these in the relevant sections. Note that $f$ drives the dynamics of $Y_t$ in the extended state process $(X_t,Y_t)$; therefore we will need to impose on it the same assumptions we impose on $b$. Naturally, we allow the process $Y_t$ to depend on some initial data $Y_{t_0}=y\in {\mathbb{R}}$. Then the value function can be written as $v(t_0,x):=\tilde{v}(t_0,x,0)$, where $$\begin{aligned} \tilde{v}(t_0,x,y)&=\sup_{\alpha \in {\cal A}} \Bigg\{E^\alpha_{t_0,x,y} \Big(g^\alpha(X_T,Y_T)\Big)\nonumber\\ &\hspace{45pt}-\theta\,\Big[E^\alpha_{t_0,x,y}\big(g^\alpha(X_T,Y_T)\big)^2-\Big(E^\alpha_{t_0,x,y}\big(g^\alpha(X_T,Y_T)\big)\Big)^2\Big]\Bigg\} \end{aligned}$$ and $$g^\alpha(X_T,Y_T):=g(X_T,Y_T,\alpha(\cdot))=\int_{t_0}^T f(\alpha_s,s,X_s)\,ds+\Phi(X_T)=Y_T+\Phi(X_T).$$ From now on the dependence of $g$ on the control will also be incorporated in the expectation, in order to simplify notation. 
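A minimal numerical sketch of the extended state process (\[Yt\]): $Y_t$ simply accumulates the running cost along the $X$ path, so that $g(X_T,Y_T)=Y_T+\Phi(X_T)$ becomes a function of the terminal state alone. The coefficients $b$, $\sigma$, $f$, $\Phi$ and the constant strategy below are illustrative placeholders, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) coefficients -- not from the paper.
def b(a, t, x):     return a - 0.5 * x            # drift
def sigma(a, t, x): return 0.2 * (1.0 + abs(x))   # diffusion
def f(a, t, x):     return x ** 2                 # running cost
def Phi(x):         return abs(x)                 # final payment

def simulate(alpha=0.1, x0=1.0, y0=0.0, t0=0.0, T=1.0, n=1000):
    """Euler-Maruyama discretisation of the pair (X_t, Y_t)."""
    dt = (T - t0) / n
    x, y = x0, y0
    for k in range(n):
        t = t0 + k * dt
        dW = rng.normal(0.0, np.sqrt(dt))
        x += b(alpha, t, x) * dt + sigma(alpha, t, x) * dW
        y += f(alpha, t, x) * dt      # Y_t accumulates the running cost
    return x, y

xT, yT = simulate()
g = yT + Phi(xT)                      # pathwise cost g(X_T, Y_T)
```

Averaging `g` and `g**2` over many such paths gives Monte Carlo estimates of the mean and variance entering the value function.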
For the square of the expectation, we follow the dual representation $x^2 = \sup_{\psi \in {\mathbb{R}}} \{-\psi^2 - 2 \psi x\}$ (as in Aivaliotis and Veretennikov [@A-Veretennikov2010]). This results in the following representation: $$\label{vtilde} \begin{aligned} \tilde{v}(t_0,x,y)&=\sup_{\alpha \in {\cal A}} \Bigg\{E^\alpha_{t_0,x,y} g(X_T,Y_T)-\theta\,E^\alpha_{t_0,x,y}\big(g(X_T,Y_T)\big)^2\nonumber\\ &\hspace{43pt}+\sup_{\psi\in{\mathbb{R}}}\Big\{-\theta\psi^2-2\theta\psi E^\alpha_{t_0,x,y} g(X_T,Y_T)\Big\}\Bigg\}\nonumber\\ &=\sup_{\psi\in{\mathbb{R}}}\Big\{V(t_0,x,y,\psi)-\theta\psi^2\Big\},\nonumber \end{aligned}$$ where $V(t_0,x,y,\psi)=\sup_{\alpha \in {\cal A}} E^\alpha_{t_0,x,y}\Big((1-2\theta\psi)g(X_T,Y_T)-\theta \big(g(X_T,Y_T)\big)^2\Big).$ \[re21\] Note that a similar representation is available for any convex function in place of a parabola, although the implementation of this idea may be more involved. In the case of an even power function $x^{2n}$, however, such a representation has the same level of “complexity” as for $x^2$: $$x^{2n} = \sup_{\psi \in {\mathbb{R}}}\left(\psi^{2n} + 2n \psi^{2n-1}(x-\psi) \right).$$ This may be helpful in studying optimization of a wider family of functionals. Viscosity Solutions {#sec:visc} =================== In this section we make the following assumptions (A$_V$): 
- The functions $\sigma, b, f, \Phi$ are Borel with respect to $(a,t,x)$ and continuous with respect to $(a,x)$ for every $t$; moreover, there exist constants $K_1, K_2$ such that $$\begin{aligned} &\|\sigma(a, t_1,x)-\sigma(a,t_2,z)\|\le K_1\left(\|x-z\|+|t_1-t_2|\right) \\ &\|b(a, t_1,x)-b(a, t_2,z)\|\le K_1\left(\|x-z\|+|t_1-t_2|\right)\\ &|f(a, t_1,x)-f(a,t_2,z)|\le K_2\left(\|x-z\|+|t_1-t_2|\right)\\ \end{aligned} \qquad \text{(Lipschitz condition)}$$ - $$\begin{aligned} &\|b(a, t,x)\|\le K_1\big(1 + \|x\|\big)\\ &\|\sigma(a, t,x)\|\le K_1\big(1 +\|x\|\big)\\ &|f(a, t,x)|\le K_2\big(1 + \|x\|\big) \end{aligned} \hspace{100pt} \text{(linear growth condition)}$$ - $|\Phi(x)| \le K_1(1 + \|x\|^m)$. For viscosity solutions we do not need to assume non-degeneracy of the matrix $\sigma\sigma^T$. Note that the process $(X_t,Y_t)$ would have been strongly degenerate even if we had assumed non-degeneracy of $\sigma\sigma^T$. Under assumptions (A$_V$), for every $\psi\in{\mathbb{R}}$ the value function $V(t_0,x,y,\psi)$ is the unique polynomially growing viscosity solution of the following HJB equation: $$\label{eqn:HJB_visc} \begin{cases} V_{t_0} + \sup_{a \in A}\Big\{b(a, t_0,x)^T V_x +\frac{1}{2}tr\big(\sigma\sigma^T(a, t_0,x) V_{xx}\big)+f(a,t_0,x)V_{y}\Big\} = 0,&\\[5pt] V(T, x, y, \psi) = (1 - 2 \theta\psi) g(x,y) - \theta \big(g(x,y)\big)^2.& \end{cases}$$ We rewrite in a canonical form: $$\begin{cases} - V_{t_0}(t_0,{\tilde}x, \psi) -H\Big(t_0,{\tilde}x, V_{{\tilde}x}(t_0,{\tilde}x, \psi), V_{{\tilde}x{\tilde}x}(t_0,{\tilde}x,\psi)\Big)=0,\\ V (T, {\tilde}x, \psi) = (1 - 2 \theta\psi) g({\tilde}x) - \theta \big(g({\tilde}x)\big)^2,& \end{cases}$$ where ${\tilde}x = (x,y)$, the Hamiltonian $H$ is given by $$\nonumber H\Big(t,(x,y),{\tilde}p,{\tilde}M\Big)=\sup_{u \in A}\Big[b(u, t,x)^T p +\frac{1}{2}tr\big(\sigma\sigma^T(u, t,x)M\big)+p_yf(u, t,x)\Big],$$ with ${\tilde}p = (p, p_y)$ and $M$ is obtained from ${\tilde}M$ by removing the last row and column. 
Assumptions (A$_V$) imply that the domain of the Hamiltonian is the whole space ($dom(H)=\{(t,x,p,M)\in[0,T]\times\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{S}^n\}$) and that $H$ is continuous. By virtue of [@pham Theorem 4.3.1], $V$ is a viscosity solution of (\[eqn:HJB_visc\]) (it is clearly of polynomial growth because of the assumptions on the growth of $f$ and $\Phi$). Due to the Lipschitz continuity in $t,x$ of $f$, $b$ and $\sigma$, the value function $V$ is continuous at the terminal time $t = T$. Hence, the comparison theorem ([@pham Theorem 4.4.5]) yields the continuity of $V$ and ensures that $V$ is the unique continuous polynomially growing viscosity solution to (\[eqn:HJB_visc\]). Sobolev Solutions {#sec:sobol} ================= In this section we suggest suitable HJB equations for the mean-variance problem, as reformulated in the previous section. The solutions of parabolic HJB equations will be considered in the Sobolev classes $W^{1,2}_{p, loc}$, with one derivative with respect to $t$ and two with respect to $x$ in $L_p$. Denote $\overline W^{1,2} = \bigcap_{p>1} W^{1,2}_{p}$. For functions of three variables, $v(t,x,y), \, 0\le t\le T, \, x,y\in {\mathbb{R}}^d, \, $ we will use a similar Sobolev class $\overline W^{1,2,2} = \bigcap_{p>1} W^{1,2,2}_{p}$. Throughout this section, we assume the following (A$_S$): - The functions $\sigma, b, f$ are Borel with respect to $(u,t,x)$, continuous with respect to $(u,x)$, and continuous with respect to $x$ uniformly over $u$ for each $t$. $\Phi(x)$ is continuous with respect to $x$. Moreover, - $$\begin{aligned} &\|\sigma(u,t,x)-\sigma(u,t,x')\|\le K_1|x-x'|\\ &\|b(u,t,x)-b(u,t,x')\|\le K_1|x-x'|\\ &|f(u,t,x)-f(u,t,x')|\le K_2|x-x'| \end{aligned}\hspace{70pt} \text{(Lipschitz condition)}$$ - $$\begin{aligned} &\|\sigma(u,t,x)\| + \|b(u,t,x)\|\le K(1+\|x\|)\\ &|f(u,t,x)|\le K_2(1+\|x\|)\end{aligned} \hspace{55pt} \text{(linear growth condition)}$$ - $|\Phi(x)| \le K_2(1 + \|x\|^m)$. 
- $\sigma\sigma^T$ is uniformly non-degenerate. In order to establish the existence of solutions in Sobolev spaces, it is essential that the resulting HJB equations are non-degenerate. However, it is clear that the state process is strongly degenerate, and so will be the resulting HJB equation for problem (\[vtilde\]). In order to avoid degeneracy, apart from assuming non-degeneracy of $\sigma\sigma^T$, we add a small constant positive diffusion coefficient, with a Wiener process $\tilde W_t$ independent of $W_t$, to the SDE for $Y_t$ in (\[Yt\]). The regularised state process becomes: $$\begin{aligned} \label{Yt-epsilon} dX_t&= b(\alpha_t, t,X_t)\,dt + \sigma(\alpha_t, t,X_t)\,dW_t, \\ dY_t^{\varepsilon}&= f(\alpha_t, t,X_t)\,dt+{\varepsilon}\,d\tilde{W}_{t}.\end{aligned}$$ Accordingly we define the regularized value function: $$\begin{aligned} \tilde{v}^{\varepsilon}(t_0,x,y)&=\sup_{\alpha \in {\cal A}} \Bigg\{E^\alpha_{t_0,x,y} g(X_T,Y_T^{\varepsilon})-\theta\,E^\alpha_{t_0,x,y}\big(g(X_T,Y_T^{\varepsilon})\big)^2\nonumber\\ &\hspace{43pt}+\sup_{\psi\in{\mathbb{R}}}\Big\{-\theta\psi^2-2\theta\psi E^\alpha_{t_0,x,y} g(X_T,Y_T^{\varepsilon})\Big\}\Bigg\}\nonumber\\ &=\sup_{\psi\in{\mathbb{R}}}\Big\{V^{\varepsilon}(t_0,x,y,\psi)-\theta\psi^2\Big\},\nonumber \end{aligned}$$ where $$V^{\varepsilon}(t_0,x,y,\psi)=\sup_{\alpha \in {\cal A}} E^\alpha_{t_0,x,y}\Big((1-2\theta\psi)g(X_T,Y_T^{\varepsilon})-\theta \big(g(X_T,Y_T^{\varepsilon})\big)^2\Big).$$ Under assumptions (A$_S$), for every $\psi\in{\mathbb{R}}$ the value function $V^{\varepsilon}(t_0,x,y,\psi)$ is the unique solution in $\overline W^{1,2,2}$ of the following HJB equation: $$\label{eqn:HJB_term} \begin{cases} V^{\varepsilon}_{t_0} + \sup_{u \in A}\Big\{b(u, t_0,x)^T V^{\varepsilon}_x +\frac{1}{2}tr\big(\sigma\sigma^T(u, t_0,x) V^{\varepsilon}_{xx}\big)+f(u,t_0,x)V^{\varepsilon}_{y}+\frac{1}{2}{\varepsilon}^2 V^{\varepsilon}_{yy}\Big\} = 0,&\\[5pt] V^{\varepsilon}(T, x, y, \psi) = (1 - 2 \theta\psi) g(x,y) - \theta \big(g(x,y)\big)^2. 
\end{cases}$$ Under assumptions (A$_S$), the value function has first- and second-order bounded generalised derivatives with respect to space and a first-order bounded generalised derivative with respect to time. The rest follows from [@kry Chapters 3 and 4]. In the next section, we will show that the function $V^{\varepsilon}$ is locally Lipschitz in $\psi$ and grows at most linearly in this variable. Hence, the supremum is again attained at some $\psi$ from a closed interval. Then the external optimisation problem becomes: $$\label{extern1fin} {v}^{\varepsilon}(t_0,x,y)=\sup_\psi\left[ V^{\varepsilon}(t_0,x,y,\psi)-\theta\psi^2\right].$$ Properties of value functions ============================= In this section we show some properties of the value functions that are common to both approaches described above. These are important properties that make the numerical solution of the mean-variance problem tractable. We assume that a new set of assumptions (A$_0$) holds, which is the intersection of the assumptions (A$_S$) and (A$_V$). \[thm:cont\_dep\_psi\] $\ $ Under Assumptions (A$_0$): - The functions $V, V^{\varepsilon}$ ($V^{({\varepsilon})}$ in short) are continuous in $\psi$ and convex in $\psi$. If $f, \Phi$ are non-negative, $V, V^{\varepsilon}$ are decreasing in $\psi$. - There exists a constant $C$ such that $$\nonumber | V^{({\varepsilon})}(t_0,x,y,\psi)- V^{({\varepsilon})}(t_0,x,y,\psi')| \le C\, (1 + \|x\|)\, |\psi-\psi'|.$$ - The value function $v$ is given by $$v(t_0, x) = \sup_{\psi_{min}\le\psi \le \psi_{max}} \big\{ V^{({\varepsilon})}(t_0, x, 0, \psi) - \theta \psi^2 \big\},$$ where $$\psi_{min}=-\sup_{\alpha\in\mathcal{A}}E_{t_0,x,y}^\alpha g(X_T,Y_T)$$ and $$\psi_{max}=-\inf_{\alpha\in\mathcal{A}}E_{t_0,x,y}^\alpha g(X_T,Y_T).$$ The proof follows the same line of reasoning as in [@A-Palczewski2014 Theorem 2.2], making use of the definition of the function $g$. 
For part (i), it is straightforward to prove convexity, which implies continuity with respect to $\psi$. It is clear from the definition of $V, V^{\varepsilon}$ that they are decreasing in $\psi$ for non-negative $f, \Phi$. For parts (ii) and (iii) we check that $V^{\varepsilon}$ grows at most linearly, which implies that the mapping $h(\psi) = V^{\varepsilon}(t_0, x, y, \psi) - \theta \psi^2$ attains its maximum in a compact interval. By convexity, $V^{\varepsilon}$ has well-defined directional derivatives. Hence, $h$ also has well-defined directional derivatives, and at a point where the maximum is attained the left-hand derivative is non-negative while the right-hand derivative is non-positive. We then show that $\partial^+ h(\psi) > 0$ for $\psi < \psi_{min}$ and $\partial^- h(\psi) < 0$ for $\psi > \psi_{max}$. This implies that the conditions for a maximum can only be satisfied in the interval $[\psi_{min}, \psi_{max}]$. We skip further details and refer the reader to [@A-Palczewski2014 Theorem 2.2]. 
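Part (iii) of the theorem above reduces the outer (static) optimisation to a one-dimensional search over a compact $\psi$-interval. A sketch of this outer step, with a toy inner value function standing in for the HJB solution: for a deterministic payoff $g\equiv c$ one has $V(\psi)=(1-2\theta\psi)c-\theta c^2$, the maximiser of $V(\psi)-\theta\psi^2$ is $\psi^*=-c$ (consistent with the dual representation $x^2=\sup_\psi\{-\psi^2-2\psi x\}$), and the optimal value is $c$, as it should be since the variance vanishes. All numbers below are illustrative:

```python
import numpy as np

theta, c = 0.5, 1.0

def inner_V(psi):
    # Toy inner value function: deterministic payoff g == c, so
    # V(psi) = (1 - 2*theta*psi)*c - theta*c**2.  Purely illustrative;
    # in practice this comes from solving the HJB equation for each psi.
    return (1.0 - 2.0 * theta * psi) * c - theta * c**2

def outer_value(psi_min, psi_max, n=20001):
    """Grid search for sup_psi [ V(psi) - theta*psi^2 ] on [psi_min, psi_max]."""
    psis = np.linspace(psi_min, psi_max, n)
    h = inner_V(psis) - theta * psis**2
    k = int(np.argmax(h))
    return psis[k], h[k]

psi_star, v = outer_value(psi_min=-3.0, psi_max=3.0)
# psi_star ~ -c = -1.0 and v ~ c = 1.0 for this toy example
```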
\[Thm6fin\] Under assumptions (A$_0$): $$\label{boundfin}\sup_{t_0,x,y}\, | v^{\varepsilon}(t_0,x,y) - v(t_0,x,y)| \le \varepsilon^2 \theta(T-t_0).$$ We have, $$\begin{aligned} |v^{\varepsilon}(t_0,x,y)-v(t_0,x,y)|& \le \sup_\psi |\big(V^{\varepsilon}(t_0,x,y,\psi )-\theta \psi^2-V(t_0,x,y,\psi)+\theta \psi^2\big)| \nonumber\\ &= \sup_\psi|\Big\{\sup_{\alpha\in \cal{A}}E_{t_0,x,y}^\alpha\Big(g(X_T,Y_T^{\varepsilon})[1-2\theta \psi]-\theta \big(g(X_T,Y_T^{\varepsilon})\big)^2\Big)\nonumber\\ &-\sup_{\alpha\in {\cal A}}E_{t_0,x,y}^\alpha \Big(g(X_T,Y_T)[1-2\theta \psi]-\theta \big(g(X_T,Y_T)\big)^2\Big)\Big\}| \nonumber\\ &\le\sup_\psi\sup_{\alpha\in \cal{A}}|E_{t_0,x,y}^\alpha\Big\{\big(g(X_T,Y_T^{\varepsilon})-g(X_T,Y_T)\big)[1-2\theta \psi]\nonumber\\ &-\theta\Big(\big(g(X_T,Y_T^{\varepsilon})\big)^2- \big(g(X_T,Y_T)\big)^2\Big)\Big\}|\nonumber\\ &=\sup_\psi\sup_{\alpha\in \cal{A}} |E_{t_0,x,y}^\alpha\Big\{[1-2\theta\psi]\int_{t_0}^T{\varepsilon}\,d\tilde W_t\nonumber\\ &-\theta\big(g(X_T,Y_T^{\varepsilon})-g(X_T,Y_T)\big)\big(g(X_T,Y_T^{\varepsilon})+g(X_T,Y_T)\big)\Big\}|\nonumber\\ &=\sup_\psi\sup_{\alpha\in \cal{A}} |E_{t_0,x,y}^\alpha\Big\{[1-2\theta\psi]\int_{t_0}^T{\varepsilon}\,d\tilde W_t\nonumber\\ &-\theta\Bigg(2\int_{t_0}^T{\varepsilon}\,d\tilde W_t\int_{t_0}^Tf(\alpha_t,t,X_t)\,dt\nonumber\\ &+\Big(\int_{t_0}^T{\varepsilon}\,d\tilde W_t\Big)^2+2\Phi(X_T)\int_{t_0}^T{\varepsilon}\,d\tilde W_t\Bigg)\Big\}|\nonumber\\ &\le {\varepsilon}^2 \theta (T-t_0),\end{aligned}$$ since the Brownian motion $\tilde W_t$ is independent of $X_t$ and has zero mean, so that all the cross terms vanish in expectation and $E\big(\int_{t_0}^T{\varepsilon}\,d\tilde W_t\big)^2 = \varepsilon^2(T-t_0)$. This concludes the proof. Optimal Strategies ================== Sobolev approach ---------------- When we work with solutions of HJB equations in Sobolev spaces, there is an automatic verification theorem that ensures the optimality of the strategy that corresponds to the particular solution. This convenient property is mainly due to the availability of an Itô-type formula (in this case the Itô-Krylov formula). 
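The bound of Theorem \[Thm6fin\] rests on the identity $E\big(\int_{t_0}^T\varepsilon\,d\tilde W_t\big)^2=\varepsilon^2(T-t_0)$ (Itô isometry), all cross terms having zero expectation by independence of $\tilde W$ and $X$. A quick Monte Carlo sanity check of this identity (all parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
eps, t0, T = 0.3, 0.0, 2.0
n_steps, n_paths = 100, 50_000

# Simulate n_paths independent realisations of int_{t0}^T eps dW~_t
dt = (T - t0) / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
I = eps * dW.sum(axis=1)

mean_I = I.mean()                 # ~ 0: cross terms vanish in expectation
second_moment = (I**2).mean()     # ~ eps^2 * (T - t0) = 0.18
```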
Since, however, the problem actually solved is the regularised one, we can only hope for “almost-optimal” strategies for the original problem. Let $\epsilon\ge 0$. A strategy $\alpha\in \mathcal{A}$ is said to be $\epsilon-$optimal for $(t,x)$ if $v(t,x)\le v^\alpha(t,x)+\epsilon$, where $v(t,x)=\sup_{\alpha\in\mathcal{A}}v^\alpha(t,x)$. \[lestrat\] Under assumptions (A$_0$), for any strategy $\alpha\in\mathcal{A}$, we have the following bounds $$\label{epszone} |v^{\varepsilon,\alpha}(t_0,x,y)-v^{\alpha}(t_0,x,y)|\le C\,\varepsilon^2,$$ with $C=\theta (T-t_0)$. One can show, similarly to Theorem \[Thm6fin\], that $$\nonumber |v^{\varepsilon,\alpha}(t_0,x,y)-v^{\alpha}(t_0,x,y)|\le {\varepsilon}^2\theta(T-t_0),$$ therefore $$v^{\varepsilon,\alpha}(t_0,x,y) - C\varepsilon^2\le v^{\alpha}(t_0,x,y)\le v^{\varepsilon,\alpha}(t_0,x,y) + C\varepsilon^2.$$ Assume (A$_0$). Let the strategy $\bar\alpha^\varepsilon\in\mathcal{A}$ be the optimal strategy for the problem (\[extern1fin\]), or, if the supremum is not attained, let the strategy $\tilde\alpha^\varepsilon\in\mathcal{A}$ be a $\delta-$optimal strategy for the same problem. Then the same strategy is $\epsilon$-optimal for the original degenerate value function, with an appropriate choice of the constant $\epsilon$. Suppose that the value function of the degenerate problem attains its supremum for a strategy $\bar \alpha$. 
Then, we would have from Lemma \[lestrat\] $$v^{\bar\alpha}(t_0,x,y) - C\varepsilon^2\le v^{\varepsilon,\bar\alpha}(t_0,x,y)\le v^{\bar\alpha}(t_0,x,y) + C\varepsilon^2.$$ Furthermore, because the strategy $\bar\alpha^\varepsilon$ is optimal for the regularised value function (assuming that the supremum is attained), we know that $$v^{\varepsilon,\bar\alpha^\varepsilon}(t_0,x,y)\ge v^{\varepsilon,\bar\alpha}(t_0,x,y).$$ So, $$v^{\bar\alpha}(t_0,x,y) - C\varepsilon^2\le v^{\varepsilon,\bar\alpha}(t_0,x,y)\le v^{\varepsilon,\bar\alpha^\varepsilon}(t_0,x,y)\le v^{\bar\alpha^\varepsilon}(t_0,x,y) + C\varepsilon^2.$$ Hence $$v^{\bar\alpha}(t_0,x,y) - 2C\varepsilon^2\le v^{\bar\alpha^\varepsilon}(t_0,x,y),$$ i.e. $\bar\alpha^\varepsilon$ is $2C\varepsilon^2-$optimal for $v^\alpha$ or $\epsilon=2C\varepsilon^2$ in this case. In the case that the degenerate value function does not attain a supremum, a $\gamma-$optimal strategy exists, namely $\tilde\alpha$. Then $$\label{almostopt} v^{\tilde\alpha}(t_0,x,y)\ge \sup_{\alpha\in\mathcal{A}}v^{\alpha}(t_0,x,y)-\gamma$$ and due to the bounds (\[epszone\]) and inequality (\[almostopt\]) we have $$\sup_{\alpha\in\mathcal{A}}v^{\alpha}(t_0,x,y)-\gamma-C\varepsilon^2\le v^{\tilde\alpha}(t_0,x,y) - C\varepsilon^2\le v^{\varepsilon,\tilde\alpha}(t_0,x,y)\le v^{\tilde\alpha}(t_0,x,y) + C\varepsilon^2.$$ Now, suppose again that $v^{\varepsilon,\bar\alpha^\varepsilon}(t_0,x,y)=\sup_{\alpha\in\mathcal{A}} v^{\varepsilon,\alpha}(t_0,x,y). 
$ That means $v^{\varepsilon,\bar\alpha^\varepsilon}(t_0,x,y)\ge v^{\varepsilon,\tilde\alpha}(t_0,x,y).$ So, $$\begin{aligned} \sup_{\alpha\in\mathcal{A}}v^{\alpha}(t_0,x,y)-\gamma-C\varepsilon^2\le v^{\tilde\alpha}(t_0,x,y)-C\varepsilon^2\le v^{\varepsilon,\tilde\alpha}(t_0,x,y)\nonumber\\ \le v^{\varepsilon,\bar\alpha^\varepsilon}(t_0,x,y)\le v^{\bar\alpha^\varepsilon}(t_0,x,y)+ C\varepsilon^2.\end{aligned}$$ Thus, $$\sup_{\alpha\in\mathcal{A}}v^{\alpha}(t_0,x,y)\le v^{\bar\alpha^\varepsilon}(t_0,x,y)+\gamma+2C\varepsilon^2,$$ i.e. $\bar\alpha^\varepsilon$ is a $(\gamma+2C\varepsilon^2)-$optimal strategy for $v^{\alpha}(t_0,x,y)$, or $\epsilon=\gamma+2C\varepsilon^2$ in this case. Finally, in the case that $\sup_{\alpha\in\mathcal{A}} v^{\varepsilon,\alpha}(t_0,x,y)$ is not attained, a $\delta-$optimal strategy $\tilde\alpha^\varepsilon$ exists instead, such that $v^{\varepsilon,\tilde\alpha^\varepsilon}(t_0,x,y)\ge\sup_{\alpha\in\mathcal{A}} v^{\varepsilon,\alpha}(t_0,x,y)-\delta.$ Following the same reasoning as previously (note that $\tilde\alpha$ is $\gamma-$optimal for the degenerate problem), we get $$\begin{aligned} \sup_{\alpha\in\mathcal{A}}v^{\alpha}(t_0,x,y)-\gamma-\delta-C\varepsilon^2\le v^{\tilde\alpha}(t_0,x,y)-C\varepsilon^2-\delta\nonumber\\\le v^{\varepsilon,\tilde\alpha}(t_0,x,y)-\delta \le v^{\varepsilon,\tilde\alpha^\varepsilon}(t_0,x,y)\le v^{\tilde\alpha^\varepsilon}(t_0,x,y)+ C\varepsilon^2.\end{aligned}$$ Therefore, $$\sup_{\alpha\in\mathcal{A}}v^{\alpha}(t_0,x,y)\le v^{\tilde\alpha^\varepsilon}(t_0,x,y)+\gamma+\delta+2C\varepsilon^2,$$ i.e. $\tilde\alpha^\varepsilon$ is a $(\gamma+\delta+2C\varepsilon^2)-$optimal strategy for $v^{\alpha}(t_0,x,y)$, or $\epsilon=\gamma+\delta+2C\varepsilon^2.$ Let assumptions (A$_0$) hold. The supremum attained for the degenerate value function by using Markov strategies is almost the same as the one attained in the class of admissible strategies. 
So Markov strategies are $\epsilon$-optimal for the degenerate value function. Since we deal everywhere with “first moment theory” and, additionally, an optimal $\bar\psi$ can always be found in a real interval, we know that Markov strategies are sufficient for the problem (see [@kry]). Therefore, because of the previous Theorem, Markov strategies are sufficient for obtaining an $\epsilon-$optimal strategy for the degenerate value function. Viscosity Approach ------------------ When working with viscosity solutions, there is no verification theorem that can be applied under the assumptions made in this paper (see Gozzi et al. [@GSZ2009] for the latest results on the verification theorem for viscosity solutions). One possible approach is to verify the optimality of the strategies by Monte Carlo simulation: one simulates the strategy calculated from the solution of the HJB equation and compares the value function obtained by simulation to the one from the numerical scheme. Concluding Remarks =================== In this paper, we formulated a general mean-variance problem in continuous time that includes a functional with two terms: an integral that depends on the whole trajectory of the controlled process, and a terminal-time one. We interpreted the problem first as a terminal-time problem, through the introduction of a coupled state process with an additional dimension, and then transformed it into a superposition of a static and a dynamic optimization problem, where the latter is amenable to dynamic programming methods and for which we were able to write down an HJB equation. We proved existence and uniqueness of solutions both in the viscosity sense and in the classical (Sobolev) sense. 
The advantage of the first approach is that there is no need to address the inherent degeneracy of the coupled state process, and numerical methods can be employed to solve the problem and even to assess optimality through Monte Carlo simulations; however, in this approach a verification theorem is not readily applicable under the assumptions we use. When following the Sobolev approach, a regularisation of the state process is required. This has the advantage that a verification theorem can be obtained through the Itô-Krylov formula. We then showed that strategies obtained through this route are $\varepsilon-$optimal. Finally, recall that, using the hint given in Remark \[re21\], the results in both sections \[sec:visc\] and \[sec:sobol\] allow extensions to the case of functionals that involve even power functions like $\displaystyle \Big(E^\alpha_{t_0,x}\big(\int_{t_0}^T f(\alpha_s,s,X_s)\,ds+\Phi(X_T)\big)\Big)^{2n}$. [99]{} Aivaliotis, G. and Veretennikov, A. Yu., 2010. [*On Bellman’s equations for mean and variance control of a Markov diffusion*]{}. Stochastics: An International Journal of Probability and Stochastic Processes, 82:1, 41-51. Aivaliotis, G., Palczewski, J., 2014. [*Investment strategies and compensation of a mean-variance optimizing fund manager*]{}. European Journal of Operational Research, 234:2, 561-570. Aivaliotis, G., Palczewski, J., 2010. [*Tutorial for viscosity solutions in optimal control of diffusions*]{}. Available at SSRN: http://ssrn.com/abstract=1582548. Bielecki, T., Jin, H., Pliska, S.R., Zhou, X.Y., 2005. [*Continuous-Time Mean-Variance Portfolio Selection with Bankruptcy Prohibition*]{}. Mathematical Finance 15, 213-244. Bielecki, T., Pliska, S.R., Sherris, M., 2000. [*Risk Sensitive Asset Allocation*]{}. Journal of Economic Dynamics and Control 24, 1145-1177. Gozzi, F., Świech, A., Zhou, X.Y., 2009. [*Erratum: “A corrected proof of the stochastic verification theorem within the framework of viscosity solutions”*]{}. 
SIAM Journal on Control and Optimization 48:6, 4177-4179. Krylov, N. V., 1980. [*Controlled Diffusion Processes*]{}, Springer-Verlag. Li, D., Ng, W.-L., 2000. [*Optimal Dynamic Portfolio Selection: Multiperiod Mean-Variance Formulation*]{}. Mathematical Finance 10:3, 387-406. Lim, A.E.B., 2004. [*Quadratic hedging and mean-variance portfolio selection in an incomplete market*]{}. Mathematics of Operations Research 29, 132-161. Markowitz, H. M., 1952. [*Portfolio Selection*]{}. Journal of Finance 7:1, 77-91. Pham, H., 2009. [*Continuous-time Stochastic Control and Optimization with Financial Applications*]{}. Springer. Tse, S.T., Forsyth, P.A., Li, Y., 2014. [*Preservation of Scalarization Optimal Points in the Embedding Technique for Continuous Time Mean-Variance Optimization*]{}. SIAM Journal on Control and Optimization, 52:3, 1527–1546. Wang, J., Forsyth, P.A., 2010. [*Numerical solution of the Hamilton-Jacobi-Bellman formulation for continuous time mean variance asset allocation*]{}. Journal of Economic Dynamics & Control 34, 207-230. Zhou, X. Y. and Li, D., 2000. [*Continuous-Time Mean-Variance Portfolio Selection: A Stochastic LQ Framework*]{}. Applied Mathematics and Optimization, 42, 19-33. [**Acknowledgments:**]{}\ For the second author the paper was prepared within the framework of a subsidy granted to the HSE by the Government of the Russian Federation for the implementation of the Global Competitiveness Program. [^1]: [**ams classification:**]{} 93E20, 60H10. [^2]: University of Leeds, School of Mathematics, Leeds, LS2 9JT, UK, e-mail: [email protected] [^3]: University of Leeds, School of Mathematics, Leeds, LS2 9JT, UK, e-mail: [email protected] & University of Leeds, UK & National Research University Higher School of Economics, Moscow, Russia & Institute of Information Transmission Problems, Moscow, Russia
Generic Hamiltonian systems are neither integrable nor chaotic [@MM74], but rather exhibit a mixed phase space, where regular and chaotic regions coexist. Each island of regular motion is surrounded by infinitely many chains of smaller islands. As the same holds for any of these smaller islands, a very complex hierarchical phase-space structure is found for generic Hamiltonian systems, which is well understood [@lichtenberg] and nowadays appears in textbooks on classical mechanics [@saletan]. The dynamical properties, however, are still not fully understood. The most fundamental statistical quantity for characterizing dynamics is the decay of correlations in time. It determines transport properties and is directly related to the distribution of Poincaré recurrences $P(t)$, which is the probability of returning to a given region of phase space with a recurrence time larger than $t$. This probability decays on average like a power law [@CS81] $$\label{powerlaw} P(t)\sim t^{-\gamma}\quad,$$ due to the trapping of chaotic trajectories in the hierarchically structured vicinity of islands of regular motion. The power-law decay is a universal property of Hamiltonian systems. It has dramatic consequences for transport [@transport] (anomalous diffusion) and quantum mechanics [@quantum] (conductance fluctuations and eigenfunctions), which sensitively depend on the value of $\gamma$. The exponent $\gamma$, as determined from finite-time numerical experiments, seems to be non-universal: it varies with system and parameter, and typically ranges between $1$ and $2.5$ [@CS81; @transport; @quantum]. It is a fundamental question of Hamiltonian chaos how the exponent $\gamma$ of the dynamics is related to the structure of the hierarchical phase space. Recently, it was argued by Chirikov and Shepelyansky that for asymptotically large times the exponent is independent of the specific system and parameter and is given by the universal value $\gamma=3$ [@CS99].
Their arguments are based on the universal presence of critical tori in phase space and are supported by a numerical investigation of the kicked rotor at kicking strength $K=K_c=0.97163540631$. At this parameter value the golden torus is critical, i.e., it can be destroyed by an arbitrarily small perturbation. The self-similar vicinity of the critical golden torus \[see Fig. \[fig:density\](a)\] has been studied using renormalization methods [@kay83], and the asymptotic value $\gamma=3$ for the power-law decay of $P(t)$ was predicted a long time ago [@HCM85; @alleCS]. The fact that it has never been observed led to the speculation that the universal decay should appear for larger times [@murray]. In Ref. [@CS99] a numerical approach allowed an estimate of the onset of this decay, in agreement with the presented data for $P(t)$. In addition to the sticking of trajectories in the vicinity of critical tori, the trapping of trajectories in island-around-island structures has been studied [@meiss86; @zaslavsky]. Zaslavsky et al. [@zaslavsky] showed that for the kicked rotor at $K=K^*=6.908745$ the phase space possesses an island-around-island structure of sequence $3-8-8-8\ldots$ \[see Fig. \[fig:density\](c)\]. They used this self-similarity to derive the trapping exponent $\gamma=2.25$ by renormalization arguments, which was recently supported by numerics [@ZE00]. In fact, these renormalization approaches for single self-similar phase-space structures can be considered as special cases of the more general binary tree model of Meiss and Ott [@MO85]. In this model a chaotic trajectory can at any stage of the tree either go to a boundary circle (level scaling) or to the island-around-island structure (class scaling). The universal coexistence of the two routes of renormalization at any stage led to the exponent $\gamma=1.96$ [@MO85]. In contrast, the recent findings claim that just [*one*]{} of these scalings is relevant for the trapping mechanism: while in Ref.
[@CS99] it is argued that universally (and in particular for $K=K_c$) the level scaling should dominate, in Ref. [@zaslavsky] it is claimed that for $K=K^*$ the class scaling describes the trapping mechanism. In order to clarify these contradictions, we numerically investigate $P(t)$ for the kicked rotor at $K_c$ and $K^*$ for times larger than in previous studies. We find strong deviations from the predictions of the renormalization theories that only consider a single self-similar phase-space structure. In addition, our numerical approach allows us to analyze where chaotic trajectories are trapped in phase space. For large times the majority of trajectories is [*not trapped*]{} in those phase-space regions that are described by the simple renormalization theories. We thereby reveal the mistaken assumption of these theories. In particular for $K=K_c$, the self-similar vicinity of the critical golden torus does not dominate the trapping mechanism for large times, and thus even in this ideal situation the proposed universal exponent $\gamma=3$ is not found. For $K=K^*$, the majority of long trapped trajectories does not follow the self-similar island-around-island structure, which leads to a smaller exponent than predicted. Although the phase space in both cases is dominated by exactly self-similar structures, our analysis shows that they do [*not*]{} dominate the dynamics. We use the standard map (kicked rotor) defined by $$\label{standardmap} q_{n+1}=q_n+p_n \,{\rm mod}\,2\pi\qquad p_{n+1}=p_n+K\sin q_{n+1} \,\, ,$$ which has a $2\pi$-periodic phase space in $p$ direction. We concentrate on two parameters: (i) The dynamics for $K=K_c$ is bounded in $p$ direction by the golden torus, which is critical [@kay83]. The route towards the critical golden torus is determined by the principal resonances given by the approximants of the golden mean $\sigma=(\sqrt{5}-1)/2$, and the scaling has been analyzed in detail [@kay83].
The dynamics along this route was described by a Markov chain leading to $\gamma=3.05$ [@HCM85] and alternatively via the scaling of the local diffusion rate leading to $\gamma=3$ [@alleCS]. (ii) The phase space for $K=K^*$ consists of two small accelerator modes embedded in an otherwise completely chaotic phase space. Each mode shows an island-around-island structure of sequence $3-8-8-8\ldots$ [@zaslavsky]. This exact scaling relation was used to predict the exponent $\gamma=2.25$. In order to check the predictions for $P(t)$, in case (i) we start several long trajectories initially located near the unstable fixed point $(q,p)=(0,0)$. We measure the times $\tau$ for which an orbit stays close to the critical torus by monitoring successive crossings of the line $p=0$, as was also done in Ref. [@CS99]. In case (ii) we start many trajectories at four different regions in the chaotic part of phase space [@remark1]. Whenever a chaotic trajectory is trapped by one of the island structures, it follows the dynamics of the accelerator mode and jumps to the neighboring unit cell in $p$ direction. We measure the time $\tau$ during which it continuously jumps one unit cell per iteration in the same direction. From the set of trapping times $\tau$ one determines the fraction $\tilde{P}(t)$ of orbits with $\tau\ge t$. This quantity decays with the same power-law exponent as the Poincaré recurrences $P(t)$ and was chosen for numerical convenience. The total computer time corresponds to (i) $15\cdot10^{12}$ and (ii) $8\cdot10^{12}$ iterations of the standard map. We have checked whether our statistical data for large times are sensitive to the unavoidable finite numerical precision by comparing data for double ($\approx$16 significant digits) and quadruple ($\approx$32 digits) precision. We found no difference and present a combination of both data sets in Fig. \[fig:tcube\]. In Fig.
\[fig:tcube\] we compare our numerical findings for $P(t)$ for case (i) with the prediction $P(t>10^7)=3.9\cdot10^{12}\,t^{-3}$ extracted from Ref. [@CS99]. This power law is not compatible with our data, even though we are in the time regime where it should be observable according to Ref. [@CS99]. For $10^5\le t\le 10^9$ we rather see an exponent $\gamma\approx1.9$ [@uppercurve]. For case (ii) with $K=K^*$, we find a power-law decay of $P(t)$ with $\gamma=1.85$ (Fig. \[fig:zas\]) for various starting conditions [@remark1], contradicting the renormalization prediction $\gamma=2.25$ [@zaslavsky]. Our numerical results raise the question of why the renormalization theories predict the wrong exponents. This is particularly surprising since the structure of the phase space for the specific parameters $K_c$ and $K^*$ is dominated by one exactly self-similar hierarchy. All these approaches rely on the assumption that this single self-similar phase-space structure also dominates the long-time trapping of chaotic trajectories. We will now check this assumption. To this end we calculate the density in phase space for trajectories that are trapped for long times. For case (i) we show two examples in Fig. \[fig:density\](a,b). Figure \[fig:density\](a) shows a trajectory of length $t\approx5\cdot10^7$ that follows the route to the critical golden torus up to the principal resonance with winding number $55/144$. This is consistent with the renormalization theory according to the data presented in Ref. [@CS99]. In contrast, the trajectory shown in Fig. \[fig:density\](b) approaches the critical torus only up to the resonance $3/8$ and is predominantly trapped around a non-principal resonance. This trajectory is not captured by the renormalization theory, since it has the same length as the trajectory in Fig. \[fig:density\](a) and therefore should be trapped around the $55/144$ resonance or one of its neighbors.
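The recurrence-time bookkeeping and exponent estimates described above can be sketched in a few lines. This is a toy illustration under our own naming, not the authors' code; the production runs quoted in the text total on the order of $10^{13}$ map iterations, far beyond what this sketch is meant for.

```python
import numpy as np

K_C = 0.97163540631  # kicking strength at which the golden torus is critical

def standard_map(q, p, K):
    """One iteration of the kicked rotor in the convention of the text:
    q_{n+1} = q_n + p_n mod 2*pi,  p_{n+1} = p_n + K*sin(q_{n+1})."""
    q = (q + p) % (2.0 * np.pi)
    p = p + K * np.sin(q)
    return q, p

def recurrence_times(K, n_iter, q0=1e-6, p0=1e-6):
    """Collect times between successive crossings of the line p = 0,
    the proxy for Poincare recurrences used in case (i)."""
    q, p = q0, p0
    times, last = [], 0
    sign = 1.0 if p >= 0 else -1.0
    for n in range(1, n_iter + 1):
        q, p = standard_map(q, p, K)
        s = 1.0 if p >= 0 else -1.0
        if s != sign:
            times.append(n - last)
            last, sign = n, s
    return np.array(times)

def survival(times, t_grid):
    """P(t): fraction of recurrence times tau >= t."""
    times = np.sort(times)
    return 1.0 - np.searchsorted(times, t_grid, side="left") / len(times)

def powerlaw_exponent(times, t_min=10):
    """Estimate gamma from a least-squares fit of log P(t) vs log t."""
    t = np.unique(times[times >= t_min]).astype(float)
    P = survival(times, t)
    good = P > 0
    slope = np.polyfit(np.log10(t[good]), np.log10(P[good]), 1)[0]
    return -slope
```

A long run at `K_C`, e.g. `powerlaw_exponent(recurrence_times(K_C, 10**8))`, yields a finite-time estimate of $\gamma$ to compare with the predicted asymptotic value of 3.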
In order to quantify this observation we introduce the fraction $f(t)$ of trajectories with trapping time $t$ that follow the route of renormalization. Numerically, we determine $f(t)$ by considering trajectories with trapping times in an interval around $t$. We classify these trajectories, as was done in Fig. \[fig:density\], according to their phase-space densities. For $t>2\cdot 10^7$ this fraction decreases to zero (Fig. \[fig:tcube\], inset). This clearly demonstrates that for large times the majority of trajectories is [*not trapped*]{} in the self-similar phase-space regions approaching the critical golden torus. The assumption of the renormalization theories leading to $\gamma\approx3$ is violated, and thus they are not applicable for predicting $P(t)$. From the ratio of the predicted $P(t)\sim t^{-3}$ for the trajectories trapped in the self-similar phase-space structure and the observed $P(t)\sim t^{-1.9}$ we expect their fraction to decay as $f(t)\sim t^{-1.1}$ for large times. This is confirmed by our numerical data in the inset of Fig. \[fig:tcube\]. We find that the majority of trajectories is trapped around non-principal resonances, which is in agreement with the binary tree model [@MO85]. This is in strong contrast to the conclusions of Ref. [@CS99], which are based on the computation of exit times from the vicinity of unstable fixed points of principal resonances. As with the investigation of local diffusion rates [@ruffo96], the analysis of the mean exit time yields information only about trajectories trapped in the region studied. One cannot conclude, however, that these trajectories dominate the global trapping mechanism, as is convincingly shown by our numerics. We carry out the same analysis for case (ii), $K=K^*$. In Figure \[fig:density\](c,d) we show the phase-space densities of two long trajectories. Although both trajectories have the same length, only the trajectory in Fig.
\[fig:density\](c) follows the self-similar island-around-island structure, while the trajectory shown in Fig. \[fig:density\](d) is trapped around other islands. The fraction $f(t)$ of trajectories following the route of renormalization decays to zero for large times (Fig. \[fig:zas\], inset). This decay is well described by the estimate $f(t)\sim t^{-2.25}/t^{-1.85}\sim t^{-0.4}$, i.e., the ratio of the predicted $P(t)\sim t^{-2.25}$ and the observed $P(t)\sim t^{-1.85}$. This shows that the renormalization theory for the island-around-island structure is not capable of explaining $P(t)$. It should be noted that this difference is not caused by the fact that the finite precision of $K^*$ eventually leads to a breakdown of the self-similarity on very small scales. In conclusion, our analysis shows that even in the presence of an exactly self-similar phase-space structure it is not sufficient to describe the trapping mechanism of chaotic trajectories by only this structure, as was recently claimed in the literature. We find that additional island structures may dominate the trapping mechanism for large times and thus affect the power-law decay of $P(t)$. Our analysis supports qualitatively the tree model of Meiss and Ott, which allows for the coexistence of two routes of renormalization at any stage. Quantitatively, we find slightly smaller exponents. It remains an open question whether there exists a universal asymptotic exponent for the trapping of chaotic trajectories in Hamiltonian systems. We thank R. Fleischmann for helpful discussions. M.W. acknowledges financial support by an EMBO fellowship. [1]{} L. Markus and K. R. Meyer, [*Generic Hamiltonian Dynamical Systems are neither Integrable nor Ergodic*]{}, Memoirs of the American Mathematical Society, No. 114 (American Mathematical Society, Providence, RI, 1974). A. J. Lichtenberg and M. A. Lieberman, [*Regular and Chaotic Dynamics*]{}, Appl. Math. Sciences 38, 2nd ed., (Springer-Verlag, New York, 1992); J. D.
Meiss, Rev. Mod. Phys. [**64**]{}, 795 (1992). J. V. José and E. J. Saletan, [*Classical Dynamics*]{}, Cambridge University Press, (Cambridge, 1998). B. V. Chirikov and D. L. Shepelyansky, in [*Proceedings of the IXth Intern. Conf. on Nonlinear Oscillations, Kiev, 1981*]{} \[Naukova Dumka [**2**]{}, 420 (1984)\] (English Translation: Princeton University Report No. PPPL-TRANS-133, 1983); C. F. F. Karney, Physica [**8**]{} D, 360 (1983); B. V. Chirikov and D. L. Shepelyansky, Physica [**13**]{} D, 395 (1984); P. Grassberger and H. Kantz, Phys. Lett. [**113**]{} A, 167 (1985); Y. C. Lai, M. Ding, C. Grebogi, and R. Blümel, Phys. Rev. A [**46**]{}, 4661 (1992). T. Geisel, A. Zacherl, and G. Radons, Phys. Rev. Lett. [**59**]{}, 2503 (1987); R. Fleischmann, T. Geisel, and R. Ketzmerick, Phys. Rev. Lett. [**68**]{}, 1367 (1992); M. F. Shlesinger, G. M. Zaslavsky, and J. Klafter, Nature [**363**]{}, 31, (1993); G. M. Zaslavsky, [*Lévy Flights and Related Phenomena in Physics*]{}, (Springer, Berlin, Verlag, 1995); G. Zumofen and J. Klafter, Phys. Rev. E [**59**]{}, 3756, (1999). Y. C. Lai, R. Blümel, E. Ott, and C. Grebogi, Phys. Rev. Lett. [**68**]{}, 3491 (1992); R. Ketzmerick, Phys. Rev. B [**54**]{}, 10841 (1996); A. S. Sachrajda, R. Ketzmerick, C. Gould, Y. Feng, P. J. Kelly, A. Delage, and Z. Wasilewski, Phys. Rev. Lett. [**80**]{}, 1948 (1998); G. Casati, I. Guarneri, and G. Maspero, Phys. Rev. Lett. [**84**]{}, 63 (2000); R. Ketzmerick, L. Hufnagel, F. Steinbach, and M. Weiss, Phys. Rev. Lett. [**85**]{}, 1214 (2000); B. Huckestein, R. Ketzmerick, and C. Lewenkopf, Phys. Rev. Lett. [**84**]{}, 5504 (2000); L. Hufnagel, R. Ketzmerick, and M. Weiss, Europhys. Lett., [**22**]{}, 264, (2001); A. P. Micolich [*et al.*]{}, Phys. Rev. Lett. [**87**]{}, 036802, (2001). B. V. Chirikov and D. L. Shepelyansky, Phys. Rev. Lett. [**82**]{}, 528 (1999). R. S. MacKay, Physica D [**7**]{}, 283 (1983). J. D. Hanson, J. R. Cary, and J. D. Meiss, J. Stat. Phys. 
[**39**]{}, 327 (1985). B. V. Chirikov, Lect. Notes Phys. [**179**]{}, 29 (1983); B. V. Chirikov, in [*Proceedings of the International Conference on Plasma Physics, Lausanne, Switzerland, 1984*]{} (Commission of the European Communities, Brussels, Belgium, 1984), Vol. 2, p. 761; B. V. Chirikov and D. L. Shepelyansky, in [*Renormalization Group*]{}, edited by D. V. Shirkov, D. I. Kazakov, and A. A. Vladimirov (World Scientific, Singapore, 1988), p. 221. N. W. Murray, Physica D [**52**]{}, 220 (1991). J. D. Meiss, Phys. Rev. A [**34**]{}, 2375 (1986). G. M. Zaslavsky, M. Edelman, and B. A. Niyazov, Chaos [**7**]{}, 159 (1997). G. M. Zaslavsky and M. Edelman, Chaos [**10**]{}, 135 (2000). J. D. Meiss and E. Ott, Phys. Rev. Lett. [**55**]{}, 2741 (1985); J. D. Meiss and E. Ott, Physica D [**20**]{}, 387 (1986). We have started trajectories randomly placed on a line $p=$const away from the accelerator modes (upper curve in Fig. \[fig:zas\]). In order to increase statistics for large times, we have started randomly placed trajectories in 3 different small boxes close to the accelerator mode in positive direction. In principle, the asymptotic decay of $P(t)$ might depend on the initial box. We find, however, that this is not the case, as all four curves are identical for times $t>2\cdot10^3$. We also studied Poincaré recurrences for trajectories approaching the critical torus from the other side in the same way as in Ref. [@CS99]. In the range from $10^8<t<10^9$ we find a very slow decay, such that $P(t=10^9)$ is more than three orders of magnitude bigger than the prediction $P(t)=3.98\cdot10^{13}t^{-3}$ from Ref. [@CS99]. S. Ruffo and D. L. Shepelyansky, Phys. Rev. Lett. [**76**]{}, 3300 (1996).
--- abstract: | It has been known for 20 years that the absorbing gas in broad absorption line quasars does not completely cover the continuum emission region, and that partial covering must be accounted for to accurately measure the column density of the outflowing gas. However, the nature of partial covering itself is not understood. Extrapolation of the [*SimBAL*]{} spectral synthesis model of the [*HST*]{} COS UV spectrum from SDSS J0850+4451 reported by @leighly18 to non-simultaneous rest-frame optical and near-infrared spectra reveals evidence that the covering fraction is wavelength dependent, being a factor of 2.5 higher in the UV than in the optical and near-infrared bands. The difference in covering fraction can be explained if the outflow consists of clumps that are small and either structured or clustered relative to the projected size of the UV continuum emission region, and have a more diffuse distribution on size scales comparable to the near-infrared continuum emission region size. [ The lower covering fraction over the larger physical area results in a reduction of the measured total column density by a factor of 1.6 compared with the UV-only solution.]{} This experiment demonstrates that we can compare rest-frame UV and near-infrared absorption lines, specifically \*$\lambda 10830$, to place constraints on the uniformity of absorbing gas in broad absorption line quasars. author: - 'Karen M. Leighly' - 'Donald M. Terndrup' - 'Adrian B. Lucy' - Hyunseop Choi - 'Sarah C. Gallagher' - 'Gordon T. Richards' - Matthias Dietrich - Catie Raney title: 'The z=0.54 LoBAL Quasar SDSS J085053.12+445122.5: II. The Nature of Partial Covering in the Broad-Absorption-Line Outflow' --- Introduction {#intro} ============ Broad absorption lines are found in the rest-frame UV spectra of a significant fraction of quasars [e.g., @weymann91; @gibson09].
Most often, these lines are blueshifted with velocities as high as tens of thousands $\rm km\, s^{-1}$, indicating the presence of powerful outflows. Optical spectra of $z\sim 2$ broad absorption line quasars (BALQs) include absorption lines from Ly$\alpha$, , , and , among others. Early on, scientists recognized that it might be possible to use these lines to determine the metallicity of the outflowing gas, thereby potentially constraining the physical conditions in the quasar central regions and the potential for enrichment of the IGM [see @hamann98a and references therein]. However, they quickly discovered that the implied metallicities were enormous [e.g., 20–100 times solar; @hamann98a]. Especially problematic were quasars with $\lambda 1118,1128$ absorption lines, as phosphorus is a relatively rare element with an abundance only $9.3\times 10^{-4}$ that of carbon [@grevesse07]. @hamann98a proposed that, instead, the absorber only partially covers the continuum source, so that an absorption line from a high-abundance ion such as C$^{+3}$ can be completely saturated without dropping to zero flux density, and a low-abundance ion such as P$^{+4}$ can show significant optical depth. Additional support for partial covering comes from doublet analysis in objects with relatively narrow absorption lines. The ratios of opacities of absorption lines from the same lower level are fixed by atomic physics. For example, due to the fine structure of the upper level, the absorption line at 1548Å (upper level configuration $^2P_{3/2}$, with degeneracy 4) will have approximately twice the opacity of the line at 1550Å (upper level configuration $^2P_{1/2}$, with degeneracy 2). If the apparent opacities are less than 2:1, then the presence of partial covering is inferred [e.g., @hamann01]. 
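The doublet inference just described can be made concrete: for a step-function absorber with residual intensity $R = 1 - C\,(1-e^{-\tau})$ and a 2:1 opacity ratio, the two equations invert in closed form. The sketch below is ours, not code from the cited papers, and assumes an unsaturated, cleanly measured pair; for other opacity ratios the same pair of equations must be solved numerically.

```python
import numpy as np

def doublet_covering(R_strong, R_weak):
    """Invert step-function partial covering for a doublet whose members
    share a lower level and have opacity ratio 2:1 (e.g. the 1548/1550
    doublet of C+3).

    Model: R = 1 - C*(1 - exp(-tau)), with tau_strong = 2*tau_weak.
    Writing x = exp(-tau_weak):
        1 - R_weak   = C * (1 - x)
        1 - R_strong = C * (1 - x**2) = C * (1 - x) * (1 + x)
    so x = (R_weak - R_strong) / (1 - R_weak), and C, tau_weak follow.
    Requires R_strong < R_weak < 1 (an unsaturated, detected pair).
    Returns (C, tau_weak)."""
    x = (R_weak - R_strong) / (1.0 - R_weak)
    C = (1.0 - R_weak) / (1.0 - x)
    return C, -np.log(x)
```

As the lines saturate ($\tau \to \infty$) both residuals approach $1 - C$, which is why saturated troughs pin down the covering fraction but not the column density.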
In fact, partial covering is routinely used to distinguish narrow absorption lines (NALs) intrinsic to the quasar from absorption lines from intervening gas which would completely cover the continuum source [e.g., @misawa07; @rh11; @ganguly13]. The spectropolarimetric properties of BALQs also provide evidence for partial covering. Often, the polarization is stronger in the BAL troughs of polarized BALQs [@ogle99; @dipompeo13], indicating that at least in some cases, scattered light fills in the troughs, or at least contributes to the continuum in the trough. Once partial covering was recognized to be nearly ubiquitous in quasars, investigators set about trying to account for it. Fortunately, many of the most prominent absorption lines come from lithium- or sodium-like ions (e.g., C$^{+3}$, N$^{+4}$, O$^{+6}$, Mg$^{+}$, Al$^{+2}$, Si$^{+3}$, P$^{+4}$, Ca$^{+}$); all of these ions share the atomic structure discussed above for C$^{+3}$, namely, a doublet transition from the ground state, with the optical depth ratios of 2:1 fixed by atomic physics. Optical depth measurements of both lines yield two equations for two unknowns (the covering fraction and the optical depth), implying that the true optical depth could be solved for exactly [e.g., @hamann01]. This method works well [e.g., @arav05; @arav08; @borguet12; @borguet13; @chamberlain15; @dunn10; @finn14; @gabel05; @hamann97; @hamann01; @hamann11; @hall03; @moravec17; @moe09; @rh11] as long as both lines are not saturated. The fact that the optical depths are 2:1 means that these lines have a rather limited dynamic range in optical depth over which they are useful. @leighly11 discussed a potentially very useful set of lines that arise from metastable helium, especially \*$\lambda 10380$, the $2S \rightarrow 2P$ transition, and \*$\lambda 3889$, the $2S \rightarrow 3P$ transition, which have an opacity ratio of 23:1. 
[ This high ratio makes these lines ideal for very high column density outflows, which are potentially the most interesting for identifying quasar activity that is likely to affect the host galaxy]{}. The true column density of hydrogen can be estimated using the Lyman series lines [e.g., @gabel05], although this can be difficult in cases where the Ly$\alpha$ forest is present and blended with the absorption lines of interest [ as is generally the case for quasars found at the epoch of peak quasar activity ($z=1$–3).]{} The partial covering analysis discussed above implicitly assumes that part of the continuum emission region is completely covered and the other part is completely bare. This “step function” [@arav05] partial covering is not the only possibility, and indeed, early on it was recognized that abundant ions tended to have higher covering fractions than lower abundance ions [@hamann01]. A popular second model, called the inhomogeneous absorber [@dekool02c; @arav05] or the power-law partial covering model [@arav05; @sabra05], posits that the optical depth has a power-law distribution over the continuum emission region. The power-law partial covering model can naturally account for the difference in apparent covering fractions among ions. The step function and power-law partial covering models can be distinguished if there are more than two absorption lines from the same lower level, and detailed analysis shows that the inhomogeneous absorber model is sometimes preferred [@dekool02c; @arav05]. Most of these analyses ignore the interesting question of the physical origin of partial covering. Absorption lines are observed along the line of sight to the continuum emission region in the central engine. From the point of view of an observer on Earth, the continuum emission region is spatially unresolved. But from the point of view of the absorber, the continuum emission region may be spatially resolved. 
Moreover, the continuum emission is assumed to come from an accretion disk, hotter in the center and cooler at larger radii, which means that the continuum emission region is resolved as a function of wavelength too. For example, a simple sum-of-blackbodies accretion disk model has radial temperature dependence $T\propto R^{-3/4}$. It is therefore possible that the absorber, located in the vicinity of the torus at $\sim 1\rm \, pc$, for example, presents a higher covering fraction to the hot and compact central part of the accretion disk than to cooler parts at larger radii. Consequently, the covering fraction measured in the UV bandpass refers to a much smaller continuum emission region than the covering fraction in the optical and near-infrared bands. Analysis of partial covering as a function of wavelength would lead to an enhanced understanding of the geometry of the absorber in BALQs as well as constrain the relative angular sizes of the continuum emission regions as a function of wavelength. So, instead of only obtaining information along a single radial sight line, we would be able to investigate the angular distribution as well. A caveat is that we assume that negligible flux is scattered into our line of sight. To do this experiment, we clearly need to analyze lines widely separated in wavelength to probe different size scales of the accretion disk. However, we cannot obtain this information from just any pair of absorption lines, due to the fact that abundant ions have higher covering fractions than less abundant ions: the pair of lines should have about the same optical depth in the gas. @leighly11 showed that this criterion is fulfilled over a wide range of physical conditions by $\lambda \lambda 1118, 1128$ and the metastable helium lines, in particular \*$\lambda 10830$.
These two lines probe dramatically different size scales of the accretion disk; for the sum-of-blackbodies model [e.g., @fkr02], the radius of the accretion disk emitting at 1 micron is a factor of 12 larger than the radius emitting at 1100Å (see §\[partial\_covering\]). In terms of area, this corresponds to a factor of 140. The low redshift ($z=0.5422$) LoBAL quasar SDSS J085053.12$+$445122.5, hereafter referred to as SDSS J0850+4451, was discovered to have a \*$\lambda 3889$ absorption line in its SDSS spectrum [e.g., @luo13]. We obtained [*Gemini*]{} GNIRS and LBT LUCI near-infrared spectroscopic observations and identified the presence of a deep \*$\lambda 10830$ absorption line (§\[observations\] below). SDSS J0850$+$4451 was detected by [*GALEX*]{}, indicating that it was bright enough to be observed by [*HST*]{} using COS. The COS spectral analysis was described in @leighly18, hereafter Paper I, using a novel spectral synthesis program called [*SimBAL*]{}. The results of that analysis are summarized in §\[recap\]. We extrapolated the [*SimBAL*]{} best-fitting solutions to the optical and near-IR, finding that the predicted absorption was significantly deeper than observed (§\[extrapolation\]), apparently indicating differential partial covering. The [*HST*]{} and near-IR observations were not simultaneous, and we investigate the potential impact of variability on our result in Appendix \[variability\]. In addition, the host galaxy emits strongly at 1 micron, i.e., under the \*$\lambda 10830$ absorption line, so we performed spectral energy distribution (SED) fitting and image analysis to show that the contribution of the host galaxy to the 1-micron continuum is negligible and is therefore not filling in the absorption line (Appendix \[host\]).
A quantitative analysis of the difference in covering fraction in the UV, optical, and near-IR is reported in §\[quantifying\], and an analysis of the difference in the covering fraction between the continuum and broad emission lines is discussed in §\[blr\]. [ A discussion of the nature of the power-law covering-fraction parameterization is given in §\[understanding\].]{} The implications of our results for our understanding of the physical properties of partial covering are discussed in §\[discussion\], and a summary of our principal results is given in §\[conclusions\]. Vacuum wavelengths are used throughout. The cosmological parameters used depend on the context (e.g., when comparing with results from an older paper) and are reported in the text. Observations and Data Reduction {#observations} =============================== We report data taken at six different observatories. We obtained near-IR spectra at Gemini (§\[gemobs\]) and LBT (§\[lbt\]) to measure the properties of the \*$\lambda 10830$ line. To guard against absorption-line variability confounding the \* analysis, we obtained new optical spectra at MDM observatory (§\[mdmobs\]) contemporaneous with the near-IR observations, as well as near-IR photometry to estimate the host galaxy contribution through SED fitting. Subsequent optical spectra obtained at APO (§\[apoobs\]) and KPNO (§\[kpnoobs\]), combined with the SDSS and BOSS spectra (§\[sdss\]), were used to track absorption-line variability. The log of all of the observations of SDSS J0850$+$4451 analyzed in this paper is given in Table \[obslog\].
[lcccc]{}
SDSS & 2002 Nov 27 & 9000.0 & 2472–5975 & $100\rm \, km\, s^{-1}$\
HST (WFC3 IR) & 2010 April 9 & 905.9 & 12500 & 0.13 arc sec/pixel\
LBT (LUCI) & 2010 Dec 12 & 1500.0 & 9512–15304 & $160 \rm \, km\, s^{-1}$\
MDM (CCDS) & 2011 Feb 11 & 9600.0 & 3121–4108 & $210 \rm \, km\, s^{-1}$\
[*Gemini*]{} (GNIRS) & 2011 Apr 23, 24; 2011 Jun 6 & 1520.0 & 5513–16466 & $240 \rm \, km\, s^{-1}$\
MDM (TIFKAM) & 2012 Dec 29 & 990, 720, 720 & 8105, 10700, 14265 & 1.0 arcsec\
APO (DIS) & 2014 Apr 12 & 4500.0 & 2206–6353 & 380, $400 \rm \, km\, s^{-1}$\
BOSS & 2015 Jan 20 & 3600.0 & 2345–6740 & $89\rm \, km\, s^{-1}$\
KPNO (KOSMOS) & 2015 Apr 24 & 3600.0 & 3804–6631 & $120 \rm \, km\, s^{-1}$\
[*Gemini*]{} GNIRS Observations {#gemobs} ------------------------------- SDSS J0850$+$4451 was observed using GNIRS[^1] on the Gillett Gemini Telescope using a standard cross-dispersed mode (the SXD camera with the $31.7 \rm \, l/mm$ grating) and a $0{\farcs}45$ slit. Observations were made on 23 April 2011, 24 April 2011, 26 May 2011, and 6 June 2011. The 26 May observation was deemed unusable due to detector noise, as the detector read mode had been mistakenly set to “Very Bright/Acq./High Bckgrd” rather than “Very Faint Objects” mode. On each of 23 and 24 April 2011, $8\times 190$ second exposures were made in an ABBA pattern; on 6 June 2011, $4 \times 190$ second exposures were made, also in an ABBA pattern. A0 stars were observed at approximately the same airmass and adjacent to the object observation for telluric correction. The data were reduced using the IRAF [*Gemini*]{} package, coupled with the GNIRS XD reduction scripts, in the standard manner for near-infrared spectra, through the spectral extraction step.
For telluric correction, the [*Gemini*]{} spectra of the source and the telluric standard star were converted to a format that resembled IRTF SpeX data sufficiently that the Spextool [xtellcor]{} package [@cushing04; @vacca03] could be used. LBT Observation {#lbt} --------------- SDSS J0850$+$4451 was observed using LBT LUCI[^2] on 12 December 2010. Six exposures were made, with the object offset along the slit between each observation. An A0 star was observed adjacent to the target observation at approximately the same airmass. There is no reduction pipeline for LUCI data, so the data were reduced by hand using IRAF. Because the target was much fainter than the sky lines, special care was taken to straighten the object trace and sky lines. Wavelength correction was obtained using sky lines. The telluric correction was performed using [xtellcor\_general]{}, the generalization of the [xtellcor]{} procedure for 1-D (versus cross-dispersed) spectra [@cushing04; @vacca03]. The LBT spectrum and the three [*Gemini*]{} spectra were combined. First, the four spectra were resampled onto a common wavelength range, and averaged without weighting. The GNIRS spectrum obtained on 23 April 2011 appeared to have the best signal-to-noise ratio and the best calibration, and the other spectra were normalized and tilted to conform with that one. In §\[selection\], we discuss an [*LBT*]{} observation of the quasar PG 1254$+$047. It was observed using LBT [*LUCI*]{} on 2013 Jan 3 for 960 seconds in eight exposures using an ABBA configuration. The A0 telluric star HD 116960 was observed immediately after the PG 1254$+$047 observation in four 12-second exposures for a total of 48 seconds. Standard methods for extraction and wavelength calibration were done using IRAF. The telluric correction was done using IRTF [xtellcor\_general]{} [@vacca03]. 
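The combination step described above, resampling the spectra onto a common wavelength grid and averaging without weighting, can be sketched as follows; the function and array names are illustrative, not taken from any actual pipeline:

```python
import numpy as np

def combine_spectra(wave_grids, fluxes, common_wave):
    """Resample each spectrum onto a common wavelength grid via linear
    interpolation and average without weighting, as described in the text."""
    resampled = [np.interp(common_wave, w, f) for w, f in zip(wave_grids, fluxes)]
    return np.mean(resampled, axis=0)

# Illustrative use with two toy spectra defined on different grids:
w1 = np.linspace(9000.0, 11500.0, 500)
w2 = np.linspace(9000.0, 11500.0, 700)
common = np.linspace(9100.0, 11400.0, 600)
avg = combine_spectra([w1, w2], [np.ones_like(w1), 3.0 * np.ones_like(w2)], common)
```

In practice the spectra would first be normalized and tilted to a reference spectrum, as done here with the 23 April 2011 GNIRS spectrum.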
KPNO Observation {#kpnoobs} ---------------- We obtained $3\times1200\rm \, s$ optical spectra of SDSS J0850$+$4451 on the night of UT 24 April 2015 using the KOSMOS spectrograph [@martini14] on the Mayall Telescope at the Kitt Peak National Observatory. We employed the blue VPH grism and center slit, which yielded spectra from 3804 – 6631Å at 0.69 Å ${\rm pixel}^{-1}$. The slit width was $0{\farcs}9$, and typical seeing was about $1{\farcs}2$. The resolution of the spectra, as measured by telluric emission lines, was 2.6 pixels at the center of the spectra and 2.8 pixels at either end. The output images of the spectrograph had dimension $320 \times 4096$ pixels, read out through two amplifier sections of size $160 \times 4096$ pixels. All the data were contaminated by fixed pattern noise which was symmetric on the two amplifiers. The spectra were positioned along the slit so that they fully fell on one amplifier. The first step in the data processing was to apply an overscan correction on each amplifier, and then to remove the pattern noise by flipping the image section from the side not containing the spectrum and then subtracting it from the side that did. This also partially subtracted the night sky lines, which extended across both amplifiers. Other calibration steps were the subtraction of zero-exposure frames and the application of flat-field corrections; the latter were constructed from a combination of quartz lamps in the spectrograph and lamps illuminating a white spot on the inside of the telescope dome. After flat fielding, cosmic rays in the sky regions of the image were removed by hand using a filter that replaced pixel values more than 5$\sigma$ from the median with the median value. After cosmic-ray cleaning, the three exposures were averaged and the spectrum extracted in the usual fashion. Noise as a function of counts in the spectrum was estimated from the scatter after subtracting a highly smoothed spectrum. 
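The amplifier-flip pattern-noise subtraction described above can be sketched as follows; the geometry (spectrum confined to one amplifier half, pattern mirror-symmetric across the two halves) is taken from the text, while the function name and toy arrays are illustrative:

```python
import numpy as np

def remove_pattern_noise(image):
    """Remove fixed-pattern noise that is mirror-symmetric across the two
    amplifier halves: flip the half not containing the spectrum and
    subtract it from the half that does (this also partially subtracts
    sky lines that extend across both amplifiers)."""
    half = image.shape[0] // 2
    spec_half = image[:half, :]    # amplifier section containing the spectrum
    empty_half = image[half:, :]   # amplifier section without the spectrum
    return spec_half - np.flipud(empty_half)
```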
The spectrum was flux calibrated using observations of Feige 34 that were taken the same night as SDSS J0850$+$4451. MDM Observations {#mdmobs} ----------------- SDSS J0850$+$4451 was observed using the Boller & Chivens CCD Spectrograph (CCDS)[^3] on the Hiltner 2.4m telescope at MDM observatory on 11 Feb 2011 under photometric conditions. Eight 20-minute observations were made. The data were reduced in a standard manner using IRAF. SDSS J0850$+$4451 is a relatively faint target for 2MASS; its quality flags in the catalog are “BCB”, indicating that the H band photometry is especially uncertain. Therefore, we also obtained deep JHK imaging observations in order to obtain the photometry for SED fitting, with the goal of constraining the contribution of the host galaxy to the near-IR continuum (§\[sed\_fitting\]). We used TIFKAM[^4] [@depoy93] at the 2.4m Hiltner Telescope of the MDM observatory. We employed the f/5 reimaging camera, which delivered a 5.1 arcmin field over 1024 pixels at a frame scale of $0\farcs 30\ {\rm pixel}^{-1}$. About 10% of each frame was slightly vignetted by an out-of-alignment internal baffle, but this was corrected in the reduction process. The observations were obtained on the night of UT 2012 December 29. For each filter, we obtained a series of 90-second exposures with position offsets varied irregularly between exposures. The total exposure time in $J$ was 990 seconds, while in $H$ and $K$ the total was 720 seconds. After a small linearity correction, the pixel values in the images from each filter were scaled and then combined by a median to produce a sky frame; this was also corrected for dark current to generate a flat field. The corrected images were then combined by averaging to produce master images in each of $JHK$.
Photometry of SDSS J0850$+$4451 was derived from these combined images with respect to the 2MASS catalog values for the other objects on each frame, which were almost always brighter than SDSS J0850$+$4451. SDSS J0850$+$4451 is one of the reddest objects in the nearby field, but its $J - H$ and $J - K$ colors are within the range spanned by the other objects. Evidence for significant color terms in the transformation to the 2MASS system was marginal, so in the end we derived a simple constant offset to transform from the instrumental TIFKAM “m” to 2MASS “M” magnitudes: $M = m + {\rm const}$. Errors in the photometry were derived from the standard deviation of the magnitudes on individual frames and propagation of the 2MASS catalog error values. The final values from our photometry were $J = 16.202 \pm 0.024$, $H = 15.507 \pm 0.023$ and $K = 14.763 \pm 0.028$. SDSS and BOSS Spectra {#sdss} --------------------- SDSS J0850$+$4451 was observed using SDSS on 27 Nov 2002. The MDM and SDSS spectra had very similar emission and absorption lines, so, after the MDM spectrum was tilted and scaled to match the SDSS spectrum, they were averaged over the segment including the \*$\lambda 3889$ line in order to increase the signal-to-noise ratio. SDSS J0850$+$4451 was observed again using BOSS on 20 Jan 2015. As will be discussed in §\[obs\_var\], the continuum shows an unusual shape at the blue end of the spectrum (Fig. \[fig13\]). Because this observation was made relatively close in time to the KPNO observation (within 3 months), and since the KPNO spectrum shows no such shape, we suspect that a calibration problem, rather than a real change in continuum shape, is responsible. Note that this should not be the atmospheric differential refraction problem known to plague the BOSS spectrograph [@margala16], as correction for that issue was included in the DR14 pipeline.
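The constant-offset calibration $M = m + {\rm const}$ described above can be sketched numerically; the comparison-star magnitudes below are illustrative round numbers, not the stars actually used:

```python
import numpy as np

# Sketch of the constant-offset calibration M = m + const: the zero
# point is the mean of (2MASS catalog magnitude minus instrumental
# magnitude) over the comparison stars on the frame.  Star magnitudes
# here are illustrative, not the stars actually used.
m_inst = np.array([14.1, 15.3, 13.8])    # instrumental TIFKAM magnitudes
M_2mass = np.array([16.2, 17.4, 15.9])   # 2MASS catalog magnitudes
const = np.mean(M_2mass - m_inst)        # zero-point offset
target_m = 14.1                          # instrumental magnitude of a target
target_M = target_m + const              # calibrated 2MASS-system magnitude
```

In practice the quoted photometric errors combine the frame-to-frame scatter with the propagated 2MASS catalog errors.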
APO Observation {#apoobs} --------------- We obtained spectra on the night of UT 2014 April 12 using the Dual Imaging Spectrograph (DIS)[^5] on the ARC 3.5m telescope at the Apache Point Observatory. This two-channel spectrograph uses a dichroic to simultaneously obtain spectra in a blue and a red channel. On the blue side we used the B400 grating, which delivered spectra from 3402–5564 Å at a dispersion of $1.82$ Å pixel$^{-1}$; the red side used the R300 grating, yielding spectra from 5281–9796Å at 2.66Å pixel$^{-1}$. We observed with a $1\farcs5$ wide slit. The dispersed images in both channels underfill the CCD detectors spatially, so these wavelength ranges were set by determining the regions where the locations of the SDSS J0850$+$4451 spectrum could reliably be traced. The wavelength solution is not reliable for the first and last $\sim 100$ Å of each spectrum because of a lack of arc lamp lines near the edges of each image. The spectral resolution on the blue and red sides was 3.0 and 2.9 pixels FWHM, respectively. Image processing and spectral extraction were performed using the same techniques as for the KPNO spectra. Flux calibration was determined using spectra of Feige 34 obtained close in time to the SDSS J0850$+$4451 spectra. Continuum Modeling {#contmod} ================== Our focus in this paper is on the absorption lines. Therefore, we model the continuum with several components, including the emission lines, and divide by the result before performing the absorption-line modeling. As in Paper I, we first correct the spectrum for Milky Way reddening using $E(B-V)=0.024$ [@sf11], and for the cosmological redshift $0.5422$, estimated from the narrow \[\] line in the SDSS spectrum. The SDSS J0850$+$4451 near-infrared continuum shows the characteristic break between the optical power law originating in the accretion disk and the near-infrared bump due to hot dust.
We used [*Sherpa*]{} [@freeman01] to model the entire continuum spectrum using a power law for the accretion disk continuum and a black body for the thermal dust emission, plus a modest contribution from the Paschen recombination continuum near 8000Å. Both H$\alpha$ and Pa$\beta$ are present in the spectra, and each could be modeled with two Gaussians. H$\alpha$ is somewhat broad and mostly symmetric, while Pa$\beta$ is very broad with a prominent blue wing. Pa$\delta$ was also fit well with the same profile as Pa$\beta$. In the vicinity of the \*$\lambda 10830$ absorption line, the principal emission lines are \*$\lambda 10830$ and Pa$\gamma\, \lambda 10941$, plus some low-level emission longward of Pa$\gamma$, possibly attributed to $\lambda 11290$. We found that we could only obtain a satisfactory fit if \*$\lambda 10830$ had a shape more similar to H$\alpha$, i.e., two Gaussians with FWHM and wavelength tied to the H$\alpha$ parameters but with the flux free to vary, plus a narrower \* component with FWHM $1210\rm \, km\, s^{-1}$. It must be noted that a prominent sky line falls at the wavelength of the putative narrow component, so the properties and necessity of that component are uncertain. The resulting fit is shown in Fig. \[fig1\]. ![image](f1-eps-converted-to.pdf){width="4.5truein"} The combined SDSS and MDM spectrum (Fig. \[fig2\]) shows that SDSS J0850$+$4451 is a broad-line AGN with modest, broadened emission. The absorption lines are easily identified against the continuum. For the blue optical wavelengths, we used the composite spectra developed in @leighly11. For the region around , we used an emission spectrum extracted by us from the [*HST*]{} observation of I Zw 1 [@lm06]. Both were convolved with a Gaussian with a width of $2000\rm \, km\, s^{-1}$.
We modeled the spectrum with the broad , a broken power law, a small amount of Balmer continuum, and broad Gaussians for the emission lines, as well as broad Gaussians for H$\delta$ and $\lambda 4865$. This continuum isolates the \*$\lambda 3889$, \*$\lambda 3188$, and absorption lines (Fig. \[fig2\]). ![image](f2-eps-converted-to.pdf){width="4.5truein"} Partial Covering Absorption in SDSS J0850$+$4451 {#absorption_modeling} ================================================ Summary of Paper I {#recap} ------------------ The goal of the multi-wavelength observations of SDSS J0850$+$4451 was to investigate the nature of partial covering in this object. In Paper I, we described the analysis of the [*HST*]{} COS spectrum of SDSS J0850$+$4451 using our novel spectral synthesis code [*SimBAL*]{}. We briefly review the most relevant aspects of that analysis and the results to set the stage for the partial-covering analysis described in this paper. The [*SimBAL*]{} analysis method uses large grids of ionic column densities extracted from [*Cloudy*]{} [@ferland13] models to create synthetic spectra as a function of velocity, covering fraction, ionization parameter, density, and a combination parameter $\log N_H - \log U$. We use the Markov Chain Monte Carlo code [emcee]{}[^6] [@emcee] to compare the continuum-normalized [*HST*]{} spectrum with the synthetic spectra, using $\chi^2$ as the likelihood estimator. The results of the modeling process are posterior probability distributions of the model parameters, which were used to construct the best-fitting model spectrum and its uncertainties, and to extract best-fitting model parameters and uncertainties. From these, the physical parameters of the outflow, including the total column density, mass outflow rate, momentum flux, and kinetic luminosity, were derived. We developed an innovative method to model the velocity dependence of the outflow parameters.
We divided the trough into a specified number of velocity bins, where each bin was required to have the same width, but the physical parameters of the gas were allowed to vary in each bin. The central velocity of the highest-velocity bin and the bin width were fitted parameters. For SDSS J0850$+$4451, we ran models with 7 to 12 bins in order to investigate the systematic uncertainty associated with the number of bins; we found that the dependence on the number of bins is small. In addition, we considered two models for the continuum that differ somewhat in the modeling of the Ly$\alpha$ and emission line region; see Paper I for details. We considered two [*Cloudy*]{} input spectral energy distributions, a relatively soft one that may be characteristic of quasars [@hamann11], and a hard one that may be more suitable for Seyferts [@korista97]. Finally, we considered two cases for the metallicity, solar and $Z=3 Z_\odot$, both for the soft SED. [ For the enhanced metallicity models, we followed @hamann02: all metals were set to three times their solar value, while nitrogen was set to nine times the solar value, and helium was set to 1.14 times the solar value.]{} As discussed in Paper I, the results were largely independent of these differences in models. A number of results were robust to variations in our models. The trough spans $-6000$ to $-1000 \rm \, km\, s^{-1}$. We found significant structure in $\log N_H-\log U$ as a function of velocity, namely an enhancement in the column density by a factor of three around $-4000\rm \, km\, s^{-1}$. We refer to this velocity-resolved feature as “the concentration”. Both the ionization parameter and the column density were larger at higher speeds. The covering fraction showed a strong decrease with speed. We estimated the bulk properties of the outflow from our results.
The total column density of the outflowing gas $\log N_H$ lay between $22.4$ and $22.9 \rm \, cm^{-2}$, depending on the metallicity ($Z=3 Z_\odot$ and solar, respectively). The density-sensitive line \*$\lambda 1175$ constrained the distance of the outflow from the continuum-emission region to be between 1 and 3 parsecs. [ \*$\lambda 1175$ arises from three fine-structure levels, each of which has its own critical density [e.g., @gabel05 Fig. 5]. While the $J=0$ level is populated at relatively low densities, the $J=1$ level becomes significantly populated toward $\log n=6\rm \, [cm^{-3}]$, increasing the opacity of the transition significantly.]{} Assuming that the whole outflow (i.e., including the velocity bins that were not represented in the \* line) lies at approximately the same distance from the central engine, we found that the mass outflow rate is 17–28 solar masses per year, the momentum flux is approximately equal to $L_{Bol}/c$, and the ratio of the kinetic to bolometric luminosity is 0.8–0.9%. This range is greater than 0.5% [@he10], generally taken to be the lower bound required for a quasar outflow to contribute effectively to quasar feedback in galaxy evolution scenarios. The ability to model the velocity dependence of physical properties, as well as to extract the global outflow properties, illustrates the power of the forward-modeling methodology used by [*SimBAL*]{}. Extrapolation to Longer Wavelengths {#extrapolation} ----------------------------------- In the near-UV and optical spectrum, we observe absorption lines from , \*3188, and \*3889 (Fig. \[fig2\]). In the near-infrared spectrum, we observe \*10830 (Fig. \[fig1\]). We extrapolated the best-fitting models from Paper I to longer wavelengths. As the solutions were largely independent of the number of bins, we chose the 11-bin models from Paper I as representative, and plot the results for the nominal soft SED, the hard SED, and the higher metallicity (with the nominal soft SED) for each continuum model.
The flux-density median model spectra are shown in Fig. \[fig13\]. ![image](f3-eps-converted-to.pdf){width="5.5truein"} This figure shows that the model that fits the UV over-predicts the and \* opacity. Generally speaking, the hard SED produces the worst fit, predicting far more opacity for all three lines than observed. [ This is because a harder SED produces a thicker [e.g., @casebeer06 Fig. 13] and hotter [e.g., @leighly07 Fig. 14] region; \* shows a mild dependence on temperature [@clegg87].]{} The enhanced metallicity model fits the \*$\lambda 3889$ line rather well, but it over-predicts the absorption. All models predict much more absorption at \*$\lambda 10830$ than is observed. At first glance, this result might imply that the covering fraction of the longer wavelength continuum emission region is lower than that of the shorter wavelength continuum emission region. This would allow more continuum emission to reach the observer, producing a shallower line. However, there are two factors that we needed to consider before we could draw this conclusion. First, it turns out that SDSS J0850$+$4451 has demonstrated absorption line variability, and our ground-based optical and near-infrared observations were not simultaneous with the [*HST*]{} observation. We explore the potential effects of variability on our experiment in Appendix \[variability\]. We conclude that variability is unlikely to have caused the difference between the observed line depths and the extrapolated model line depths, although we cannot rule it out absolutely. Second, the \*$\lambda 10830$ line is located near 1 micron, the region of the spectrum where the host galaxy is the brightest. So it is conceivable that the continuum is diluted by the presence of the host galaxy, making the line appear shallower than it is. We explore this possibility in Appendix \[host\]. We conclude that the host galaxy contribution to the continuum under the \*$\lambda 10830$ line is negligible.
Quantifying the Difference in Partial Covering {#quantifying} ---------------------------------------------- Having established that the difference in partial covering implied by the extrapolated best-fitting UV spectrum is not an artifact of variability or host galaxy contamination, we proceeded to investigate it quantitatively. As described in Paper I, we parameterized the partial covering using a power law, where $\tau = \tau_{max} x^{a}$. Here, $\tau$ is the integrated opacity of the line, and $\tau_{max}$ is proportional to $\lambda f_{ik} N({\rm ion})$, where $\lambda$ is the wavelength of the line, $f_{ik}$ is the oscillator strength, $N({\rm ion})$ is the ionic column density [e.g., @ss91], $x \in (0,1)$ represents the fractional surface area, and $a$, or more specifically $\log a$, is the parameter that is modeled. We chose this formalism because we compute the model spectrum line by line, and we require a scheme that is mathematically commutative. The power-law partial-covering model has been explored by @dekool02c [@sabra01; @arav05], and in several cases it has been found to provide a better fit than the step-function partial covering model [@dekool02c; @arav05]. As discussed in Paper I, the power-law covering fraction has the property that the fraction of the continuum covered depends on the opacity of the line, which means that the residual intensity can vary dramatically among lines with different opacity for the same value of $a$. So a particular value of $\log a$ will produce lines that are nearly black for a common ion, and lines that are quite shallow for a rare ion. In addition, as discussed by @sabra01 [e.g., their Fig. 1], a value of $a$ equal to 1 ($\log a=0$) corresponds to 50% coverage (for a line with total opacity equal to 1), while $a$ approaching zero corresponds to full coverage, and high values of $a$ correspond to a small fraction covered.
Thus, the fitting parameter $\log a$ has an inverse behavior: it is smaller for a larger fraction covered, and larger for a smaller fraction covered. See §\[understanding\] for further discussion of inhomogeneous partial covering and the power-law parameterization. To investigate the difference in covering fraction between the UV and the long-wavelength spectrum, we performed a [*SimBAL*]{} analysis of the continuum-normalized spectrum between 2500–4200Å and 9000–11500Å. We made the assumption that, of all the variables required in the [*SimBAL*]{} analysis of the [*HST*]{} COS spectrum, only the covering fraction varies. As discussed in Paper I, the resulting physical parameters of the outflow depend little on the number of bins used to span the line profile, so we present the results for the 11-bin case. The variable parameters in this analysis were the 11 values of $\log a$, i.e., the log of the covering-fraction index as a function of velocity. The results are shown in Fig. \[fig19\]. The left panel shows, for reference, the results from fitting the full model to the UV data from Paper I. The partial covering parameter $\log a$ is plotted as a function of velocity for the 11-bin model for 6 combinations of continuum model, SED, and metallicity. The error bars show the 95% confidence intervals from the posterior distributions obtained for each of the model parameters. The middle panel shows the results for the fits of covering fraction at optical and near-IR wavelengths. The $\log a$ values are clearly shifted to larger values, indicating a lower covering fraction. The median models overlaid on the data are shown in Fig. \[fits\]. While the reduced $\chi^2$ for the extrapolated models shown in Fig. \[fig13\] ranged from 1.6 to 3.3, indicating an unacceptable fit, the reduced $\chi^2$ values for these models are all less than 1, indicating an acceptable fit.
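The qualitative behavior of the power-law covering parameterization (full coverage as $a \to 0$, sparse coverage for large $a$) can be illustrated numerically. This is a sketch under the standard inhomogeneous-absorber picture, in which the residual intensity is the source-averaged transmission with $\tau(x) = \tau_{max} x^a$ [@sabra01; @dekool02c]; the function names are ours, not [*SimBAL*]{}'s:

```python
import numpy as np

def residual_intensity(tau_max, a, n=200_000):
    """Source-averaged transmission for an inhomogeneous absorber with
    tau(x) = tau_max * x**a, where x in (0, 1) is fractional surface area."""
    x = np.linspace(0.0, 1.0, n)
    return float(np.exp(-tau_max * x ** a).mean())

def fraction_covered(tau_ratio, a):
    """Fraction of the source covered at depth tau, tau_ratio = tau/tau_max."""
    return 1.0 - tau_ratio ** (1.0 / a)

# a -> 0 approaches full coverage; large a leaves most of the source bare.
deep = residual_intensity(5.0, 0.01)      # nearly uniform tau ~ tau_max
shallow = residual_intensity(5.0, 100.0)  # only a sliver near x = 1 is opaque
```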
Physically, this result implies that the covering fraction along the line of sight to the optical and near-IR continuum emission region is lower than the covering fraction along the line of sight to the UV continuum emission region. [ We have measured the difference between the covering fractions in the UV and the optical through infrared bands. In principle, there could be a continual decrease in covering fraction as a function of wavelength. We tried to detect a difference in covering fraction among the three bands: UV, optical (i.e., and \*$\lambda 3889$) and the infrared (\*$\lambda 10830$). We were unable to obtain any useful constraints because of limitations of the data; specifically, the lines are rather shallow and the signal-to-noise ratios are moderate. ]{} To quantify the difference between the covering fractions in the UV and at longer wavelengths, we fit a constant model to $\log a$ as a function of velocity for each of the six models. Because $\log a$ varies as a function of velocity, and is not well constrained at low and high velocities where the absorption line is shallow, we limited the fitting to the range between $-4500\rm \, km\, s^{-1}$ and $-1500\rm \, km\, s^{-1}$. Exponentiating the resulting average values of $\log a$ yields six estimates of $a$ for the UV models and six for the long-wavelength models. How do we interpret the differences in $a$ between the UV and the long wavelengths? We want to know how much more of the continuum emission source is covered in the UV compared with near-infrared and optical wavelengths. To determine this, we return to the definition of the power-law covering fraction, $\tau = \tau_{max}x^a$, where $0 < x <1$ represents the fractional surface area, and ask: at a particular value of $\tau/\tau_{max}$, what is the ratio of the fractions covered?
Solving the covering-fraction relation for $x$ yields $x=(\tau/\tau_{max})^{1/a}$, and the fraction covered for a particular value of $\tau/\tau_{max}$ is given by $1-(\tau/\tau_{max})^{1/a}$. So in terms of $x$, we want to determine $$\frac{f_{UV}}{f_{long}}=\frac{1-(\tau/\tau_{max})^{1/a_{UV}}}{1-(\tau/\tau_{max})^{1/a_{long}}}$$ where the “long” subscript refers to the optical through infrared wavelengths. A limiting value is obtained as $\tau \to \tau_{max}$, where the ratio becomes indeterminate; it turns out that the ratio of the fractions covered approaches the ratio of the $a$ values as $\tau$ approaches $\tau_{max}$[^7]. That is, for indices of $a_{UV}$ and $a_{long}$ in the UV and near-infrared, respectively, the ratio of the fractions covered will approach $a_{long}/a_{UV}$. The results are shown in the right panel in Fig. \[fig19\]. ![image](f4-eps-converted-to.pdf){width="6.5truein"} ![image](f5-eps-converted-to.pdf){width="5.5truein"} While the three different models (the nominal SED, the hard SED, and the metals$\times 3$ case) yield covering fractions in both the UV and at longer wavelengths that follow essentially the same shape as a function of velocity (middle panel in Fig. \[fig19\]), the normalizations for the different models are slightly different. Specifically, the hard SED model yields a consistently larger covering-fraction index parameter, indicating a lower covering fraction. This is because the hard SED produces a thicker Strömgren sphere [e.g., @casebeer06 Fig. 13] and a hotter region [e.g., @leighly07 Fig. 14], and given that the fraction of neutral helium in the metastable state increases with temperature [@clegg87], more \* is predicted per metal ion for the hard SED. The near-infrared spectrum has a better signal-to-noise ratio than the optical spectrum, and the \*$\lambda 10830$ line is deep compared with \*$\lambda 3889$ or , so the \*$\lambda 10830$ line drives the fit.
Therefore, it is no surprise that the near-infrared covering fraction obtained from the hard SED simulations is lower than the others. As discussed in Paper I, the hard SED produces the least satisfactory fit to the [*HST*]{} COS spectrum. Therefore, we reject the relatively high covering-fraction ratio derived from the hard SED, and take 2.5 as the representative value of the ratio of the fraction of the UV continuum covered to the fraction of the optical through near-infrared continuum covered. ### Spatial Non-Uniformity of the Physical Conditions of the Gas {#nonuniform} We have assumed that the only difference between the UV absorption lines and the optical/infrared absorption lines is the covering fraction. But because the infrared continuum emission region is so much larger than the UV continuum emission region (we estimate the area ratio to be $A_{10700}/A_{1100} = 140$ in §\[size\_scales\]), it is possible that the physical conditions of the gas are also different. The extrapolation of the UV solution to the optical and infrared absorption lines shown in Fig. \[fig13\] reveals that the general shape is similar, and therefore the physical conditions are probably not dramatically different. In particular, the “mitten” shape of the \*$\lambda 10830$ line is reproduced in the extrapolated solution. However, the “thumb” of the mitten, originating in absorption near $\sim -2000\rm \, km\, s^{-1}$, is longer in the extrapolated solution than in the data, suggesting that on average, the outflowing gas with velocity near $-2000\rm\, km\, s^{-1}$ covering the infrared continuum emitting region has somewhat higher opacity than that covering the UV continuum emission region. We attempted to quantify the possible difference in physical conditions by fitting the optical and infrared $I/I_0$ spectrum with a model in which the ionization parameter, $\log N_H-\log U$, and covering fraction parameter $\log a$ were allowed to vary.
There are no density diagnostic lines in that region of the spectrum, so we froze those parameters at the best-fitting values from the UV model. We also froze the velocity offset and velocity width of the bins. Not surprisingly, the results are not very conclusive because there is not enough information among the and \* lines to constrain the physical conditions. The ionization parameter is particularly poorly constrained. The $\log N_H-\log U$ is consistent with the UV solution within the concentration (between $-4400$ and $-3200\rm \, km\, s^{-1}$). At lower velocities, the $\log N_H-\log U$ is higher and the covering fraction parameter $\log a$ is larger (lower covering fraction) in the long wavelength solution compared with the UV solution, but with so few lines to constrain the solution, it is clear that these parameters are highly covariant. Despite our failure to constrain the physical conditions at long wavelengths, the similarities and differences between the extrapolated UV solution and the observed long wavelength absorption lines suggest intriguing constraints on the spatial uniformity of the absorbing gas. What About the Broad-line Region? {#blr} --------------------------------- We conclude that the absorber in SDSS J0850$+$4451 presents a larger covering fraction to the UV emission region compared with the near-infrared continuum emission region, indicating the presence of structure in the absorbing outflow. Size scales are discussed in detail in §\[size\_scales\], but the broad line region is expected to be located at a radius comparable to or larger than that of the near-infrared-emitting accretion disk. The [*HST*]{} spectrum and continuum models, reproduced from Paper I, are shown for reference in Fig. \[hst\_spectrum\]. The rest wavelengths of prominent emission lines are marked. The onset of the outflow is at low enough velocity, and the lines are deep enough, that it is clear that the broad line region is substantially absorbed.
Comparison of this figure with Fig. \[fig13\] or Fig. \[fits\] shows that the near-UV, optical and near-infrared absorption lines are not as deep as the UV absorption lines (e.g., ), giving the impression that the broad line region is fully absorbed, i.e., has a higher covering fraction than the near-infrared continuum emission region, a result that does not make sense considering the relative expected size scales (see §\[size\_scales\]). ![image](f6-eps-converted-to.pdf){width="6.5truein"} This impression is mistaken, due to the nature of the power-law covering fraction parameterization. As discussed in Paper I, in the power-law covering fraction parameterization, the fraction of the source covered, or alternatively, the residual intensity, depends on the total opacity of the line [ [see also @arav05]]{}. The prominent UV lines, including , , and , have relatively high opacities, since the ions that produce these lines are very abundant in the region of the ionized slab. The ions producing \*$\lambda 10830$ and \*$\lambda 3889$, which are also found in the region, are rarer, since they come from metastable helium [see @leighly11 for a discussion]. is a low-ionization line, and only starts to become commonplace as the hydrogen ionization front is approached [e.g., Fig. 10 in @lucy14], so it also has relatively low opacity in SDSS 0850$+$4451 since its LoBAL classification means that the hydrogen ionization front is not present in the outflow (i.e., versus FeLoBALs, where the hydrogen ionization front is expected to be present). Therefore, is also expected not to be a very optically thick line. Thus, it is possible that the broad-line region has a lower covering fraction than the UV continuum, even though casual examination of the spectrum suggests otherwise. We discuss inhomogeneous partial covering and the power-law covering fraction parameterization further in §\[understanding\].
![image](f7-eps-converted-to.pdf){width="6.5truein"} We test this scenario by fitting all of the spectra: the [*HST*]{} COS spectrum analyzed in Paper I that samples the UV band, the combined SDSS and MDM spectra described in §\[mdmobs\] and §\[sdss\] (sampling the near-UV and optical, between 2500Å and 4000Å), and the combined LBT and Gemini spectra described in §\[gemobs\] and §\[lbt\] (sampling the near-IR, between 9000Å and 11500Å). Although we now have developed a method to fit the continuum and line emission simultaneously with the absorption (Leighly et al., in preparation), for direct comparison with Paper I, we separate the line and continuum contributions to our continuum models and fit with the normalizations of these components fixed. As shown in Paper I, there is little dependence on the number of bins used to span the troughs, so the 11-bin model was chosen as representative. Three sets of 11 parameters modeled the covering fractions of the UV, the long wavelengths, and the broad line region, respectively. The UV continuum covering fraction was modeled using $\log a$ as in Paper I. The long wavelength continuum was modeled using $\Delta \log a_{long}$, and a prior was used to constrain these parameters to be greater than zero, i.e., making the physically reasonable assumption that the covering fraction of the longer wavelength continuum is lower than the covering fraction of the UV continuum (as shown in §\[quantifying\]), and keeping in mind that a larger value of $a$ corresponds to a smaller covering fraction. To be specific, the covering fraction parameter in a particular velocity bin applied to the long wavelength continuum was $\log a + \Delta \log a_{long}$, where $\log a$ is the value applied to the same velocity bin in the UV, and $\Delta \log a_{long}$ is the model parameter. 
Finally, the broad lines were modeled with an additional $\Delta \log a_{lines}$, thereby making the physically reasonable assumption that the fraction covered is at least as small as that of the long wavelength continuum. Thus, the covering fraction applied to the line emission was $\log a +\Delta\log a_{long}+\Delta\log a_{lines}$. Overall, the fits are good despite the increase in bandpass. The reduced $\chi^2$ values, computed over the points where the median model experienced opacity (see Paper I; this modified $\chi^2$ is used because the continuum is not allowed to vary), are, for the first and second continuum models, respectively: 1.41 and 1.59 for solar metallicity and soft SED, 1.55 and 1.54 for the hard SED, and 1.15 and 1.18 for the soft SED and $\times 3$ metallicity. The values for the solar metallicity and hard SED are larger than the ones obtained for the UV-only models of Paper I [see @leighly18 Fig. 5], but are comparable for the enhanced metallicity model, indicating that the $\times 3$ metallicity and soft SED model is preferred. Despite the additional constraints imposed by the inclusion of the long-wavelength spectra, the fit in the UV band is still good (Fig. \[uv\_fit\]). ![image](f8-eps-converted-to.pdf){width="5.5truein"} Fig. \[long\_wavelengths\] shows the results for the near-UV to near-IR spectra. Here, the spectra have been normalized by the continuum model to facilitate comparison with the extrapolation analysis presented in §\[quantifying\]. Comparison with Fig. \[fits\] shows that the fits are good and overall very similar to one another, although small differences are found from line to line. We conclude that the model presented in this section describes the full bandpass well. The covering fraction results are shown in Fig. \[fig9\]. The left-most panel shows the results for fitting the UV continuum and lines together from Paper I. The results from the new model presented in this paper are shown in the right three panels.
The second-from-the-left panel shows the covering fraction for the UV continuum alone. The covering fraction index is somewhat smaller than the Paper I result, indicating a somewhat larger covering fraction for the UV continuum than found in Paper I. This is especially true around $-2000\rm \, km\, s^{-1}$, where the line emission is prominent. The second-from-the-right panel shows the $\Delta \log a$ for the near-UV, optical, and near-IR wavelengths. The difference is particularly strong and robust near $-4000\rm \, km\, s^{-1}$, the location of the enhanced region of $\log N_H -\log U$ referred to in Paper I as “the concentration”. This result makes sense, since the ions that produce the long-wavelength lines are found deeper in the photoionized slab and are therefore most prominent in the velocities defined by the concentration. They are also coincident with the \*$\lambda 1175$ feature discussed in Paper I, e.g., Fig. 6. The $\Delta \log a$ value is close to 0.4, the value obtained in §\[quantifying\]. The right-hand panel shows the $\Delta \log a_{lines}$ for the emission-line spectrum. These are, for the most part, consistent with $\Delta \log a_{lines}$ equal to zero. This can be interpreted as evidence that the broad line emission has the same covering fraction as the long wavelength continuum emission region. However, we note that in this model each velocity bin is fit by 3 covering fraction parameters. It seems reasonable to suspect that the data are over-fit, i.e., there are potentially too many covering-fraction degrees of freedom in each velocity bin, resulting in covariance among model parameters. ![image](f9-eps-converted-to.pdf){width="6.5truein"} Allowing the covering fractions for the UV continuum, the long wavelength continuum, and the broad-line region continuum to vary independently causes the solution to shift compared with the UV-only models presented in Paper I. 
We find that these shifts are minor and the physical parameters describing the outflow are nearly the same. Fig. \[fig10\] shows the outflowing-gas physical parameters as a function of velocity; the results for the UV-only model fits from Paper I are reproduced for comparison. The results for the fitted ionization parameter $\log U$, the column density parameter $\log N_H-\log U$, and the derived parameter $\log N_H$ are roughly consistent between the two models, with small changes at low velocities where the broad emission lines dominate. The density $\log n$ appears to be much different for velocities higher and lower than that of the concentration (centered near $-4000 \rm \, km\, s^{-1}$), but as discussed in Paper I, there are no density-dependent lines at those velocities and the density is unconstrained. ![image](plot_results_covfrac_corrected-eps-converted-to.pdf){width="5.0truein"} ![image](plot_derived_two_cont_lines_fix_covfrac-eps-converted-to.pdf){width="2.5truein"} Finally, we show the results of the derived parameters including the total column density, the radius of the outflow, the mass outflow rate, the momentum flux, and the ratio of kinetic to bolometric luminosity in Fig. \[derived\]. For comparison, we show the results from Paper I as well. In §\[size\_scales\] we show that the infrared continuum emission region is 140 times larger than the UV continuum emission region. Therefore, the appropriate covering fraction to use for the UV–to–near-IR model is the one for the largest size scale, i.e., $\log a_{long}$ or $\log a_{lines}$, since the properties relevant for the larger area should dominate the outflow, at least as far as we can tell from the information we have. 
Therefore, we use $\log a_{lines}$ to weight the column densities, resulting in a reduction in the estimated total column density for the enhanced-metallicity models by a factor of $\sim 1.6$ compared with the results of Paper I, to $\log N_H=22.19^{+0.058}_{-0.056}$ and $22.18^{+0.045}_{-0.043}\rm \, [cm^{-2}]$ for the first and second continuum models, respectively (1-sigma errors). The factor of 1.6 is lower than the ratio of the two covering fractions, which was estimated in §\[quantifying\] to be 2.5. The difference arises because the value of 2.5 was extracted from the well-sampled data in the center of the velocity profile, from $-4500$ to $-1500\rm \, km\, s^{-1}$, while the column density was obtained from the whole profile. If we extract the column density from that range of velocities only, the difference is a factor of 2.5, as expected. Other parameters shift due to the reduction in column density and small shifts in the best fit. Specifically, the radius of the outflow is found to be $\log R=0.47^{+0.03}_{-0.04}$ and $0.34\pm 0.04 \rm \, [pc]$, the mass outflow rate is $\log \dot M= 1.07 \pm 0.07$ and $0.88^{+0.05}_{-0.06} \rm \, [M_{\odot}\, yr^{-1}]$, and the log of the ratio of the kinetic to bolometric luminosity is $-2.20\pm 0.09$ and $-2.41\pm 0.07$, for the first and second continuum enhanced-metallicity models, respectively. Notably, the kinetic luminosity for the enhanced metallicity models decreases to 0.39–0.63% for the second and first continuum models. This range straddles the 0.5% value taken to be a conservative cutoff for effective galaxy feedback [@he10]. Therefore, SDSS J0850$+$4451 does not appear to be undergoing strong feedback from the BAL outflow. To summarize, we have shown that there exists in SDSS J0850$+$4451 a hierarchy of partial covering.
The spectra are consistent with a model in which the covering fraction parameter $\log a$ to the optical and near-IR continuum is about 0.4 higher than to the UV continuum (i.e., consistent with the analysis presented in §\[fits\], and implying a covering fraction that is lower by a factor of about 2.5). The covering fraction to the broad line region is mostly consistent with that of the long-wavelength continuum, and therefore the broad line region has a lower covering fraction than the UV continuum. In addition, while in Paper I we found only mild support for the preference for high metallicity, the support is much stronger here, given that the reduced $\chi^2$ values evaluated over the non-zero opacity portions of the spectra are larger than 1.2 for the solar metallicity and hard SED models, and only the models with $Z=3Z_\odot$ are acceptable. [ Finally, taking into account the lower covering fraction over the larger area results in a reduction in the total column density and other outflow parameters, including the kinetic luminosity. ]{} Understanding the Power-law Partial Covering Parameterization of Inhomogeneous Partial Covering {#understanding} =============================================================================================== The traditional form of partial covering, wherein a fraction of the emission region is covered uniformly by the absorber and the remainder is not covered, is easy to understand intuitively: one need only imagine an eclipse. Inhomogeneous partial covering is much less intuitive. Because partial covering seems to be extremely important in shaping the spectrum of SDSS 0850$+$4451, as well as other objects modeled using [*SimBAL*]{}, we explore the nature of partial covering in this section.
Four factors must be considered in order to understand how absorption lines are shaped: the concept of inhomogeneous partial covering itself, the mapping of the output of the photoionization models (ionic column densities) to the power-law parameterization, the opacity of the particular line, and the relative brightness of the background source. We will explore each of these in turn. Note that substantial previous discussions of inhomogeneous partial covering are given in @dekool02c [@arav05; @sabra05]. Inhomogeneous Partial Covering and the Power Law Parameterization ----------------------------------------------------------------- The concept of inhomogeneous partial covering can be illustrated using a toy model [e.g., @dekool02c]. Fig. \[toy\] shows linear gray-scale images for two examples of distributions of “clouds”. Each cloud was constructed with opacity in the center of the two-dimensional cloud projection set to $\tau_0=1$. The left image illustrates the case where there are many clouds (500) and each cloud has a steep radial opacity profile ($r^{-1.5}$). The right image illustrates the case where there are fewer clouds (150) and each cloud has a flat opacity profile ($r^{-0.5}$). The distribution of optical depths is given in the right panel. As might be expected, many clouds with a steep opacity profile yield low opacity across a large fraction of the continuum source, and a small fraction of the continuum is covered by a high opacity. In contrast, few clouds with flat opacity profiles yield significant opacity across a large fraction of the continuum source. ![image](plot_inhomogeneous_toy-eps-converted-to.pdf){width="6.5truein"} The toy model is useful for illustrating the concept of partial covering, but given that we do not know anything about the “clouds” except their approximate size (§\[partial\_covering\]), we use a power-law parameterization for fitting. 
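Before moving to the power-law form, the toy model just described can be sketched numerically. The grid size, cloud radius, and the exact radial form (a truncated power law, capped at $\tau_0=1$ inside a small core) are our assumptions for illustration; only the cloud counts (500 vs. 150) and the profile indices ($r^{-1.5}$ vs. $r^{-0.5}$) come from the text.

```python
import numpy as np

def cloud_opacity_map(n_clouds, alpha, grid=256, r_cloud=10.0, r_core=1.0,
                      tau0=1.0, seed=0):
    """Sum the opacities of randomly placed circular clouds on a grid.

    Each cloud has central opacity tau0 and an assumed radial profile
    tau(r) = tau0 * (r / r_core)**(-alpha), capped at tau0 inside r_core
    and truncated at r_cloud.
    """
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:grid, 0:grid]
    tau = np.zeros((grid, grid))
    for cx, cy in rng.uniform(0, grid, size=(n_clouds, 2)):
        r = np.hypot(xx - cx, yy - cy)
        profile = tau0 * (np.maximum(r, r_core) / r_core) ** (-alpha)
        tau += np.where(r <= r_cloud, profile, 0.0)
    return tau

# Many clouds with a steep profile vs. fewer clouds with a flat profile.
steep = cloud_opacity_map(500, 1.5)   # analogous to the left image
flat = cloud_opacity_map(150, 0.5)    # analogous to the right image
```

With these assumptions, the steep-profile map carries low opacity over most of the source, with high opacity confined to the small cloud cores, while the flat-profile map carries significant opacity over a larger fraction of its covered area, qualitatively matching the behavior described above.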
The power law parameterization is given by $\tau=\tau_{max} x^a$ where $x$ is the fractional surface area, as above, and $a$ is the fit index. Fig. \[indices\] illustrates the opacity for different values of $a$. A small value of $a$ corresponds to relatively high opacity over a large fraction of the continuum emission region. A large value of $a$ corresponds to a low opacity over most of the continuum emission region, and a high opacity over a small fraction. ### [*Cloudy*]{} and the Power Law Parameterization As discussed by @sabra05, the power-law opacity profile $\tau(x)=\tau_{max} x^a$ yields the following residual intensity equation: $$I(\lambda)=\frac{1}{a}\frac{1}{\tau_{max}^{1/a}}\Gamma(1/a)P(1/a,\tau_{max}),$$ where $\Gamma$ is the complete Gamma function and $P$ is the regularized lower incomplete Gamma function. This is the equation that is used in [ *SimBAL*]{}. [*Cloudy*]{} computes photoionization equilibrium in a slab of gas; there is no provision in the software for partial covering. How the ionic column densities produced by the [*Cloudy*]{} simulations map to the power-law opacity profile is a matter of interpretation. There are at least two possibilities: the opacity of an ion calculated using [*Cloudy*]{} corresponds to the [*average*]{} opacity across the continuum emission region (i.e., $N_{ion} \Rightarrow \bar\tau$, where $\bar\tau=\int_0^1 \tau_{max} x^a dx = \tau_{max}/(1+a)$), or the opacity of the ion maps to the maximum opacity (i.e., $N_{ion} \Rightarrow \tau_{max}$). These two methods produce indistinguishable results when the covering fraction is high ($a$ is low), but lead to somewhat different interpretations of partial covering, somewhat different implementations in [*SimBAL*]{}, and different line profile behaviors, as we discuss below. For the $N_{ion} \Rightarrow \bar\tau$ case, we must first obtain $\tau_{max}$ using $\tau_{max}=\bar{\tau}(1+a)$.
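As a sketch (not [*SimBAL*]{}'s actual implementation), the residual-intensity formula can be checked against direct numerical integration of $I=\int_0^1 e^{-\tau_{max}x^a}\,dx$; here `scipy.special.gammainc` plays the role of the regularized lower incomplete Gamma function $P$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc

def residual_intensity(tau_max, a):
    """Transmitted fraction for the power-law opacity tau(x) = tau_max * x**a:
    I = (1/a) * tau_max**(-1/a) * Gamma(1/a) * P(1/a, tau_max),
    where scipy's gammainc is the regularized lower incomplete Gamma P."""
    s = 1.0 / a
    return s * tau_max ** (-s) * gamma(s) * gammainc(s, tau_max)

# Check against direct integration of exp(-tau_max * x**a) over x in [0, 1].
for tau_max, a in [(1.0, 1.0), (5.0, 2.0), (10.0, 0.5)]:
    numeric, _ = quad(lambda x: np.exp(-tau_max * x ** a), 0.0, 1.0)
    assert abs(residual_intensity(tau_max, a) - numeric) < 1e-6

# The average opacity of the same profile is tau_bar = tau_max / (1 + a),
# which is why mapping an average opacity onto tau_max multiplies by (1 + a).
```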
Thus, the opacity of an ion computed by [*Cloudy*]{} is multiplied by $1+a$ [*before*]{} the spectrum is computed in [*SimBAL*]{}. For the $N_{ion} \Rightarrow \tau_{max}$ case, the opacity computed by [*Cloudy*]{} is used directly as $\tau_{max}$ by [*SimBAL*]{} to compute the spectrum, and the fitted column density is then corrected for the portion that is not covered by dividing by $1+a$ [*after*]{} the [*SimBAL*]{} computation (referred to as the covering-fraction-weighted column density here and in Paper I). There is no difference when $a$ is small, simply because $\bar{\tau}$ approaches $\tau_{max}$. But when $a$ is large, $\bar{\tau}$ is much less than $\tau_{max}$. This fact is illustrated in Fig. \[indices\], where the run of opacity as a function of fractional surface area is shown by the solid lines for a range of $a$ values, and the average opacity is shown by the dashed lines. For large values of $a$, the average value is much less than the maximum value. ![image](plotting_indices_3-eps-converted-to.pdf){width="4.0truein"} If the proportions of ions were uniform as a function of column density of the [*Cloudy*]{} slab, it might seem that there would be no difference between the two interpretations: either the average opacity is scaled up by $1+a$ before the spectrum is constructed, or the inferred column density is corrected by dividing by $1+a$ after the spectrum is constructed. This proportionality of the ionic populations is the assumption implicitly made by the $N_{ion} \Rightarrow \bar\tau$ method, since it assumes that the optically thickest part of the inhomogeneous partial covering is adequately modeled by $\tau_{max}=(1+a)\bar\tau$. However, it is readily apparent that the ionic column densities do not increase in proportion with the hydrogen column density [e.g., @hamann02 their Fig. 1]. As ionizing photons are removed from the photoionizing continuum by transmission through the gas, the proportions of different types of ions change.
This is especially true when approaching the hydrogen ionization front where low-ionization ions such as Mg$^+$ start to become common. These low-ionization lines can be very important in constraining the column density. Indeed, in SDSS J0850$+$4451, it is the \* that constrains the $\log N_H-\log U$ of the simulation. For large $a$, it is more important to model the ionic proportions in the high-column density centers of the “clouds,” which is done by the $N_{ion} \Rightarrow \tau_{max}$ method, but not the $N_{ion} \Rightarrow \bar\tau$ method. Further tests show subtle but significant differences in behavior that lead us to reject the $N_{ion} \Rightarrow \bar\tau$ interpretation. We created a mock line list to test the differences between the two methods. The mock line list includes a strong line, a weak line, and a blend of four weak lines (Fig. \[investigate\]). The weak lines all have the same line strength (i.e., same $\lambda f_{ik} N_{ion}$), and the strong line is a factor of 20 larger. Thus, the total opacity of the blend is 5 times smaller than that of the strong line. The left panel shows the synthetic line profiles for a range of $\log a$ values for the $N_{ion} \Rightarrow \bar\tau$ method (top panel) and the $N_{ion} \Rightarrow \tau_{max}$ method (bottom panel). The right panel shows the depth of each feature as a function of $\log a$. As expected, the depths of all features decrease as $\log a$ increases. The difference between the two methods is seen in the relative decrease of the features. For the $N_{ion} \Rightarrow \tau_{max}$ method, the depths of the lines decrease together, maintaining the order of the total opacity. That is, the strong line is always deeper than the blend, which is always deeper than the weak line. This makes sense, because the total opacity of the strong line is 5 times that of the blend, which is in turn 4 times that of the weak line.
However, for the $N_{ion} \Rightarrow \bar\tau$ case and $\log a > 0.7$, the depth of the blend is larger than the depth of the strong line. This is unphysical, since the total opacity of the blend is smaller than the opacity of the strong line. This result occurs because, as mentioned above, in this method, opacities from [*Cloudy*]{} are multiplied by $1+a$ to obtain $\tau_{max}$ before the spectrum is made, and the $1+a$ factor dominates over the actual opacity of the lines for sufficiently high $a$. The same result is obtained if line equivalent width is measured instead of line depth. This problem is most noticeable when modeling overlapping-trough FeLoBALs, where a large $\log a$ means that blends of iron multiplets that are predicted to have low opacity still produce significant optical depth due to the dominance of the $1+a$ factor. ![image](covfrac_investigate_10_3_sq_1-eps-converted-to.pdf){width="5.0truein"} [*SimBAL*]{} uses the second method, i.e., $N_{ion} \Rightarrow \tau_{max}$. We have run a few tests using $N_{ion} \Rightarrow \bar\tau$ on SDSS J0850$+$4451, and we obtain commensurate total column densities (so the derived parameters do not change significantly), but slightly lower log likelihoods (worse fits). This preference for the $N_{ion} \Rightarrow \tau_{max}$ method makes sense for SDSS J0850$+$4451, as the high opacity cores of the clouds that yield sufficient opacity in weak lines such as \* strongly constrain the $\log N_H -\log U$ best fit. But given the unphysical results produced by the $N_{ion} \Rightarrow \bar\tau$ method for blended lines as discussed above, we see no reason to investigate this method further. The Effect of the Total Optical Depth of a Line ----------------------------------------------- In a slab of ionized gas, the column densities of different ions can be dramatically different. The opacity to can be very large, since this ion is abundant and the transition is easily excited. 
The opacity to other ions can be very low. In the case of , the opacity is low because phosphorus has low elemental abundance compared with carbon. Other ions may have low opacity because they are found at the very back of the matter-bounded slab; for example, for SDSS J0850$+$4451, and \* fall into this category. Finally, other ions may have low opacity because they have low oscillator strengths. An example of this category is \*$\lambda 3889$, which has $f_{ik}=0.064$. Many of the lithium-like ions have oscillator strengths that are much higher; e.g., has $f_{ik}=0.608$ and $0.303$ for its doublet lines. These different total optical depths combine with inhomogeneous partial covering to yield different [*effective*]{} covering fractions for different ions. Physically, we can interpret the dependence of effective covering fraction on line opacity in the power-law partial covering parameterization if we imagine an inhomogeneous absorber distributed over the emission region, which is resolved from the point of view of the absorber. For this thought experiment, we do not need to specify the physical form of the inhomogeneity. C$^{+3}$ is a very common ion in photoionized gas, and so it is probable that any line of sight through the inhomogeneous absorber would encounter an optically thick column of . Thus the covering fraction to would be close to 100%. In contrast, P$^{+4}$ is rare in photoionized gas, due to its low abundance, and only a few lines of sight through thicker clumps would encounter sufficient P$^{+4}$ to produce significant absorption. So the effective covering fraction of would be smaller. The same would hold true for other ions that are rare. We illustrate this behavior by plotting the opacity $\tau(v)$ as a function of fractional surface area for and in Fig. \[opacities\]. We show the results for the [ *SimBAL*]{} fit solutions shown in Fig. 
\[uv\_fit\] for two bins corresponding to offset velocities $-4000 \rm \, km\, s^{-1}$ (i.e., in the concentration) and $-2000\rm \, km\, s^{-1}$ (on the flank of the broad emission lines). We plot $\tau(v)$ because the opacity from the [*Cloudy*]{} simulation is distributed evenly across a velocity bin in the tophat opacity model; here we use $\Delta v=511\rm \, km\, s^{-1}$, the value obtained as the best fit. @ss91 relate $\tau(v)$ to the ionic column density $N_{ion}(v)$ through $\tau(v)=2.654\times 10^{-15} f\lambda N_{ion}(v)$ where $f$ is the oscillator strength of the transition, $\lambda$ is the wavelength of the line in Angstroms, and $N_{ion}(v)$ is in $\rm atoms\, cm^{-2} (km \, s^{-1})^{-1}$. The results are seen in Fig. \[opacities\]. @arav05 suggest that the fraction of the surface area with opacity greater than 0.5 provides a good fiducial number for the [*effective*]{} covered fraction. At $-4000 \rm \, km\, s^{-1}$ (solid colored lines), the UV continuum covering fraction parameter is $\log a \sim 0.4$, and the effective covered fraction is 100%. At $-2000 \rm \, km\, s^{-1}$ (dashed colored lines), the UV continuum covering fraction parameter is still $\log a \sim 0.4$, but the long wavelength and broad-line-region covering fractions are $\log a \sim 0.8$. The figure indicates that at $-2000 \rm \, km\, s^{-1}$, the effective covering fraction of the emission line region is only about 60%. However, the absorption line appears much deeper in the spectrum because the continuum is effectively completely covered, and the wing of the line makes up only 25% of the total flux at $-2000\rm\, km\, s^{-1}$. ![image](plot_opacities-eps-converted-to.pdf){width="5.5truein"} The right panel of Fig. \[opacities\] shows the results for . Because $P^{+4}$ is a rare ion, the opacity for is smaller than that of . At $-4000\rm \, km\, s^{-1}$, the effective covering fraction is about 80%; is a shallower line than . The same would be true for other rare ions.
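The @ss91 conversion quoted above is straightforward to apply. In the sketch below, the oscillator strength and wavelength are the $\lambda 3889$ values quoted in the previous subsection ($f_{ik}=0.064$); the column density per unit velocity is invented purely for illustration.

```python
def tau_v(f_ik, wavelength_angstrom, n_ion_v):
    """Savage & Sembach (1991) opacity relation:
    tau(v) = 2.654e-15 * f * lambda[Angstroms] * N_ion(v),
    with N_ion(v) in atoms cm^-2 (km/s)^-1."""
    return 2.654e-15 * f_ik * wavelength_angstrom * n_ion_v

# f_ik = 0.064 at 3889 Angstroms (values quoted in the text); the column
# density per unit velocity below is an arbitrary illustrative number.
tau = tau_v(0.064, 3889.0, 1.0e13)   # ~6.6, an optically thick velocity pixel
```

Because $\tau(v)$ is linear in $f\lambda N_{ion}(v)$, ions with large $f\lambda N$ products saturate while rare ions such as P$^{+4}$ stay optically thin, which is what drives the different effective covering fractions.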
At $-2000\rm \, km\, s^{-1}$, the opacity is lower than the fiducial minimum, and no line is observed. This is expected; the solution found by [*SimBAL*]{} yields a lower $\log N_H - \log U$ at $-2000 \rm \, km\, s^{-1}$ compared with $-4000 \rm \, km\, s^{-1}$; the gas is not optically thick enough to produce significant . So the line is observed to be narrower than the line. Finally, we display the effective covering fractions for several lines in Fig. \[effective\]. In this figure we used the MCMC results and computed the effective covering fraction for each of five absorption lines using the $\tau=0.5$ criterion proposed by @arav05, and the appropriate $\log a$ value: the UV continuum $\log a$ for , , and continuum, the long-wavelength $\log a$ for and \*, and the broad-line region $\log a$ for BLR. This plot shows that although the BLR has a higher $\log a$ (lower covering fraction) than the UV continuum or the long wavelengths, the effective covering fraction for the BLR is larger than that of , or \* due to the much greater opacity of . ![image](plot_effective_cov_fracs_both-eps-converted-to.pdf){width="2.5truein"} The Effect of the Brightness of the Background Source ----------------------------------------------------- The rest-UV quasar spectrum is composed of the continuum emission, presumably from an accretion disk, and emission lines. Depending on the object, most of the lines have moderate equivalent widths, with the exception of Ly$\alpha$, which can be very strong. $\lambda 1238$ and Ly$\alpha$ are separated by $\sim 5500 \rm \, km\, s^{-1}$, so for outflows with velocities much larger than this value, the line will have the Ly$\alpha$ emission lines as well as the accretion disk continuum as a background source. Thus, the line can be filled in by Ly$\alpha$, or Ly$\alpha$ can appear as a spike in the trough, simply as a consequence of the large intensity of the Ly$\alpha$ line. 
Different covering fractions for the continuum and emission lines, as might be expected for a relatively compact outflow, also contribute to Ly$\alpha$ leakage. An example of these phenomena is seen in a composite spectrum of strong quasars [@capellupo17 their Fig. 9]. This composite spectrum shows deep absorption in , , Ly$\alpha$, , and , but the absorption is quite shallow. Taken at face value, this result might suggest that the absorber is characterized by low ionization, given that is a high-ionization line; however, the presence of strong and especially , which is known to indicate a high ionization parameter [@leighly09], refutes that idea. [*SimBAL*]{} modeling of individual objects explicitly demonstrates that the absorption line can be diluted by a strong Ly$\alpha$ emission line [@leighly_aas19 also Hazlett et al. in prep]. In summary, inhomogeneous partial covering, modeled here using the power-law parameterization, produces a range of covering-fraction phenomenology that depends not only on the covering fraction parameter but also on the brightness of the background source and on the abundance of the ions, which in turn depends on the physical conditions in the gas that are solved for using [ *SimBAL*]{}. Discussion ========== Size Scales in SDSS J0850$+$4451 {#size_scales} -------------------------------- We have demonstrated that the fraction of the continuum emission region covered is about 2.5 times smaller in the near-infrared compared with the UV in SDSS J0850$+$4451. We have also found that the fraction of the broad line region covered is consistent with the fraction of the near-infrared continuum emission region covered. To understand the implications of these results for the structure of the broad absorption line outflow, we first examine the size scales of the continuum emission region, the broad line region, and the torus, and compare those with the location of the absorber, established in Paper I to be 1–3 parsecs from the central engine.
We used a simple sum-of-black-bodies accretion disk model [@fkr02] to estimate the sizes of the continuum emission regions. The black hole mass was shown to be $1.6\times 10^9\rm \, M_\odot$ in Paper I. The log bolometric luminosity was estimated by @luo13 to be $46.1 \rm \, [erg\, s^{-1}]$. We assumed a standard accretion efficiency of $\eta =0.1$. Using these values, we estimated an accretion rate of $\dot M_{acc}=2.2 \rm \, M_\odot \, yr^{-1}$. [ This was reported to be smaller than the outflow rate from the wind by a factor of $\sim 8$ by @leighly18, but that value is revised to a factor of $\sim 4$ from the results presented in this paper as a consequence of the lower covering fraction over the larger region (§\[blr\]). ]{} We first estimated the continuum emission region sizes (radii) for four wavelengths: 1100Å and 1600Å (values that span the [*HST*]{} spectrum), 2770Å (corresponding to the absorption line), and 10700Å (corresponding to the \* line). We used the Wien displacement law and the $T(R)=(3GM\dot M/8\pi \sigma R^3)^{1/4}$ run of temperature for a sum-of-blackbodies accretion disk to estimate the radii at which the disk temperature corresponds to a Planck function peaking at these wavelengths. These radii are 0.0016, 0.0027, 0.0056, and 0.034 pc, respectively. We find that the radius increases with wavelength as a power law with an index of $4/3$, as expected for a sum-of-blackbodies accretion disk. Thus, the radius of the continuum emission region absorbed by is 21 (441) times smaller than the radius (area) of the continuum emission region absorbed by \*$\lambda 10830$. This analysis ignores the fact that blackbodies at other temperatures will contribute to the flux at any given wavelength, and that the emission at a given wavelength in a disk needs to be weighted by the radius [e.g., Fig. 5.7, @fkr02]. Fig. \[accretion\_disk\] shows the radius-weighted flux density at the four wavelengths.
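For reference, the Wien-displacement radius estimates above can be reproduced with a few lines of Python. This is only a sketch of the sum-of-blackbodies scaling, using the black-hole mass and accretion rate quoted above, not the radius-weighted calculation described next.

```python
import numpy as np

# Physical constants (SI units).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
SIGMA_SB = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
B_WIEN = 2.898e-3      # Wien displacement constant, m K
M_SUN = 1.989e30       # kg
PC = 3.086e16          # m
YR = 3.156e7           # s

M = 1.6e9 * M_SUN            # black hole mass from Paper I
MDOT = 2.2 * M_SUN / YR      # accretion rate estimated in the text

def wien_radius_pc(wavelength_angstrom):
    """Radius where T(R) = (3 G M Mdot / (8 pi sigma R^3))**(1/4) equals the
    Wien temperature of a blackbody peaking at the given wavelength."""
    T = B_WIEN / (wavelength_angstrom * 1e-10)   # Wien displacement law
    R = (3 * G * M * MDOT / (8 * np.pi * SIGMA_SB * T ** 4)) ** (1.0 / 3.0)
    return R / PC

radii = {w: wien_radius_pc(w) for w in (1100, 1600, 2770, 10700)}
# Reproduces the estimates in the text, ~0.0016, 0.0027, 0.0056, and 0.034 pc,
# with the radius scaling as wavelength**(4/3).
```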
The radius at which the emission is maximum is marked, but since there is considerable emission outside of that radius, we identified the size of the accretion disk at each wavelength to be the radius at which the flux density falls to $1/e$ of the maximum value (i.e., roughly the half-light radius). Those radii are 0.0015, 0.0020, 0.0035, and 0.018 parsecs at 1100, 1600, 2770, and 10700Å respectively. These values are comparable to, although smaller than, the Wien displacement-estimated values above, with radius increasing with wavelength as a power law with an index of $1.16$. [ Using the radii defined above, we find that the ratio of the area of the accretion disk emitting substantially at 10700Å to the area emitting 1100Å is 140.]{} These values do not fully account for the difference in continuum emission as a function of radius, because the radius-weighted flux density falls off faster with radius for shorter wavelengths. For example, the slope of the power law tangent to the $1/e$ point increases from $-2.44$ at 1100Å to $-1.45$ at 10700Å. This means that there is quite a large region of accretion disk where the near-infrared continuum is emitting strongly but the far-UV continuum emission is negligible, but the same cannot be said of the near-UV (near 2770Å) versus the far-UV ($<1600$Å). So, while we can expect to be able to measure a difference in the covering fractions between 1100Å and 10700Å, there is too much emission overlap between the 1100Å and the 2770Å continuum-emitting regions to be able to detect a difference in covering fraction. Thus, we need the long wavelength absorption from \*$\lambda 10830$ to do these covering-fraction experiments. [ These sizes depend on the black hole mass and accretion rate relative to Eddington.
SDSS 0850$+$4451 has perhaps a relatively large black hole mass and relatively low accretion rate compared with the expectation for broad absorption line quasars [e.g., @boroson02]; either a lower black hole mass or a larger accretion rate relative to Eddington would predict a hotter accretion disk. The dependence of the ratio of the 10700Å emission region area to the 1100Å emission region area is explored in Fig. \[accretion\_disk\]. We find that hotter disks predict a much larger ratio of areas, perhaps implying that a more significant difference in the UV versus \* covering fractions and/or physical conditions might be expected for smaller black hole masses and higher accretion rates. We return to this point in §\[selection\].]{}

![image](plot_disk-eps-converted-to.pdf){width="6.5truein"}

It is interesting to visualize how the accretion disk would appear from the perspective of an observer at the location of the absorber. In Paper I we found that the absorber is constrained to lie in the vicinity of the torus, about 1–3 parsecs from the central engine. At this distance, the 1100Å continuum emission region (diameter) would subtend 3.5–10.5 arcminutes, while the 10700 Å emission would subtend 0.7–2.1 degrees, a bit larger than the full moon. [ The size scales and other results computed based on the sum-of-blackbodies accretion disk should be used with some caution, as it can only approximately model the broad-band spectral energy distribution of quasars. It would be interesting to estimate size scales using more sophisticated accretion disk models, such as the one by @done12. ]{} In Paper I, we estimated the radius of the H$\beta$ emission to be $0.13^{+0.024}_{-0.021}\rm \, pc$, using the reverberation-mapping regression measured by @bentz13. As discussed in Paper I, @luo13 fit the H$\beta$ emission-line profile with a relativistic Keplerian disk model, obtaining inner and outer radii of 450 and 4700 $r_g$ respectively.
For our derived black hole mass, these values correspond to $r_{in}=0.035\rm\, pc$ and $r_{out}=0.37 \rm \, pc$ respectively, consistent with the reverberation-mapping estimate. For reference, $r_{in}$ is $10\times$ larger than the $1/e$ radius of the 2770Å emitting region. We can also estimate the location of the emission region using the regression presented by @lira18 [Eq. 1]. The flux density at 1345Å is $2.4\times 10^{-15}\rm \, erg\, s^{-1}\, cm^{-2}$Å$^{-1}$, corresponding to $\lambda L_\lambda = 3.7\times 10^{45} \rm \, erg\, s^{-1}$. The radius is therefore estimated to be $0.028^{+0.039}_{-0.019}\rm \, pc$, where the errors come from the uncertainty on the regression parameters. Using these values, we find that the H$\beta$ emission region is 4.6 times larger than the emission region. This is somewhat larger than but comparable to the values found from reverberation mapping. The Seyferts NGC 5548, NGC 3783, and NGC 7469 show an H$\beta$ lag about 1.8 to 2.8 times the lag [@pw00; @op02; @wanders97; @collier98]. The exception is the double-peaked object 3C 390.3, which showed an inverted relationship, with the lag about twice the H$\beta$ lag. The size scales for SDSS J0850$+$4451 are shown in Fig. \[fig20a\] as a function of black hole mass, where we have assumed that the systematic uncertainty in single-epoch black hole mass is 0.43 dex [@vp06]. Besides the continuum and emission-line radii discussed above, we have also graphed the estimated location of $R_{\tau_K} = 0.46 \rm \, pc$, the hot inner edge of the torus [@kishimoto07] computed in Paper I, as well as the estimated radius of the outflow measured in Paper I. The plot shows the expected hierarchy of size scales. Interestingly, the near-infrared continuum emission overlaps the emission region.
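The conversion of the Keplerian disk-model radii from gravitational radii to parsecs involves only $r_g = GM_{BH}/c^2$; a minimal check, assuming the $1.6\times 10^9\rm \, M_\odot$ black hole mass from Paper I:

```python
# Convert the relativistic-disk fit radii (450 and 4700 r_g) to parsecs
# for the M_BH = 1.6e9 M_sun black hole quoted in Paper I.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m s^-1
M_sun = 1.989e30   # kg
pc = 3.086e16      # m

M_bh = 1.6e9 * M_sun
r_g = G * M_bh / c**2          # gravitational radius, metres

r_in = 450 * r_g / pc
r_out = 4700 * r_g / pc
print(f"r_g = {r_g/pc:.2e} pc; r_in = {r_in:.3f} pc; r_out = {r_out:.2f} pc")
```

This recovers $r_{in}\approx 0.035$ pc and $r_{out}\approx 0.36$ pc, matching the values quoted above to within rounding.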
![image](f13-eps-converted-to.pdf){width="4.0truein"}

Partial Covering in SDSS J0850+4451 {#partial_covering}
-----------------------------------

Armed with the quasar size scales, we can discuss the implications of our results on simple scenarios for partial covering in SDSS J0850$+$4451 and BALQs in general. In one possible scenario for partial covering (left panel of Fig. \[fig20\]), the size scale of the outflow is large compared with the UV continuum emission region. It would then essentially completely cover the far UV-continuum emission region, but only partially cover the near-IR emission region. This scenario is ruled out because it predicts that the covering fraction to the UV continuum would be 100%, and that is not the case. Alternatively, the absorbing clumps are small, but have internal structure on the size scale of the 1100Å continuum emission region, and the clumps are diffusely distributed on large size scales (middle panel of Fig. \[fig20\]). In this scenario, similar to the one posited by @hamann01 [their Fig. 6], each clump might present a distribution of column densities to the continuum source, as would be expected for, e.g., a spherical clump. Each clump would behave as a photoionized slab, with the effective column density and covering fractions of various ions depending on both the abundance of the ion, and where the ion is located within the clump (e.g., on the surface, as might be expected for a high-ionization ion, or buried deep, as expected for a low-ionization ion). Thus, partial covering to the UV continuum is achieved by the structure of the clumps (presenting a range of thicknesses to the illuminating continuum); such a model seems roughly consistent with the power law covering fraction [@dekool02c]. A lower covering fraction to the infrared continuum is achieved by assuming that these clumps are sparsely distributed on larger size scales.
If this scenario is correct, we can estimate the sizes of the individual clumps by dividing the covering-fraction-weighted column density by the density, assuming that the clumps are approximately spherical. There are several additional assumptions that need to be made, however. First, it is probably not reasonable to assume that one clump produces the absorption spanning the whole outflow, i.e., $5000\rm \, km\, s^{-1}$, especially since the covering fraction for a single ion is observed to vary across the trough profile. We assume, somewhat arbitrarily, that a clump spans one velocity bin. We also assume, again somewhat arbitrarily, that the velocity bin with the thickest outflow (at $-4000\rm \, km\, s^{-1}$) is most representative. The other bins that have similar covering fraction may be the same size but physically thinner (lower column density and more pancake-like). The bins that have lower covering fractions may have the same size but a sparser distribution, i.e., a smaller number of clouds across the continuum emission region. This scenario is by no means unique; other configurations could be constructed that are consistent with the analysis results. The spectra are not very sensitive to the density; this is illustrated by @leighly18 in their Figure 11. All we can say with any certainty is that the presence of \* means that the density is greater than the critical density for that ion, $n \sim 10^6\rm \, cm^{-3}$. Thus, the cloud size is highly degenerate with density. We illustrate this covariance for the second continuum model and enhanced metallicity run via the corner plot shown in Fig. \[corner\]. For that simulation, a typical cloud size (diameter) is $5\times 10^{-4}\rm \, pc = 1.5 \times 10^{15}\rm \, cm$.

![image](corner_metals_3_extra_lya-eps-converted-to.pdf){width="3.5truein"}

The size scales for the 1100Å and 10700Å continuum emission regions were found in §\[size\_scales\] to be 0.0015 and $0.018\rm \, pc$, respectively.
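The clump-size estimate described above reduces to dividing a column density by a density. A sketch follows; note that the density below is only bounded from below by the data, so the value used here is purely an illustrative assumption, and the inferred size scales inversely with it:

```python
# Cloud size ~ (covering-fraction-weighted column density) / density.
# log N_H = 22.19 [cm^-2] is the outflow column from the enhanced-metallicity
# model quoted later in the text; log n = 7 [cm^-3] is an ILLUSTRATIVE
# assumption (the data require only log n > ~6, so the size is degenerate
# with density).
pc_cm = 3.086e18          # cm per parsec

log_NH = 22.19            # [cm^-2]
log_n = 7.0               # [cm^-3], assumed for illustration

size_cm = 10**log_NH / 10**log_n
print(f"cloud size ~ {size_cm:.1e} cm = {size_cm/pc_cm:.1e} pc")
```

With this (assumed) density the division gives a size of order $10^{15}\rm \, cm$, i.e., a few $\times 10^{-4}$ pc, the same order as the typical cloud diameter quoted from the corner-plot analysis.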
Assuming that we see the accretion disk face on, and that the inhomogeneous partial covering for the 1100Å continuum emission region is accounted for by weighting the column density with the covering fraction, we find that for the $-4000\rm \, km\, s^{-1}$ bin, approximately 35 clouds are sufficient to cover the emission region. The 10700Å continuum emission region was shown to be 140 times larger, but the covering fraction was found to be 2.5 times lower (§\[quantifying\]), indicating that $\sim 2000$ clouds would be sufficient to cover the 10700Å emitting accretion disk, if observed face on. Another possibility is that the clumps are very small, much smaller than the UV continuum emission region (right panel of Fig. \[fig20\]). In this scenario, the individual clumps should be clustered on size scales smaller than the optical–infrared continuum emission region because if uniformly distributed, identical covering fractions in the UV and long wavelengths would be observed. We note that the difference in distribution is rather modest for this object; in §\[quantifying\], we inferred that the UV continuum is a factor of only 2.5 times more densely covered than the optical through near-infrared continuum.

![image](f14-eps-converted-to.pdf){width="6.5truein"}

Additional information can be obtained from expected time scales of variability. At a distance of 1–3 parsecs from a $1.6\times 10^{9}\rm \, M_\odot$ black hole, the Keplerian velocity is between $1500$ and $2600\, \rm km\, s^{-1}$. Considering the 2770Å $1/e$ continuum diameter as a crossing size scale, we find that an orbiting cloud could cross the continuum-emitting region in 2.6–4.6 years. This seems to be similar to the time scale of the variability inferred from the absorption line changes (Appendix \[obs\_var\]), although we note that the pattern of variability perhaps suggests that ionization changes may be responsible (Appendix \[predicted\]), rather than covering fraction changes.
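The cloud counts and Keplerian crossing times above follow from the quoted sizes with a few lines of arithmetic. A sketch, assuming face-on circular geometry and the sizes given in the text (this reproduces the order of magnitude of the $\sim$35 and $\sim$2000 cloud counts and the 2.6–4.6 yr and 13–24 yr crossing times):

```python
import math

# Sizes quoted in the text: 1/e disk radii of 0.0015 pc (1100 A),
# 0.0035 pc (2770 A), and 0.018 pc (10700 A); a typical cloud diameter
# of 5e-4 pc; a UV-to-IR covering ratio of 2.5.
G = 6.674e-11
M_sun = 1.989e30
pc = 3.086e16
yr = 3.156e7

M_bh = 1.6e9 * M_sun
r_cloud = 0.5 * 5e-4 * pc                 # cloud radius, m

# Number of clouds needed to tile each (face-on) continuum region
n_uv = (0.0015 * pc / r_cloud) ** 2       # full covering at 1100 A
n_ir = (0.018 * pc / r_cloud) ** 2 / 2.5  # 2.5x sparser covering at 10700 A
print(f"~{n_uv:.0f} clouds (UV), ~{n_ir:.0f} clouds (near-IR)")

# Keplerian crossing times of the 2770 A and 10700 A 1/e diameters
for r_abs_pc in (1.0, 3.0):
    v = math.sqrt(G * M_bh / (r_abs_pc * pc))   # orbital speed at absorber
    t_uv = 2 * 0.0035 * pc / v / yr
    t_ir = 2 * 0.018 * pc / v / yr
    print(f"r = {r_abs_pc} pc: v = {v/1e3:.0f} km/s, "
          f"t(2770A) = {t_uv:.1f} yr, t(10700A) = {t_ir:.0f} yr")
```

The Keplerian speed of $\sqrt{GM/r}$ evaluates to roughly 1500–2600 km s$^{-1}$ over 1–3 pc, matching the range stated above.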
In contrast, it would take 13–24 years to cross the $1/e$ near-infrared continuum emission region, possibly predicting less variability in the \*$\lambda 10830$ absorption line. Finally, while our principal focus has been on the difference in covering fraction between the UV and near-infrared continuum emission regions, we also made measurements of the covering fraction of the broad-line region. We found that there is no strong evidence for a difference in covering fraction between the broad-line region and the long-wavelength-continuum emission region. While it is difficult to know how robust this result is given the large number of degrees of freedom in the models, we note that this result is consistent with the size scales shown in Fig. \[fig20a\], where the radius of the near-infrared continuum emission region is completely included in the range of emission regions estimated based on the @lira18 regressions.

Selection Effects and Other Objects {#selection}
-----------------------------------

We have demonstrated, using [*HST*]{} COS and optical and near-infrared spectra of SDSS J0850$+$4451, that a difference in covering fraction to the UV and near-infrared continuum emission regions is observed and is quantifiable. We note, however, that this experiment had significant selection effects built in. [ We chose SDSS J0850$+$4451 for [*HST*]{} COS observations because it showed strong \* absorption, and because of the similarity in opacity between \*3889 / \*10830 and predicted over a broad range of physical conditions [@leighly11 Fig. 15, Section 4.4.1], we could practically guarantee that would be present. ]{} The ratio of the covered portion of the UV continuum-emitting region to the covered portion of the near-infrared continuum-emitting region was found to be 2.5. In principle, the covered portion of the infrared continuum-emitting region could be much lower.
That is, since the near-infrared continuum emission region (under \*) is 140 times larger in $1/e$ area than the UV continuum emitting region (under ), objects could exist that have significant absorption, but no \* absorption. In fact, there are two examples where this seems to be the case. The $z=1.010$ quasar PG 1254$+$047 is known to host absorption; it is best known as the original quasar [@hamann98a]. Our [*LBT*]{} observation of this object is described in §\[lbt\]. A segment of the PG 1254$+$047 spectrum is shown in Fig. \[fig21\], along with the apparent optical depths of the and lines, digitized from @hamann98a Fig. 3 and shifted to the wavelengths appropriate for \*$\lambda 10830$ absorption. No evidence for absorption is observed, perhaps implying that the UV absorber does not occult an appreciable amount of the near-infrared-emitting continuum, and indicating a relatively compact absorber. However, the [*HST*]{} observation was made in 1993, and the LBT [*LUCI*]{} observation was made 20 years later, and it is quite possible that the broad-line absorption has changed its properties, becoming optically thin enough that appreciable \* is not expected, or that the absorber has completely disappeared.

![image](f15-eps-converted-to.pdf){width="3.5truein"}

Another example is provided by the low-luminosity broad absorption line object WPVS 007. @leighly09 reported the emergence of a broad absorption line outflow in the 2003 November 6 [*FUSE*]{} spectrum of this object; a broad absorption line was among the lines that were observed. @grupe13 presented a near-infrared spectrum obtained on 2004 September 12. Quantitative analysis of the lack of a \*$\lambda 10830$ line yielded a conservative estimate of the apparent metastable helium log column density of 12.9 \[$\rm cm^{-2}$\]. In comparison, the apparent log column density of in the [*FUSE*]{} spectrum was 15.4 \[$\rm cm^{-2}$\] [@leighly09].
The opacity, represented as a function of velocity, is given by $\lambda f_{ik} N_{ion}(v)$ [@ss91], implying that the opacity due to is at least 41 times the opacity due to \*$\lambda 10830$, rather than comparable as predicted by photoionization models [@leighly11]. This result seems to indicate a dramatic difference in the coverage of the UV compared with the near-infrared regions in WPVS 007. However, WPVS 007 is known to have variable absorption lines [@leighly09; @leighly15], leading to some doubt about the robustness of this conclusion. The near-infrared observation was made after the emergence of the broad absorption lines, and broad absorption lines were observed in subsequent [*HST*]{} observations [@leighly15], and so it seems unlikely that variability explains the lack of \* absorption. [ A potentially interesting difference between WPVS 007 and PG 1254$+$047 versus SDSS 0850$+$4451 is illustrated in Fig. \[accretion\_disk\]. We found that because the black hole masses are smaller in these two objects, their disks are hotter, and the ratio of the area emitting 10700Å to the area emitting 1100Å is larger. The black hole mass and accretion rate for WPVS 007 were estimated by @leighly09 to be $M_{BH}=4.1\times 10^6\rm \, M_\odot$ and $L/L_{Edd}=0.096$. The black hole mass and accretion rate for PG 1254$+$047 were estimated by @sabra01 to be $M_{BH}=1\times 10^8\rm \, M_\odot$ and $L/L_{Edd}=0.8$. Using the sum-of-blackbodies disk model described in §\[size\_scales\] we found that the ratio of the accretion disk area emitting 10700Å to the area emitting 1100Å is 370 and 350 for WPVS 007 and PG 1254$+$047 respectively, much larger than the value of 140 found for SDSS 0850$+$4451.
Despite the caveats regarding the sum-of-blackbodies accretion disk model discussed in §\[size\_scales\], these numbers suggest that it might be reasonable to expect that if outflows are not uniform, then we might be more likely to observe this lack of uniformity in objects with larger area ratios. It is possible that this fact explains the lack of \*$\lambda 10830$ in WPVS 007 and PG 1254$+$047. On the other hand, both of these objects have smaller black hole masses than SDSS 0850$+$4451, and therefore smaller size scales. If [*SimBAL*]{} analysis of their spectra were to yield a solution similar to that for SDSS 0850$+$4451, then the clouds might have similar sizes, and fewer would be needed to cover the emission regions. Detailed analysis of more objects would be necessary to deconvolve these effects. ]{}

Summary and Future Prospects {#conclusions}
============================

Summary of Results
------------------

This is the second of two papers investigating the outflow in the low-redshift LoBAL quasar SDSS J0850$+$4451. The first paper described application of the novel spectral synthesis code [ *SimBAL*]{} to the [*HST*]{} COS spectrum. We found that the absorber is located about 1–3 parsecs from the central engine, among other results. This paper describes extrapolation of the [*SimBAL*]{} solution to long wavelengths, and the implications for the nature of partial covering in this object. Our principal results follow.

1. In §\[extrapolation\], we showed that the extrapolation of the best-fitting spectral synthesis model of the UV spectrum of SDSS J0850$+$4451 obtained in Paper I to long wavelengths indicates that the , \*$\lambda 3889$, and \*$\lambda 10830$ lines are all predicted to be significantly deeper than observed, implying that the smaller UV continuum emission region experiences a higher covering fraction than the larger optical / near-infrared continuum emission region.

2.
In Appendix \[variability\], we discussed the observed variability in the absorption lines, and concluded that the variability is unlikely to have produced the difference in UV and optical/near-infrared covering fractions, but it cannot be ruled out absolutely. In Appendix \[host\] we presented analysis of broad-band photometry and an archival [*HST*]{} near-infrared-band image of SDSS J0850$+$4451, and showed that the contribution of the host to the near-infrared continuum is negligible. Therefore, dilution of the \*$\lambda 10830$ absorption line by the host galaxy continuum cannot be responsible for the difference in UV and optical / near-infrared covering fractions.

3. In §\[quantifying\], we found that the absorber covers about 2.5 times more of the far UV continuum emission region than the optical through near-infrared continuum emission region.

4. In §\[blr\], we performed [*SimBAL*]{} modeling of the UV-through-infrared spectra, using three sets of covering fractions: one for the UV continuum, one for the near-UV through near-infrared continuum, and one for the broad emission lines. We found that the near-UV through near-infrared continuum covering fraction results were consistent with the constrained modeling presented in §\[quantifying\], and that the covering fraction of the broad-line region is mostly consistent with the covering fraction of the optical through near-infrared continuum. [ Considering that the projected size of the infrared continuum emission region is much larger than the UV continuum emission region, we revise the estimated bulk outflow properties from Paper I downward to account for the lower covering fraction. For the statistically-preferred enhanced metallicity model, the estimated column density of the outflow is $\log N_H=22.19\rm \, [cm^{-2}]$, the radius of the outflow is $2.2$–$3.0\rm \, pc$, and the mass outflow rate is $\dot M=8$–$12\rm \, M_\odot\, yr^{-1}$.
Finally, the ratio of the kinetic to bolometric luminosity is 0.4–0.6%. This range straddles the 0.5% value taken to be a conservative cutoff for effective galaxy feedback [@he10]. Therefore, SDSS J0850$+$4451 does not appear to be undergoing strong feedback from the BAL outflow.]{}

5. In §\[understanding\], we discussed inhomogeneous partial covering and the power-law parameterization used by [*SimBAL*]{}. Four factors must be considered in order to understand how absorption lines are shaped: the concept of inhomogeneous partial covering itself, the mapping of the output of the photoionization models (ionic column densities) to the power-law parameterization, the opacity of the particular line, and the relative brightness of the background source. In particular, we show how the observed absorption lines depend not only on the value of the covering fraction parameter $\log a$ but also on the abundance of the ions. So, a rare ion such as P$^{+4}$ can produce a shallower absorption line against the UV continuum than a common ion such as C$^{+3}$ against the broad-line region, even though the covering fraction is lower for the latter.

6. In §\[size\_scales\] and §\[partial\_covering\], we examined the size scales of the continuum emission region (accretion disk), the broad-line region, the torus, and the outflow (estimated to be 1–3 parsecs in Paper I). To explain the partial covering in the UV (established in Paper I), and the difference in covering fractions between the UV and long wavelengths, we suggest a model in which the outflow consists of clumps that are individually structured or very small relative to the UV continuum emission region size scale, and are themselves clustered on size scales comparable to the near-infrared continuum emission region size scale.

7.
In §\[selection\], we note that SDSS J0850$+$4451 was chosen for this experiment based on the previous observation of \*$\lambda 10830$ absorption, and that in principle, there may be objects which show strong UV absorption but no infrared absorption. We discuss two examples where this seems to be the case, but note that the UV and infrared observations were not simultaneous (and in the case of PG 1254$+$047 were separated by 20 years), so variability cannot be ruled out.

Conclusions and Future Prospects {#future}
--------------------------------

In this paper, we show that broad absorption lines widely separated in wavelength can be used to investigate the nature of partial covering. This experiment shows that we need not be limited to knowledge about outflows along the radial line of sight to the continuum emission region, but we can also learn about the angular distribution of the outflowing clumps. It would be interesting to investigate other objects using this method. The ideal experiment would involve an object known to have absorption, so that absorption from \*$\lambda 10830$ would also be predicted [@leighly11]. This requires the object to have a redshift of less than $\sim 1.2$ (depending on the velocity of the outflow) so that \*$\lambda 10830$ can be observed from the ground. The redshift requirement means that the UV spectrum would need to be observed using [*HST*]{}. The UV and near-infrared observations should be contemporaneous, in order to avoid uncertainty due to variability. Depending on the quasar luminosity, a near-infrared image may be advisable to quantify the host galaxy contribution to the 1-micron continuum. KML acknowledges very useful conversations with Carolin Villforth and Hermine Landt about the host galaxy. KML acknowledges useful discussion with the current [*SimBAL*]{} group: Hyunseop Joseph Choi, Collin Dabbieri, Amy Griffin, Francis MacInnis, Adam Marrs, and Cassidy Wagner.
KML thanks OU undergraduate Collin McLeod for working out the limit in §\[quantifying\]. KML gratefully acknowledges John Wisniewski’s donation of APO time to the OU astronomy group, and thanks him for taking the 2014 observation as part of the Advanced Observatory Methods class. Support for [*SimBAL*]{} development was provided by NSF Astronomy and Astrophysics Grant No. 1518382. Support for program 13016 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. [ The [*SimBAL*]{} team acknowledges partial funding for the server “Balthazar” from the OU Research Council and the Homer L. Dodge Department of Physics and Astronomy.]{} DT acknowledges the Homer L. Dodge Department of Physics and Astronomy of the University of Oklahoma for graciously hosting his sabbatical visit in 2017. ABL is supported by NSF DGE-1644869 and Chandra DD6-17080X. ABL thanks the LSSTC Data Science Fellowship Program; their time as a Fellow has benefited this work. SCG thanks the Natural Science and Engineering Research Council of Canada. Based on observations obtained at the [*Gemini*]{} Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the [*Gemini*]{} partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência, Tecnologia e Inovação (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina). The LBT is an international collaboration among institutions in the United States, Italy and Germany. 
LBT Corporation partners are: The University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; The Ohio State University, and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota and University of Virginia. This work is based on observations obtained at the MDM Observatory, operated by Dartmouth College, Columbia University, Ohio State University, Ohio University, and the University of Michigan. TIFKAM was funded by The Ohio State University, the MDM consortium, MIT, and NSF grant AST-9605012. The HAWAII-1R array upgrade for TIFKAM was funded by NSF Grant AST-0079523 to Dartmouth College. Based on observations obtained with the Apache Point Observatory 3.5-meter telescope, which is owned and operated by the Astrophysical Research Consortium. Based in part on observations at Kitt Peak National Observatory, National Optical Astronomy Observatory (through time exchange with Ohio State University), which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation. The authors are honored to be permitted to conduct astronomical research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham.

Absorption Line Variability {#variability}
===========================

Observed Variability {#obs_var}
--------------------

@vivek14 presented the results of time variability studies of and absorption lines in a sample of 22 LoBAL quasars. Their sample included SDSS J0850$+$4451. They obtained several spectra at the IUCAA Girawali Observatory (IGO) in 2010 and 2011.
While there was no variability among the 2010 and 2011 observations, there was significant variability when compared with the 2002 SDSS spectrum: the equivalent width of the absorption line increased from $12.6 \pm 0.4$Å to a (weighted mean) value of $18.9 \pm 0.6$Å. This result spurred us to obtain two additional spectra in April 2014 (APO) and April 2015 (KPNO) to measure the \*$\lambda 3889$ and absorption lines. The descriptions of these data sets are given in §\[apoobs\] and §\[kpnoobs\]. Combined with the MDM spectrum from 2011, a digitized @vivek14 spectrum from 2010, and the BOSS spectrum from January 2015, we can examine the absorption line variability observed over 12.5 years (observed frame, 8.1 years rest frame). We focus on the and \*$\lambda 3889$ lines, as they are deepest, and model the two regions separately. With the goal of quantifying the absorption variability, we fit all spectra containing each line simultaneously using [*Sherpa*]{} [@freeman01]. The continuum model is similar to the one described in §\[contmod\] in most cases. The exception was the 2015 BOSS spectrum, which displays an unusual continuum shape at the shortest wavelengths. We modeled the continuum of that spectrum with a third-order polynomial. The KPNO spectrum, taken three months after the BOSS spectrum, shows a normal AGN continuum, and therefore, we suspect that the BOSS spectrum suffered bad flux calibration. As noted above, this should not be the well-known BOSS spectrograph differential refraction problem [@margala16], as that is now corrected for in the pipeline. For the region, we tie the widths of the emission lines together between spectra, and model the absorption with a single Gaussian opacity profile. Our first model left the central wavelength and width of the opacity model independent among the epochs. The reduced $\chi^2$ was 0.85 for 7138 degrees of freedom.
In order to make the simplest comparison of opacity (the factor we are most interested in), we try a second model with the position and widths tied together. The $\chi^2$ was slightly worse (0.87 for 7146 degrees of freedom), but an application of the F-test indicated only a 17% chance that the difference was significant. For the \*$\lambda 3889$ region, we tie the widths of the \[\]$\lambda 3870$ emission lines together among the spectra, and model the \* absorption line with a single Gaussian opacity profile. In this case, the fits are indistinguishable when the absorption line widths and positions are free or tied together (in both cases, $\chi^2_\nu=0.96$ for 7156 and 7164 degrees of freedom, respectively).

![image](f16-eps-converted-to.pdf){width="6.5truein"}

The resulting fits are shown in Fig. \[fig14\]. The resulting apparent optical depths for and \*$\lambda 3889$ are presented in Fig. \[fig15\]. Also marked are the dates of the near-IR observations (using LBT LUCI and [*Gemini*]{} GNIRS) and UV observation ([*HST*]{} COS). This plot clearly shows that while the MDM observation of \*$\lambda 3889$ was contemporaneous with the near-infrared observations using LBT and [*Gemini*]{}, there are unfortunately no ground-based observations within one year of the [ *HST*]{} COS observation.

![image](f17-eps-converted-to.pdf){width="6.5truein"}

After an initial increase in opacity by a factor of 1.8, the absorption line decreased to a value slightly less than half the value observed in 2002. Thus, the apparent opacity of the line was observed to vary by a factor of more than four. The \*$\lambda 3889$ displayed less variability (a factor of 2). However, we do not have a measurement of the \*$\lambda 3889$ line in 2010 when the was so strong, and when we compare only the results from observations where both lines were observed, the relative variability is similar. We conclude that the degree of variability is the same for both lines, or larger for .
We note that the spectra that we took are not accurately fluxed, so we cannot compare the absorption variability with the continuum variability directly. To compare the continuum variability with the opacity, we downloaded the Catalina Sky Survey (CSS)[^8] data for this object. The data, shown in Fig. \[fig15\], consist of the error-weighted average per night. @vivek14 also analyze these data. It is noteworthy that the magnitudes plotted in their Appendix B are about 0.5 magnitude fainter than the data we downloaded. The origin of this difference is not understood. We speculate that the difference originates in an updated calibration of the CSS data to the Johnson [*V*]{} band. We also plot the V-band magnitude from the SDSS observation, obtained using color correction terms from @jester05[^9]. The uncertainty was taken to be the color transformation RMS residual. The photometry shows that SDSS J0850$+$4451 is very modestly variable; the standard deviation of the photometry points is 0.11 mag, corresponding to flux variations of about 10%. However, quasars are known to be bluer when brighter, so a larger degree of variability may be present in the photoionizing continuum. Comparing the photometry with the opacity, we see that in general the line opacities are inversely related to the flux. More specifically, the V-band flux was relatively low when the opacity was highest, in 2010, and since then, the V-band flux has tended to increase, and the line opacities to decrease. This suggests that at the time of the [*HST*]{} COS observation, the and \*$\lambda 3889$ opacities were no higher than observed during the 2002 SDSS observations. This implies that, if anything, the and \* lines were shallower than the ones in the spectra that we analyze in this paper. Therefore the discrepancy between the extrapolated spectral synthesis models and the observed troughs shown in Fig. \[fig13\] would potentially be larger. 
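The conversion behind the quoted 10% is simply the magnitude-flux relation, $\Delta F/F = 10^{0.4\,\Delta m} - 1$. A one-line sketch (the function name is ours):

```python
def mag_scatter_to_flux_fraction(sigma_mag):
    """Fractional flux variation corresponding to a magnitude scatter."""
    return 10.0 ** (0.4 * sigma_mag) - 1.0

# 0.11 mag of scatter corresponds to roughly 10% in flux
frac = mag_scatter_to_flux_fraction(0.11)
```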
In summary, the opacity and flux trends suggest that the differences in partial covering that we measure are not an artifact of variability. However, because we do not have contemporaneous optical and infrared spectra at the time of the [*HST*]{} spectrum, we cannot rule this explanation out absolutely. Predicted Variability Patterns {#predicted} ------------------------------ There has been an explosion in variability studies of broad absorption line quasars in recent years [e.g., @capellupo11; @capellupo12; @capellupo13; @filizak13; @vivek14; @mcgraw15; @mcgraw18]. Such studies can be used to constrain the distance of the absorber from the central engine, and thereby constrain feedback metrics; for an example of such an analysis and additional references, see @mcgraw18. The distance can be constrained if there are changes in covering fraction by assuming that Keplerian motion carries the absorber across the continuum source; @mcgraw18 found typical distances $r\lesssim 1$–10 pc in their sample of quasars. If there are changes in ionization parameter, the distance can be constrained by comparing the observed variability time scale with the recombination time scale; @mcgraw18 found a typical range of $r \lesssim 100$–1000 pc. These variability studies have taken a general, qualitative, and order-of-magnitude approach to the estimation of the origin of BAL variability. [*SimBAL*]{}, in contrast, can place quantitative constraints on the origin of variability. We use the [*SimBAL*]{} simulation grids to predict the kinds of variability that might be observed in , \*$\lambda 3889$, and \*$\lambda 10830$ absorption lines as a function of a change in ionization parameter $\log U$ (equivalent to a change in ionizing flux for a fixed density), a change in the column density $\log N_H$, or a change in the covering fraction, parameterized by $\log a$, that might be equivalent to transverse motion across the source. 
We chose the 11-bin models from Paper I for the second continuum for the nominal soft SED, the hard SED, and the enhanced metallicity cases. We varied one parameter at a time by the same amount in each of the eleven velocity bins. At each deviation interval away from the best fit, we created a synthetic spectrum, and then estimated the apparent column density of , \*$\lambda 3889$, and \*$\lambda 10830$, by integrating over the line profile, using Eq. 9 in @ss91. The results are shown in Fig. \[fig16\]. ![image](f18-eps-converted-to.pdf){width="7.0truein"} An increase in $\log U$ results in a decrease in the opacity for all lines, but especially for , because the regions in the slab where Mg$^+$ and excited-state neutral helium lie are matter bounded. So an increase in the ionization effectively pushes the regions where these ions would be present beyond the back (i.e., the side opposite the illuminated face) of the cloud. Increasing the thickness of the clouds causes the opacity of all the lines to increase, but, again, especially for . The interpretation is the same as above: the best-fitting model is matter bounded for these ions, so increasing the total column density increases the column density of these ions. The power-law covering fraction model has the property that all opacities increase geometrically. There are slight differences in slope, but those are not significant. Finally, it is important to note that in all cases, the \*$\lambda 10830$ line varies in concert with the \*$\lambda 3889$ line. The data are insufficient to determine which of these scenarios (variable ionization parameter, variable column density, or variable covering fraction) is true. But the pattern of variability observed seems to weakly support variation in ionization parameter, since, as discussed in §\[obs\_var\], there is a possible anti-correlation observed between the opacity and continuum flux, and a greater variability in compared with \*$\lambda 3889$. 
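The apparent-optical-depth integration of Eq. 9 in @ss91 converts a normalized line profile to an apparent column density, $N_a = (3.768\times10^{14}/f\lambda)\int \tau_a(v)\,dv$, with $\lambda$ in Å and $v$ in km s$^{-1}$, where $\tau_a(v) = \ln[I_c/I(v)]$. A sketch with a toy profile (the oscillator strength and wavelength here are placeholders, not values for any specific line):

```python
import numpy as np

def apparent_column_density(v_kms, norm_flux, f_osc, wav_A):
    """Apparent column density (cm^-2) from a normalized line profile via the
    apparent-optical-depth method: N_a = 3.768e14/(f*lambda) * integral(tau dv).
    Fluxes are clipped to avoid log of zero and spurious negative opacity."""
    tau = np.log(1.0 / np.clip(norm_flux, 1e-6, 1.0))
    # trapezoidal integration of tau over velocity (km/s)
    integral = np.sum(0.5 * (tau[1:] + tau[:-1]) * np.diff(v_kms))
    return 3.768e14 / (f_osc * wav_A) * integral

# Toy profile: constant tau = 1 across a 100 km/s window
v = np.linspace(0.0, 100.0, 201)
profile = np.full_like(v, np.exp(-1.0))
N_a = apparent_column_density(v, profile, f_osc=1.0, wav_A=1000.0)
```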
@vivek14 suggested that ionization variability might have occurred in SDSS J0850+4451. The Host Galaxy Contribution to the Near-Infrared Continuum\[host\] =================================================================== As noted above, the observed \*$\lambda 10830$ absorption line is much shallower than predicted by the model of the far-UV spectrum. This could mean that the covering fraction to the near-infrared continuum is lower than it is to the ultraviolet continuum. But another possibility is that the host galaxy is contributing a significant amount of the observed near-infrared continuum, making the \*$\lambda 10830$ line appear shallower than it really is. In this section, we estimate the plausible contribution of the host galaxy to the near-infrared continuum to address this second possibility. We found that the contribution of the host galaxy is negligible. SED Fitting {#sed_fitting} ----------- @lucy14 addressed the question of the host galaxy contribution to the near-infrared continuum of FBQS J1151$+$3822 via SED modeling of broadband photometry. They fit data from SDSS, 2MASS, and WISE with a power law, elliptical galaxy template, and two blackbodies to model both the warm ($T \approx 1200\ {\rm K}$) torus and cooler ($T \approx 300\ {\rm K}$) dust. We follow that procedure here, but we use our new $JHK_s$ photometry, add NUV photometry from [*GALEX*]{}, and include two ultraviolet continuum points derived from the [*HST*]{} COS spectra (Paper I). The SED is shown as black filled points in the upper panel of Fig. \[sed\_fitting\_fig\], along with the original 2MASS values in red; these were not used in the SED fitting. We first obtained an estimate of the reddening in SDSS J0850$+$4451 by building a simple model consisting only of the @richards06 continuum. This model has only two parameters: a multiplicative scaling of the Richards continuum and the reddening intrinsic to the quasar. 
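When the spectral shapes (power-law slope, blackbody temperatures, galaxy template) are held fixed, a decomposition of this kind reduces to solving for the component amplitudes by linear least squares. A toy sketch with two components and synthetic data (all numerical choices are illustrative, not our fitted values):

```python
import numpy as np

def planck_lambda(wav_cm, temp_K):
    """Shape of the Planck function vs wavelength (arbitrary normalization);
    1.4388 cm K is hc/k."""
    return 1.0 / (wav_cm ** 5 * np.expm1(1.4388 / (wav_cm * temp_K)))

# Illustrative wavelength grid, 0.2-3 microns, in cm
wav = np.linspace(2e-5, 3e-4, 60)

# Basis: power law with a fixed index, plus a warm (1200 K) blackbody
basis = np.column_stack([wav ** -1.53, planck_lambda(wav, 1200.0)])

# Synthetic "observed" SED built from known amplitudes
true_amps = np.array([1.0, 1e-9])
sed = basis @ true_amps

# Amplitudes recovered by linear least squares
amps, *_ = np.linalg.lstsq(basis, sed, rcond=None)
```

In practice the slope, temperatures, and reddening are also free, which makes the problem nonlinear, but the linear sub-step illustrates how each component's contribution (e.g., the galaxy fraction at 10830 Å) is isolated.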
This model was fit to all points excluding those from [*WISE*]{}, and yielded $E(B - V) = 0.03 \pm 0.03$. The WISE points were excluded because, compared to the Richards continuum, the torus in SDSS J0850+4451 is considerably less bright, suggesting that this object is deficient in hot dust [@hao10; @hao11; @lyu17]. Also shown in the top panel of Fig. \[sed\_fitting\_fig\] is the decomposition of the SED with components indicated in the figure legend. This solution has reddening $E(B - V) = 0.0$, meaning that the solution hit the lower bound of allowed values. The galaxy contribution at $\lambda = 10830\ {\rm \AA}$ is $0.09 \pm 0.03$%. The power-law index is $-1.53$, which is not unusual for quasars [@krawczyk15]. The residuals to the fit are shown in the lower panel of the figure, where these are defined as $\Delta = $ (data $-$ model) / model. The model fit is not particularly good, with reduced $\chi^2 = 50$, but [ much of this is likely to be due to the simplicity of the model; variability may play a role as well (see Fig. \[fig15\]).]{} These results show that the contribution of the host galaxy emission to the continuum under the \*$\lambda 10830$ absorption line is negligible. ![image](f19.pdf){width="4.5truein"} Image Analysis of the SDSS J0850+4451 Host Galaxy {#image} ------------------------------------------------- Coincidentally, an [*HST*]{} program[^10] made WFC3 observations in the near-infrared of SDSS J0850+4451; the information about this observation is found in Table \[observations\]. We used these data to provide a complementary estimate of the contribution of the host galaxy to the infrared continuum emission and the photometry. [*HST*]{} images of quasars are dominated by the quasar, so it is necessary to use the image point spread function (PSF) to isolate the emission of the host galaxy. 
A PSF made from observations of a star works better than a simulated one [e.g., @canalizo07], and fortunately, a PSF star observation was made in association with this program. The star was the white dwarf GRW+70D5824, an object chosen to have a $B-V$ color similar to the average color of the sample. It was observed with three different exposure times in each filter in order to obtain a broad dynamic range. The star was placed on the same part of the detector as the science observations. [*HST*]{} has variable focus (so-called “breathing”). Fortunately, the star and the quasar were observed during times when the focus was nearly the same. The PSF is undersampled by the detector, but that is accounted for by dithering the telescope during the observation, and combining the individual images using the MultiDrizzle[^11] software, which also corrects for distortion. Cosmic-ray rejection can remove photons from the core of the image [@riess11]. To determine whether this problem is present in the multidrizzled images, we performed photometric analysis on the flat-fielded images of the star. We obtained a mean and standard deviation of the net flux density for the star of $1.500 \pm 0.009 \times 10^{-15} \rm \, erg\, s^{-1}\, cm^{-2} \,$Å$^{-1}$. The aperture correction for the F125W filter from 2 arc seconds to infinity is 0.029 mag [@kalirai09], resulting in a final estimated flux density of $1.539 \pm 0.009 \times 10^{-15} \rm \, erg\, s^{-1}\, cm^{-2} \,$Å$^{-1}$. The 2MASS $m_J$ is 13.248, corresponding to a flux density of $1.57 \times 10^{-15} \rm erg\, s^{-1}\, cm^{-2} \,$Å$^{-1}$, within 2% of the measured value. In contrast, a similar extraction of the multidrizzled images (i.e., after cosmic-ray correction) yields a flux estimate of $1.392 \pm 0.006 \times 10^{-15} \rm \, erg\, s^{-1}\, cm^{-2}\,$Å$^{-1}$, 11% below the 2MASS value, indicating that cosmic-ray rejection had indeed removed source photons from the core of the image. 
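The magnitude-to-flux-density comparison above can be reproduced with the 2MASS J-band zero point (we assume the Cohen et al. 2003 calibration value, $3.129\times10^{-10}\rm \, erg\, s^{-1}\, cm^{-2}\,$Å$^{-1}$) and the aperture correction applied in magnitudes:

```python
# 2MASS J-band zero-point flux density, erg s^-1 cm^-2 A^-1 (assumed calibration)
F_LAMBDA_ZERO_J = 3.129e-10

def jmag_to_flux_density(m_j):
    """Flux density corresponding to a 2MASS J magnitude."""
    return F_LAMBDA_ZERO_J * 10.0 ** (-0.4 * m_j)

def apply_aperture_correction(flux, corr_mag):
    """Brighten a measured flux by an aperture correction given in magnitudes."""
    return flux * 10.0 ** (0.4 * corr_mag)

f_2mass = jmag_to_flux_density(13.248)                 # ~1.57e-15, as quoted above
f_total = apply_aperture_correction(1.500e-15, 0.029)  # ~1.539e-15, as quoted above
```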
We performed roughly the same analysis on the image of SDSS J0850+4451, but since we wished to compare with the MDM photometry, we first blurred the image by convolution with a Gaussian to correspond to the 1.25 arc second seeing. The resulting measurement of the flux density was $8.75 \pm 0.18 \times 10^{-17}\, \rm erg\, s^{-1}\, cm^{-2}\, $Å$^{-1}$. In contrast, the multidrizzled image yields a measurement of the flux density of $6.90 \pm 0.12 \times 10^{-17}\, \rm erg\, s^{-1}\, cm^{-2}\, $Å$^{-1}$, about 20% lower. Nevertheless, this difference should only influence the core of the quasar PSF, and not the extended host galaxy emission. The image analysis was done using [*Sherpa*]{}[^12] [@freeman01]. This software requires three input images: the target image, an error image, and a point spread function image. The point spread function image was constructed using the five observations of the PSF star: three with exposures of 5.865 seconds, one with an exposure of 11.729 seconds, and one with an exposure of 23.458 seconds. We examined the images and the data quality arrays and found no evidence for saturation. Therefore, an exposure-weighted average of these images was used for the PSF. We smoothed the PSF image using the method outlined by @canalizo07. To prepare the error images, we followed the procedure outlined in [*The DrizzlePac Handbook*]{}[^13]. We modeled a circular region within 71 pixels (9.23 arc seconds), excluding three faint sources near the edges of the region. Initially, we chose a Gaussian model and a constant background. This model did not provide a good fit to the image, with a reduced $\chi^2_\nu = 4.05$. We next tried a model consisting of a Gaussian profile, a Sersic model, and a constant. The resulting reduced $\chi^2_{\nu}$ was 1.34, a dramatic improvement over the Gaussian plus constant model, indicating clear evidence for the detection of the host galaxy. Statistically, the Gaussian plus Sersic model provided a good fit. 
However, examination of the radial profile showed positive residuals between 2 and 5 arc seconds from the center, suggesting that there is an additional larger-scale but fainter component. We model this component with an additional Sersic profile, but because the error bars are large, we fixed the index to one (appropriate for a disk galaxy) and the ellipticity to zero. The $\chi^2$ decreased a small amount, to $\chi^2_\nu = 1.27$, indicating that this component is not statistically necessary. The fit parameters are given in Table \[image\_fit\_results\], and the image, best fitting model, residuals, and profile are shown in Fig. \[fig17\]. ![image](f20-eps-converted-to.pdf){width="6.5truein"} Noting the problems with cosmic-ray rejection in the core of the PSF discussed above, we performed the same analysis but ignored the four central pixels. The reduced $\chi^2_\nu$ for this model was 0.58. The same residuals were observed in the radial profile, so we added another Sersic model as above, producing a reduced $\chi^2_\nu$ of 0.51. The model parameters are given in Table \[image\_fit\_results\], and fitting results are shown in Fig. \[fig17\]. 
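The Sersic component used in these image fits has the radial form $I(r)=I_e\exp\{-b_n[(r/r_e)^{1/n}-1]\}$, where small indices $n$ give flat, disk-like profiles. A sketch using the common Ciotti & Bertin approximation for $b_n$ (note that [*Sherpa*]{}'s 2-D parameterization may normalize the amplitude differently; this is an illustration, not the fitted model):

```python
import numpy as np

def sersic_b(n):
    """Ciotti & Bertin (1999) asymptotic approximation to the b_n coefficient,
    truncated after the 1/n term."""
    return 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)

def sersic_profile(r, amp_e, r_e, n):
    """Sersic surface-brightness profile, normalized to amp_e at r = r_e."""
    return amp_e * np.exp(-sersic_b(n) * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 20.0, 100)
disklike = sersic_profile(r, 1.0, 5.0, 0.5)   # low index, as fitted here
exp_disk = sersic_profile(r, 1.0, 5.0, 1.0)   # n = 1, pure exponential disk
```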
[lCC]{}
& All Pixels & Central Pixels Excluded\
Gaussian FWHM (pixels) & $1.36^{+0.006}_{-0.05}$ & $0.73^{+0.12}_{-0.002}$\
Gaussian Amplitude (counts s$^{-1}$) & 13329 & $4250^{+17}_{-1100}$\
1 Sersic $R_0$ (pixels) & $5.0^{+0.06}_{-0.04}$ & $4.4^{+0.31}_{-0.04}$\
1 Sersic $R_0$ (kpc) & $4.1^{+0.05}_{-0.03}$ & $3.6^{+0.25}_{-0.03}$\
1 Sersic Eccentricity & $0.16^{+0.009}_{-0.008}$ & $0.12^{+0.02}_{-0.009}$\
1 Sersic Theta (radians) & $0.91 \pm 0.02$ & $1.0^{+0.04}_{-0.12}$\
1 Sersic Amplitude (counts s$^{-1}$) & $2.0^{+0.05}_{-0.07}$ & $2.5^{+0.04}_{-0.17}$\
1 Sersic $n$ & $0.21 \pm 0.02$ & $0.53^{+0.01}_{-0.13}$\
2 Sersic $R_0$ (pixels) & $9.8^{+0.4}_{-0.5}$ & $17^{+2}_{-3}$\
2 Sersic $R_0$ (kpc) & $7.9^{+0.34}_{-0.40}$ & $14^{+1.6}_{-2.6}$\
2 Sersic Amplitude (counts s$^{-1}$) & $0.3^{+0.04}_{-0.03}$ & $0.07^{+0.06}_{-0.012}$\
Constant (counts s$^{-1}$) & $0.010^{+0.0007}_{-0.0006}$ & $0.006 \pm 0.001$\
\[image\_fit\_results\]

[*Sherpa*]{} allows us to save the unconvolved model component images. The count rates from each component were obtained by summing over these images, and converted to flux densities using the inverse sensitivity. These values are given in Table \[image\_fit\_results\]. We note that the value obtained from the fit with the four central pixels excluded ($8.5 \times 10^{-17}\rm \, erg\, s^{-1}\, cm^{-2}\, $Å$^{-1}$) is comparable to the value obtained using aperture photometry on the flat-fielded images, as described above ($8.75 \times 10^{-17}\rm \, erg\, s^{-1}\, cm^{-2}\, $Å$^{-1}$), indicating that our model accounts for the emission in the image. Also, the flux densities from the galaxy component are essentially the same whether we ignore or keep the four central pixels, indicating that the issues stemming from the imperfect cosmic-ray rejection did not affect the estimate of the host galaxy flux. 
We convolved the model component images with the seeing and applied an aperture to estimate the flux from each component contributing to our photometry and spectroscopy. We did this for two cases. The first case pertains to the MDM JHK photometry described in §\[mdmobs\]. The seeing during those observations was about 1.25 arc seconds (about 4 pixels) and the extraction aperture was 3.3 arc seconds in radius. The second case pertains to the [*Gemini*]{} GNIRS spectroscopy. The slit width was 0.45 arc seconds, and the flux was extracted in an aperture 6 pixels in radius, corresponding to a total length of 1.8 arc seconds. The orientation of the slit varied among the different spectroscopic observations, so we chose a range of orientation angles to bracket the minimum and maximum fluxes. The seeing during the several [*Gemini*]{} observations was not available. Our program observing conditions requirement was “85% to poor”, corresponding to 0.85–1.55 arc seconds. We assumed a seeing value of 0.91 arc seconds, corresponding to 3 pixels. The results are given in Table \[image\_model\_flux\_densities\].

[lccc]{}\
Gaussian & 5.6 & 5.6 & 4.1–4.5\
1 Sersic & 0.66 & 0.66 & 0.20–0.22\
2 Sersic & 0.77 & 0.72 & 0.09\
Sersic+Sersic & 1.4 & 1.4 & 0.30–0.31\
Total & 7.1 & 7.0 & 4.4–4.8\
\
Gaussian & 7.0 & 7.0 & 5.6–5.9\
1 Sersic & 0.90 & 0.90 & 0.29–0.30\
2 Sersic & 0.57 & 0.41 & 0.03\
Sersic+Sersic & 1.5 & 1.3 & 0.32–0.33\
Total & 8.5 & 8.3 & 5.9–6.2\
\[image\_model\_flux\_densities\]

The results are displayed in Fig. \[fig18\], which shows our spectroscopy and photometry in the observed frame. The 2017 eBOSS spectrum was scaled to the SDSS 2002 spectrum by multiplying by a factor of 0.91, and the [*Gemini*]{} spectrum was scaled to the result via the overlapping H$\alpha$ line. The SDSS photometry and near-IR photometry from MDM were overlaid. 
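The convolve-then-extract step above can be sketched as blurring a component image by a Gaussian seeing kernel and summing inside a circular aperture (a simplification: the real extraction uses a rectangular slit, the measured pixel scale, and the range of slit orientations):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # ~0.4247

def aperture_fraction(image, seeing_fwhm_pix, r_aper_pix):
    """Blur a model-component image by Gaussian seeing, then return the
    fraction of its total flux falling in a centered circular aperture."""
    blurred = gaussian_filter(image, seeing_fwhm_pix * FWHM_TO_SIGMA,
                              mode="constant")
    ny, nx = image.shape
    y, x = np.mgrid[:ny, :nx]
    mask = (x - nx // 2) ** 2 + (y - ny // 2) ** 2 <= r_aper_pix ** 2
    return blurred[mask].sum() / image.sum()

# Point source, seeing of ~3 pixels FWHM, generous aperture
img = np.zeros((101, 101))
img[50, 50] = 1.0
frac = aperture_fraction(img, 3.0, 10.0)
```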
A 5 Gyr-old elliptical galaxy template [@polletta07] was shifted to the observed frame and reddened corresponding to Milky Way extinction, and then scaled to the “Sersic+Sersic” flux values listed in Table \[image\_model\_flux\_densities\]. This graph clearly shows that the contribution of the host galaxy to the continuum under the \*$\lambda 10830$ line is negligible. ![image](f21-eps-converted-to.pdf){width="5.5truein"} How Does the Host Galaxy in SDSS J0850+4451 Compare with Other Quasar Host Galaxies? {#host_comparison} ------------------------------------------------------------------------------------ As a final check of our analysis, we briefly compare the host galaxy properties with those from other low-redshift quasars. For a fair comparison, we recall the estimated black hole mass of $1.6 \times 10^9\rm \, M_\odot$ from Paper I. @bentz2009b [@bentz2009a] analyze [*HST*]{} images from reverberation-mapped AGN and quasars. All of these objects have smaller black hole masses than estimated for SDSS J0850+4451 ($\log M_{bh}=9.2$), but several objects are close, including PG 0804$+$761, PG 1226$+$023, PG 1426$+$015, and PG 1700$+$518, which have log black hole masses of 8.84, 8.95, 9.11, and 8.89, respectively. We compared the derived properties of SDSS J0850$+$4451 with those obtained for these four galaxies. The Sersic radial scale factor, between $\sim 4$ and $\sim 11 \rm \, kpc$, is similar to the four comparison objects, which have scale factors between 3.3 and 12 kpc. The best fitting Sersic index is very low for SDSS J0850$+$4451, between 0.2 and 0.5 for the smaller component, and fixed to 1 for the larger component. This may not be physical, noting that the inner region is not well constrained due to the PSF. The indices for the comparison sample range from 1.0 to 5.6. 
Integrating over the template scaled to the total galaxy model flux and shifted into the rest frame yields a total log luminosity of $10.9\rm \, [L_\odot]$, which is again similar to the values of the comparison sample, which range from 10.6 to $11.2 \rm \, [L_\odot]$ [@bentz2009a]. @landt11 [their Figure 1] present a graph showing the enclosed luminosity density at 5100Å rest frame from a sample of galaxies as a function of the extraction aperture. At the redshift of SDSS J0850+4451, the [*Gemini*]{} GNIRS aperture encloses $31.7\rm \, kpc^2$. The log luminosity density at 5100Å obtained from the scaled 5-Gyr-old elliptical template was $39.8\rm \, [erg\, s^{-1}$Å$^{-1}]$. This lies approximately 0.35 dex lower than the regression line in @landt11 Figure 1, corresponding to a factor of $\sim 2$, but within the scatter around the regression line. We therefore conclude that the host galaxy in SDSS J0850$+$4451 is in no way anomalous but is instead typical of a galaxy in a low-redshift quasar, and that our conclusion that the host galaxy contribution to the continuum under the \*$\lambda 10830$ line is negligible is robust. , N., [Kaastra]{}, J., [Kriss]{}, G. A., [et al.]{} 2005, , 620, 665 , N., [Moe]{}, M., [Costantini]{}, E., [et al.]{} 2008, , 681, 954 , M. C., [Peterson]{}, B. M., [Netzer]{}, H., [Pogge]{}, R. W., & [Vestergaard]{}, M. 2009, , 697, 160 , M. C., [Peterson]{}, B. M., [Pogge]{}, R. W., & [Vestergaard]{}, M. 2009, , 694, L166 , M. C., [Denney]{}, K. D., [Grier]{}, C. J., [et al.]{} 2013, , 767, 149 , B. C. J., [Arav]{}, N., [Edmonds]{}, D., [Chamberlain]{}, C., & [Benn]{}, C. 2013, , 762, 49 , B. C. J., [Edmonds]{}, D., [Arav]{}, N., [Benn]{}, C., & [Chamberlain]{}, C. 2012, , 758, 69 , T. A. 2002, , 565, 78 , G., [Bennert]{}, N., [Jungwiert]{}, B., [et al.]{} 2007, , 669, 801 , D. M., [Hamann]{}, F., [Shields]{}, J. C., [Halpern]{}, J. P., & [Barlow]{}, T. A. 2013, , 429, 1872 , D. M., [Hamann]{}, F., [Shields]{}, J. 
C., [Rodr[í]{}guez Hidalgo]{}, P., & [Barlow]{}, T. A. 2011, , 413, 908 —. 2012, , 422, 3249 , D. M., [Hamann]{}, F., [Herbst]{}, H., [et al.]{} 2017, , 469, 323 , D. A., [Leighly]{}, K. M., & [Baron]{}, E. 2006, , 637, 157 , C., [Arav]{}, N., & [Benn]{}, C. 2015, , 450, 1085 , R. E. S. 1987, , 229, 31P , M., [Dalton]{}, G., [Maddox]{}, S., [et al.]{} 2001, , 328, 1039 , S. J., [Horne]{}, K., [Kaspi]{}, S., [et al.]{} 1998, , 500, 162 , M. C., [Vacca]{}, W. D., & [Rayner]{}, J. T. 2004, , 116, 362 , M., [Korista]{}, K. T., & [Arav]{}, N. 2002, , 580, 54 , D. L., [Atwood]{}, B., [Byard]{}, P. L., [Frogel]{}, J., & [O’Brien]{}, T. P. 1993, in , Vol. 1946, Infrared Detectors and Instrumentation, ed. A. M. [Fowler]{}, 667–672 , M. A., [Brotherton]{}, M. S., & [De Breuck]{}, C. 2013, , 428, 1565 , C., [Davis]{}, S. W., [Jin]{}, C., [Blaes]{}, O., & [Ward]{}, M. 2012, , 420, 1848 , J. P., [Bautista]{}, M., [Arav]{}, N., [et al.]{} 2010, , 709, 611 , G. J., [Porter]{}, R. L., [van Hoof]{}, P. A. M., [et al.]{} 2013, , 49, 137 , N., [Brandt]{}, W. N., [Hall]{}, P. B., [et al.]{} 2013, , 777, 168 , C. W., [Morris]{}, S. L., [Crighton]{}, N. H. M., [et al.]{} 2014, , 440, 3317 , D., [Hogg]{}, D. W., [Lang]{}, D., & [Goodman]{}, J. 2013, , 125, 306 , J., [King]{}, A., & [Raine]{}, D. J. 2002, [Accretion Power in Astrophysics: Third Edition]{} , P., [Doe]{}, S., & [Siemiginowska]{}, A. 2001, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4477, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, ed. J.-L. [Starck]{} & F. D. [Murtagh]{}, 76–87 , J. R., [Kraemer]{}, S. B., [Crenshaw]{}, D. M., [et al.]{} 2005, , 631, 741 , R., [Lynch]{}, R. S., [Charlton]{}, J. C., [et al.]{} 2013, , 435, 1233 , R. R., [Jiang]{}, L., [Brandt]{}, W. N., [et al.]{} 2009, , 692, 758 , N., [Asplund]{}, M., & [Sauval]{}, A. J. 2007, , 130, 105 , D., [Komossa]{}, S., [Scharw[ä]{}chter]{}, J., [et al.]{} 2013, , 146, 78 , P. 
B., [Hutsem[é]{}kers]{}, D., [Anderson]{}, S. F., [et al.]{} 2003, , 593, 189 , F. 1998, , 500, 798 , F., [Barlow]{}, T. A., [Junkkarinen]{}, V., & [Burbidge]{}, E. M. 1997, , 478, 80 , F., [Kanekar]{}, N., [Prochaska]{}, J. X., [et al.]{} 2011, , 410, 1957 , F., [Korista]{}, K. T., [Ferland]{}, G. J., [Warner]{}, C., & [Baldwin]{}, J. 2002, , 564, 592 , F. W., [Barlow]{}, T. A., [Chaffee]{}, F. C., [Foltz]{}, C. B., & [Weymann]{}, R. J. 2001, , 550, 142 , H., [Elvis]{}, M., [Civano]{}, F., & [Lawrence]{}, A. 2011, , 733, 108 , H., [Elvis]{}, M., [Civano]{}, F., [et al.]{} 2010, , 724, L59 , P. F., & [Elvis]{}, M. 2010, , 401, 7 , S., [Schneider]{}, D. P., [Richards]{}, G. T., [et al.]{} 2005, , 130, 873 , J. S., [MacKenty]{}, J., [Bohlin]{}, R., [et al.]{} 2009, [WFC3 SMOV Proposal 11451: The Photometric Performance and Calibration of WFC3/IR]{}, Tech. rep. , M., [H[ö]{}nig]{}, S. F., [Beckert]{}, T., & [Weigelt]{}, G. 2007, , 476, 713 , K., [Baldwin]{}, J., [Ferland]{}, G., & [Verner]{}, D. 1997, , 108, 401 , C. M., [Richards]{}, G. T., [Gallagher]{}, S. C., [et al.]{} 2015, , 149, 203 , H., [Elvis]{}, M., [Ward]{}, M. J., [et al.]{} 2011, , 414, 218 , K., [Terndrup]{}, D., [Gallagher]{}, S. C., & [Richards]{}, G. 2019, in American Astronomical Society Meeting Abstracts, Vol. 233, American Astronomical Society Meeting Abstracts \#233, 242.39 , K. M., [Cooper]{}, E., [Grupe]{}, D., [Terndrup]{}, D. M., & [Komossa]{}, S. 2015, , 809, L13 , K. M., [Dietrich]{}, M., & [Barber]{}, S. 2011, , 728, 94 , K. M., [Halpern]{}, J. P., [Jenkins]{}, E. B., & [Casebeer]{}, D. 2007, , 173, 1 , K. M., [Hamann]{}, F., [Casebeer]{}, D. A., & [Grupe]{}, D. 2009, , 701, 176 , K. M., & [Moore]{}, J. R. 2006, , 644, 748 , K. M., [Terndrup]{}, D. M., [Gallagher]{}, S. C., [Richards]{}, G. T., & [Dietrich]{}, M. 2018, ArXiv e-prints , P., [Kaspi]{}, S., [Netzer]{}, H., [et al.]{} 2018, ArXiv e-prints , A. B., [Leighly]{}, K. M., [Terndrup]{}, D. M., [Dietrich]{}, M., & [Gallagher]{}, S. 
C. 2014, , 783, 58 , B., [Brandt]{}, W. N., [Alexander]{}, D. M., [et al.]{} 2013, , 772, 153 , J., [Rieke]{}, G. H., & [Shi]{}, Y. 2017, , 835, 257 , D., [Kirkby]{}, D., [Dawson]{}, K., [et al.]{} 2016, , 831, 157 , P., [Elias]{}, J., [Points]{}, S., [et al.]{} 2014, in , Vol. 9147, Ground-based and Airborne Instrumentation for Astronomy V, 91470Z , S. M., [Shields]{}, J. C., [Hamann]{}, F. W., [et al.]{} 2015, , 453, 1379 , S. M., [Shields]{}, J. C., [Hamann]{}, F. W., [Capellupo]{}, D. M., & [Herbst]{}, H. 2018, , 475, 585 , T., [Charlton]{}, J. C., [Eracleous]{}, M., [et al.]{} 2007, , 171, 1 , M., [Arav]{}, N., [Bautista]{}, M. A., & [Korista]{}, K. T. 2009, , 706, 525 , E. A., [Hamann]{}, F., [Capellupo]{}, D. M., [et al.]{} 2017, , 468, 4539 , P. M., [Cohen]{}, M. H., [Miller]{}, J. S., [et al.]{} 1999, , 125, 1 , C. A., & [Peterson]{}, B. M. 2002, , 572, 746 , B. M., & [Wandel]{}, A. 1999, , 521, L95 , M., [Tajer]{}, M., [Maraschi]{}, L., [et al.]{} 2007, , 663, 81 , G. T., [Lacy]{}, M., [Storrie-Lombardi]{}, L. J., [et al.]{} 2006, , 166, 470 , A. G. 2011, [An Independent Determination of WFC3-IR Zeropoints and Count Rate Non-Linearity from 2MASS Asterisms]{}, Tech. rep. , P., [Hamann]{}, F., & [Hall]{}, P. 2011, , 411, 247 , B. M., & [Hamann]{}, F. 2001, , 563, 555 —. 2005, ArXiv Astrophysics e-prints , B. D., & [Sembach]{}, K. R. 1991, , 379, 245 , E. F., & [Finkbeiner]{}, D. P. 2011, , 737, 103 , W. D., [Cushing]{}, M. C., & [Rayner]{}, J. T. 2003, , 115, 389 , M., & [Peterson]{}, B. M. 2006, , 641, 689 , M., [Srianand]{}, R., [Petitjean]{}, P., [et al.]{} 2014, , 440, 799 , I., [Peterson]{}, B. M., [Alloin]{}, D., [et al.]{} 1997, , 113, 69 , R. J., [Morris]{}, S. L., [Foltz]{}, C. B., & [Hewett]{}, P. C. 
1991, , 373, 23 [^1]: http://www.gemini.edu/sciops/instruments/gnirs [^2]: http://abell.as.arizona.edu/$\sim$lbtsci/Instruments/LUCIFER/lucifer.html [^3]: http://www.astronomy.ohio-state.edu/MDM/CCDS/ [^4]: http://www.astronomy.ohio-state.edu/MDM/TIFKAM/ [^5]: http://www.apo.nmsu.edu/arc35m/Instruments/DIS/ [^6]: http://dan.iel.fm/emcee/current/ [^7]: This is shown using L’Hôpital’s Rule for $$\lim_{x \to c} \frac{f(x)}{g(x)}.$$ If the value $\frac{f(c)}{g(c)}$ is an indeterminate form, i.e., $\frac{0}{0}$ or $\frac{\infty}{\infty}$, then the following equality holds: $$\lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f^\prime(x)}{g^\prime(x)}.$$ Here, $f(x)$ and $g(x)$ are $1-(\tau/\tau_{max}^a)$ for the UV and long wavelength continua respectively, and $x \to c$ corresponds to $\tau \to \tau_{max}$. [^8]: http://nesssi.cacr.caltech.edu/DataRelease/ [^9]: http://www.sdss3.org/dr8/algorithms/sdssUBVRITransform.php [^10]: PI: Canalizo, “The Nature of Low-Ionization BAL QSOs”, program number 11557 [^11]: http://www.stsci.edu/hst/HST\_overview/documents/multidrizzle/multidrizzle\_cover.html [^12]: http://cxc.harvard.edu/sherpa4.9/ [^13]: Page 89, http://documents.stsci.edu/hst/HST\_overview/documents/DrizzlePac/drizzlepac.pdf
--- abstract: 'The goal of this study is to develop and analyze multimodal models for predicting experienced affective responses of viewers watching movie clips. We develop hybrid multimodal prediction models based on both the video and audio of the clips. For the video content, we hypothesize that both image content and motion are crucial features for evoked emotion prediction. To capture such information, we extract features from RGB frames and optical flow using pre-trained neural networks. For the audio model, we compute an enhanced set of low-level descriptors including intensity, loudness, cepstrum, linear predictor coefficients, pitch and voice quality. Both visual and audio features are then concatenated to create audio-visual features, which are used to predict the evoked emotion. To classify the movie clips into the corresponding affective response categories, we propose two approaches based on deep neural network models. The first one is based on fully connected layers without memory on the time component, the second incorporates the sequential dependency with a long short-term memory recurrent neural network (LSTM). We perform a thorough analysis of the importance of each feature set. Our experiments reveal that in our set-up, predicting emotions at each time step independently gives slightly better accuracy performance than with the LSTM. Interestingly, we also observe that the optical flow is more informative than the RGB in videos, and overall, models using audio features are more accurate than those based on video features when making the final prediction of evoked emotions.' 
author: - | Ha Thi Phuong Thao Dorien Herremans Gemma Roig\ Singapore University of Technology and Design\ 8 Somapah Rd, Singapore 487372\ [thiphuongthao\[email protected], dorien\[email protected], [email protected]]{} bibliography: - 'egbib.bib' title: Multimodal Deep Models for Predicting Affective Responses Evoked by Movies --- Introduction {#sec:intro} ============ Human emotional experiences can be evoked by watching audio-visual content, such as movies. In psychology, evoked emotions have been extensively studied [@zentner2008emotions; @koelsch2010towards; @baumgartner2006emotion; @gabrielsson2001emotion]. Being able to automatically predict which emotions multimedia content might evoke has a wide range of applications. For instance, it can be a tool for multimedia producers in advertisement or film industry. Yet, in computer science, most of the current and previous research focuses on emotion recognition of people in videos, and they are based on facial expressions and audio signals [@kahou2016emonets; @hu2017learning; @ebrahimi2015recurrent]. Predicting the viewers’ emotion evoked by videos and multimedia content has received little attention so far. For measuring affective responses, some researchers have proposed to use emotional categories and suggested that the number of distinct emotional categories may vary from two to twenty seven [@cowen2017self; @picard2000affective]. Although the precise categories can be different in studies, the two most common ones are arousal and valence, which were originally proposed in  [@russell1980circumplex], and have been used in most predictive models,  [@samara2016feature; @zlatintsi2017cognimuse; @herremans2017imma; @lang1995emotion]. Arousal ranges from calm to exciting, while valence represents how positive or negative the emotion is [@picard2000affective]. @russell1980circumplex also proposed another factor, namely dominance, which refers to the sense of “control” or “attention”  [@picard2000affective]. 
Dominance, however, has been known to introduce complexity in the annotation process and is difficult to computationally predict [@zlatintsi2017cognimuse], hence it is often omitted. In this work, we propose a model for predicting evoked emotion from videos. We use the two-dimensional model of affective content, in which arousal and valence are predicted separately. To do so, we leverage the power of deep convolutional neural networks (CNNs), a type of network that has led to considerable advances in image classification [@krizhevsky2012imagenet; @simonyan2014very; @he2016deep; @szegedy2015going] and action recognition [@ji20123d; @simonyan2014two; @carreira2017quo]. We apply deep CNNs to extract image and motion features from static RGB frames and optical flow fields, respectively. This approach is similar to the two-stream ConvNet architecture in [@simonyan2014two], in which spatial and temporal networks are integrated for action recognition. The spatial network is used to capture features from scenes and objects in videos, while the temporal network carries information about the motion of the camera and objects across frames. The audio features are computed using OpenSMILE [@EybenOpenSMILE], and include intensity, loudness, cepstrum, linear predictor coefficients, pitch and voice quality, among others. We explore two models based on deep neural networks that ingest the extracted features. The first one consists of fully connected layers without memory on the time component, as shown in Figure \[fig:RGB\_OF\_Audio\_2FC\_upgraded\]. The second model that we explore uses long short-term memory (LSTM) structures, which are recurrent neural networks with memory that can learn temporal dependencies between observations. LSTMs have been successfully applied in sequence prediction problems, including emotion recognition of subjects in a video [@fan2016video; @pini2017modeling].
We perform experiments on the extended COGNIMUSE dataset [@malandrakis2011supervised], which consists of 12 movie clips: seven half-hour continuous movie clips from the original COGNIMUSE dataset [@evangelopoulos2013multimodal] and five extra half-hour Hollywood movie clips. Valence and arousal are annotated in the range of $\left[-1, 1\right]$ by several subjects. We perform a thorough analysis of the importance of the video and audio feature sets. Our results suggest that audio contains most of the evoked emotion content, and for videos, motion is more important than the RGB frames. We also observe that our model with fully connected layers outperforms the model with an LSTM structure, as well as a previous approach introduced with the extended COGNIMUSE dataset [@malandrakis2011supervised]. Related work {#sec:Related_work} ============ In computer vision, emotion prediction from video has been mostly studied from the perspective of predicting facial expressions of humans in the videos, e.g., [@Kanade:2005:FEA:2101315.2101317; @Cohn_automatedface]. Yet, predicting evoked emotion has received surprisingly little attention so far. In most current research in video-based emotion recognition [@levi2015emotion; @fan2016video; @kaya2017video; @zheng2018multimodal; @kahou2016emonets], multimodal approaches have been applied to integrate information from different modalities. The recent breakthrough of deep convolutional neural networks (CNNs) for object recognition [@krizhevsky2012imagenet; @simonyan2014very; @he2016deep] has been adapted to the problem of emotion recognition in videos, in which deep CNNs such as VGG [@simonyan2014very] and Inception-ResNet v1 [@szegedy2017inception] are used to extract facial expression features from RGB frames [@fan2016video; @zheng2018multimodal; @pini2017modeling]. In addition to features extracted from still frames, motion plays an important role in emotion recognition.
Actions represented by facial muscles or body actions in videos may be estimated using optical flow [@mase1991recognition; @simonyan2014two]. @mase1991recognition recognizes four basic facial expressions, namely surprise, happiness, disgust and anger, based on optical flow. The appearance and action information can be handled separately in two streams using CNNs [@simonyan2014two; @wang2015towards] or extracted simultaneously using deep three-dimensional convolutional networks [@fan2016video; @pini2017modeling]. Many studies have shown that there is a powerful connection between sound and emotion [@Meyer56; @panksepp2002emotional; @zentner2008emotions; @doughty2016practices; @herremans2016]; it is therefore natural to add the audio modality to emotion recognition models. Audio features can be extracted using toolkits such as OpenSMILE [@EybenOpenSMILE] and YAAFE [@mathieu2010yaafe], or deep neural networks such as SoundNet [@aytar2016soundnet] and the AlexNet, Inception and ResNet architectures [@hershey2017cnn]. In this study, we opted for the OpenSMILE toolkit, as it has proven effective for emotion recognition from voice. Another important aspect for predicting evoked emotion from movies is the temporal component. To deal with sequences, recurrent neural networks have been successfully used in many applications. However, they struggle to learn long-term dependencies: as state information is integrated over time, gradients explode or vanish when training them to learn long-term dynamics. In order to overcome this limitation, the LSTM cell was first introduced by @hochreiter1997long and later simplified in [@graves2013generating; @zaremba2014learning]. LSTMs perform well in a wide range of sequence processing tasks and have also been widely used in video-based emotion recognition [@fan2016video; @pini2017modeling; @zheng2018multimodal].
Most previous work focuses on predicting the emotion of humans in a video instead of the evoked/experienced emotion of viewers of videos. This is in part due to the lack of available labeled datasets. Recently, to fill this gap, @zlatintsi2017cognimuse introduced the COGNIMUSE dataset, which is a multimodal video dataset including seven half-hour Hollywood movie clips. @malandrakis2011supervised use the extended version of this dataset, which includes 12 movie clips, and classify emotion in terms of seven valence and arousal categories at the frame level using independent hidden Markov models (HMMs). A wide range of visual and audio features are extracted, but finally only a small feature set including mel-frequency cepstral coefficients (MFCCs), their derivatives, maximum color value and maximum color intensity is selected. A follow-up approach using a mixture of expert models to select the audio and video features dynamically is introduced by @goyal2016multimodal, and @sivaprasad2018multimodal improve their results by using an LSTM structure, yet they predict valence and arousal only every 5 seconds instead of at every frame. In [@malandrakis2011supervised; @zlatintsi2017cognimuse], instead of predicting continuous values of affective content, the authors discretize those values into equally spaced ranges and assign a label to each of them. They show that predicting labels and then interpolating them to continuous values gives better results than regressing the emotion values. In this work, we use the extended COGNIMUSE dataset to learn multimodal models for predicting evoked/experienced emotion, in terms of valence and arousal, from videos. Interestingly, this dataset also contains video frames in which people do not appear, hence previous approaches based on facial expression recognition cannot be applied. In the following, we elaborate on our approach and describe our hybrid multimodal model, which is based on deep neural networks.
In the results section, we provide an analysis of the features and components that contribute most to the prediction accuracy. Approach {#sec:Approach} ======== We propose a multimodal approach that uses both video and audio features for emotion prediction. For the former, we use pre-trained CNNs to extract image features from static RGB frames and motion features from optical flow fields. The latter is based on features from the OpenSMILE toolkit [@EybenOpenSMILE]. Each of these feature sets is passed through fully connected layers for dimensionality reduction and representation adaptation to emotion prediction, before being concatenated to create audio-visual features. The weights of these fully connected layers are learned jointly with our proposed network architecture during training. We explore two network architectures. The first model uses fully connected layers without a temporal memory component (Figure \[fig:RGB\_OF\_Audio\_2FC\_upgraded\]), while the second one is based on LSTMs to take the sequential dependency of emotion into account (Figure \[fig:RGB\_OF\_Audio\_LSTM\]). Both of these approaches are followed by a fully connected layer and a softmax layer to classify arousal and valence separately, as it has been shown that these are orthogonal emotion characteristics [@russell1980circumplex], and their prediction accuracy suffers when they are estimated jointly. Details on each of the components of our proposed models are further discussed below. In the experimental section, we report the results of an analysis of the importance of each feature for evoked emotion prediction in videos. The code to reproduce our results, together with the models’ implementation, is available at: <https://github.com/ivyha010/emotionprediction>.
Visual feature extraction {#subsec:Video} ------------------------- We extract spatial features from the static RGB frames and motion cues from the optical flow of consecutive frames, similarly to the two-stream ConvNet approach in [@simonyan2014two]. The spatial component provides information about objects and scenes in single still RGB frames. The temporal component contains information about motion. Each of these components is extracted using a pre-trained CNN. This approach has similarities to the two-stream ConvNets used for action recognition, as those also use both optical flow and still RGB frames as input [@simonyan2014two]. Yet, in [@simonyan2014two], each of the CNN streams is relatively shallow in comparison with CNNs trained on ImageNet [@he2016deep; @simonyan2014very; @krizhevsky2012imagenet]. Using a pre-trained CNN is advantageous for us, as our dataset is relatively small. #### Semantic content from still frames For the spatial component, image features from still RGB frames are extracted using a CNN pre-trained on the ImageNet dataset for the object classification task, namely ResNet-50 [@he2016deep]. We extract the representation from the second-to-last layer, after forward passing the image through all the layers except for the last fully-connected classification layer. #### Optical Flow In our framework, we estimate a dense optical flow using PWC-Net [@sun2018pwc] pre-trained on the MPI Sintel final pass dataset [@butler2012naturalistic]. It is a CNN model designed according to three principles: pyramid processing, warping, and the application of a cost volume. It computes the optical flow fields between pairs of successive frames. We use PWC-Net since it has a smaller size than FlowNet2.0 [@ilg2017flownet], which makes it easier to train, while it still outperforms many other dense optical flow methods, such as SPyNet [@ranjan2017optical], DC Flow [@xu2017accurate] and flow fields [@bailer2015flow], on MPI Sintel final pass.
The estimated optical flow fields are transformed into integers in \[0, 255\] as in [@wang2015towards] to store them in two channels of JPEG images; values in the third channel are set to 255. We use a stack of 10 sequential optical flows, as it carries more motion information than the optical flow between two consecutive frames, as suggested in [@wang2015towards]. The stack serves as the input to a ResNet-101 model pre-trained on the ImageNet classification task, except for the first convolutional layer and the last classification layer, which had been fine-tuned to ingest stacks of 10 sequential optical flows for action recognition on UCF-101 [@wang2015towards][^1]. We remove the last fully connected classification layer in the ResNet-101 model and freeze the rest. We thus extract a 2,048-dimensional feature vector from every stack of 10 optical flows. Audio feature extraction {#subsec:Audio} ------------------------ The audio present in movie clips typically consists of a combination of speech, music, and sound effects meant to engage the audience in the stories that filmmakers want to deliver. In our proposed system, audio features are extracted using the OpenSMILE toolkit with a frame size of 400ms and a hop size of 40ms. The frame size corresponds to the time period of a stack of 10 optical flows, as shown in Figure \[fig:Frame\_OF\_Audio\_block\]. We use a configuration file named “[*emobase2010*]{}”, which is based on the INTERSPEECH 2010 paralinguistics challenge [@schuller2010interspeech], to extract 1,582 features including low-level descriptors (pitch, loudness, jitter, MFCCs, mel filter-bank, line spectral pairs) with their delta coefficients, functionals, the number of pitch onsets, and the duration in seconds [@eyben2016open]. This set of audio features is relatively large in comparison to those created by other OpenSMILE configuration files.
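The rescaling of flow values into \[0, 255\] for JPEG storage can be sketched in a few lines of numpy. The clipping bound of 20 pixels is an assumption made here for illustration; the text follows [@wang2015towards] but does not state the bound.

```python
import numpy as np

def flow_to_uint8(flow, bound=20.0):
    """Map a flow field of shape (H, W, 2) to three uint8 channels for JPEG storage.

    Horizontal and vertical flow are clipped to [-bound, bound] and linearly
    rescaled to [0, 255]; the third channel is filled with 255, as in the text.
    NOTE: the clipping bound of 20 pixels is an assumption, not from the paper.
    """
    clipped = np.clip(flow, -bound, bound)
    scaled = ((clipped + bound) / (2.0 * bound) * 255.0).round().astype(np.uint8)
    h, w, _ = flow.shape
    third = np.full((h, w, 1), 255, dtype=np.uint8)  # constant third channel
    return np.concatenate([scaled, third], axis=2)
```

Reading the JPEG back and inverting this mapping recovers the flow up to quantization error, which is what makes the compact two-channel storage practical for a stack of 10 flows.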
Fusion of extracted features ---------------------------- We analyze the effect of multimodal inputs including image features, motion features and audio features on predicting the evoked emotion that viewers actually experience when watching movie clips. Image features are extracted from single RGB frames, while motion features and audio features come from 10-optical-flow stacks and 400ms audio segments respectively, as shown in Figure \[fig:Frame\_OF\_Audio\_block\]. Each extracted feature is normalized using min-max normalization following this formula: $\text{V}_{i}^{norm} = \frac{\text{V}_{i}-\min \left(\textbf{V}\right) }{\max \left(\textbf{V}\right)-\min \left(\textbf{V}\right)}$, in which $\text{V}_{i}$ is the $i$-th data point in vector $\textbf{V}$ that contains the same feature element for all data points. In order to reduce the dimension of the extracted feature vectors, we pass the extracted features of each modality to a fully connected layer of 128 units, as shown in Figures \[fig:RGB\_OF\_Audio\_2FC\_upgraded\] and \[fig:RGB\_OF\_Audio\_LSTM\]. The weights of this layer are learned during training and optimized for predicting emotion. Then, the outputs of these fully connected layers are concatenated before being fed into another two fully connected layers, as described in Figure \[fig:RGB\_OF\_Audio\_2FC\_upgraded\], or the LSTM structure in Figure \[fig:RGB\_OF\_Audio\_LSTM\]. Models for emotion recognition {#subsec:Models} ------------------------------ We propose two variants of the model for emotion classification. The first one includes only fully connected layers without memory on the time component, while the second one takes the sequential dependency of emotion responses into account by using an LSTM structure. Each of these models is created for arousal and valence separately. Both of these approaches are followed by a fully connected layer and a softmax layer to classify arousal and valence.
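The min-max formula above, applied per feature dimension across all data points, can be sketched as follows; the guard against zero-range (constant) features is a defensive addition, not from the paper.

```python
import numpy as np

def min_max_normalize(X):
    """Normalize each feature column of X (n_samples, n_features) to [0, 1]
    with the min-max formula V_norm = (V - min(V)) / (max(V) - min(V)).
    Constant columns are mapped to 0 to avoid division by zero."""
    X = np.asarray(X, dtype=float)
    mins = X.min(axis=0)
    rng = X.max(axis=0) - mins
    rng[rng == 0] = 1.0  # defensive: constant feature -> all zeros
    return (X - mins) / rng
```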
Since valence and arousal are real values in the range $[-1,1]$, we convert the prediction problem into a classification problem by quantizing the real values into 7 bins. In this way, we are able to use the cross-entropy loss, which gives better results in practice than optimizing the mean squared error loss. The same binning has been performed in [@malandrakis2011supervised], which allows us to benchmark our results. #### Model with no sequential memory The audio-visual features are fed into two fully connected layers after the fusion of extracted features. We use 64 units per layer, as described in Figure \[fig:RGB\_OF\_Audio\_2FC\_upgraded\]. The outputs of the two fully connected layers are then passed to a smaller fully connected layer consisting of 7 units, followed by a softmax layer that provides the final probability output for each of the seven binned emotion responses. #### Model with sequential memory We implemented an LSTM in order to incorporate the time dependencies when predicting the affective response of viewers watching movies. The basic architecture of an LSTM cell includes a cell $c_{t}$, which remembers values over time, and three gates: an input gate $i_{t}$, a forget gate $f_{t}$, and an output gate $o_{t}$.
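The quantization of $[-1,1]$ values into seven classes, together with the reverse mapping to bin centres used when converting predictions back to continuous values, might look as follows. This is a sketch under the assumption that the seven bins are equally spaced, as in [@malandrakis2011supervised].

```python
import numpy as np

def quantize(values, n_bins=7):
    """Quantize continuous values in [-1, 1] into class labels 0..n_bins-1,
    using equally spaced bins (assumed)."""
    edges = np.linspace(-1.0, 1.0, n_bins + 1)
    # digitize against interior edges so that -1 maps to bin 0 and 1 to the last bin
    return np.digitize(values, edges[1:-1])

def dequantize(labels, n_bins=7):
    """Map class labels back to the centres of their bins in [-1, 1]."""
    width = 2.0 / n_bins
    return -1.0 + (np.asarray(labels) + 0.5) * width
```

With seven bins the neutral value 0 falls in the middle bin (label 3), whose centre `dequantize` maps back to 0 exactly.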
The LSTM cell can be described using the following equations [@donahue2015long]: $$\begin{aligned} f_{t} &= \textit{sigmoid} \left( W_{xf} x_{t} + W_{hf} h_{t-1} + b_{f} \right) \label{eq:LSTM-cell-1} \\ i_{t} &= \textit{sigmoid} \left( W_{xi} x_{t} + W_{hi} h_{t-1} + b_{i} \right) \label{eq:LSTM-cell-2} \\ g_{t} &= \textit{tanh} \left( W_{xc} x_{t} + W_{hc} h_{t-1} + b_{c} \right) \label{eq:LSTM-cell-3} \\ c_{t} &= f_{t} \odot c_{t-1} + i_{t} \odot g_{t} \label{eq:LSTM-cell-4} \\ o_{t} &= \textit{sigmoid} \left( W_{xo} x_{t} + W_{ho} h_{t-1} + b_{o} \right) \label{eq:LSTM-cell-5} \\ h_{t} &= o_{t} \odot \textit{tanh} \left( c_{t} \right) \label{eq:LSTM-cell-6}\end{aligned}$$ in which $x_{t}$ is the input ($t = 1, \dots, T$), $T$ is the input sequence length, $h_{t} \in \mathbb{R}^{N}$ is the hidden state with $N$ being the number of hidden units; $ W_{xf}$, $ W_{hf}$, $W_{xi}$, $W_{hi}$, $W_{xc}$, $W_{hc}$, $W_{xo}$ and $W_{ho}$ are matrices of weights; $b_{f}$, $b_{i}$, $b_{c}$ and $b_{o}$ are biases; $\textit{sigmoid}$ is the sigmoid function $sigmoid (x) = \frac{1}{1+e^{-x}}$; $\odot$ is element-wise product. The forget gate is the first and most important gate, which resets the LSTM cell state using a sigmoid function (Equation \[eq:LSTM-cell-1\]). The input gate decides which values will be updated using a sigmoid function (Equation \[eq:LSTM-cell-2\]), and a $\tanh$ function is used to create a vector $g_{t}$ of new updated values (Equation \[eq:LSTM-cell-3\]). The cell state is computed from the forget gate, the previous cell state, input gate and the vector of new updated values (Equation \[eq:LSTM-cell-4\]). At the output gate, a sigmoid function is used to decide which part of the cell state is going to be the final output (Equation \[eq:LSTM-cell-5\]). The cell state is put through a $\tanh$ function to convert the values into the range $\left[ -1, 1 \right]$ and multiplied by the output (Equation \[eq:LSTM-cell-6\]). 
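Equations (1)-(6) translate almost literally into numpy. The following is a minimal single-step sketch with caller-supplied weights, not the trained model; the dictionary layout for the weight matrices is our own convention.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    """One step of the LSTM cell in Equations (1)-(6).

    W maps each gate name ('f', 'i', 'c', 'o') to a pair (W_x, W_h) of weight
    matrices; b maps it to a bias vector. Returns the new hidden and cell state.
    """
    f_t = sigmoid(W['f'][0] @ x_t + W['f'][1] @ h_prev + b['f'])   # Eq. (1)
    i_t = sigmoid(W['i'][0] @ x_t + W['i'][1] @ h_prev + b['i'])   # Eq. (2)
    g_t = np.tanh(W['c'][0] @ x_t + W['c'][1] @ h_prev + b['c'])   # Eq. (3)
    c_t = f_t * c_prev + i_t * g_t                                 # Eq. (4)
    o_t = sigmoid(W['o'][0] @ x_t + W['o'][1] @ h_prev + b['o'])   # Eq. (5)
    h_t = o_t * np.tanh(c_t)                                       # Eq. (6)
    return h_t, c_t
```

Since the output gate is a sigmoid and the cell state passes through $\tanh$, each component of $h_t$ lies strictly inside $(-1, 1)$, consistent with the description in the text.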
In our network, we use a two-layer LSTM structure, each layer with a hidden size of 64 units. The LSTM model works on overlapping input sequences, which are sequences of audio-visual feature vectors, and provides only one output for each sequence of inputs. We use a sequence length of $5$ time steps, which is equivalent to 2 seconds. The 64-dimensional output of the last time step of the LSTM model is passed through a fully-connected layer of seven units followed by a softmax layer, as shown in Figure $\ref{fig:RGB_OF_Audio_LSTM}$. Experimental Set-up {#sec:Experimental_Setup} =================== We detail the dataset and the implementation of our experimental set-up in what follows. #### Dataset We report results on the extended COGNIMUSE dataset [@malandrakis2011supervised], which consists of seven half-hour continuous movie clips from the COGNIMUSE dataset [@evangelopoulos2013multimodal] and five additional half-hour Hollywood movie clips. This dataset includes annotations for sensory and semantic saliency, events, cross-media semantics and emotion; in this study we focus on the emotion annotation. Emotion is represented as continuous arousal and valence values in the range $\left[ -1, 1 \right]$. There are three types of annotated emotion: intended, expected and experienced [@zlatintsi2017cognimuse]. Since expected emotion is computed from the experienced emotion annotations, only intended and experienced emotions are rated directly in this dataset. We focus mainly on experienced emotion, which is equivalent to the evoked emotion, and is described in terms of valence and arousal values computed as the average of twelve annotations. To be able to compare to previous work, we also report results on intended emotion, which represents the intention of the film makers, and is also annotated in terms of valence and arousal values, computed as the average of three annotations done by the same expert at frame level.
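The construction of overlapping length-5 input sequences, each paired with a single output, can be sketched as follows. Pairing each window with the label of its last time step is our assumption about the alignment; it matches the statement that the output of the last time step is fed to the classifier.

```python
import numpy as np

def make_sequences(features, labels, seq_len=5):
    """Build overlapping input sequences of length seq_len (stride 1) from a
    (T, d) feature matrix. Each sequence is paired with the label of its last
    time step, matching the one-output-per-sequence set-up in the text."""
    n = len(features) - seq_len + 1
    X = np.stack([features[t:t + seq_len] for t in range(n)])
    y = np.asarray(labels)[seq_len - 1:]
    return X, y
```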
In both cases, the emotion values (valence and arousal), which range between $-1$ and $1$, are quantized into seven bins as suggested in [@malandrakis2011supervised]. This enables us to tackle the problem as a labeling task, which yields better results. #### Data pre-processing The movie clips all have a frame rate of 25 frames per second, but vary in frame resolution. Seven movies in the dataset have a height under $214$ pixels; we therefore resize their raw RGB video frames to meet the input size requirement of $224$ pixels in each dimension for the ResNet-50 pre-trained on ImageNet. For movies with a larger frame size, we take a random crop of a $224 \times 224$ region. In all cases, we scale the RGB channels by subtracting the mean and dividing by the standard deviation of the RGB frames from the ImageNet dataset. For the optical flow network, we keep the original size of the RGB frames as the input, and rescale the optical flow outputs to match the size of $224 \times 224$ required by the ResNet-101. #### Evaluation metrics The proposed models are evaluated based on leave-one-out cross-validation, in which the accuracy and accuracy $\pm 1$ (i.e., a prediction of a class adjacent to the true class is also counted as correct) are used for emotion classification. We refer to this evaluation as the “discrete case”. We also compute the mean absolute error (MAE), mean squared error (MSE) and Pearson correlation coefficient with respect to the ground truth by converting the discrete predicted outputs of valence and arousal to continuous values (i.e., “the continuous case” in the tables below). This is done by following Malandrakis’ approach [@malandrakis2011supervised], in which a low pass filter is applied on the classification outputs to eliminate noise before using the Savitzky-Golay filter [@savitzky1964smoothing] and rescaling into the range $\left[-1,1 \right]$.
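The evaluation metrics described above can be sketched as follows: accuracy and accuracy $\pm 1$ on the discrete labels, and MAE, MSE and Pearson correlation on the continuous values.

```python
import numpy as np

def evaluate(pred_labels, true_labels, pred_cont, true_cont):
    """Metrics from the text: accuracy and accuracy±1 on discrete labels
    (a prediction in a bin adjacent to the true bin counts as correct for
    accuracy±1), plus MAE, MSE and Pearson correlation on continuous values."""
    diff = np.abs(np.asarray(pred_labels) - np.asarray(true_labels))
    acc = np.mean(diff == 0)
    acc_pm1 = np.mean(diff <= 1)
    err = np.asarray(pred_cont) - np.asarray(true_cont)
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    corr = np.corrcoef(pred_cont, true_cont)[0, 1]
    return acc, acc_pm1, mae, mse, corr
```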
#### Implementation details The models that classify valence and arousal separately into seven classes are trained using stochastic gradient descent (SGD) with a learning rate of $0.005$, a weight decay of $0.005$, and the softmax function with a temperature of $T = 2$. We train the models for $200$ epochs, each with a batch size of $128$, and early stopping with a patience of 25 epochs. For the LSTM, we set a fixed sequence length equal to 5. All the models are implemented in Python 3.6 and the experiments were run on a NVIDIA GTX 1070.

Table \[tab:Arousal\_results\_Experienced\]: arousal prediction for experienced emotion.

  Accuracy (%)   Accuracy $\pm$ 1 (%)   MAE        MSE        Correlation
  -------------- ---------------------- ---------- ---------- -------------
  49.04          92.84                  0.17       0.05       0.31
  51.08          93.90                  0.18       0.05       0.34
  51.10          95.67                  0.15       0.04       0.44
  **53.32**      **94.75**              **0.15**   **0.04**   **0.46**
  48.64          95.28                  0.37       0.17       0.43

Table \[tab:Valence\_results\_Experienced\]: valence prediction for experienced emotion.

  Accuracy (%)   Accuracy $\pm$ 1 (%)   MAE        MSE        Correlation
  -------------- ---------------------- ---------- ---------- -------------
  38.60          90.24                  0.20       0.06       0.05
  42.35          90.12                  0.19       0.06       0.15
  42.53          89.01                  0.19       0.06       0.15
  **43.10**      **90.51**              **0.19**   **0.06**   **0.18**
  37.20          89.22                  0.22       0.07       0.05

Table \[tab:Arousal\_results\_Intended\]: arousal prediction for intended emotion.

  Accuracy (%)   Accuracy $\pm$ 1 (%)   MAE        MSE        Correlation
  -------------- ---------------------- ---------- ---------- -------------
  27.63          64.89                  0.34       0.20       0.42
  28.39          66.89                  0.35       0.21       0.46
  28.98          66.43                  0.35       0.21       0.40
  30.81          72.90                  0.28       0.13       0.59
  **31.20**      **72.94**              **0.27**   **0.13**   **0.62**
  30.80          71.69                  0.41       0.22       0.58
  24.00          57.00                  0.32       0.17       0.54

Table \[tab:Valence\_results\_Intended\]: valence prediction for intended emotion.

  Accuracy (%)   Accuracy $\pm$ 1 (%)   MAE        MSE        Correlation
  -------------- ---------------------- ---------- ---------- -------------
  24.87          59.23                  0.35       0.18       0.27
  26.75          59.36                  0.38       0.24       0.21
  24.54          56.28                  0.37       0.21       0.16
  29.53          65.56                  0.33       0.19       0.20
  **30.33**      **66.95**              **0.32**   **0.19**   **0.25**
  22.54          57.63                  0.44       0.26       0.17
  24.00          64                     0.37       0.24       0.23

Experimental Results {#sec:results} ==================== The models are trained and validated on the experienced and intended emotion annotations in the extended COGNIMUSE dataset. The results are summarised in Table \[tab:Arousal\_results\_Experienced\] and Table \[tab:Valence\_results\_Experienced\] for experienced emotion prediction, and in Table \[tab:Arousal\_results\_Intended\] and Table \[tab:Valence\_results\_Intended\] for intended emotion prediction. #### Analysis of the importance of each audio-visual feature component We analyze the effect of the different modalities on classifying the emotion of viewers. We use the network with the fully connected layers for this analysis (Figure \[fig:RGB\_OF\_Audio\_2FC\_upgraded\]). The architecture is kept the same, varying only the input using only one of the following features: RGB frame features, optical flow features, audio features, and combinations of those. We observe in Tables \[tab:Arousal\_results\_Experienced\] and \[tab:Valence\_results\_Experienced\] that models based on audio features have a higher classification accuracy than those based on the other modalities (image and motion) when predicting experienced emotion. This may indicate that either the audio features have a larger influence on emotions than visual features, or the audio features used are better suited for emotion prediction than the video features. In fact, the extended COGNIMUSE dataset includes famous Hollywood movie clips, and hence speech, sound effects and music are used by filmmakers to convey the inner thoughts of characters in movies and deliver messages to the audience. By using a fusion of all feature modalities, we are able to reach the highest performance, slightly improving on the results with only audio features, for both arousal and valence classification.
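The temperature-scaled softmax mentioned in the implementation details ($T = 2$) can be written as a small numpy function; a higher temperature flattens the output distribution, a lower one sharpens it.

```python
import numpy as np

def softmax_with_temperature(logits, T=2.0):
    """Softmax with temperature T: softmax(logits / T).
    T > 1 yields a softer (flatter) distribution over the seven classes."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()
```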
Similar conclusions can be drawn for the predictions of intended emotion (Tables \[tab:Arousal\_results\_Intended\] and \[tab:Valence\_results\_Intended\]). Yet, we observe that for both models, the one with fully connected layers and the one with the LSTM structure, the accuracy for predicting experienced emotion is higher than that for intended emotion. However, the Pearson correlation between the predicted and ground-truth experienced valence is lower than in the case of intended emotion. This is consistent with the inter-annotator agreement statistics shown in Table 11 in [@zlatintsi2017cognimuse], which indicate that individual experienced annotations are highly subjective and can vary between annotators, whereas the intended annotations come from a single annotator. ![Continuous arousal (a) and valence (b) values for experienced emotion of the “[*Gladiator*]{}” movie clip from the extended COGNIMUSE dataset.[]{data-label="fig:visualization_GLA_Experienced"}](Arousal_Valence_Experienced_GLA.png) ![Continuous arousal (a) and valence (b) values for experienced emotion of the “[*Ratatouille*]{}” movie clip from the extended COGNIMUSE dataset.[]{data-label="fig:visualization_RAT_Experienced"}](Arousal_Valence_Experienced_RAT.png) #### Analysis of the importance of temporal memory As described above, we propose two models, one with only fully connected layers and the other with the LSTM structure. Using a fusion of features extracted from RGB frames, optical flow, and audio, the model with fully connected layers has a higher accuracy than the LSTM approach for all predicted emotion values. It is surprising that the LSTM does not bring any improvement in accuracy. We believe this may be because in both the fully connected model and the LSTM, we use audio and optical flow features that span 400ms into the past, and this might be sufficient to carry the emotional content of the current time point.
![Continuous arousal (a) and valence (b) values for intended emotion of the “[*Gladiator*]{}” movie clip from the extended COGNIMUSE dataset.[]{data-label="fig:visualization_GLA_Intended"}](Arousal_Valence_GLA.png) ![Continuous arousal (a) and valence (b) values for intended emotion of the “[*Ratatouille*]{}” movie clip from the extended COGNIMUSE dataset.[]{data-label="fig:visualization_RAT_Intended"}](Arousal_Valence_RAT.png) #### Comparison to state-of-the-art results We compare our approach to @malandrakis2011supervised for emotion prediction, in which several video, audio and music features are used as inputs to hidden Markov models to estimate valence and arousal values separately. Results are shown in Tables \[tab:Valence\_results\_Intended\] and \[tab:Arousal\_results\_Intended\] for valence and arousal prediction respectively. We compare these results for the intended emotion values, as previous work used these annotations to report their results. Our model outperforms the previous research, even when single feature modalities are used in our case. We note that while we treat all videos equally, @malandrakis2011supervised do not extract image features from cartoon movies, as they argue that the video at the image level is very different from other movies. Using optical flow and pre-trained CNNs comes with an advantage, since image features capture the semantic information and the motion regardless of intensity, color, and whether it is a cartoon or a movie with real people.
#### Visualization of the predicted Valence/Arousal values We visualize the ground truth and predicted continuous arousal and valence values for two movie clips, namely “Gladiator” (a movie with actors) and “Ratatouille” (an animated movie), for both experienced (Figures \[fig:visualization\_GLA\_Experienced\] - \[fig:visualization\_RAT\_Experienced\]) and intended emotion (Figures \[fig:visualization\_GLA\_Intended\]-\[fig:visualization\_RAT\_Intended\]), predicted by our model with fully connected layers. We observe that the arousal predictions closely match the ground truth for intended and experienced emotion for both movies. The Pearson correlation coefficients for arousal and valence are $0.77$ and $0.75$ respectively for “Gladiator” and $0.74$ and $0.41$ respectively for “Ratatouille”. We notice that the prediction and ground truth curves are less correlated for the valence dimension. The Pearson correlation coefficients of intended and experienced emotion in terms of arousal and valence are $0.34$ and $0.24$ respectively for “Gladiator”, while those coefficients are $0.16$ and $0.29$ respectively for the “Ratatouille” movie clip. Conclusion ========== In this study, we presented a multimodal approach to predict evoked/experienced emotions from videos. This approach was evaluated using both the experienced and intended emotion annotations from the extended COGNIMUSE dataset. In contrast to many existing studies, we do not predict emotion from faces in the videos, but rather focus on the emotion that the film *evokes* in its viewers. We trained multiple models, both with and without an LSTM component, and evaluated their performance when using different input modalities (only audio features, only motion features, only image features) and their combinations.
The resulting models show a very good performance for the audio based models, which may indicate that either the audio features are better able to capture the evoked emotion than the video features, or that audio may have a bigger influence on emotions than images. When combining all features, we are able to reach the highest performance. We also compared the effect of taking into consideration the sequential dependency of emotion by using an LSTM based model, with a model that does not include a temporal component but uses only fully connected layers. While both models provide high-accuracy prediction for the arousal dimension, the model with only fully connected layers achieves a significantly higher performance for the valence prediction task. In future research, this model may be further improved upon by including more audio / video features and exploring other neural network architectures. Acknowledgements {#acknowledgements .unnumbered} ---------------- This work was funded by the SUTD-MIT IDC grant (IDG31800103), SMART-MIT grant (ING1611118-ICT), and MOE Academic Research Fund (AcRF) Tier 2 (MOE2018-T2-2-161). H.T.P.T. was also supported by the SUTD President’s Graduate Fellowship. [^1]: Pre-trained model available at: https://github.com/jeffreyhuang1/two-stream-action-recognition
--- abstract: 'We study the computational complexity of two well-known graph transversal problems, namely [Subset Feedback Vertex Set]{} and [Subset Odd Cycle Transversal]{}, by restricting the input to $H$-free graphs, that is, to graphs that do not contain some fixed graph $H$ as an induced subgraph. By combining known and new results, we determine the computational complexity of both problems on $H$-free graphs for every graph $H$ except when $H=sP_1+P_4$ for some $s\geq 1$. As part of our approach, we introduce the [Subset Vertex Cover]{} problem and prove that it is polynomial-time solvable for $(sP_1+P_4)$-free graphs for every $s\geq 1$.' author: - Nick Brettell - Matthew Johnson - Giacomo Paesani - Daniël Paulusma bibliography: - 'mybib.bib' title: 'Computing Subset Transversals in $H$-Free Graphs[^1]' --- Introduction ============ The central question in Graph Modification is whether or not a graph $G$ can be modified into a graph from a prescribed class ${\cal G}$ via at most $k$ graph operations from a prescribed set $S$ of permitted operations such as vertex or edge deletion. The *transversal* problems [Vertex Cover]{}, [Feedback Vertex Set]{} and [Odd Cycle Transversal]{} are classical problems of this kind. For example, the [Vertex Cover]{} problem is equivalent to asking if one can delete at most $k$ vertices to turn $G$ into a member of the class of edgeless graphs. The problems [Feedback Vertex Set]{} and [Odd Cycle Transversal]{} ask if a graph $G$ can be turned into, respectively, a forest or a bipartite graph by deleting vertices. We can relax the condition on belonging to a prescribed class to obtain some related *subset transversal* problems. We state these formally after some definitions. For a graph $G=(V,E)$ and a set $T \subseteq V$, an [*(odd) $T$-cycle*]{} is a cycle of $G$ (with an odd number of vertices) that intersects $T$. 
A set $S_T\subseteq V$ is a [*$T$-vertex cover*]{}, a [*$T$-feedback vertex set*]{} or an [*odd $T$-cycle transversal*]{} of $G$ if $S_T$ has at least one vertex of, respectively, every edge incident to a vertex of $T$, every $T$-cycle, or every odd $T$-cycle. For example, let $G$ be a star with centre vertex $c$, whose leaves form the set $T$. Then, both $\{c\}=V\setminus T$ and $T$ are $T$-vertex covers of $G$ but the first is considerably smaller than the second. See Figures \[subset-house\] and \[f-example\] for some more examples. Here are the problems:

[.99]{} <span style="font-variant:small-caps;">[[Subset Vertex Cover]{}]{}</span>\
----------------- ----------------------------------------------------------------------------
*    Instance:* [a graph $G=(V,E)$, a subset $T\subseteq V$ and a positive integer $k$.]{}
*Question:* [does $G$ have a $T$-vertex cover $S_T$ with $|S_T|\leq k$?]{}
----------------- ----------------------------------------------------------------------------

[.99]{} <span style="font-variant:small-caps;">[[Subset Feedback Vertex Set]{}]{}</span>\
----------------- ----------------------------------------------------------------------------
*    Instance:* [a graph $G=(V,E)$, a subset $T\subseteq V$ and a positive integer $k$.]{}
*Question:* [does $G$ have a $T$-feedback vertex set $S_T$ with $|S_T|\leq k$?]{}
----------------- ----------------------------------------------------------------------------

[.99]{} <span style="font-variant:small-caps;">[[Subset Odd Cycle Transversal]{}]{}</span>\
----------------- ----------------------------------------------------------------------------
*    Instance:* [a graph $G=(V,E)$, a subset $T\subseteq V$ and a positive integer $k$.]{}
*Question:* [does $G$ have an odd $T$-cycle transversal $S_T$ with $|S_T|\leq k$?]{}
----------------- ----------------------------------------------------------------------------

(Figures \[subset-house\] and \[f-example\]: example graphs; TikZ drawing code omitted.)

The [Subset Feedback Vertex Set]{} and [Subset Odd Cycle Transversal]{} problems are well known. The [Subset Vertex Cover]{} problem is introduced in this paper, and we are not aware of past work on this problem.
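As a quick mechanical check of the definitions, the star example above can be verified directly. The following sketch is plain Python with illustrative vertex names; it simply tests the defining condition of a $T$-vertex cover.

```python
def is_t_vertex_cover(edges, T, S):
    # By definition, S is a T-vertex cover iff every edge incident to a
    # vertex of T has at least one endpoint in S.
    return all(u in S or v in S for (u, v) in edges if u in T or v in T)

# The star example from the text: centre c, leaves forming T.
centre = "c"
leaves = {"u1", "u2", "u3"}
edges = [(centre, leaf) for leaf in leaves]
T = set(leaves)
```

Both $\{c\}$ and $T$ pass this check, while the empty set does not, matching the discussion above.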
On general graphs, [Subset Vertex Cover]{} is polynomially equivalent to [Vertex Cover]{}: to solve [Subset Vertex Cover]{} remove edges in the input graph that are not incident to any vertex of $T$ to yield an equivalent instance of [Vertex Cover]{}. However, this equivalence no longer holds for graph classes that are [*not*]{} closed under edge deletion. As the three problems are [[NP]{}]{}-complete, we consider the restriction of the input to special graph classes in order to better understand which graph properties cause the computational hardness. Instead of classes closed under edge deletion, we focus on classes of graphs closed under vertex deletion. Such classes are called [*hereditary*]{}. The reasons for this choice are threefold. First, hereditary graph classes capture many well-studied graph classes. Second, every hereditary graph class ${\cal G}$ can be characterized by a (possibly infinite) set ${\cal F}_{\cal G}$ of forbidden induced subgraphs. This enables us to initiate a [*systematic*]{} study, starting from the case where $|{\cal F}_{\cal G}|=1$. Third, we aim to extend and strengthen existing complexity results (that are for hereditary graph classes). If ${\cal F}_{\cal G}=\{H\}$ for some graph $H$, then ${\cal G}$ is *monogenic*, and every $G\in {\cal G}$ is *$H$-free*. Our research question is: [*How does the structure of a graph $H$ influence the computational complexity of a subset transversal problem for input graphs that are $H$-free?*]{} As a general strategy one might first try to prove that the restriction to $H$-free graphs is [[NP]{}]{}-complete if $H$ contains a cycle or an induced claw (the 4-vertex star). This is usually done by showing, respectively, that the problem is [[NP]{}]{}-complete on graphs of arbitrarily large girth (the length of a shortest cycle) and on line graphs, which form a subclass of claw-free graphs. If this is the case, then it remains to consider the case where $H$ has no cycle, and has no claw either. 
So $H$ is a *linear forest*, that is, the disjoint union of one or more paths. [**Existing Results.**]{} As [[NP]{}]{}-completeness results for transversal problems carry over to subset transversal problems, we first discuss results on [Feedback Vertex Set]{} and [Odd Cycle Transversal]{} for $H$-free graphs. By Poljak’s construction [@Po74], [Feedback Vertex Set]{} is [[NP]{}]{}-complete for graphs of girth at least $g$ for every integer $g\geq 3$. The same holds for [Odd Cycle Transversal]{} [@CHJMP18]. Moreover, [Feedback Vertex Set]{} [@Sp83] and [Odd Cycle Transversal]{} [@CHJMP18] are [[NP]{}]{}-complete for line graphs and thus for claw-free graphs. Hence, both problems are [[NP]{}]{}-complete for $H$-free graphs if $H$ has a cycle or claw. Both problems are polynomial-time solvable for $P_4$-free graphs [@BK85], for $sP_2$-free graphs for every $s\geq 1$ [@CHJMP18] and for $(sP_1+P_3)$-free graphs for every $s\geq1$ [@DFJPPP19]. In addition, [Odd Cycle Transversal]{} is [[NP]{}]{}-complete for $(P_2+P_5,P_6)$-free graphs [@DFJPPP19]. Very recently, Abrishami et al. showed that [Feedback Vertex Set]{} is polynomial-time solvable for $P_5$-free graphs [@ACPRS20]. We summarize as follows ($F{\subseteq_i}G$ means that $F$ is an induced subgraph of $G$; see Section \[s-pre\] for the other notation used). \[t-known\] For a graph $H$, [Feedback Vertex Set]{} on $H$-free graphs is polynomial-time solvable if $H{\subseteq_i}P_5$, $H{\subseteq_i}sP_1+P_3$ or $H{\subseteq_i}sP_2$ for some $s\geq 1$, and [[NP]{}]{}-complete if $H{\supseteq_i}C_r$ for some $r\geq 3$ or $H{\supseteq_i}K_{1,3}$. \[t-known2\] For a graph $H$, [Odd Cycle Transversal]{} on $H$-free graphs is polynomial-time solvable if $H=P_4$, $H{\subseteq_i}sP_1+P_3$ or $H{\subseteq_i}sP_2$ for some $s\geq 1$, and [[NP]{}]{}-complete if $H{\supseteq_i}C_r$ for some $r\geq 3$, $H{\supseteq_i}K_{1,3}$, $H{\supseteq_i}P_6$ or $H{\supseteq_i}P_2+P_5$. 
We note that no integer $r$ is known such that [Feedback Vertex Set]{} is [[NP]{}]{}-complete for $P_r$-free graphs. This situation changes for [Subset Feedback Vertex Set]{} which is, unlike [Feedback Vertex Set]{}, [[NP]{}]{}-complete for split graphs (that is, $(2P_2,C_4,C_5)$-free graphs), as shown by Fomin et al. [@FHKPV14]. Papadopoulos and Tzimas [@PT19; @PT20] proved that [Subset Feedback Vertex Set]{} is polynomial-time solvable for $sP_1$-free graphs for any $s\geq 1$, co-bipartite graphs, interval graphs and permutation graphs, and thus $P_4$-free graphs. Some of these results were generalized by Bergougnoux et al. [@BPT19], who solved an open problem of Jaffke et al. [@JKT20] by giving an $n^{O(w^2)}$-time algorithm for [Subset Feedback Vertex Set]{} given a graph and a decomposition of this graph of mim-width $w$. This does not lead to new results for $H$-free graphs: a class of $H$-free graphs has bounded mim-width if and only if $H{\subseteq_i}P_4$ [@BHMPP]. We are not aware of any results on [Subset Odd Cycle Transversal]{} for $H$-free graphs, but note that this problem generalizes [Odd Multiway Cut]{}, just as [Subset Feedback Vertex Set]{} generalizes [Node Multiway Cut]{}, another well-studied problem. We refer to a large body of literature [@CFLMRS17; @CPPW13; @FHKPV14; @GHKS14; @HK18; @KKK12; @KK12; @KW12; @LMRS17; @IWY16] for further details, in particular for parameterized and exact algorithms for [Subset Feedback Vertex Set]{} and [Subset Odd Cycle Transversal]{}. These algorithms are beyond the scope of this paper. [**Our Results.**]{} We significantly extend the known results for [Subset Feedback Vertex Set]{} and [Subset Odd Cycle Transversal]{} on $H$-free graphs. These new results lead us to the following two almost-complete dichotomies: \[t-main\] Let $H$ be a graph with $H\neq sP_1+P_4$ for all $s\geq 1$. 
Then [Subset Feedback Vertex Set]{} on $H$-free graphs is polynomial-time solvable if $H=P_4$ or $H{\subseteq_i}sP_1+P_3$ for some $s\geq 1$ and [[NP]{}]{}-complete otherwise. \[t-main2\] Let $H$ be a graph with $H\neq sP_1+P_4$ for all $s\geq 1$. Then [Subset Odd Cycle Transversal]{} on $H$-free graphs is polynomial-time solvable if $H=P_4$ or $H{\subseteq_i}sP_1+P_3$ for some $s\geq 1$ and [[NP]{}]{}-complete otherwise.

(Figure: the graphs $P_4$, $P_5$, $P_6$, $P_2+P_5$, $sP_1+P_3$, $sP_2$, $C_5$ and $K_{1,3}$; TikZ drawing code omitted.)

Though the proved complexities of [Subset Feedback Vertex Set]{} and [Subset Odd Cycle Transversal]{} are the same on $H$-free graphs, the algorithm that we present for [<span style="font-variant:small-caps;">Subset Odd Cycle Transversal</span>]{} on $(sP_1+P_3)$-free graphs is more technical compared to the algorithm for [<span style="font-variant:small-caps;">Subset Feedback Vertex Set</span>]{}, and considerably generalizes the transversal algorithms for $(sP_1+P_3)$-free graphs of [@DFJPPP19]. There is further evidence that [<span style="font-variant:small-caps;">Subset Odd Cycle Transversal</span>]{} is a more challenging problem than [<span style="font-variant:small-caps;">Subset Feedback Vertex Set</span>]{}. For example, the best-known parameterized algorithm for [Subset Feedback Vertex Set]{} runs in $O^*(4^k)$ time [@IWY16], but the best-known run-time for [Subset Odd Cycle Transversal]{} is $O^*(2^{O(k^3 \log k)})$ [@LMRS17]. Moreover, it is not known if there is an [[XP]{}]{} algorithm for [<span style="font-variant:small-caps;">Subset Odd Cycle Transversal</span>]{} in terms of mim-width in contrast to the known [[XP]{}]{} algorithm for [<span style="font-variant:small-caps;">Subset Feedback Vertex Set</span>]{} [@BPT19]. In Section \[s-pre\] we introduce our terminology.
In Section \[s-svc\] we present some results for [Subset Vertex Cover]{}: the first result shows that [Subset Vertex Cover]{} is polynomial-time solvable for $(sP_1+P_4)$-free graphs for every $s\geq 1$, and we later use this as a subroutine to obtain a polynomial-time algorithm for [<span style="font-variant:small-caps;">Subset Odd Cycle Transversal</span>]{} on $P_4$-free graphs. We present our results on [Subset Feedback Vertex Set]{} and [Subset Odd Cycle Transversal]{} in Sections \[s-sfvs\] and \[s-soct\], respectively. In Section \[s-con\] on future work we discuss [Subset Vertex Cover]{} in more detail. Preliminaries {#s-pre} ============= We consider undirected, finite graphs with no self-loops and no multiple edges. Let $G=(V,E)$ be a graph, and let $S\subseteq V$. The graph $G[S]$ is the subgraph of $G$ induced by $S$. We write $G-S$ to denote the graph $G[V\setminus S]$. Recall that for a graph $F$, we write $F{\subseteq_i}G$ if $F$ is an induced subgraph of $G$. The cycle and path on $r$ vertices are denoted $C_r$ and $P_r$, respectively. We say that $S$ is [*independent*]{} if $G[S]$ is edgeless, and that $S$ is a [*clique*]{} if $G[S]$ is [*complete*]{}, that is, contains every possible edge between two vertices. We let $K_r$ denote the complete graph on $r$ vertices, and $sP_1$ denote the graph whose vertices form an independent set of size $s$. A [*(connected) component*]{} of $G$ is a maximal connected subgraph of $G$. The graph $\overline{G}=(V,\{uv\; |\; uv\not \in E\; \mbox{and}\; u\neq v\})$ is the *complement* of $G$. The *neighbourhood* of a vertex $u\in V$ is the set $N_G(u)=\{v\; |\; uv\in E\}$. For $U\subseteq V$, we let $N_G(U)=\bigcup_{u\in U}N(u)\setminus U$. The [*closed*]{} neighbourhoods of $u$ and $U$ are denoted by $N_G[u]=N_G(u)\cup \{u\}$ and $N_G[U]=N_G(U)\cup U$, respectively. We omit subscripts when there is no ambiguity. Let $T\subseteq V$ be such that $S\cap T=\emptyset$. 
Then $S$ is *complete* to $T$ if every vertex of $S$ is adjacent to every vertex of $T$, and $S$ is *anti-complete* to $T$ if there are no edges between $S$ and $T$. In the first case, $S$ is also said to be *complete* to $G[T]$, and in the second case we say it is *anti-complete* to $G[T]$. We say that $G$ is a *forest* if it has no cycles, and, furthermore, that $G$ is a *linear forest* if it is the disjoint union of one or more paths. The graph $G$ is *bipartite* if $V$ can be partitioned into at most two independent sets. A graph is *complete bipartite* if its vertex set can be partitioned into two independent sets $X$ and $Y$ such that $X$ is complete to $Y$. We denote such a graph by $K_{|X|,|Y|}$. If $X$ or $Y$ has size $1$, the complete bipartite graph is a [*star*]{}; recall that $K_{1,3}$ is also called a claw. A graph $G$ is a [*split graph*]{} if it has a bipartition $(V_1,V_2)$ such that $G[V_1]$ is a clique and $G[V_2]$ is an independent set. A graph is split if and only if it is $(C_4, C_5, 2P_2)$-free [@FH77]. Let $G_1$ and $G_2$ be two vertex-disjoint graphs. The *union* operation $+$ creates the disjoint union $G_1+\nobreak G_2$ of $G_1$ and $G_2$ (recall that $G_1+G_2$ is the graph with vertex set $V(G_1)\cup V(G_2)$ and edge set $E(G_1)\cup E(G_2)$). The *join* operation adds an edge between every vertex of $G_1$ and every vertex of $G_2$. The graph $G$ is a *cograph* if $G$ can be generated from $K_1$ by a sequence of join and union operations. A graph is a cograph if and only if it is $P_4$-free (see, for example, [@BLS99]). It is also well known [@CLS81] that a graph $G$ is a cograph if and only if $G$ allows a unique tree decomposition called the [*cotree*]{} $T_G$ of $G$, which has the following properties: - The root $r$ of $T_G$ corresponds to the graph $G_r=G$. - Each leaf $x$ of $T_G$ corresponds to exactly one vertex of $G$, and vice versa. Hence $x$ corresponds to a unique single-vertex graph $G_x$. 
- Each internal node $x$ of $T_G$ has at least two children, is labelled $\oplus$ or $\otimes$, and corresponds to an induced subgraph $G_x$ of $G$ defined as follows: - if $x$ is a $\oplus$-node, then $G_x$ is the disjoint union of all graphs $G_y$ where $y$ is a child of $x$; - if $x$ is a $\otimes$-node, then $G_x$ is the join of all graphs $G_y$ where $y$ is a child of $x$. - Labels of internal nodes on the (unique) path from any leaf to $r$ alternate between $\oplus$ and $\otimes$. Note that $T_G$ has $O(n)$ vertices. We modify $T_G$ into a [*modified cotree*]{} $T_G'$ in which each internal node has exactly two children by applying the following well-known procedure (see for example [@BM93]). If an internal node $x$ of $T_G$ has more than two children, pick two of them, say $y_1$ and $y_2$; remove the edges $xy_1$ and $xy_2$ and add a new vertex $x'$ with edges $xx'$, $x'y_1$ and $x'y_2$. If $x$ is a $\oplus$-node, then $x'$ is a $\oplus$-node. If $x$ is a $\otimes$-node, then $x'$ is a $\otimes$-node. Applying this rule exhaustively yields $T_G'$. As $T_G$ has $O(n)$ vertices, constructing $T_G'$ from $T_G$ takes linear time. This leads to the following result, due to Corneil, Perl and Stewart, who proved it for cotrees. \[l-cotree\] Let $G$ be a graph with $n$ vertices and $m$ edges. Then deciding whether or not $G$ is a cograph, and constructing a modified cotree $T_G'$ (if it exists) takes time $O(n+m)$. We also consider optimization versions of subset transversal problems, in which case we have instances $(G,T)$ (instead of instances $(G,T,k)$). We say that a set $S\subseteq V(G)$ is a [*solution*]{} for an instance $(G,T)$ if $S$ is a $T$-transversal (of whichever kind we are concerned with). A solution $S$ is [*smaller*]{} than a solution $S'$ if $|S|<|S'|$, and a solution $S$ is [*minimum*]{} if $(G,T)$ does not have a solution smaller than $S$, and it is [*maximum*]{} if there is no larger solution.
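The binarization procedure for cotrees can be sketched as follows. This is a minimal illustration assuming a nested-tuple encoding of the cotree (a leaf is a vertex name; an internal node is a `(label, children)` pair); this representation is our own convenience, not the one used in the cited works.

```python
def binarize(node):
    # A cotree node is either a leaf (a vertex of G, here a string) or a
    # pair (label, children) with label in {"+", "x"} ("+" for the union
    # node, "x" for the join node) and at least two children.
    if not isinstance(node, tuple):
        return node  # leaf
    label, children = node
    children = [binarize(child) for child in children]
    # While the node has more than two children, detach two of them and
    # hang them below a fresh node x' carrying the same label as x.
    while len(children) > 2:
        children = [(label, [children[0], children[1]])] + children[2:]
    return (label, children)

def max_children(node):
    # Largest number of children over all internal nodes.
    if not isinstance(node, tuple):
        return 0
    _, children = node
    return max([len(children)] + [max_children(child) for child in children])

def leaves(node):
    # The vertices of G represented by the (sub)cotree.
    if not isinstance(node, tuple):
        return [node]
    _, children = node
    return [v for child in children for v in leaves(child)]

# Example: a union node with four children collapses to nested binary nodes.
t = ("+", ["a", "b", "c", "d"])
bt = binarize(t)
```

Each pass of the `while` loop performs one application of the rule in the text, so the output has every internal node with exactly two children while representing the same graph.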
We will use the following general lemma, which was implicitly used in [@PT20]. \[bound\] Let $S$ be a minimum solution for an instance $(G,T)$ of a subset transversal problem. Then $|S \setminus T| \le |T \setminus S|$. For contradiction, assume that $|S \setminus T| > |T \setminus S|$. Then $|T|<|S|$ (see also Figure \[scheme\]). This means that $T$ is a smaller solution than $S$, a contradiction.

(Figure \[scheme\]: the partition of $V$ by $S$ and $T$ into $S\cap T$, $S\setminus T$, $T\setminus S$ and $V\setminus (S\cup T)$; drawing code omitted.)

Let $T\subseteq V$ be a vertex subset of a graph $G=(V,E)$. Recall that a cycle is a $T$-cycle if it contains a vertex of $T$. A subgraph of $G$ is a [*$T$-forest*]{} if it has no $T$-cycles. Recall also that a cycle is odd if it has an odd number of edges. A subgraph of $G$ is [*$T$-bipartite*]{} if it has no odd $T$-cycles. Recall that a set $S_T\subseteq V$ is a [$T$-vertex cover]{}, a [$T$-feedback vertex set]{} or an [odd $T$-cycle transversal]{} of $G$ if $S_T$ has at least one vertex of, respectively, every edge incident to a vertex of $T$, every $T$-cycle, or every odd $T$-cycle. Note that $S_T$ is a $T$-feedback vertex set if and only if $G[V\setminus S_T]$ is a $T$-forest, and $S_T$ is an odd $T$-cycle transversal if and only if $G[V\setminus S_T]$ is $T$-bipartite. A [*$T$-path*]{} is a path that contains a vertex of $T$. A $T$-path is [*odd*]{} (or [*even*]{}) if the number of edges in the path is odd (or even, respectively). We will use the following easy lemma, which proves that $T$-forests and $T$-bipartite graphs can be recognized in polynomial time. It combines results claimed but not proved in [@LMRS17; @PT20]. \[st-test\] Let $G=(V,E)$ be a graph and $T\subseteq V$. Then deciding whether or not $G$ is a $T$-forest or $T$-bipartite takes $O(n+m)$ time.
A [*block*]{} of $G$ is a maximal 2-connected subgraph of $G$ and is [*non-trivial*]{} if it contains a cycle, or, equivalently, at least three vertices. Suppose that we have a block decomposition of $G$; it is well known that this can be found in $O(n+m)$ time (see for example [@HT73]). It is clear that $G$ is a $T$-forest if and only if no non-trivial block contains a vertex of $T$. We claim that $G$ is $T$-bipartite if and only if no non-bipartite block contains a vertex of $T$. To see this note first that the sufficiency is obvious. We will show that if a vertex $t$ of $T$ belongs to a block $B$ that contains an odd cycle $C$, then $t$ belongs to an odd cycle. If $t$ is in $C$, we are done. Otherwise find two paths $P$ and $P'$ from $t$ to, respectively, distinct vertices $u$ and $u'$ in $C$. We can assume that the paths contain no other vertex of $C$ (else we truncate them) and that, as $B$ is 2-connected, they contain no common vertex other than $t$. We can form two cycles that contain $t$ by adding to $P+P'$ each of the two paths between $u$ and $u'$ in $C$. As $C$ is an odd cycle, the lengths of these two paths, and therefore the lengths of the two cycles, have distinct parity. Thus $t$ belongs to an odd cycle. Finally we note that the checks of the block decomposition needed to decide whether or not $G$ is a $T$-forest or $T$-bipartite can be done in $O(n+m)$ time. Subset Vertex Cover {#s-svc} =================== In this section we present some results on [Subset Vertex Cover]{}, some of which we will need later on. \[svc-p4\] [Subset Vertex Cover]{} can be solved in polynomial time for $P_4$-free graphs. Let $G$ be a cograph with $n$ vertices and $m$ edges. First construct a modified cotree $T_G'$ and then consider each node of $T_G'$ starting at the leaves of $T_G'$ and ending at the root $r$. Let $x$ be a node of $T_G'$. We let $S_x$ denote a minimum $(T\cap V(G_x))$-vertex cover of $G_x$. If $x$ is a leaf, then $G_x$ is a $1$-vertex graph. 
Hence, we can let $S_x=\emptyset$. Now suppose that $x$ is a $\oplus$-node. Let $y$ and $z$ be the two children of $x$. Then, as $G_x$ is the disjoint union of $G_y$ and $G_z$, we can let $S_x=S_y\cup S_z$. Finally suppose that $x$ is a $\otimes$-node. Let $y$ and $z$ be the two children of $x$. As $G_x$ is the join of $G_y$ and $G_z$ we observe the following: if $V(G_x)\setminus S_x$ contains a vertex of $T\cap V(G_y)$, then $V(G_z)\subseteq S_x$. Similarly, if $V(G_x)\setminus S_x$ contains a vertex of $T\cap V(G_z)$, then $V(G_y)\subseteq S_x$. Hence, we let $S_x$ be the smallest of the three sets $S_y\cup V(G_z)$, $S_z\cup V(G_y)$ and $T\cap V(G_x)$. Constructing $T_G'$ takes $O(n+m)$ time by Lemma \[l-cotree\]. As $T_G'$ has $O(n)$ nodes and processing a node takes $O(1)$ time, the total running time is $O(n+m)$. The following lemma generalizes a corresponding well-known observation for [Vertex Cover]{}. \[svc-ext\] Let $H$ be a graph. If [Subset Vertex Cover]{} is polynomial-time solvable for $H$-free graphs, then it is for $(P_1+H)$-free graphs as well. Let $G=(V,E)$ be a $(P_1+H)$-free graph and let $T\subseteq V$. Let $S_T$ be a minimum $T$-vertex cover of $G$. For each vertex $u\in T$ we consider the option that $u$ belongs to $V\setminus S_T$. If so, then every vertex of $N(u)$ belongs to $S_T$. Let $G'=G-N[u]$ and let $T'=T\setminus N[u]$. As $G'$ is $H$-free, we find a minimum $T'$-vertex cover $S_{T'}$ of $G'$ in polynomial time. Over all choices of $u$, we remember the smallest of the sets $S_{T'}\cup N(u)$ and compare its size with that of $T$ (note that $T$ itself is always a $T$-vertex cover) to find $S_T$ (or some other minimum solution for $(G,T)$). Lemma \[svc-p4\], combined with $s$ applications of Lemma \[svc-ext\], yields the following result. \[c-svc\] For every integer $s\geq 1$, [Subset Vertex Cover]{} can be solved in polynomial time for $(sP_1+P_4)$-free graphs. Subset Feedback Vertex Set {#s-sfvs} ========================== In this section we prove Theorem \[t-main\].
Our contribution to this theorem is \[sfvs-sp1p3\], which is the case where $H = sP_1+P_3$. In the next section, we present an analogous result for [Subset Odd Cycle Transversal]{}. The proofs are similar in outline, but the latter requires additional insights. We require two lemmas. In the first lemma, note that the bound of $4s-2$ is not necessarily tight, but is sufficient for our needs. \[tree-sp1p3\] Let $s$ be a non-negative integer, and let $R$ be an $(sP_1 + P_3)$-free tree. Then either 1. $|V(R)| \le \max\{7,4s-2\}$, or 2. $R$ has precisely one vertex $r$ of degree more than $2$ and at most $s-1$ vertices of degree $2$, each adjacent to $r$. Moreover, $r$ has at least $3s-1$ neighbours. If $R$ has no vertices of degree more than $2$, then $R$ is a path; since a path on more than $2s+2$ vertices contains an induced $sP_1+P_3$ subgraph, it follows that $|V(R)|\leq 2s+2\leq\max\{7,4s-2\}$. Now let $r$ be a vertex of degree more than $2$, and let $x$, $y$ and $z$ be distinct neighbours of $r$. We view $r$ as the root of the tree, and for $v\in V(R)$ we use $T_v$ to denote the subtree rooted at $v$. Suppose that $T_x$ has a vertex of degree at least $2$. Then $T_x$ has an induced $P_3$ subgraph, so $R - (V(T_x) \cup \{r\})$ is $sP_1$-free, and hence, by [@PT19 Observation 1], this forest consists of at most $2(s-1)$ vertices. Likewise, $R[\{y,r,z\}] \cong P_3$, so $T_x -x$ is $sP_1$-free, and hence consists of at most $2(s-1)$ vertices. Thus $|V(R)| \le 2(s-1) + 2(s-1) + 2 = 4s-2$. We may now assume that for each $v \in N(r)$, the subtree $T_v$ has no vertices of degree at least $2$; that is, either $T_v \cong P_1$ or $T_v \cong P_2$. It remains to show that when (i) does not hold, at most $s-1$ of the $T_v$ subgraphs are isomorphic to $P_2$. Towards a contradiction, suppose that $R$ has $s$ vertices at distance $2$ from $r$, and $|V(R)| > \max\{7,4s-2\}$. Since $|V(R)| > 2(s+1) + 1$ for any non-negative integer $s$, the vertex $r$ has at least $s+2$ neighbours.
Without loss of generality, label the neighbours of $r$ as $v_1, v_2, \dotsc, v_{\deg(r)}$ such that $T_{v_i} \cong P_2$ for each $i \in \{1,\dotsc,s\}$. Then $R[\{v_{s+1},r,v_{s+2}\}] \cong P_3$, and $T_{v_i} - \{v_i\} \cong P_1$ for each $i \in \{1,\dotsc,s\}$; a contradiction. Finally, $|N_R(r)|+(s-1)+1\geq |V(R)|\geq 4s-1$, so $|N_R(r)|\geq 3s-1$.

(Figure: the tree in case (ii) of Lemma \[tree-sp1p3\]: the root $r$ has at least $3s-1$ neighbours, at most $s-1$ of which have a pendant neighbour of their own; drawing code omitted.)

We can extend “partial” solutions to full solutions in polynomial time as follows. \[sfvs-solvep3free\] Let $G=(V,E)$ be a graph with a set $T \subseteq V$. Let $V' \subseteq V$ and $S'_T \subseteq V'$ such that $S'_T$ is a $T$-feedback vertex set of $G[V']$, and let $Z = V \setminus V'$. Suppose that $G[Z]$ is $P_3$-free, and $|N_{G-S'_T}(Z)| \le 1$. Then there is a polynomial-time algorithm that finds a minimum $T$-feedback vertex set $S_T$ of $G$ such that $S'_T \subseteq S_T$ and $V' \setminus S'_T \subseteq V \setminus S_T$. Since $G[Z]$ is $P_3$-free, it is a disjoint union of complete graphs. Let $G' = G-S'_T$, and consider a $T$-cycle $C$ in $G'$. Then $C$ contains at least one vertex of $Z$. If $N_{G'}(Z) = \emptyset$, then $C$ is contained in a component of $G[Z]$.
On the other hand, if $N_{G'}(Z) = \{y\}$, say, then $y$ is a cut-vertex of $G'$, so there exists a component $G[U]$ of $G[Z]$ such that $C$ is contained in $G[U \cup \{y\}]$. Hence, we can consider each component of $G[Z]$ independently: for each component $G[U]$ it suffices to find the maximum subset $U'$ of $U$ such that $G[U'\cup N_{G'}(U)]$ contains no $T$-cycles. Then $U' \subseteq F_T$ and $U \setminus U' \subseteq S_T$, where $F_T = V \setminus S_T$. Let $U \subseteq Z$ such that $G[U]$ is a component of $G[Z]$. Either $N_{G'}(U) \cap T = \emptyset$, or $N_{G'}(U) = \{y\}$ for some $y \in T$. First, consider the case where $N_{G'}(U) \cap T = \emptyset$. We find a set $U'$ that is a maximum subset of $U$ such that $G[U'\cup N_{G'}(U)]$ has no $T$-cycles. Clearly if $|U|=1$, then we can set $U'=U$. If $|U'| \ge 3$, then, since $U'$ is a clique, $U' \subseteq V \setminus T$. Thus, if $|U\setminus T|\geq 2$, then we set $U' = U\setminus T$. So it remains to consider when $|U|\geq 2$ but $|U\setminus T|\leq 1$. If there is some $u \in U$ that is anti-complete to $N_{G'}(U)$, then we can set $U'$ to be any $2$-element subset of $U$ containing $u$. Otherwise $N_{G'}(U) = \{y\}$ and $y$ is complete to $U$. In this case, for any $u \in U$, we set $U'=\{u\}$. Now we may assume that $N_{G'}(U) = \{y\}$ and $y \in T$. Again, we find a set $U'$ that is a maximum subset of $U$ such that $G[U'\cup \{y\}]$ has no $T$-cycles. Partition $U$ into $\{U_0,U_1\}$ where $u \in U_1$ if and only if $u$ is a neighbour of $y$. Since $y \in V' \setminus S_T'$, observe that $U'$ contains at most one vertex of $U_1$, otherwise $G[U' \cup \{y\}]$ has a $T$-cycle. Since $U'$ is a clique, if $|U'| \ge 3$ then $U' \subseteq U \setminus T$. So if $|U_0\setminus T|\geq 2$ and there is an element $u\in U_1\setminus T$, then we can set $U'=\{u\}\cup (U_0\setminus T)$. If $|U_0\setminus T|\geq 2$ but $U_1\setminus T=\emptyset$, then we can set $U'=U_0\setminus T$. 
So we may now assume that $|U_0\setminus T|\leq 1$. If $U_0\neq \emptyset$ and $|U|\geq 2$, then we set $U'$ to any $2$-element subset of $U$ containing some $u \in U_0$. Clearly if $|U| = 1$, then we can set $U'=U$. So it remains to consider when $U_0 = \emptyset$ and $|U_1| \ge 2$. In this case, we set $U' = \{u\}$ for an arbitrary $u \in U_1$. We now prove the main result of this section. \[sfvs-sp1p3\] For every integer $s\geq 0$, [Subset Feedback Vertex Set]{} can be solved in polynomial time for $(sP_1+P_3)$-free graphs. Let $G=(V,E)$ be an $(sP_1+P_3)$-free graph for some $s\geq 0$, and let $T\subseteq V$. We describe a polynomial-time algorithm for the optimization version of the problem on input $(G,T)$. Let $S_T \subseteq V$ such that $S_T$ is a minimum $T$-feedback vertex set of $G$, and let $F_T = V \setminus S_T$, so $G[F_T]$ is a maximum $T$-forest. Note that $G[F_T\cap T]$ is a forest. We consider three cases: either

1. $G[F_T \cap T]$ has at least $2s$ components;

2. $G[F_T \cap T]$ has fewer than $2s$ components, and each of these components consists of at most $\max\{7,4s-2\}$ vertices; or

3. $G[F_T \cap T]$ has fewer than $2s$ components, one of which consists of at least $\max\{8,4s-1\}$ vertices.

We describe polynomial-time subroutines that find a set $F_T$ such that $G[F_T]$ is a maximum $T$-forest in each of these three cases, giving a minimum solution $S_T = V \setminus F_T$ in each case. We obtain an optimal solution by running each of these subroutines in turn: of the (at most) three potential solutions, we output the one with minimum size. [**Case 1:**]{} $G[F_T\cap T]$ has at least $2s$ components. We begin by proving a sequence of claims that describe properties of a maximum $T$-forest $F_T$, when in Case 1. Since $G$ is $(sP_1+P_3)$-free, $F_T\cap T$ induces a $P_3$-free forest, so $G[F_T \cap T]$ is a disjoint union of graphs isomorphic to $P_1$ or $P_2$.
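Since the case analysis leans entirely on $(sP_1+P_3)$-freeness, it may help to make the forbidden pattern concrete. For fixed $s$, an induced $sP_1+P_3$ can be detected by brute force in $O(n^{s+3})$ time: an $(s+3)$-vertex subset induces a copy precisely when it spans exactly two edges sharing one endpoint. The sketch below is ours, for illustration only; the algorithm in the proof never needs to run such a test.

```python
from itertools import combinations

def has_induced_sp1_p3(adj, s):
    """Brute-force test for an induced sP1 + P3 in a graph given as a
    dict mapping each vertex to its set of neighbours.  O(n^(s+3)) for
    fixed s: an (s+3)-subset works iff it induces exactly two edges that
    share exactly one endpoint (the P3); the remaining s vertices are
    then automatically isolated in the induced subgraph."""
    for sub in combinations(list(adj), s + 3):
        edges = [(u, v) for u, v in combinations(sub, 2) if v in adj[u]]
        if len(edges) != 2:
            continue
        (a, b), (c, d) = edges
        if len({a, b} & {c, d}) == 1:
            return True
    return False
```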
Let $A \subseteq F_T \cap T$ such that $G[A]$ consists of precisely $2s$ components. Note that $|A| \le 4s$. We also let $Y = N(A) \cap F_T$, and partition $Y$ into $\{Y_1,Y_2\}$ where $y \in Y_1$ if $y$ has only one neighbour in $A$, whereas $y \in Y_2$ if $y$ has at least two neighbours in $A$. [*Claim 1: $|Y_2|\leq 1$.* ]{} [*Proof of Claim 1.*]{} Let $v \in Y_2$. Then $v$ has neighbours in at least $s+1$ of the components of $G[A]$, otherwise $G[A \cup \{v\}]$ contains an induced $sP_1+P_3$. Note also that $v$ has at most one neighbour in each component of $G[A]$, otherwise $G[F_T]$ has a $T$-cycle. Now suppose that $Y_2$ contains distinct vertices $v_1$ and $v_2$. Then, of the $2s$ components of $G[A]$, the vertices $v_1$ and $v_2$ each have some neighbour in $s+1$ of these components. So there are at least two components of $G[A]$ containing both a vertex adjacent to $v_1$, and a vertex adjacent to $v_2$. Let $A'$ and $A''$ be the vertex sets of two such components. Then $A' \cup A'' \cup \{v_1,v_2\} \subseteq F_T$, but $G[A' \cup A'' \cup \{v_1,v_2\}]$ has a $T$-cycle; a contradiction. [*Claim 2: $|Y|\leq 2s+1$.* ]{} [*Proof of Claim 2.*]{} By Claim 1, it suffices to prove that $|Y_1| \le 2s$. We argue that each component of $G[A]$ has at most one neighbour in $Y_1$, implying that $|Y_1| \le 2s$. Indeed, suppose that there is a component $C_A$ of $G[A]$ having two neighbours in $Y_1$, say $u_1$ and $u_2$. Then $G[V(C_A) \cup \{u_1,u_2\}]$ contains an induced $P_3$ that is anti-complete to $A\setminus V(C_A)$, contradicting that $G$ is $(sP_1+P_3)$-free. [*Claim 3: $Y_1$ is independent, and no component of $G[A]$ of size $2$ has a neighbour in $Y_1$.* ]{} [*Proof of Claim 3.*]{} Suppose that there are adjacent vertices $u_1$ and $u_2$ in $Y_1$. Let $a_i$ be the unique neighbour of $u_i$ in $A$ for $i \in \{1,2\}$. Note that $a_1 \neq a_2$, for otherwise $G[F_T]$ has a $T$-cycle. 
Then $\{a_1,u_1,u_2\}$ induces a $P_3$, so $G[\{u_1,u_2\} \cup A]$ contains an induced $sP_1+P_3$, which is a contradiction. We deduce that $Y_1$ is independent. Now let $\{a_1,a_2\} \subseteq A$ such that $G[\{a_1,a_2\}]$ is a component of $G[A]$, and suppose that $u_1 \in Y_1$ is adjacent to $a_1$. Then $a_1$ is the unique neighbour of $u_1$ in $A$, so $G[\{u_1,a_1,a_2\}] \cong P_3$. Thus $G[\{u_1\}\cup A]$ contains an induced $sP_1+P_3$, which is a contradiction. [*Claim 4: Let $Z = V \setminus N[A]$. Then $N(Z) \cap F_T \subseteq Y_2$.* ]{} [*Proof of Claim 4.*]{} Suppose that there exists $y \in Y_1$ that is adjacent to a vertex $c \in Z$. Let $a$ be the unique neighbour of $y$ in $A$. Then $G[\{c,y\}\cup A]$ contains an induced $sP_1+P_3$, which is a contradiction. So $Y_1$ is anti-complete to $Z$. Now, if $c \in Z$ is adjacent to a vertex in $N[A] \cap F_T$, then $c$ is adjacent to $y_2$ where $Y_2 = \{y_2\}$. (Figure \[structurefig\]: the sets $A$, $Y_1$ and the vertex $y_2$, together with the set $Z$ and a component $U$ of $G[Z]$.) We now describe the subroutine that finds an optimal solution in Case 1.
In this case, for any maximum $T$-forest $F_T$, there exists some set $A \subseteq T$ of size at most $4s$ such that $A \subseteq F_T$, and $G[A]$ consists of exactly $2s$ components, each isomorphic to either $P_1$ or $P_2$. Moreover, there is such an $A$ for which $N(A) \cap T \subseteq S_T$. Thus we guess a set $A' \subseteq T$ in $O(n^{4s})$ time, discarding those sets that do not induce a forest with exactly $2s$ components, and those that induce a component consisting of more than two vertices. For any such $F_T$ and $A'$, the set $N(A') \cap F_T$ has size at most $2s+1$, by Claim 2. Thus, in $O(n^{2s+1})$ time, we guess $Y' \subseteq N(A')$ with $|Y'| \le 2s+1$, and assume that $Y' \subseteq F_T$ whereas $N(A') \setminus Y' \subseteq S_T$. Let $Y_2'$ be the subset of $Y'$ that contains vertices that have at least two neighbours in $A'$. We discard any sets $Y'$ that do not satisfy Claims 1 or 3, or those sets for which $G[A'\cup Y']$ has a $T$-cycle on three vertices, one of which is the unique vertex of $Y_2'$. Let $Z=V\setminus N[A']$ (for example, see \[structurefig\]). Since $G[A']$ contains an induced $sP_1$ and $Z$ is anti-complete to $A'$, the subgraph $G[Z]$ is $P_3$-free. Now $N(Z) \cap F_T \subseteq Y_2'$ by Claim 4, where $|Y_2'| \le 1$ by Claim 1. Thus, by \[sfvs-solvep3free\], we can extend a partial solution $S'_T = N[A'] \setminus (A' \cup Y')$ of $G[N[A']]$ to a solution $S_T$ of $G$, in polynomial time. [**Case 2:**]{} $G[F_T\cap T]$ has at most $2s-1$ components, each of size at most $\max\{7,4s-2\}$. We guess sets $F \subseteq T$ and $S \subseteq V\setminus T$ such that $F_T \cap T = F$ and $S_T \setminus T = S$. Since $F$ has at most $(2s-1)\max\{7,4s-2\}$ vertices, there are $O(n^{\max\{14s-7,8s^2-8s+2\}})$ possibilities for $F$. By Lemma \[bound\], we may assume that $|S_T \setminus T| \le |F|$. So for each guessed $F$, there are at most $O(n^{\max\{14s-7,8s^2-8s+2\}})$ possibilities for $S$.
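The guessed pairs above (and the guesses in Case 3 below) are validated with the $T$-forest test of Lemma \[st-test\]. A simple version of such a test — a quadratic sketch of our own, not the lemma's linear-time implementation — uses the fact that a vertex $t$ lies on a cycle exactly when two of its neighbours are connected in $G - t$.

```python
def is_t_forest(adj, T):
    """Return True iff the graph (dict: vertex -> neighbour set) has no
    cycle through a vertex of T.  For each t in T, explore G - t from
    each neighbour of t in turn; if one neighbour is reached from
    another, t lies on a cycle.  O(|T| * (n + m)) overall."""
    for t in T:
        seen = set()
        for src in adj[t]:
            if src in seen:
                return False  # two neighbours of t meet in G - t
            seen.add(src)
            stack = [src]
            while stack:
                u = stack.pop()
                for w in adj[u]:
                    if w != t and w not in seen:
                        seen.add(w)
                        stack.append(w)
    return True
```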
For each $S$ and $F$, we set $S_T = (T \setminus F) \cup S$ and check, in $O(n+m)$ time by Lemma \[st-test\], if $G-S_T$ is a $T$-forest. In this way we exhaustively find all solutions satisfying Case 2, in $O(n^{2\max\{14s-7,8s^2-8s+2\}})$ time; we output the one of minimum size. [**Case 3:**]{} $G[F_T\cap T]$ has at most $2s-1$ components, one of which has size at least $\max\{8,4s-1\}$. By Lemma \[tree-sp1p3\], there is some subset $B \subseteq F_T\cap T$ such that $|B| \ge \max\{8,4s-1\}$, and $G[B]$ is a component of $G[F_T \cap T]$ that is a tree satisfying Lemma \[tree-sp1p3\](ii), as illustrated in \[figbigtree\]. In particular, there is a unique vertex $r \in B$ such that $r$ has degree more than $2$ in $G[B]$. Moreover, $G[F_T]$ has a component $G[D]$ that contains $B$, where $G[D]$ is a tree that also satisfies \[tree-sp1p3\](ii). Note that there are at most $s-1$ vertices in $N_{G[B]}(r)$ having a neighbour in $V \setminus T$. We guess a set $B' \subseteq T$ such that $|B'| = \max\{8,4s-1\}$. We also guess a set $L' \subseteq V \setminus T$ such that $|L'| \le s-1$. Let $D' = B' \cup L'$. We check that $G[D']$ has the following properties:

- $G[D']$ is a tree,

- $G[D']$ has a unique vertex $r'$ of degree more than $2$, with $r' \in B'$,

- $G[D']$ has at most $s-1$ vertices with distance $2$ from $r'$, and each of these vertices has degree $1$, and

- each vertex $v \in L'$ has degree $1$ in $G[D']$, and distance $2$ from $r'$.

We assume that $D'$ induces a subtree of the large component $G[D]$, where $r=r'$, and $D'$ contains $r$, all neighbours of $r$ with degree $2$ in $G[D]$, and all vertices at distance $2$ from $r$. In other words, $G[D']$ can be obtained from $G[D]$ by deleting some subset of the leaves of $G[D]$ that are adjacent to $r$. In particular, $D' \subseteq F_T$. We also assume that $L'$ is the set of all vertices of $V(D) \setminus T$ that have distance $2$ from $r$.
It follows from these assumptions that $N(D' \setminus \{r\}) \setminus \{r\} \subseteq S_T$. Let $Z = V \setminus N[D' \setminus \{r\}]$, and observe that each $z \in Z$ has at most one neighbour in $D'$ (if it has such a neighbour, this neighbour is $r$). So $N(Z) \cap F_T \subseteq \{r\}$. Towards an application of \[sfvs-solvep3free\], we claim that $G[Z]$ is $P_3$-free. Let $B_1 = B' \cap N(r)$. As $r$ has at least $3s-1$ neighbours in $G[B']$, by Lemma \[tree-sp1p3\], $G[B_1]$ contains an induced $sP_1$. Moreover, $N(B_1) \cap F_T \subseteq D'$. Since $G$ is $(sP_1+P_3)$-free, $G[Z]$ is $P_3$-free. We now apply \[sfvs-solvep3free\], which completes the proof. We are now ready to prove Theorem \[t-main\]. [**Theorem \[t-main\] (restated).**]{} [*Let $H$ be a graph with $H\neq sP_1+P_4$ for all $s\geq 1$. Then [Subset Feedback Vertex Set]{} on $H$-free graphs is polynomial-time solvable if $H=P_4$ or $H{\subseteq_i}sP_1+P_3$ for some $s\geq 1$ and is [[NP]{}]{}-complete otherwise.*]{} If $H$ has a cycle or claw, we use Theorem \[t-known\]. The cases $H=P_4$ and $H=2P_2$ follow from the corresponding results for permutation graphs [@PT19] and split graphs  [@FHKPV14]. The remaining case $H{\subseteq_i}sP_1+P_3$ follows from Theorem \[sfvs-sp1p3\]. Subset Odd Cycle Transversal {#s-soct} ============================ At the end of this section we prove Theorem \[t-main2\]. We need three new results to combine with existing knowledge. Our first result uses the reduction of [@PT19] which proved the analogous result for [Subset Feedback Vertex Set]{}. \[soct-split\] [Subset Odd Cycle Transversal]{} is [[NP]{}]{}-complete for the class of split graphs (or equivalently, $(C_4,C_5,2P_2)$-free graphs). We observe that the problem belongs to [[NP]{}]{}. To show [[NP]{}]{}-hardness, we reduce from [Vertex Cover]{}. Let a graph $G=(V,E)$ and a positive integer $k$ be an instance of [Vertex Cover]{}. From $G$, we construct a graph $G'$ as follows. Let $V(G')=V\cup E$. 
Add an edge between $e\in E$ and $v\in V$ in $G'$ if and only if $v$ is an end-vertex of $e$ in $G$. Add edges so that $V$ induces a clique of $G'$. Hence, $G'$ is a split graph with independent set $E$ and clique $V$. For example, when $G=P_4$, see \[constegfig\]. Let $T=E$. We show that $G$ has a vertex cover of size at most $k$ if and only if $G'$ has an odd $T$-cycle transversal of size at most $k$. First suppose that $G$ has a vertex cover $S$ of size at most $k$. Then $S$ is an odd $T$-cycle transversal of $G'$. Now suppose that $G'$ has an odd $T$-cycle transversal $S_T$ of size at most $k$. As every vertex of $E$ in $G'$ has degree $2$, we can replace every vertex of $E$ that belongs to $S_T$ by one of its neighbours to obtain an odd $T$-cycle transversal of size at most $|S_T|$. Hence we may assume, without loss of generality, that $S_T\cap E=\emptyset$. As a vertex of $E$ and its two neighbours in $V$ form a triangle, this means that $S_T$ contains at least one neighbour of every $e\in E$. Hence, $S_T$ is a vertex cover of $G$. (Figure \[constegfig\]: the split graph $G'$ for $G=P_4$, with clique $V$ and independent set $E$.) \[soct-p4\] [Subset Odd Cycle Transversal]{} can be solved in polynomial time for $P_4$-free graphs. Let $G$ be a cograph with $n$ vertices and $m$ edges, and let $T\subseteq V(G)$. First construct the modified cotree $T_G'$ and then consider each node of $T_G'$ starting at the leaves of $T_G'$ and ending in its root $r$. Let $x$ be a node of $T_G'$. We let $S_x$ denote a minimum odd $(T\cap V(G_x))$-cycle transversal of $G_x$.
If $x$ is a leaf, then $G_x$ is a 1-vertex graph. Hence, we can let $S_x=\emptyset$. Now suppose that $x$ is a $\oplus$-node. Let $y$ and $z$ be the two children of $x$. Then, as $G_x$ is the disjoint union of $G_y$ and $G_z$, we let $S_x=S_y\cup S_z$. Finally suppose that $x$ is a $\otimes$-node. Let $y$ and $z$ be the two children of $x$. Let $T_y=T\cap V(G_y)$ and $T_z=T\cap V(G_z)$. Let $B_x=V(G_x)\setminus S_x$. As $G_x$ is the join of $G_y$ and $G_z$ we observe the following. If $B_x\cap V(G_y)$ contains two adjacent vertices, at least one of which belongs to $T_y$, then $B_x\cap V(G_z)=\emptyset$ (as otherwise $G[B_x]$ has a triangle containing a vertex of $T$) and thus $V(G_z)\subseteq S_x$. In this case we may assume that $S_x=S_y\cup V(G_z)$. Similarly, if $B_x\cap V(G_z)$ contains two adjacent vertices, at least one of which belongs to $T_z$, then $B_x\cap V(G_y)=\emptyset$ and thus $V(G_y)\subseteq S_x$. In this case we may assume that $S_x=S_z\cup V(G_y)$. It remains to examine the case where the vertices of $T_y$ that belong to $B_x\cap V(G_y)$ are isolated vertices in $G_x[B_x\cap V(G_y)]$ and the vertices of $T_z$ that belong to $B_x\cap V(G_z)$ are isolated vertices in $G_x[B_x\cap V(G_z)]$. This is exactly the case when $S_x\cap V(G_y)$ is a $T_y$-vertex cover of $G_y$ and $S_x\cap V(G_z)$ is a $T_z$-vertex cover of $G_z$. We can compute these two vertex covers in polynomial time using Lemma \[svc-p4\] and compare the size of their union with the sizes of $T\cap V(G_x)$, $S_y\cup V(G_z)$, and $S_z\cup V(G_y)$. Let $S_x$ be a smallest set amongst these four sets. Constructing $T_G'$ takes $O(n+m)$ time by Lemma \[l-cotree\]. As $T_G'$ has $O(n)$ nodes and processing a node takes $O(n+m)$ time (due to the application of Lemma \[svc-p4\]), the total running time is $O(n^2+mn)$. The following result is the main result of this section. Its proof uses the same approach as the proof of Theorem \[sfvs-sp1p3\] but we need more advanced arguments.
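Before the main result, it may help to pin down the object all of these routines compute. The following brute force (exponential time, useful only as a testing oracle on small instances; it is not part of any algorithm in this section) searches for a smallest odd $T$-cycle transversal directly from the definition.

```python
from itertools import combinations

def has_odd_t_cycle(adj, T, removed=frozenset()):
    """Exhaustively search for an odd simple cycle through a vertex of T,
    avoiding the vertices in `removed`.  Exponential time."""
    def dfs(t, u, visited):
        for w in adj[u]:
            if w in removed:
                continue
            if w == t and len(visited) >= 3 and len(visited) % 2 == 1:
                return True
            if w != t and w not in visited and dfs(t, w, visited | {w}):
                return True
        return False
    return any(dfs(t, t, {t}) for t in T if t not in removed)

def min_soct(adj, T):
    """Smallest S with no odd T-cycle in G - S, by trying all sizes."""
    verts = list(adj)
    for k in range(len(verts) + 1):
        for S in combinations(verts, k):
            if not has_odd_t_cycle(adj, T, frozenset(S)):
                return set(S)
```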
\[soct-sp1p3\] For every integer $s\geq 0$, [Subset Odd Cycle Transversal]{} can be solved in polynomial time for $(sP_1+P_3)$-free graphs. Let $G=(V,E)$ be an $(sP_1+P_3)$-free graph and let $T\subseteq V$. We describe a polynomial-time algorithm to solve the optimization problem on input $(G,T)$. That is, we describe how to find a smallest odd $T$-cycle transversal. In fact, we will solve the equivalent problem of finding a maximum size $T$-bipartite subgraph of $G$, which is, of course, the complement of a smallest odd $T$-cycle transversal. We distinguish two cases, which seek $T$-bipartite subgraphs under complementary constraints on the size of the intersection of the subgraph with $T$. The largest one found overall is the desired output. [**Case 1:**]{} Compute a largest $T$-bipartite subgraph $B_T$ of $G$ such that $|B_T \cap T|\leq \max\{3,4s-3\}$. Note that $B^*=V \setminus T$ is a candidate solution. We must see if we can find something larger. Consider each set $B' \subseteq T$ of size at most $\max\{3,4s-3\}$, discarding any set that does not induce a bipartite graph. There are $O(n^{\max\{3,4s-3\}})$ possible sets. For each choice of $B'$, consider all sets $S \subseteq V \setminus T$ of size less than $|B'|$. Then $B' \cup (V \setminus T) \setminus S$ is a candidate solution if it induces a $T$-bipartite subgraph, which is checked in $O(n+m)$ time by Lemma \[st-test\]. For each $B'$, there are $O(n^{\max\{3,4s-3\}})$ possible choices of $S$ to consider. Note that we do not need to examine larger $S$ since then $B' \cup (V \setminus T) \setminus S$ is no larger than $B^*$. [**Case 2:**]{} Compute a largest $T$-bipartite subgraph $B_T$ of $G$ such that $|B_T \cap T|\geq \max\{4,4s-2\}$. Note that $B_T$ might not exist in which case the output of Case 1 is our result. We make some observations about the subgraph $B_T$ that we seek.
As $G[B_T\cap T]$ is a bipartite graph on at least $\max\{4,4s-2\}$ vertices, it contains an independent set $A$ of size $\max\{2,2s-1\}$. Let $Y=B_T \cap N(A)$ and consider a partition $\{Y_1,Y_2\}$ of $Y$ where $y$ is in $Y_1$ if $y$ has precisely one neighbour in $A$, and otherwise in $Y_2$. Let $Z = V \setminus N[A]$. [*Claim 1: $Y_1$ is an independent set, no two vertices of $Y_1$ have a common neighbour in $A$ and $|Y_1| \le |A|$.*]{} [*Proof of Claim 1.*]{} Suppose that there are adjacent vertices $y,y'\in Y_1$, and let $a$ be the unique neighbour of $y$ in $A$. Then, according to whether or not $y'$ is adjacent to $a$, either $\{y,y',a\}$ induces an odd $T$-cycle, or $G[A \cup \{y,y'\}]$ contains an induced $sP_1+P_3$; both are contradictions. If there are vertices $y,y'\in Y_1$ that have the same neighbour $a$ in $A$, then, again, $G[A \cup \{y,y'\}]$ contains an induced $sP_1+P_3$, a contradiction. It follows that $|Y_1| \le |A|$.  [*Claim 2: $Y_2$ is an independent set, each $y \in Y_2$ has at least $s$ neighbours in $A$ and any two vertices of $Y_2$ share at least one neighbour in $A$.*]{} [*Proof of Claim 2.*]{} Let $y$ and $y'$ be distinct vertices in $Y_2$. Since $G[A \cup \{y\}]$ is $(sP_1+P_3)$-free, $y$ is non-adjacent to at most $s-1$ vertices of $A$. So $y$ has at least $2s-1-(s-1)=s$ neighbours in $A$. Similarly, $y'$ is non-adjacent to at most $s-1$ vertices of $A$, so $y$ and $y'$ have a neighbour of $A$ in common, $a$ say. If $y$ and $y'$ are adjacent, then $\{y,y',a\}$ induces an odd $T$-cycle; a contradiction. [*Claim 3: $N(Z) \cap B_T \subseteq Y_2$.*]{} [*Proof of Claim 3.*]{} By definition, $N(Z) \cap B_T \subseteq Y$. Suppose that $z \in Z$ is adjacent to a vertex $y \in Y_1$. Let $a$ be the unique neighbour of $y$ in $A$. Since $|A| = \max\{2,2s-1\} \ge s+1$ for all $s \ge 0$, it follows that $G[\{z,y\}\cup A]$ contains an induced $sP_1+P_3$, a contradiction. So $Y_1$ is anti-complete to $Z$, and the claim follows.
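The enumeration described next repeatedly discards candidate sets whose induced subgraph is not bipartite. This is the standard BFS two-colouring test, included here only for completeness; the `verts` parameter (our naming) restricts the test to an induced subgraph.

```python
from collections import deque

def is_bipartite(adj, verts=None):
    """Standard BFS two-colouring in O(n + m).  If `verts` is given, the
    test is restricted to the subgraph induced by that vertex set."""
    if verts is None:
        verts = set(adj)
    colour = {}
    for source in verts:
        if source in colour:
            continue
        colour[source] = 0
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in verts:
                    continue  # ignore vertices outside the induced subgraph
                if w not in colour:
                    colour[w] = 1 - colour[u]
                    queue.append(w)
                elif colour[w] == colour[u]:
                    return False  # odd cycle found
    return True
```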
Armed with these definitions and claims we consider how to find $B_T$. The basic idea is to consider all possible choices of $A$ and $Y$. We have two subcases. [**Case 2a:**]{} Compute a largest $T$-bipartite subgraph $B_T$ of $G$ such that $|B_T \cap T|\geq \max\{4,4s-2\}$ and, for some choice of $A$, we have $|Y| < \max\{4,3s\}$. Consider each set $A \subseteq T$ of size $\max\{2,2s-1\}$ such that $A$ is an independent set. There are $O(n^{\max\{2,2s-1\}})$ choices. For each $A$, we consider each set $Y_1 \subseteq N(A)$ of vertices that each has a single neighbour in $A$ such that $Y_1$ satisfies Claim 1. As we require that $|Y_1| \leq |A|$, there are again $O(n^{\max\{2,2s-1\}})$ choices. Then consider each set $Y \subseteq N(A)$ of size at most $\max\{4,3s\}$ such that $Y_1 \subseteq Y$ and $Y_2 = Y \setminus Y_1$ is a set of vertices that each has at least two neighbours in $A$ and satisfies Claim 2. We also require that $G[A \cup Y]$ does not contain any odd $T$-cycles, which is checked in $O(n+m)$ time by Lemma \[st-test\]. There are $O(n^{\max\{4,3s\}})$ choices for $Y$. Note that $G[A \cup Y]$ is bipartite: $G[Y]$ can contain only even cycles, as $Y_1$ and $Y_2$ are independent sets, and any odd cycle through $A$ would be an odd $T$-cycle, since $A\subseteq T$, which we have excluded. By Claim 2, vertices of $Y_2$ all belong to the same component of $G[A \cup Y]$ and, since by definition and Claim 1 each vertex of $G[A \cup Y_1]$ has degree at most $1$, we deduce that every vertex of degree at least 2 in $G[A \cup Y]$ belongs to the same component. We denote this component by $D$, or we let $D$ be the empty graph if there is no such component (which only occurs when $Y_2 = \emptyset$). See \[figbipart\] for an illustration.
(Figure \[figbipart\]: the bipartite graph $G[A\cup Y]$, with the sets $Y_2$ and $Y_1$ below a subset of $A$.) Recall that $Z = V \setminus N[A]$. Since $A$ contains an induced $sP_1$ subgraph, $G[Z]$ is $P_3$-free, and so is a disjoint union of complete graphs. For a component $U$ of $G[Z]$, let $U^+$ contain each vertex $u$ of $U$ such that $G[A \cup Y \cup \{u\}]$ does not contain an odd $T$-cycle through $u$, which is checked in $O(n+m)$ time by Lemma \[st-test\]. The aim in the remainder of this subcase is to find the largest possible $T$-bipartite subgraph $B_T$ that contains $A \cup Y$ and a subset of $Z$. Clearly for each component $U$ in $G[Z]$, any vertex that might be in $B_T$ must belong to $U^+$. We shall see later that we can consider each component of $G[Z]$ independently and that it suffices to find for each the maximum size subset of $U^+$ that can be added to $B_T$. We first investigate the possible edges between $U^+$ and $D$. Note that by Claim 3, the neighbours of $U^+$ in $D$ belong to $Y_2$. [*Claim 4: If $|U^+ \cap B_T| \ge 2$, then either $|N(U^+ \cap B_T) \cap V(D)| \le 1$ or $|N(D)\cap (U^+ \cap B_T)| \le 1$.*]{} [*Proof of Claim 4.*]{} We can assume that there are two vertices $u_1, u_2$ of $U^+ \cap B_T$ that each have a neighbour in $D$ else the claim follows immediately. Moreover we can assume that these neighbours, say $y_1$ and $y_2$ respectively, are distinct.
By Claim 2, $y_1$ and $y_2$ have a common neighbour $a$ in $A$. Thus we have a path $u_1y_1ay_2u_2$. As $U^+$ is a clique, this can be extended to a cycle by the edge $u_1u_2$, but, as $A \subseteq T$, this is an odd $T$-cycle, a contradiction. Let $Z^+$ be the union of $U^+$ over all components $U$ of $G[Z]$. Suppose that $C$ is an odd $T$-cycle of $G[A\cup Y\cup Z^+]$. We show that $C$ contains two vertices of some set $U^+$. If not, then $C$ is a subgraph of $G[A\cup Y\cup Z^*]$, where $Z^*$ is a subset of $Z^+$ with at most one vertex from each component. But this is a contradiction, as $G[A\cup Y\cup Z^*]$ is bipartite: $G[A\cup Y]$ is bipartite and the vertices of $Z^*$ are adjacent only to $Y_2$, whose vertices all lie in the same part of the bipartition, being pairwise joined by paths of length $2$. Thus to extend $A \cup Y$ to the largest possible $T$-bipartite graph, for each component $U$ of $G[Z]$, we must find $U^{++}$, a maximum subset of $U^+$ such that $G[A \cup Y \cup U^{++}]$ has no odd $T$-cycle. By the preceding argument and Claim 4, we can consider each component separately. We describe how to find such a set $U^{++}$. First suppose that the set we seek satisfies $|U^{++}| \ge 3$. Partition $U^{+}$ into $\{U^{+}_0, U^{+}_1, U^{+}_2\}$ where $u\in U^{+}_0$ if $u \in U^{+}$ has no neighbours in $V(D)$, $u\in U^{+}_1$ if $u$ has exactly one neighbour in $V(D)$, and otherwise $u\in U^{+}_2$. By Claim 4, and since we are assuming that $|U^{++}| \ge 3$, we have $|U^{+}_2 \cap U^{++}| \leq 1$; write $U^{+}_2 \cap U^{++} = \{u_2\}$ if this set is non-empty. Let $N(U^{+}_1) \cap V(D) = \{d_1,\dotsc,d_m\}$, for some $m\geq 1$, if $U^{+}_1$ is not empty. We partition $U^{+}_1$ into classes $\{Q_1,\dotsc,Q_m\}$ such that $u \in Q_i$ if $N(u) \cap V(D) = \{d_i\}$. Using Claim 4 again, we have that $U^{++} \cap U^{+}_1 \subseteq Q_i$ for some $i \in \{1, \dotsc, m\}$. So we choose the $i$ with $d_i \notin T$ that maximises $|Q_i \setminus T|$, and set $U^{++} = (U^{+}_0 \cup Q_i) \setminus T$.
If $d_i \in T$ for all $i \in \{1,\dotsc,m\}$ but $U^{+}_1 \setminus T \neq \emptyset$, then $U^{++} = (U^{+}_0 \setminus T) \cup \{u\}$ for an arbitrarily chosen $u \in U^{+}_1 \setminus T$. Otherwise, $U^{++} = (U^{+}_0 \setminus T) \cup \{u_2\}$ for an arbitrary $u_2 \in U^{+}_2 \setminus T$ if $U^{+}_2 \setminus T \neq \emptyset$, and $U^{++} = U^{+}_0 \setminus T$ if not (recall that $U^{++}$ may contain at most one vertex of $U^{+}_2$). This process finds a maximum $U^{++}$ of size at least $3$ if such a set exists. Now consider the case where $|U^{++}| \le 2$. Recall that no vertex of $U^{+}$ creates an odd $T$-cycle with vertices of $A \cup Y$. So any odd $T$-cycle of $G[A \cup Y \cup \{u_1,u_2\}]$ contains $\{u_1,u_2\}$. We require one more claim to handle this case, which shows that we may also consider each of these remaining components independently. [*Claim 5: If $C$ is an odd $T$-cycle of $G[A \cup Y \cup Z]$ with $|C \cap U| \leq 2$ for each component $U$ of $G[Z]$, then there is a component $U^*$ and an odd $T$-cycle $C'$ of $G[A \cup Y \cup Z]$ such that $C'\cap Z=C\cap U^*$.*]{} [*Proof of Claim 5.*]{} Let $C$ be such an odd $T$-cycle of $G[A \cup Y \cup Z]$. Since $N(Z) \cap (A \cup Y) \subseteq Y_2$, by Claim 3, and the vertices of $Y_2$ are contained in one part of the bipartition of $D$, $C$ must contain at least one edge $u_1u_2$ with $u_1,u_2$ in some component $U^*$ of $G[Z]$. By assumption and Claim 3, $C$ contains the path $yu_1u_2y'$ for some $y,y' \in Y_2$. Then there is some $a \in N(y) \cap N(y') \cap A$, by Claim 2, and $C' = ayu_1u_2y'a$ is an odd $T$-cycle. We exhaustively check all pairs of vertices in $U^{+}$, of which there are $O(n^2)$. Let $u_1,u_2$ be such a pair of distinct vertices. By Claim 5, the choice of solution for one of these components does not affect any other. Since an odd $T$-cycle must contain $\{u_1,u_2\}$, if $N(u_i)\cap Y = \emptyset$ for some $i \in \{1,2\}$, we can set $U^{++} = \{u_1,u_2\}$. Otherwise, $N(u_i)\cap Y \neq \emptyset$ for each $i \in \{1,2\}$. By Claim 3, $N(\{u_1,u_2\})\cap Y \subseteq Y_2$. Thus, by Claim 2, we require $\{u_1,u_2\} \subseteq V \setminus T$.
Now if $\{u_1,u_2\} \subseteq V \setminus T$, then $U^{++} = \{u_1,u_2\}$ if and only if $N(u_1)\cap Y_2 = N(u_2)\cap Y_2 = \{y\}$, for some $y \in Y_2 \setminus T$, again by Claim 2. Finally, if no pair $\{u_1,u_2\}$ is found, we can arbitrarily choose any single vertex of $U^{+}$. [**Case 2b:**]{} Compute a largest $T$-bipartite subgraph $B_T$ of $G$ such that $|B_T \cap T|\geq \max\{4,4s-2\}$ and, for some choice of $A$, we have $|Y| \geq \max\{4,3s\}$. Note that as $A$ has size $\max\{2,2s-1\}$ and $|Y_1| \leq |A|$, we have that $|Y_2| \geq \max\{2,s+1\}$. So suppose that $Y'_2$ is a subset of $Y_2$ with $|Y'_2| = \max\{2,s+1\}$. Let $A_0=N(Y'_2)\cap A$, and let $Y_0 = N(A_0) \cap B_T$. Observe that $s\leq |A_0|\leq \max\{2,2s-1\}$ and $Y'_2 \subseteq Y_0 \subseteq Y$. [*Claim 6: Let $y \in Y'_2$ and $y' \in Y_0$ be distinct vertices. Then there is an even $T$-path in $G[A_0\cup Y'_2 \cup \{y'\}]$ between $y$ and $y'$.*]{} [*Proof of Claim 6.*]{} Assume that $y$ and $y'$ have no common neighbour in $A_0$ else the claim is immediate. By Claim 2 and the definitions of $A_0$ and $Y_0$, we can assume that $y' \in Y_0 \setminus Y'_2$ and that $y'$ has a neighbour $a'$ in $A_0$, and, moreover, that $a'$ is the neighbour of some vertex $y'' \in Y'_2 \setminus \{y\}$. Again by Claim 2, $y$ and $y''$ share a common neighbour $a'' \in A_0$. Thus $ya''y''a'y'$ is an even $T$-path in $G[A_0\cup Y'_2 \cup \{y'\}]$.
(Figure: the sets $A$, $A_0$, $Y_0$, $Y'_2$ and $Y_1$ in Case 2b.) Let $Y'_0=N(A_0)$ and let $Z=V \setminus N[A_0]$. Since $G[A_0]$ has an induced $sP_1$ subgraph, $G[Z]$ is $P_3$-free, so it is a disjoint union of complete graphs. Let $U$ be a component of $G[Z]$. Partition the vertices of $U$ into $\{U_0,U_1,U_2\}$, where $u\in U_0$ if $u$ has no neighbours in $Y'_0$, whereas $u\in U_1$ if all neighbours of $u$ in $Y'_0$ are in one component of $G[Y'_0]$, and otherwise $u\in U_2$ (when $u$ has neighbours in distinct components of $G[Y'_0]$). [*Claim 7: If $u \in U_2$, then $u$ has at least two neighbours in $Y'_2\subseteq Y_2$.*]{} [*Proof of Claim 7.*]{} Suppose that $u \in U_2$. Since $|Y'_0|\geq |Y'_2|=\max\{2,s+1\}$, the graph $G[Y'_0]$ contains an induced $sP_1$ subgraph by Claim 2. Consider when $s \ge 1$. As $G[Y'_0 \cup \{u\}]$ is $(sP_1+P_3)$-free, $u$ is non-adjacent to at most $s-1$ of the vertices in $Y'_2$. Since $|Y'_2|=s+1$, the claim holds. The case where $s=0$ follows, in a similar manner, since $|Y'_2| \ge 2$. [*Claim 8: Either $U_0 = \emptyset$ or $U_1 = \emptyset$.
Moreover, $|N(U_1) \cap Y'_0| = 1$ if $U_1 \neq \emptyset$.*]{} [*Proof of Claim 8.*]{} Suppose that $U_0$ and $U_1$ are both non-empty. Let $u_0\in U_0$, $u_1\in U_1$ and $y\in N(u_1)\cap Y'_0$. Then $\{u_0,u_1,y\}$ induces a $P_3$, so $G[\{u_0,u_1\}\cup Y'_0]$ contains an induced $sP_1+P_3$; a contradiction. Similarly, let $u_1,u'_1\in U_1$ and $y \in N(u_1)\cap Y'_0$. If $y \notin N(u'_1)$, then $\{y,u_1,u'_1\}$ induces a $P_3$, so $G[\{u_1,u'_1\}\cup Y'_0]$ contains an induced $sP_1+P_3$; a contradiction. [*Claim 9: $|U_2 \cap B_T| \le 1$.*]{} [*Proof of Claim 9.*]{} Let $u,u' \in U_2 \cap B_T$ with $u \neq u'$. By Claim 7, $u$ and $u'$ each have at least two neighbours in $Y'_2$. Hence, there exist vertices $y,y' \in Y'_2$ such that $y \in N(u)$, $y' \in N(u')$ and $y \neq y'$. By Claim 6, there is an even $T$-path $P$ in $G[A_0 \cup Y'_2]$ between $y$ and $y'$. Using the path $yuu'y'$, $P$ can be extended to an odd $T$-cycle; a contradiction. [*Claim 10: Suppose that $u_1,u_2 \in B_T$ for some $u_1 \in U_1$ and $u_2 \in U_2$. Let $N(U_1) \cap Y'_0 = \{y\}$. Then $y \in Y'_0 \setminus Y_2$ and $y \notin B_T$.*]{} [*Proof of Claim 10.*]{} Since $u_2$ has at least two neighbours in $Y'_2$, by Claim 7, $u_2$ has a neighbour $y' \in Y'_2$ such that $y' \neq y$. By Claim 6, there is an even $T$-path $P$ in $G[A_0 \cup Y'_2 \cup \{y\}]$ between $y$ and $y'$. Using the path $yu_1u_2y'$, the path $P$ can be extended to an odd $T$-cycle. Since $V(P) \setminus \{y\} \subseteq A_0 \cup Y'_2 \subseteq B_T$ and $u_1,u_2 \in B_T$, we deduce that $y \notin B_T$. Our approach is to consider each possible pair of sets $A$ and $Y_2$ with $s \leq |A| \leq \max\{2,2s-1\}$ and $|Y_2|=\max\{2,s+1\}$ that conform with the definitions of this subcase and Claim 6, with $A$ and $Y_2$ taking the role of $A_0$ and $Y'_2$, respectively. We describe, for each component $U$ of $G[Z]$, how to find the largest possible set of vertices $U'$ in $U$ to add to $A_0 \cup Y'_2$ to form $B_T$.
We then prove, as Claim 11, the correctness of the approach of considering each component independently; that is, we prove that we cannot introduce any odd $T$-cycles that meet multiple components of $G[Z]$. First consider whether it is possible to find $U'$ such that $|U'| \ge 3$. Then $U'$ contains no vertex of $T$, otherwise $G[U']$ has an odd $T$-cycle, since $U$ is a clique. By Claim 9, $|U' \cap U_2| \le 1$. Thus, by Claim 8, exactly one of $U_0 \setminus T$ and $U_1 \setminus T$ is non-empty. Hence, if $U_0 \setminus T \neq \emptyset$ and $U_2 \setminus T \neq \emptyset$, then we let $U' = (U_0 \setminus T) \cup \{u\}$ for an arbitrary $u \in U_2 \setminus T$. If $U_0 \setminus T \neq \emptyset$ and $U_2 \setminus T = \emptyset$, then $U' = U_0 \setminus T$. Suppose that $U_1 \setminus T \neq \emptyset$. By Claim 8, there exists $y \in Y'_0$ such that $N(u)\cap Y'_0=\{y\}$, for all $u \in U_1$. As $U_1 \cup \{y\}$ is a clique, we assume that $y \notin Y_2 \cap T$; otherwise $|U_1 \cap U'| \le 1$ and hence $|U'| \le 1$ by Claim 9. If $U_2 \setminus T \neq \emptyset$, then $U' = (U_1 \setminus T) \cup \{u\}$ for an arbitrary $u \in U_2 \setminus T$, and, by Claim 10, we also have $y \in S'_T$. If $U_2 \setminus T = \emptyset$, then we set $U' = U_1 \setminus T$ and if $y \in T$, then $y \in S'_T$. We now assume that we want to find $U'$ such that $|U'| \le 2$. First consider when $U_0 \neq \emptyset$. If $|U_0| \ge 2$, then we set $U' = \{u,u'\}$ for any distinct $u,u' \in U_0$. If $U_0 = \{u_0\}$ and $|U_2| \ge 1$, then we set $U' = \{u_0,u_2\}$ for an arbitrary $u_2 \in U_2$. Finally, if $U_0 = \{u_0\}$ and $U_2 = \emptyset$, then $U' = \{u_0\}$. Now consider when $U_0 = \emptyset$. By Claim 9, $U_1 \neq \emptyset$, and there is some $y \in Y'_0$ such that $U_1 \cup \{y\}$ is a clique. If $y \notin Y_2 \cap T$ and $|U_2 \setminus T| \ge 2$, then set $U' = U_2 \setminus T$ and put $y \in S'_T$. 
If $y \in Y_2 \cap T$ then set $U' = \{u_1,u_2\}$ for an arbitrary $u_1 \in U_1$ and some $u_2 \in U_2\setminus T$ such that $y \notin N(u_2)$, if such an element $u_2$ exists. Otherwise, $|U'| \le 1$, and we set $U' = \{u\}$ for an arbitrary $u \in U_1 \cup U_2$. [*Claim 11: Let $U$ and $\hat{U}$ be distinct components of $G[Z]$, and let $U'$ and $\hat{U}'$ be subsets of their vertex sets, respectively, obtained in the way just described. Then $G[A \cup Y_0' \cup U' \cup \hat{U}']$ has no odd $T$-cycles containing a vertex in $U' \cup \hat{U}'$.*]{} [*Proof of Claim 11.*]{} Suppose that $C$ is an odd $T$-cycle of $G[A \cup Y'_0 \cup U' \cup \hat{U}']$ containing a vertex of $U' \cup \hat{U}'$. By construction, $G[A \cup Y_0' \cup U']$ and $G[A \cup Y_0' \cup \hat{U}']$ have no odd $T$-cycles. So $C$ contains both a vertex of $U'$, and a vertex of $\hat{U}'$. We may also assume, without loss of generality, that the vertices of $U'$ (and, respectively, $\hat{U}'$) form a path in $C$; if not, then there is a shorter odd $T$-cycle of $G[A \cup Y_0' \cup U' \cup \hat{U}']$ having this property. Since $A$ is anti-complete to $U' \cup \hat{U}'$, the cycle $C$ contains paths $P$ and $\hat{P}$, starting and ending at vertices of $Y'_0$, with internal vertices in $U'$ and $\hat{U}'$, respectively. Now $C$ is the concatenation of $P$, $\hat{P}$ and two paths in $G[A \cup Y_0']$, both starting and ending at vertices of $Y_0'$. As $G[A \cup Y_0']$ is bipartite, the two paths in $G[A \cup Y_0']$ are even. Thus $P$ and $\hat{P}$ cannot have the same parity. Assume, without loss of generality, that $P$ is odd. By Claim 6, $P$ can be extended to an odd $T$-cycle of $G[A \cup Y_0' \cup U']$; a contradiction. Finally we ask which vertices in $Y'_0 \setminus Y_2$ to add to $B_T$. 
First note that $G[Y'_0\setminus Y_2]$ is $P_3$-free; indeed, if a component $G[W]$ of $G[Y'_0\setminus Y_2]$ contains an induced $P_3$, then $G[Y_2 \cup W]$ has an induced $sP_1 + P_3$ subgraph, as $Y_2$ is anti-complete to $Y'_0 \setminus Y_2$. So $G[Y'_0\setminus Y_2]$ is a disjoint union of complete graphs. By Claim 6, there is an even $T$-path between any pair of vertices of $G[Y'_0]$, so we keep at most one vertex of each clique. For a component $G[U]$ of $G[Z]$ with $N(U_1) \cap Y'_0= \{y\}$ and $y \in T$, we may have forced $y \in S'_T$ when $|U'| \ge 3$. It is always optimal to have $y \in S'_T$ in such a case, as otherwise we would have $|U'| =1$, since $U' \cup \{y\}$ is a clique. So for each clique $G[W]$ of $G[Y'_0\setminus Y_2]$, we include a vertex of $W\setminus S'_T$ in $B_T$. We are now ready to prove our almost-complete classification. [**Theorem \[t-main2\] (restated).**]{} [*Let $H$ be a graph with $H\neq sP_1+P_4$ for all $s\geq 1$. Then [Subset Odd Cycle Transversal]{} on $H$-free graphs is polynomial-time solvable if $H=P_4$ or $H{\subseteq_i}sP_1+P_3$ for some $s\geq 1$ and [[NP]{}]{}-complete otherwise.*]{} If $H$ has a cycle or a claw, we use Theorem \[t-known2\]. The cases $H=P_4$ and $H=2P_2$ follow from Theorems \[soct-split\] and \[soct-p4\], respectively. The remaining case, where $H{\subseteq_i}sP_1+P_3$, follows from Theorem \[soct-sp1p3\].

Conclusions {#s-con}
===========

We gave almost-complete classifications of the complexity of [Subset Feedback Vertex Set]{} and [Subset Odd Cycle Transversal]{} for $H$-free graphs. The only open case in each classification is when $H=sP_1+P_4$ for some $s\geq 1$, which is also open for [Feedback Vertex Set]{} and [Odd Cycle Transversal]{} for $H$-free graphs. Our proof techniques for $H=sP_1+P_3$ do not carry over and new structural insights are needed in order to solve the missing cases.
\[o-1\] Determine the complexity of [(Subset) Feedback Vertex Set]{} and [(Subset) Odd Cycle Transversal]{} for $(sP_1+P_4)$-free graphs, when $s \ge 1$. One of the main obstacles to solving Open Problem \[o-1\] is the case where there is a solution $S$ such that $G-S$ is a forest that contains (many) arbitrarily large stars. In particular, \[tree-sp1p3\] no longer holds. The vertex-weighted version of [Subset Feedback Vertex Set]{} has also been studied for $H$-free graphs. Papadopoulos and Tzimas [@PT20] proved that [Weighted Subset Feedback Vertex Set]{} is polynomial-time solvable for $4P_1$-free graphs but [[NP]{}]{}-complete for $5P_1$-free graphs (in contrast to the unweighted version). Bergougnoux et al. [@BPT19] recently proved that [Weighted Subset Feedback Vertex Set]{} is polynomial-time solvable for every class of bounded mim-width and thus for $P_4$-free graphs. Combining these results with Theorem \[t-main\] still leaves six gaps. \[o-2\] Determine the complexity of [Weighted Subset Feedback Vertex Set]{} for $H$-free graphs when $H\in \{2P_1+P_2,3P_1+P_2,P_1+P_3, 2P_1+P_3,P_1+P_4, 2P_1+P_4\}$. For the weighted variant, a vertex in $T$ may have a large weight that prevents it from being deleted in any solution; in particular, \[bound\], which plays a crucial role in our proofs, no longer holds. We note that the [[NP]{}]{}-completeness proof given by Papadopoulos and Tzimas for [Weighted Subset Feedback Vertex Set]{} on $5P_1$-free graphs [@PT20] can also be used to show that the weighted version of [Subset Odd Cycle Transversal]{} is [[NP]{}]{}-complete for $5P_1$-free graphs. Some initial analysis suggests that [Weighted Subset Odd Cycle Transversal]{} is polynomial-time solvable for $4P_1$-free graphs. As previously mentioned, Bergougnoux et al. [@BPT19] gave an [[XP]{}]{} algorithm for [Weighted Subset Feedback Vertex Set]{} parameterized by mim-width. 
\[o-4\] Does there exist an [[XP]{}]{} algorithm for [Subset Odd Cycle Transversal]{} and [Weighted Subset Odd Cycle Transversal]{} parameterized by mim-width? We also introduced the [Subset Vertex Cover]{} problem and showed that this problem is polynomial-time solvable on $(sP_1+P_4)$-free graphs for every $s\geq 0$. Lokshtanov et al. [@LVV14] proved that [Vertex Cover]{} is polynomial-time solvable for $P_5$-free graphs. Grzesik et al. [@GKPP19] extended this result to $P_6$-free graphs. \[o-5\] Determine the complexity of [Subset Vertex Cover]{} for $P_5$-free graphs. \[o-6\] Determine whether there exists an integer $r\geq 5$ such that [Subset Vertex Cover]{} is [[NP]{}]{}-complete for $P_r$-free graphs. By Poljak’s construction [@Po74], [Vertex Cover]{} is [[NP]{}]{}-complete for $H$-free graphs if $H$ has a cycle. However, [Vertex Cover]{} becomes polynomial-time solvable on $K_{1,3}$-free graphs [@Mi80; @Sh80]. We did not investigate the complexity of [Subset Vertex Cover]{} on $K_{1,3}$-free graphs and leave this as an open problem for future work. \[o-7\] Determine the complexity of [Subset Vertex Cover]{} for $K_{1,3}$-free graphs. Finally, several related transversal problems have been studied but not yet for $H$-free graphs. For example, the parameterized complexity of [Even Cycle Transversal]{} and [Subset Even Cycle Transversal]{} has been addressed in [@MRRS12] and [@KKK12], respectively. Moreover, several other transversal problems have been studied for $H$-free graphs, but not the subset version: for example, [Connected Vertex Cover]{}, [Connected Feedback Vertex Set]{} and [Connected Odd Cycle Transversal]{}, as well as [Independent Feedback Vertex Set]{} and [Independent Odd Cycle Transversal]{}; see [@BDFJP19; @CHJMP18; @DJPPZ18; @JPP20] for a number of recent results.
It would be interesting to solve the subset versions of these transversal problems for $H$-free graphs and to determine the connections amongst all these problems in a more general framework. [^1]: This paper received support from the Leverhulme Trust (RPG-2016-258). An extended abstract of it has been accepted for the proceedings of WG 2020 [@BJPP20].
--- author: - 'M. Colpi$^1$[^1], K. Holley-Bockelmann$^{2,3}$[^2], T. Bogdanović$^4$, P. Natarajan$^5$, J. Bellovary$^{6}$, A. Sesana$^{1,7}$, M. Tremmel$^8$, J. Schnittman$^{9}$, J. Comerford$^{10}$, E. Barausse$^{11}$, E. Berti$^{12}$, M. Volonteri$^{13}$, F. M. Khan$^{14,2}$, S. T. McWilliams$^{15}$, S. Burke-Spolaor$^{16}$, J. S. Hazboun$^{17}$, J. Conklin$^{18}$, G. Mueller$^{19}$, S. Larson$^{20}$' bibliography: - 'tb\_smbh.bib' --- \[front\] [Figure depicting the merger of two galaxies with their nuclear MBHs (circles), adapted with permission from @tremmel18]{} \ [**Thematic Science Areas:**]{} Galaxy Evolution, Multi Messenger Astronomy and Astrophysics, Cosmology and Fundamental Physics [**A New Window into the Cosmos**]{} \[sec:intro\] Gravity has its own messenger: GWs are ripples in the fabric of spacetime produced by non-axisymmetric motions of matter. Traveling essentially unimpeded throughout the Universe, GWs carry unbiased information on their sources, from binary stellar remnants, to MBH collisions, to the Big Bang itself. GWs provide a clean way to measure the geometry of black hole spacetimes, including masses and spins, and even characterize their horizons [@Klein16; @Berti2016; @Cardoso2017]. With their strongly curved geometry and relativistic motion, coalescing MBH binaries generate a highly warped and dynamic spacetime – the strongest gravitational signals expected in the Universe. Moreover, their amplitude and frequency show a simple and universal scaling with mass, inherited from the fact that general relativity has no built-in fundamental scale. This gives us direct access to a huge range of black holes – from primordial to ultra-massive – by exploring different GW bands. With gravity as a messenger, we stand to revolutionize our understanding of the birth, growth, and evolution of MBHs, as well as their role in sketching the cosmological canvas of the Universe.
Most of what we know about MBHs thus far has been informed by EM observations of active galactic nuclei (AGN) over several epochs of cosmic evolution [@heckman14; @madau14]. During the [*Cosmic Dawn*]{}, starting at $z\sim 20$, mostly-neutral baryons in low-mass dark matter halos began to collapse and fragment, forming the first stars and seed black holes. The physics in this era is all but invisible to us in the EM window until it ends at $z=7.5$, when a myriad of sources of ultra-violet radiation, including accreting MBHs, reionized the intergalactic neutral hydrogen into a hot, tenuous plasma [@Planck-2018-cosmology]. [**MBH Mysteries**]{}\ $\bullet$ How are MBHs born and how do they grow?\ $\bullet$ How efficiently do MBHs merge and how does this affect their galaxy hosts?\ $\bullet$ What are the demographics of MBHs in the Universe?\ The most distant quasars are now found at $z\sim 7$, when the Universe was less than one billion years old, posing extreme constraints on their formation squarely in this heretofore unobserved era [@Banados18]. These rare, overluminous sources are probing the tip of an underlying population of as-yet-undiscovered, much fainter objects. GW observations will be key to unveiling the existence of and physics governing MBHs within the Cosmic Dawn. Well after MBH seeds are sown comes the epoch of [*Cosmic Noon*]{}, extending from $z\sim 6$ to $2$. This is the epoch of galaxy growth through repeated major mergers, accretion of lower mass dark-matter halos, and cold gas flowing in along dark matter filaments [@white78; @barnes96; @white91; @dekel09]. Around $z\sim 2$, the cosmic integrated star formation rate and AGN activity reach their peak, followed by a decline that extends to the present day [@heckman14]. Though it is widely accepted that AGN activity and major mergers are related, the precise details of how MBHs assemble during this epoch are an open question, one that GWs provide unique data to answer.
Upcoming GW observations offer complementary views of the cosmos. GW observations at low frequency will reveal the origin and evolution of the most extreme and enigmatic objects in the Universe: [*massive black holes*]{}. In this whitepaper, we describe the science that can be obtained by LISA, the first-generation space-based GW observatory. By design, LISA will provide key observations needed to revolutionize our view of MBHs [@LISA17], filling an unobserved gap of $\sim$9 orders of magnitude between nHz, where Pulsar Timing Arrays (PTAs) are sensitive to supermassive black holes orbiting on timescales of decades [@IPTA16], and $>$Hz, where ground-based observatories probe the last fraction of a second of stellar mass black hole mergers. [**Massive Black Holes in the Gravitational Universe**]{} The GW signals from comparable mass ratio coalescing MBHs are similar in shape to the first signal ever detected, GW150914 [@Abbott-1], and the subsequent stellar black-hole binary mergers. Indeed, one simply needs to appropriately rescale the time variable – making the signal longer lasting (from seconds to months) – and the amplitude – which results in signal-to-noise ratios (SNRs) as high as $\sim 1000$, to be compared to SNRs of a few tens at most for today’s Advanced LIGO and Advanced Virgo. Because of this rescaling, the GW frequency of merging binary MBHs with total masses of $ 10^4{\,{\rm M}_{\odot}}- 10^7{\,{\rm M}_{\odot}}$ falls squarely within LISA’s bandwidth (which extends from about 100 $\mu$Hz to 100 mHz) in the late inspiral, merger and ringdown phase of the binary evolution. The best sensitivity will be reached for MBHs with masses comparable to the one residing at the heart of the Milky Way, i.e. $\sim 10^5{\,{\rm M}_{\odot}}-10^6{\,{\rm M}_{\odot}}$. The galaxy mass function suggests that these ‘low-mass’ MBHs are the most common, but the least well-known in terms of basic demographics, birth, growth, dynamics and connection to their galaxy host [@Kormendy13].
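The inverse scaling of GW frequency with mass described above can be illustrated with a rough order-of-magnitude calculation (ours, not part of the whitepaper): the gravitational-wave frequency at the innermost stable circular orbit, $f \approx c^3/(6^{3/2}\pi G M)$, marks the rough end of the inspiral, and for total masses of $\sim 10^5$–$10^7\,{\rm M}_{\odot}$ it falls inside the LISA band quoted in the text. The constants below are standard SI values assumed here.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
MSUN = 1.989e30    # solar mass, kg

def f_gw_isco(total_mass_msun):
    """GW frequency (Hz) at the innermost stable circular orbit."""
    m = total_mass_msun * MSUN
    return C**3 / (6**1.5 * math.pi * G * m)

# Each of these total masses yields an ISCO GW frequency inside the
# LISA band quoted in the text (~100 microHz to 100 mHz).
for mass in (1e5, 1e6, 1e7):
    assert 1e-4 < f_gw_isco(mass) < 1e-1
```

For a $10^6\,{\rm M}_{\odot}$ binary this gives roughly 4 mHz, near LISA's best sensitivity, consistent with the statement that Milky Way-mass MBHs are prime targets.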
Figure 1 shows the vastness of the LISA exploration volume: lines of constant SNR are depicted in the $M_{\rm B} - z$ plane, where $M_{\rm B}$ is the mass of the binary in the source frame. LISA will be uniquely capable of detecting the GW signal from coalescing binaries between $10^4{\,{\rm M}_{\odot}}$ and a few $10^7{\,{\rm M}_{\odot}}$, with SNR higher than 20 at formation redshifts $z$ as large as 20, and SNR as high as a few thousand at low redshifts [@Klein16; @LISA17]. [**Solving MBH Mysteries**]{} $\bullet$ LISA will measure the masses and spins of coalescing MBHs to a few $\%$ accuracy in $10^4 - 10^7{\,{\rm M}_{\odot}}$ binaries out to $z\sim 20.$\ $\bullet$ GWs will unveil the MBH growth via mergers, and their accretion history via mass and spin measurements.\ $\bullet$ GWs will shed light on the co-evolution of galaxies and MBHs.\ Detecting coalescing MBH binaries with $M_{\rm B}$ of $10^4-10^6{\,{\rm M}_{\odot}}$ at $z>10$ will provide unique insights into the initial masses, occupation fraction, and early growth of the first seed BHs, ancestors of the MBHs [@Volonteri10; @Natarajan14]. This will inform us about the physics producing the seeds, whether it involves the first generation of metal-free stars, the direct collapse of massive clouds [@Latif16], the collapse of hyper-massive stars formed in stellar runaway collisions [@Devecchi2012], or a different process altogether, such as primordial black holes formed in the early Universe, before the epoch of galaxy formation [@Bernal2018]. Knowledge of this population will anchor the initial conditions of MBH cosmic evolution, setting the stage for their subsequent mass growth and merger rate. ![Contours of constant SNR as a function of redshift (cosmic time) and source-frame binary mass $M_{\rm B}$ for the LISA observatory. For this figure, the MBHs are non-spinning and have mass ratio $q=0.5$. Overlaid is an illustration of evolutionary tracks ending with the formation of a MBH at $z\sim 3$.
Black dots and arrows represent the MBHs and their spins, respectively. MBHs are embedded in galaxy halos (white-yellow circles) and experience episodes of accretion (black lines) and mergers. Black stars refer to the most distant long Gamma-Ray Burst host, quasar and galaxy detected so far.](waterfall-LISA-cosmic-time.pdf){width="15.5cm"} \[waterfall-LISA\] Between about $3\lesssim z\lesssim 10$, LISA will detect the inspiral, merger and ringdown of sources with $10^5<M_{\rm B}/{\,{\rm M}_{\odot}}<10^7,$ enabling the measurement of their intrinsic masses with accuracies at the percent level [@Klein16]. GW signals will also carry exquisite information on the MBH spins, which enhance the GW amplitude, introducing modulations in the signal due to precession. Spins of MBHs powering AGN are difficult to measure from their EM spectra [@Reynolds14]. By contrast, the spin of the larger (smaller) MBH in a binary will be measured from the GW signal with an absolute error better than 0.01 (0.1) in several LISA events, and the spin misalignment relative to the orbital angular momentum will be determined to within 10 degrees or better [@Klein16]. Individual spins prior to the merger encode information on whether accretion, which shaped both the MBH mass and spin evolution, was coherent (leading to spins close to maximal and small misalignment angles) or chaotic (leading to lower average spins and random spin orientations over the black hole life cycle) [@King05; @Barausse2012; @Sesana14spin]. This will give us the unprecedented opportunity to reconstruct the MBH cosmic history from GW observations alone [@BertiVolonteri08]. Moreover, by measuring the angle between the BH spins and the orbital angular momentum, crucial information will be gathered about the interaction between the MBHs and their environment and about whether the binary evolution is driven by the gas.
In particular, gas can exert dissipative torques on the BH spins, potentially aligning them with the gas angular momentum. This also has crucial implications for the fate of the MBH produced by the merger, which could be imparted a large GW recoil velocity in the presence of large spin-orbit misalignment prior to merger [@Campanelli07; @Bogdanovic07; @Haiman09; @Dotti10; @Kesden2010; @Roedig12; @Berti2012; @Lousto12; @Miller13]. As illustrated in Figure 1, GWs from MBHs are incredibly strong, and the advantage of this fact cannot be overstated. At Cosmic Noon, right when galaxy mergers are rife, MBH mergers become extraordinarily loud, which enables precise measurements of the source parameters over 12 billion years of cosmic time. MBH coalescences may not occur in vacuum, and low-redshift ($z\lesssim 2$) binaries of $10^5<M_{\rm B}/{\,{\rm M}_{\odot}}<10^7$ surrounded by circumbinary gas may shine brightly in the optical and X-rays during the inspiral and merger proper, becoming key targets for EM follow-up, with advance warning of hours. These mergers will be localized within $10$ or even $0.4 \,\rm deg^2$, corresponding to the field of view of the Large Synoptic Survey Telescope and of the Athena WFI [@McGee2018], respectively. The science with contemporaneous EM and GW observations is spectacular. It has the potential to discover the yet unknown periodic emission from shocked gas surrounding the two MBHs in the violently changing spacetime before merger [@Armitage02; @Haiman17; @Tang2018; @Bowen2018; @Ascoli18], flashes, bursts and jetted emission at merger [@Palenzuela10; @Kelly17], and also post-merger afterglow signatures [@Rossi2010]. Linking masses and spins determined with exquisite precision by the GW signal with EM emission will be paramount.
[**Deciphering the Astrophysics Behind the Discovery**]{} At all redshifts, forming a MBH binary after a galaxy merger requires dissipation of orbital energy and efficient transport of angular momentum from the galaxy scale of hundreds of thousands of parsecs to the micro-parsec scale, when the merger gives birth to a new, single MBH [@bbr80; @khan11; @khan13; @vasiliev14; @Colpi14; @Holley2015; @Khan16; @Souza17]. The physics governing the orbital evolution of a MBH pair on each scale is dramatically different. The process starts with the assembly of galaxy haloes and is followed by galaxy collisions, which all occur on cosmological scales [@tremmel18]. In the new galaxy, the pairing, hardening, and coalescence of two MBHs is a complicated dynamical problem. The processes of galaxy and MBH binary dynamics are intimately connected. The link is provided by several strands of highly-coupled non-linear physics that ignite star formation, trigger nuclear inflows of gas, excite stellar and AGN feedback, and transport the incoming MBHs toward the center of the newly formed galaxy host. The cover page here depicts the merger of two galaxies and their embedded MBHs (circles), extracted from the cosmological simulation Romulus25, which tracks MBH pairs down to sub-kpc distances [@tremmel17; @tremmel18], still too widely separated for GWs to dominate, yet at the frontier of cosmological simulations of our day. Given the overwhelmingly large dynamical range involved in this problem, numerical simulations coupled to semi-analytical models and sub-grid physics are precious tools to assist us in the interpretation of LISA data. Masses, mass ratios, eccentricities and spins, which are encoded in the GW signal, can be connected to the physical processes leading to MBH binary formation and growth.
For example, the eccentricity is largely amplified by stellar scatterings [@sesanaecc10; @mirza17], a process determined by the shape and kinematics of the background stellar potential. Meanwhile, the mass ratios and encounter geometries determine the efficiency of dynamical friction and sinking times [@Callegari11], and spin magnitudes and orientations reflect BH interactions with massive gas discs [@BertiVolonteri08]. [*Therefore, not only will LISA detect MBH binaries at the very end of their journey, but it will also unveil the cosmic evolution of the interplay between MBH binary dynamics and their host galaxies' properties as they co-assemble in the cosmic web.*]{} The rates (3-20 per year in a conservative scenario) and properties of merging MBH binaries are inevitably connected with those of their host galaxies, and ultimately to the evolving large scale structure of the Universe [@Bonetti2018]. Complementary to LISA, the North American Nanohertz Observatory for GWs [@Nanograv18] and other PTAs are targeting the GW foreground from very massive MBH binaries of $10^8-10^9 {\,{\rm M}_{\odot}}$ at nHz frequencies observed during their inspiral phases up to $z\sim 1$ [@Chen17]. The spectrum of the GW foreground contains precious information on how the giant MBHs pair and interact with the broader galaxy dynamics. Deciphering the information encoded in the LISA and PTA observations will grant us access to physics spanning a remarkable range, from the galactic scale down to the MBH horizon some 12 orders of magnitude smaller. In the coming years, observations of galaxies in deep fields coupled to forefront cosmological simulations will help us to interpret the rate of MBH mergers as measured by LISA and PTAs in the low-frequency gravitational Universe.
With its unique and nearly complete census of coalescing massive black hole binaries, from the Cosmic Dawn to the local Universe, GW observations will be a game changer in our understanding of the deepest mysteries of MBH birth, growth and coevolution, shedding light on structure formation, galaxy evolution and dynamics, accretion and fundamental physics. Space-based and pulsar timing gravitational wave observatories will cement the role of GWs as precise MBH probes across cosmic history, providing definitive answers about their origins and evolution. Interpreting the GW view of MBHs in the context of large scale structure, galaxy formation, and evolution requires a broad scientific vision that includes detailed modeling, inference, statistics, and input from EM surveys. By using MBH mergers as signposts for galaxy formation and assembly, we are poised for a paradigm shift in our understanding of MBHs and the Universe. $^1$ M. Colpi, Department of Physics, University of Milano Bicocca, Piazza della Scienza 3, I20126 Milano, Italy $^2$ K. Holley-Bockelmann, Physics and Astronomy Department, Vanderbilt University, PMB 401807, 2301 Vanderbilt Place, Nashville, USA $^3$ K. Holley-Bockelmann, Physics Department, Fisk University, Nashville, USA $^4$ T.Bogdanović, Center for Relativistic Astrophysics, School of Physics, Georgia Institute of Technology, Atlanta GA 30332, USA $^5$ P. Natarajan, Department of Astronomy, Yale University, New Haven, CT 06511, USA $^6$ J. Bellovary, Queensboro Community College and American Museum of Natural History, New York, NY, USA $^7$ A. Sesana, School of Physics and Astronomy, University of Birmingham, Edgbaston, Birmingham B15 2TT, United Kingdom $^8$ M. Tremmel, Yale Center for Astronomy and Astrophysics, Physics Department, P.O. Box 208120, New Haven, CT 06520, USA $^9$ J. Schnittman, NASA Goddard Space Flight Center, Greenbelt, MD, USA $^{10}$ J. M. 
Comerford, Department of Astrophysical and Planetary Sciences, University of Colorado Boulder, Boulder, CO 80309, USA $^{11}$ E. Barausse, CNRS, UMR 7095, Institut d’Astrophysique de Paris, 98 bis Bd Arago, 75014 Paris, France $^{12}$ E. Berti, Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA $^{13}$ M. Volonteri, Sorbonne Universités, UPMC Université Paris 06 et CNRS, UMR7095, Institut d’Astrophysique de Paris, 98bis Boulevard Arago, F-75014, Paris, France $^{14}$ F. M. Khan, Department of Space Science, Institute of Space Technology, P.O. Box 2750 Islamabad, Pakistan $^{15}$ S. T. McWilliams, Department of Physics and Astronomy, West Virginia University, Morgantown, WV 26506, USA; Center for Gravitational Waves and Astronomy, Morgantown, WV 26506, USA $^{16}$ S. Burke-Spolaor, Department of Physics and Astronomy, West Virginia University, Morgantown, WV 26506, USA; Center for Gravitational Waves and Astronomy, Morgantown, WV 26506, USA; CIFAR Azrieli Global Scholar $^{17}$ J. S. Hazboun, Physical Sciences Division, University of Washington Bothell, 18115 Campus Way NE Bothell, WA 98011-8246 $^{18}$ J. Conklin, University of Florida $^{19}$ G. Mueller, University of Florida $^{20}$ S. Larson, Northwestern University [^1]: [email protected] [^2]: [email protected]
--- abstract: 'While backscatter communication emerges as a promising solution to reduce power consumption at IoT devices, the transmission range of backscatter communication is short. To this end, this work integrates unmanned ground vehicles (UGVs) into the backscatter system. With such a scheme, the UGV could facilitate the communication by approaching various IoT devices. However, moving also consumes energy and a fundamental question is: what is the right balance between spending energy on moving versus on communication? To answer this question, this paper proposes a joint graph mobility and backscatter communication model. With the proposed model, the total energy minimization at the UGV is formulated as a mixed integer nonlinear programming (MINLP) problem. Furthermore, an efficient algorithm that achieves a locally optimal solution is derived, and it leads to an automatic trade-off between spending energy on moving versus on communication. Numerical results are provided to validate the performance of the proposed algorithm.' author: - | Shuai Wang$^{*}$, Minghua Xia$^{\dag}$, and Yik-Chung Wu$^{*}$\ $^{*}$Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong\ $^{\dag}$School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, 510006, China\ E-mail: [email protected]; [email protected]; [email protected] title: '[Joint Communication and Motion Energy Minimization in UGV Backscatter Communication]{}' --- Backscatter communication, Internet of Things (IoT), mobility, unmanned ground vehicle (UGV).

Introduction
============

With a wide range of commercial and industrial applications, the Internet of Things (IoT) market is continuously growing, with the number of inter-connected IoT devices expected to exceed 20 billion by 2020 [@iot1]. However, these massive IoT devices (e.g., sensors and tags) are usually limited in size and energy supply [@xia], making data collection challenging in IoT systems.
To this end, backscatter communication is a promising solution, because it eliminates radio frequency (RF) components in IoT devices [@back1; @back2]. Unfortunately, due to the round-trip path-loss, the transmission range of backscatter communication is limited [@back4; @back5; @back6]. This can be seen from a recent prototype in [@back1], where the wirelessly powered backscatter communication only supports a range of $1$ meter at a data-rate of $1$ $\mathrm{kbps}$. To combat the short communication range, this paper investigates a viable solution in which the backscatter transmitter and receiver are mounted on an unmanned ground vehicle (UGV). With such a scheme, the UGV could vary its location for wireless data collection, thus having the flexibility of being close to different IoT devices at different times [@ugv1]. However, since moving the UGV would consume motion energy, an improperly chosen path might lead to excessive movement, thus offsetting the benefit brought by movement [@ugv1; @ugv2; @ugv3]. Therefore, the key is to balance the trade-off between spending energy on moving versus on communication, which unfortunately cannot be handled by traditional vehicle routing algorithms [@vr1; @vr2; @vr3], since they do not take the communication power and quality-of-service (QoS) into account. In view of the apparent research gap, this paper proposes an algorithm that leads to an automatic trade-off between spending energy on moving versus on communication. In particular, the proposed algorithm is obtained by integrating the graph mobility model and the backscatter communication model. With the proposed model, the joint mobility management and power allocation problem is formulated as a QoS constrained energy minimization problem.
Nonetheless, such a problem turns out to be a mixed integer nonlinear programming (MINLP) problem, which is nontrivial to solve due to the nonlinear coupling between discrete variables brought by moving and continuous variables brought by communication. This is in contrast to unmanned aerial vehicle communication in which only continuous variables are involved [@uav]. To this end, an efficient algorithm, which is guaranteed to obtain a locally optimal solution, is proposed. Simulation results are presented to demonstrate the performance of the proposed algorithm under various noise power levels at IoT devices. *Notation*. Italic letters, simple bold letters, and capital bold letters represent scalars, vectors, and matrices, respectively. Curlicue letters represent sets and $|\cdot|$ is the cardinality of a set. We use $(a_1,a_2,\cdots)$ to represent a sequence and $[a_1,a_2,\cdots]^{T}$ to represent a column vector, with $(\cdot)^{T}$ being the transpose operator. The operators $\textrm{Tr}(\cdot)$ and $(\cdot)^{-1}$ take the trace and the inverse of a matrix, respectively. Finally, $\mathbb{E}(\cdot)$ represents the expectation of a random variable.

System Model
============

Mobility Model
--------------

![An illustration of the UGV mobility model with $M=7$.[]{data-label="fig_sim"}](WANG1.eps){width="50mm"} We consider a wireless data collection system, which consists of $K$ IoT users and one UGV equipped with an RF transmitter and a tag reader. The environment in which the UGV operates is described by a directed graph $(\mathcal{V},\mathcal{E})$ as shown in Fig. 1, where $\mathcal{V}$ is the set of $M$ vertices representing the possible stopping points, and $\mathcal{E}$ is the set of directed edges representing the allowed movement paths [@graph].
To quantify the path length, a matrix $\mathbf{D}=[D_{1,1},\cdots,D_{1,M};\cdots;D_{M,1},\cdots,D_{M,M}]\in\mathbb{R}^{M\times M}_+$ is defined, with the element $D_{m,j}$ representing the distance from vertex $m$ to vertex $j$ ($D_{m,m}=0$ for any $m$). If there is no allowed path from vertex $m$ to vertex $j$, we set $D_{m,j}=+\infty$ [@graph]. To model the movement of the UGV, we define a visiting path $\mathcal{Q}=\{y_1,y_2,\cdots,y_Q\}$ where $y_j \in \mathcal{V}$ for $j=1,\cdots,Q$ and $(y_j,y_{j+1})\in \mathcal{E}$ for $j=1,\cdots,Q-1$, with $Q-1$ being the number of steps to be taken. Without loss of generality, we assume the following two conditions hold: - $y_1=y_Q$. This is generally true as a typical UGV management scenario is to have the UGV standing by at the starting point (e.g., for charging and maintenance services) after the data collection task [@vr1]. For notational simplicity, it is assumed that vertex $y_1=y_Q=1$ is the start and end point of the path to be designed. - There are no repeating vertices among $(y_1,\cdots,y_{Q-1})$. This is true because if a vertex $m$ is visited twice, we can always introduce an auxiliary vertex with $D_{M+1,j}=D_{m,j}$ and $D_{j,M+1}=D_{j,m}$ for all $j\in\mathcal{V}$ [@laporte]. Thus this scenario can be represented by an extended graph with one more vertex and an extended $\mathbf{D}$ with dimension $(M+1)\times(M+1)$. Correspondingly, we define the selection variable $\mathbf{v}=[v_1,\cdots,v_M]^{T}\in\{0,1\}^M$, where $v_m=1$ if the vertex $m$ appears in the path $\mathcal{Q}$ and $v_m=0$ otherwise. Furthermore, we define a matrix $\mathbf{W}=[W_{1,1},\cdots,W_{1,M};\cdots;W_{M,1},\cdots,W_{M,M}]\in\{0,1\}^{M\times M}$, with $W_{y_j,y_{j+1}}=1$ for all $j=1,\cdots,Q-1$ and zero otherwise. 
With the moving time from the vertex $m$ to the vertex $j$ being $D_{m,j}/a$ where $a$ is the velocity, the total moving time along path $\mathcal{Q}$ is $$\begin{aligned} \frac{1}{a}\sum_{m=1}^M\sum_{j=1}^MW_{m,j}D_{m,j}=\frac{\mathrm{Tr}(\mathbf{D}^{T}\mathbf{W})}{a}. \label{SW}\end{aligned}$$ Furthermore, since the total motion energy $E_M$ of the UGV is proportional to the total motion time [@ugv1; @ugv2; @ugv3], the motion energy can be expressed in the form of $$\begin{aligned} E_M=\left(\frac{\alpha_1}{a}+\alpha_2\right)\mathrm{Tr}(\mathbf{D}^{T}\mathbf{W}), \label{EW}\end{aligned}$$ where $\alpha_1$ and $\alpha_2$ are parameters of the model (e.g., for a Pioneer 3DX robot in Fig. 1, $\alpha_1=0.29$ and $\alpha_2=7.4$ [@ugv1 Sec. IV-C]). Backscatter Communication Model ------------------------------- Based on the mobility model, the UGV moves along the selected path $\mathcal{Q}$ to collect data from users. In particular, from the starting point $y_1$, the UGV stops for a duration $u_{y_1}$ and then it moves along edge $(y_1,y_2)$ to its outward neighbor $y_2$, and stops for a duration $u_{y_2}$. The UGV keeps on moving and stopping along the path until it reaches the destination $y_Q$. When the UGV stops at the vertex $m$ (with $v_m=1$), it will wait for a time duration $u_m$ for data collection. Out of this $u_m$, a duration of $t_{k,m}$ will be assigned to collect data from user $k$ via full-duplex backscatter communication[^1] [@back2]. More specifically, if $t_{k,m}=0$, the IoT user $k$ will not be served in duration $u_m$. On the other hand, if $t_{k,m}\neq0$, the RF source at the UGV transmits a symbol $x_{k,m}\in\mathbb{C}$ with $\mathbb{E}[|x_{k,m}|^2]=p_{k,m}$, where $p_{k,m}$ is the transmit power of the RF source. 
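As a quick sanity check, the motion-time and motion-energy expressions \eqref{SW}-\eqref{EW} can be evaluated in a few lines of Python. The distance matrix, edge-selection matrix, and speed below are hypothetical toy values; only $\alpha_1=0.29$ and $\alpha_2=7.4$ follow the cited Pioneer 3DX parameters.

```python
# Sketch of E_M = (alpha_1/a + alpha_2) * Tr(D^T W) for a toy path.
# D, W, and a are made up for illustration; alpha_1, alpha_2 are the
# Pioneer 3DX values cited in the text.

def motion_energy(D, W, a, alpha1=0.29, alpha2=7.4):
    """Total motion energy along the path encoded by the 0/1 matrix W."""
    total_dist = sum(D[m][j] * W[m][j]
                     for m in range(len(D)) for j in range(len(D)))
    return (alpha1 / a + alpha2) * total_dist

# 3-vertex cycle 1 -> 2 -> 3 -> 1, each leg 5 m, speed a = 1 m/s
D = [[0, 5, 5], [5, 0, 5], [5, 5, 0]]
W = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
print(motion_energy(D, W, a=1.0))  # (0.29 + 7.4) * 15 m, i.e. ~115.35 J
```

Note that $W_{m,j}=1$ selects exactly the traversed edges, so the double sum reduces to the trace expression in \eqref{SW}.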
Then the received signal-to-noise ratio (SNR) at the UGV tag reader is $\eta |g_{k,m}|^2|h_{k,m}|^2 p_{k,m}/N_0$, where $h_{k,m}\in \mathbb{C}$ is the downlink channel from the UGV to user $k$, $g_{k,m}\in \mathbb{C}$ is the uplink channel[^2] from user $k$ to the UGV, and $N_0$ is the power of complex Gaussian noise (including the self-interference due to full-duplex communication [@liwang; @zwen; @zwen2]). Furthermore, $\eta$ is the tag scattering efficiency determined by the load impedance and the antenna impedance [@back10]. Based on the backscatter model, the transmission rate during $t_{k,m}$ is given by $$\begin{aligned} \label{rate} R_{k,m}=\mathrm{log}_2\left(1+v_m\cdot\frac{\beta\eta|g_{k,m}|^2|h_{k,m}|^2 p_{k,m}}{N_0}\right),\end{aligned}$$ where $\beta$ is the performance loss due to imperfect modulation and coding schemes in backscatter communication [@back11]. For example, in bistatic backscatter communication with frequency shift keying, $\beta=0.5$ [@back11]. On the other hand, in ambient backscatter communication with on-off keying, $\beta$ is obtained by fitting the logarithm function $\mathrm{log}_2\left(1+\beta x\right)$ to $1-\mathbb{Q}\left(\sqrt{x}\right)$ [@ook], where $\mathbb{Q}\left(x\right)=1/\sqrt{2\pi}\int_x^\infty \mathrm{exp}\left(-u^2/2 \right)\mathrm{d}u$ refers to the Q-function. Joint Communication and Motion Energy Minimization ================================================== In wireless data collection systems, the task is to collect a certain amount of data from different IoT devices by planning the path (involving variables $\mathbf{v}$ and $\mathbf{W}$) and designing the stopping time $\{t_{k,m}\}$ and transmit power $\{p_{k,m}\}$. 
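The fitting step for on-off keying mentioned above can be sketched as a simple least-squares grid search; the SNR sample range and grid resolution below are illustrative assumptions, not taken from [@ook].

```python
# Sketch: fit beta in log2(1 + beta*x) to 1 - Q(sqrt(x)) by grid search.
# The sample range xs and the beta grid are arbitrary choices for
# illustration.
import math

def q_func(x):
    # Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

xs = [0.5 * i for i in range(1, 41)]            # SNR samples in [0.5, 20]
target = [1.0 - q_func(math.sqrt(x)) for x in xs]

def sq_err(beta):
    return sum((math.log2(1.0 + beta * x) - t) ** 2
               for x, t in zip(xs, target))

best_beta = min((0.001 * k for k in range(1, 1001)), key=sq_err)
```

Since the target $1-\mathbb{Q}(\sqrt{x})$ saturates at one while $\mathrm{log}_2(1+\beta x)$ keeps growing, the fitted $\beta$ is necessarily well below one over this range.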
In particular, the data collection QoS requirement of the $k^{\mathrm{th}}$ IoT device can be described by $$\begin{aligned} &\sum_{m=1}^M t_{k,m}\cdot\mathrm{log}_2\left(1+v_m\cdot\frac{\beta\eta|g_{k,m}|^2|h_{k,m}|^2 p_{k,m}}{N_0}\right)\geq\gamma_k,\end{aligned}$$ where $\gamma_k>0$ (in $\mathrm{bit/Hz}$) is the amount of data to be collected from user $k$. Notice that the variables $\mathbf{v}$ and $\mathbf{W}$ are dependent since $v_m=0$ implies $W_{m,j}=W_{j,m}=0$ for any $j\in \mathcal{V}$. On the other hand, the UGV would visit the vertex with $v_m=1$, making $\sum_{j=1}^M W_{m,j} = \sum_{j=1}^M W_{j,m}=1$. Combining the above two cases, we have $$\begin{aligned} &\sum_{j=1}^MW_{m,j}=v_m,~\sum_{j=1}^MW_{j,m}=v_m,~~\forall m=1,\cdots,M.\end{aligned}$$ Furthermore, since the path must be connected, the following subtour elimination constraints are required to eliminate disjoint subtours [@laporte]: $$\begin{aligned} &\lambda_m-\lambda_j+\left(\sum_{l=1}^Mv_l-1\right)W_{m,j}+\left(\sum_{l=1}^Mv_l-3\right)W_{j,m} \nonumber\\ &\leq \sum_{l=1}^Mv_l-2 +J\left(2-v_m-v_j\right) ,~~\forall m,j\geq 2,~m\neq j, \nonumber\\ &v_m\leq\lambda_m\leq\left(\sum_{l=1}^Mv_l-1\right)v_m,~~\forall m\geq 2,\end{aligned}$$ where $\{\lambda_{m}\}$ are slack variables to guarantee a connected path, and $\sum_{l=1}^Mv_l$ is the number of vertices involved in the path. The constant $J=10^{6}$ is chosen large enough that the first constraint above is always satisfied when $v_m=0$ or $v_j=0$. In this way, the vertices that are not visited do not participate in the subtour elimination constraints. Having the data collection and graph mobility constraints satisfied, it is then crucial to reduce the total energy consumption at the UGV. 
As the energy consumption includes the motion energy $E_M=\left(\alpha_1/a+\alpha_2\right)\mathrm{Tr}(\mathbf{D}^{T}\mathbf{W})$ and the communication energy $E_C=\sum_{m=1}^M\sum_{k=1}^Kt_{k,m}p_{k,m}$, the joint mobility management and power allocation problem of the data collection system is formulated as $\mathrm{P}1$, where \eqref{P1b} constrains the operation (including moving and data collection) to be completed within $T$ seconds, and \eqref{notvisit} forces the stopping time to be zero if a vertex is not visited. It can be seen from constraint \eqref{P1a} of $\mathrm{P}1$ that the UGV can choose the stopping vertices, which in turn affect the channel gains to and from the IoT users. By choosing stopping vertices with better channel gains to the IoT users, the transmit powers $\{p_{k,m}\}$ might be reduced. However, this might also require additional motion energy, which in turn increases the energy consumption of the UGV. Therefore, there exists a trade-off between moving and communication, and solving $\mathrm{P}1$ precisely balances this energy trade-off. Unfortunately, problem $\mathrm{P}1$ is nontrivial to solve due to the following reasons. Firstly, $\mathrm{P}1$ is NP-hard, since it involves the integer constraints \eqref{edge}$-$\eqref{vertex} [@dimitri]. Secondly, the data-rate and the energy cost at each vertex depend on the transmit power $\{p_{k,m}\}$ and transmission time $\{t_{k,m}\}$, which are unknown. This is in contrast to traditional integer programming problems [@dimitri], where the reward of visiting each vertex is a constant. 
$$\begin{aligned} \mathrm{P}1: &\mathop{\mathrm{min}}_{\substack{\mathbf{v},\mathbf{W},\{\lambda_m\}\\ \{t_{k,m},p_{k,m}\}}} ~\left(\frac{\alpha_1}{a}+\alpha_2\right)\mathrm{Tr}(\mathbf{D}^{T}\mathbf{W})+\sum_{m=1}^M\sum_{k=1}^Kt_{k,m}p_{k,m} \nonumber \\ \mathrm{s.t.}~&\sum_{m=1}^M t_{k,m}\cdot\mathrm{log}_2\left(1+v_m\cdot\frac{\beta\eta|g_{k,m}|^2|h_{k,m}|^2 p_{k,m}}{N_0}\right) \nonumber\\ &\geq\gamma_k,~~\forall k, \label{P1a} \\ &\frac{1}{a}\mathrm{Tr}(\mathbf{D}^{T}\mathbf{W})+\sum_{m=1}^M\sum_{k=1}^Kt_{k,m}\leq T, \label{P1b} \\ & \sum_{j=1}^MW_{m,j}=v_m,~\sum_{j=1}^MW_{j,m}=v_m,~~\forall m, \label{P1c} \\ &\lambda_m-\lambda_j+\left(\sum_{l=1}^Mv_l-1\right)W_{m,j}+\left(\sum_{l=1}^Mv_l-3\right)W_{j,m} \nonumber\\ & \leq \sum_{l=1}^Mv_l-2+J\left(2-v_m-v_j\right) , \nonumber\\ & \forall m,j\geq 2,~m\neq j, \label{subtour1} \\ &v_m\leq\lambda_m\leq\left(\sum_{l=1}^Mv_l-1\right)v_m,~~\forall m\geq 2, \label{subtour2} \\ &W_{m,j}\in\{0,1\},~~\forall m,j,~~W_{m,m}=0,~~\forall m, \label{edge} \\ &v_{1}=1,~v_{m}\in\{0,1\},~~\forall m\geq 2, \label{vertex} \\ &(1-v_m)\cdot t_{k,m}=0,~~\forall k,m, \label{notvisit} \\ &t_{k,m}\geq0,~p_{k,m}\geq0,~~\forall k,m, \label{resource}\end{aligned}$$ Locally Optimal Solution to $\mathrm{P}1$ ======================================= Despite the optimization challenges, this section proposes an algorithm that theoretically achieves a locally optimal solution to $\mathrm{P}1$. The insight behind this algorithm is to derive the optimal solution of $\mathbf{W},\{\lambda_m\},\{t_{k,m},p_{k,m}\}$ to $\mathrm{P}1$ with fixed $\mathbf{v}$. By representing $\mathbf{W},\{\lambda_m\},\{t_{k,m},p_{k,m}\}$ as functions of $\mathbf{v}$, problem $\mathrm{P}1$ is simplified to an equivalent problem involving only $\mathbf{v}$. Then we capitalize on the successive local search (SLS) method [@meta; @sls1; @sls2] to obtain the locally optimal solution. 
Optimal Solution of $\mathbf{W}$ and $\{t_{k,m},p_{k,m}\}$ with Fixed $\mathbf{v}$ ---------------------------------------------------------------------------------- When $\mathbf{v}=\widetilde{\mathbf{v}}$, where $\widetilde{\mathbf{v}}$ is any feasible solution to $\mathrm{P}1$, the constraint \eqref{vertex} can be dropped since it only involves $\mathbf{v}$. Moreover, to resolve the nonlinear coupling between $\{t_{k,m}\}$ and $\{p_{k,m}\}$, we replace $\{p_{k,m}\}$ with a new variable $\{Q_{k,m}\}$ such that $Q_{k,m}:=t_{k,m}p_{k,m}$. Based on the above variable substitution, the objective function of $\mathrm{P}1$ becomes $$\begin{aligned} &\left(\alpha_1/a+\alpha_2\right)\mathrm{Tr}(\mathbf{D}^{T}\mathbf{W})+\sum_{m=1}^M\sum_{k=1}^KQ_{k,m}, \label{obj}\end{aligned}$$ which is linear. On the other hand, the constraint \eqref{P1a} is equivalent to $$\begin{aligned} \label{P1a2} &\sum_{m=1}^M\underbrace{ t_{k,m}\cdot\mathrm{log}_2\left(1+\frac{A_{k,m} Q_{k,m}}{t_{k,m}}\right)} _{:=\Phi_{k,m}(t_{k,m},Q_{k,m})} \geq\gamma_k,~~\forall k,\end{aligned}$$ where the constant $$\begin{aligned} \label{Akm} A_{k,m}:=\widetilde{v}_m\cdot\frac{\beta\eta|g_{k,m}|^2|h_{k,m}|^2}{N_0},\end{aligned}$$ and the following property can be established. **Property 1**. (i) The function $\Phi_{k,m}$ is concave with respect to $\{t_{k,m},Q_{k,m}\}$. (ii) $\Phi_{k,m}$ is a monotonically increasing function of $t_{k,m}$ for all $(k,m)$. To prove part (i) of this property, we note that $\Phi_{k,m}$ is the perspective transformation of the concave function $\mathrm{log}_2\left(1+A_{k,m} Q_{k,m}\right)$. Since the perspective transformation preserves concavity [@opt1], $\Phi_{k,m}$ is also concave. 
To prove part (ii), we compute the derivative of $\Phi_{k,m}$ in \eqref{P1a2} with respect to $t_{k,m}$ as $$\begin{aligned} \nabla_{t_{k,m}}\Phi_{k,m}=& \mathrm{log}_2\left(1+\frac{A_{k,m} Q_{k,m}}{t_{k,m}}\right) \nonumber\\ &-\frac{1}{\mathrm{ln}2}\cdot \frac{A_{k,m}Q_{k,m}}{t_{k,m}+A_{k,m}Q_{k,m}}.\end{aligned}$$ Using the result from part (i), we have $\nabla^2_{t_{k,m}}\Phi_{k,m}\leq 0$ due to $\Phi_{k,m}$ being concave. Therefore, $\nabla_{t_{k,m}}\Phi_{k,m}$ is a monotonically decreasing function of $t_{k,m}$. This means that $$\begin{aligned} \nabla_{t_{k,m}}\Phi_{k,m}\geq \mathop{\mathrm{lim}}_{t_{k,m}\rightarrow+\infty} \nabla_{t_{k,m}}\Phi_{k,m}=0,\end{aligned}$$ and the proof is completed. Based on part (i) of **Property 1**, it is clear that the constraint \eqref{P1a2} is convex. On the other hand, according to part (ii) of **Property 1**, the optimal $\mathbf{W}^*$ and $\{t_{k,m}^*\}$ to $\mathrm{P}1$ must activate the time-budget constraint \eqref{P1b}. Otherwise, we could always increase the value of $\{t_{k,m}\}$ such that the left-hand side of \eqref{P1a2} is increased. This would allow us to decrease the value of $\{Q_{k,m}\}$ (and thus the objective value of \eqref{obj}), which contradicts $\{t_{k,m}^*\}$ being optimal. 
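Property 1 can also be spot-checked numerically. The sketch below verifies midpoint concavity and monotonicity of $\Phi_{k,m}(t,Q)=t\,\mathrm{log}_2(1+AQ/t)$ at random sample points; the value of $A$ and the sampling ranges are arbitrary assumptions, and this is of course a check, not a proof.

```python
# Numerical spot-check of Property 1 for Phi(t, Q) = t*log2(1 + A*Q/t).
# A = 2.0 and the sampling ranges are arbitrary illustrative choices.
import math
import random

A = 2.0

def phi(t, Q):
    return t * math.log2(1.0 + A * Q / t)

random.seed(0)
for _ in range(1000):
    t1, Q1 = random.uniform(0.1, 10), random.uniform(0.1, 10)
    t2, Q2 = random.uniform(0.1, 10), random.uniform(0.1, 10)
    mid = phi((t1 + t2) / 2, (Q1 + Q2) / 2)
    # (i) midpoint concavity
    assert mid >= (phi(t1, Q1) + phi(t2, Q2)) / 2 - 1e-9
    # (ii) monotonically increasing in t
    assert phi(t1 + 1.0, Q1) >= phi(t1, Q1) - 1e-9
```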
As a result, constraint \eqref{P1b} can be restricted to the equality $\mathrm{Tr}(\mathbf{D}^{T}\mathbf{W})/a+\sum_{m=1}^M\sum_{k=1}^Kt_{k,m}=T$, giving $$\begin{aligned} \label{DW} \mathrm{Tr}(\mathbf{D}^{T}\mathbf{W})=a\left(T-\sum_{m=1}^M\sum_{k=1}^Kt_{k,m}\right).\end{aligned}$$ Putting \eqref{DW} into \eqref{obj}, $\mathrm{P}1$ is equivalently transformed into the following two-stage optimization problem: $$\begin{aligned} \mathrm{P}2: \mathop{\mathrm{min}}_{\substack{\{t_{k,m},Q_{k,m}\}}} ~&\left(\alpha_1+\alpha_2a\right)\left(T-\sum_{m=1}^M\sum_{k=1}^Kt_{k,m}\right) \nonumber\\ & +\sum_{m=1}^M\sum_{k=1}^KQ_{k,m} \nonumber\\ \mathrm{s.t.}~~~~~&\sum_{m=1}^M\Phi_{k,m}(t_{k,m},Q_{k,m}) \geq\gamma_k,~~\forall k, \nonumber\\ & \sum_{m=1}^M\sum_{k=1}^Kt_{k,m} =\mathop{\mathrm{max}}_{\substack{\{\mathbf{W},\lambda_m\}}} \Big\{T-\frac{\mathrm{Tr}(\mathbf{D}^{T}\mathbf{W})}{a}: \nonumber\\ &~~~~~~~~~~~~~~~~~~~~ \eqref{P1c}-\eqref{edge}\Big\},\nonumber \\ &(1-\widetilde{v}_m)\cdot t_{k,m}=0,~~\forall k,m, \nonumber \\ &t_{k,m}\geq0,~Q_{k,m}\geq0,~~\forall k,m.\end{aligned}$$ To solve $\mathrm{P}2$, we first need to compute the right-hand side of the second constraint, which leads to the following problem: $$\begin{aligned} &\mathop{\mathrm{max}}_{\substack{\mathbf{W},\{\lambda_{m}\}}} ~T-\frac{\mathrm{Tr}(\mathbf{D}^{T}\mathbf{W})}{a} ~~~\mathrm{s.t.}~\eqref{P1c}-\eqref{edge}. \label{TSP}\end{aligned}$$ Problem \eqref{TSP} is a travelling salesman problem, which can be optimally solved via the software Mosek [@laporte; @MINLP]. Denoting the optimal solution to \eqref{TSP} as $\{\widehat{\mathbf{W}},\widehat{\lambda}_m\}$, the optimal objective value of the travelling salesman problem is given by $\Upsilon(\widetilde{\mathbf{v}}):= T-\mathrm{Tr}(\mathbf{D}^{T}\widehat{\mathbf{W}})/a$. Finally, by putting the obtained $\Upsilon(\widetilde{\mathbf{v}})$ into $\mathrm{P}2$, the second constraint of $\mathrm{P}2$ is written as $\sum_{m=1}^M\sum_{k=1}^Kt_{k,m} = \Upsilon(\widetilde{\mathbf{v}})$. 
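For small instances, the travelling salesman subproblem can also be solved exactly by Held-Karp dynamic programming, which matches the worst-case complexity quoted later. The sketch below is an alternative to the off-the-shelf solver used in the paper; the distance matrix is a hypothetical example over the visited vertices, with vertex $0$ as the start/end point.

```python
# Held-Karp dynamic programming for the shortest closed tour from
# vertex 0 over all vertices (O(M^2 * 2^M) time). D is a toy
# asymmetric distance matrix used only for illustration.

def held_karp(D):
    M = len(D)
    INF = float("inf")
    # dp[(S, j)]: shortest path from 0 visiting bitmask S, ending at j
    dp = {(1, 0): 0.0}
    for S in range(1, 1 << M):
        if not (S & 1):          # every state must contain the start
            continue
        for j in range(M):
            if not (S >> j) & 1 or (S, j) not in dp:
                continue
            for k in range(M):   # extend the path to an unvisited k
                if (S >> k) & 1:
                    continue
                nS = S | (1 << k)
                cand = dp[(S, j)] + D[j][k]
                if cand < dp.get((nS, k), INF):
                    dp[(nS, k)] = cand
    full = (1 << M) - 1
    return min(dp[(full, j)] + D[j][0] for j in range(1, M))

D = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
print(held_karp(D))  # optimal tour 0 -> 2 -> 3 -> 1 -> 0 of length 21
```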
Since all the other constraints in $\mathrm{P}2$ are convex, $\mathrm{P}2$ is a convex optimization problem. Therefore, $\mathrm{P}2$ can be optimally solved by CVX, a MATLAB package for solving convex problems [@opt1]. Denoting its solution as $\{\widehat{t}_{k,m},\widehat{Q}_{k,m}\}$, the optimal $\{\widehat{p}_{k,m}\}$ with fixed $\mathbf{v}=\widetilde{\mathbf{v}}$ can be recovered as $\widehat{p}_{k,m}=\widehat{Q}_{k,m}/\widehat{t}_{k,m}$. Locally Optimal Solution of $\mathbf{v}$ -------------------------------------- With the path selection $\widehat{\mathbf{W}}$, transmit times $\{\widehat{t}_{k,m}\}$, and transmit powers $\{\widehat{p}_{k,m}\}$ derived in Section IV-A, the optimal objective value of $\mathrm{P}1$ with $\mathbf{v}=\widetilde{\mathbf{v}}$ can be written as $$\begin{aligned} &\Xi(\widetilde{\mathbf{v}})= \left(\alpha_1/a+\alpha_2\right)\mathrm{Tr}(\mathbf{D}^{T}\widehat{\mathbf{W}}) +\sum_{m=1}^M\sum_{k=1}^K\widehat{t}_{k,m}\widehat{p}_{k,m}.\end{aligned}$$ Therefore, problem $\mathrm{P}1$ is re-written as $$\begin{aligned} &\mathrm{P}3:\mathop{\mathrm{min}}_{\substack{\mathbf{v}}} ~~\Xi(\mathbf{v}) ~~\mathrm{s.t.}~~v_{1}=1,~v_{m}\in\{0,1\},~~\forall m\geq 2. \label{v}\end{aligned}$$ To solve $\mathrm{P}3$, a naive way is to apply exhaustive search over $\mathbf{v}$. Unfortunately, since the search space of $\{v_m\}$ is very large (i.e., of size $2^{M-1}$), direct implementation of exhaustive search is computationally prohibitive. To address this issue, an SLS method [@meta; @sls1; @sls2] is presented, which significantly reduces the computational complexity compared to exhaustive search. More specifically, we start from a feasible solution of $\mathbf{v}$ (e.g., $\mathbf{v}^{[0]}=[1,0,\cdots,0]^{T}$), and randomly select a candidate solution $\mathbf{v}'$ from the neighborhood $\mathcal{N}(\mathbf{v}^{[0]})$. 
Since a natural neighborhood operator for binary optimization problems is to flip the values of $\{v_m\}$, $\mathcal{N}(\mathbf{v}^{[0]})$ can be set to $$\begin{aligned} \mathcal{N}(\mathbf{v}^{[0]})=\{\mathbf{v}\in\{0,1\}^M:||\mathbf{v}-\mathbf{v}^{[0]}||_0\leq L,~v_1=1\},\end{aligned}$$ where $L\geq 1$ is the size of the neighborhood [@sls2]. It can be seen that $\mathcal{N}(\mathbf{v}^{[0]})$ is a subset of the entire feasible space and contains solutions “close” to $\mathbf{v}^{[0]}$. With the neighborhood $\mathcal{N}(\mathbf{v}^{[0]})$ defined above and the choice of $\mathbf{v}$ fixed to $\mathbf{v}=\mathbf{v}'$, we consider two cases. - If $\Xi(\mathbf{v}')\leq\Xi(\mathbf{v}^{[0]})$, we update $\mathbf{v}^{[1]}\leftarrow\mathbf{v}'$. By treating $\mathbf{v}^{[1]}$ as a new feasible solution, we can construct the next neighborhood $\mathcal{N}(\mathbf{v}^{[1]})$. - If $\Xi(\mathbf{v}')>\Xi(\mathbf{v}^{[0]})$, we sample another point within the neighborhood $\mathcal{N}(\mathbf{v}^{[0]})$ until $\Xi(\mathbf{v}')\leq\Xi(\mathbf{v}^{[0]})$. The above procedure is repeated to iteratively generate a sequence $\{\mathbf{v}^{[1]},\mathbf{v}^{[2]},\cdots\}$, and the converged point is guaranteed to be a locally optimal solution to $\mathrm{P}1$ [@dimitri]. In practice, we terminate the iterative procedure when the number of iterations exceeds $\overline{\mathrm{Iter}}$. Summary of Algorithm -------------------- Since Algorithm 1 finds a locally optimal solution of $\mathbf{v}$ to $\mathrm{P}3$, and the optimal solution of $\mathbf{W}$ and $\{t_{k,m},p_{k,m}\}$ with fixed $\mathbf{v}$ can be computed according to Section IV-A, the entire procedure for computing a locally optimal solution to $\mathrm{P}3$ (equivalently $\mathrm{P}1$) is summarized in Algorithm 1. 
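The SLS loop described above can be sketched in a few lines. Since evaluating the true $\Xi(\mathbf{v})$ requires solving $\mathrm{P}2$, the objective below is a hypothetical surrogate cost; the all-ones starting point (instead of $[1,0,\cdots,0]^{T}$) is likewise an illustrative choice that makes the descent visible.

```python
# Sketch of successive local search over v with a Hamming-ball
# neighborhood of radius L. xi() is a stand-in surrogate cost, NOT the
# true Xi(v) from P3 (which requires solving P2).
import random

M, L, max_iter = 8, 3, 200
random.seed(1)
weights = [random.uniform(0.0, 1.0) for _ in range(M)]

def xi(v):
    # hypothetical surrogate: cost of visiting the selected vertices
    return sum(w for w, vm in zip(weights, v) if vm)

def neighbor(v):
    # flip at most L entries, always keeping v[0] = 1 (start vertex)
    w = list(v)
    for idx in random.sample(range(1, M), random.randint(1, L)):
        w[idx] ^= 1
    return w

v = [1] * M                      # illustrative initial feasible point
for _ in range(max_iter):
    cand = neighbor(v)
    if xi(cand) <= xi(v):        # accept only non-increasing moves
        v = cand
```

Because only non-increasing moves are accepted, the final `v` can never be worse than the starting point, mirroring the monotone convergence argument above.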
In terms of computational complexity, computing $\Upsilon(\mathbf{v}')$ would involve the travelling salesman problem, which requires a complexity of $O\left((M-1)^2\cdot2^{M-1}\right)$ in the worst case [@tsp]. On the other hand, since $\mathrm{P}2$ has $2KM$ variables, solving $\mathrm{P}2$ via CVX requires a complexity of $O\left((2KM)^{3.5}\right)$ [@opt2]. Therefore, with $\overline{\mathrm{Iter}}$ iterations, the proposed Algorithm 1 requires a complexity of $O\left(\overline{\mathrm{Iter}}\left[(M-1)^2\cdot2^{M-1}+(2KM)^{3.5}\right]\right)$. **Initialize** $\mathbf{v}^{[0]}=[1,0,0,\cdots]^{T}$ and a proper $L$. Set counter $n=0$ and the number of iterations $\mathrm{Iter}=0$. **Repeat**   Sample a solution $\mathbf{v}'\in\mathcal{N}(\mathbf{v}^{[n]})$.   Compute $\Xi(\mathbf{v}')$ by solving $\mathrm{P}1$ with $\mathbf{v}=\mathbf{v}'$.   If $\Xi(\mathbf{v}')\leq\Xi(\mathbf{v}^{[n]})$, update $\mathbf{v}^{[n+1]}\leftarrow\mathbf{v}'$ and $n\leftarrow n+1$.   Update $\mathrm{Iter}\leftarrow \mathrm{Iter}+1$. **Until** $\mathrm{Iter}= \overline{\mathrm{Iter}}$. Output $\mathbf{v}^{[n]}$, $\widehat{\mathbf{W}}$, and $\{\widehat{t}_{k,m},\widehat{p}_{k,m}\}$. Simulation Results and Discussions ================================== This section provides numerical results to evaluate the performance of the UGV backscatter communication network. It is assumed that the backscattering efficiency is $\eta=0.78$ (corresponding to $1.1~\mathrm{dB}$ loss [@back11]), and the performance loss due to imperfect modulation is $\beta=0.5$ [@back11]. Within the time budget $T=50~\mathrm{s}$, the data collection targets $\gamma_k\sim \mathcal{U}(2,4)$ in the unit of $\mathrm{bit/Hz}$ are requested by $K=10$ IoT users (corresponding to $K\gamma_k/T=0.4\sim0.8~\mathrm{bps/Hz}$ for the system [@iot1]), where $\mathcal{U}(a,b)$ represents the uniform distribution within the interval $[a,b]$. 
Based on the above settings, we simulate the data collection map in a $20~\mathrm{m}\times20~\mathrm{m}=400~\mathrm{m}^2$ square area, which is a typical size for smart warehouses. Inside this map, $K=10$ IoT users and $M=15$ vertices representing stopping points are uniformly scattered. Among all the vertices, the vertex $m=1$ is selected as the starting point of the UGV. With the locations of all the stopping points and the IoT devices, the distances between each pair of IoT device and stopping point can be computed, and the distance-dependent path-loss model $\varrho_{k,m}=\varrho_0\cdot(\frac{d_{k,m}}{d_0})^{-2.5}$ is adopted [@channel], where $d_{k,m}$ is the distance from user $k$ to the stopping point $m$, and $\varrho_0=10^{-3}$ is the path-loss at distance $d_0=1~\mathrm{m}$. Based on the path-loss model, channels $g_{k,m}$ and $h_{k,m}$ are generated according to $\mathcal{CN}(0,\varrho_{k,m})$. Each point in the figures is obtained by averaging over $100$ simulation runs, with independent channels and realizations of locations of vertices and users in each run. To verify the convergence of Algorithm 1 in Section IV, Fig. 2a shows the total energy consumption versus the number of iterations $\mathrm{Iter}$ when the receiver noise power $N_0=-70~\mathrm{dBm}$ (corresponding to power spectral density $-120~\mathrm{dBm/Hz}$ [@noise] with $100~\mathrm{kHz}$ bandwidth [@iot1]). It can be seen that with the choice of $L=3$, the total energy consumption in the unit of joule converges and stabilizes after $50$ iterations. This verifies the convergence property of SLS and also indicates that the number of iterations for SLS to converge is moderate. Therefore, we set $L = 3$ with the number of iterations being 50 in the subsequent simulations. Next, we focus on the energy management performance of Algorithm 1. In particular, the case of $K=10$ with $M=15$ is simulated, and the total energy consumption versus the noise power $N_0$ is shown in Fig. 2b. 
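The channel generation used in this simulation setup can be sketched as follows; the sample distance and sample count are arbitrary, while $\varrho_0=10^{-3}$, $d_0=1~\mathrm{m}$, and the exponent $2.5$ match the values stated above.

```python
# Sketch of the simulation channel model: path-loss
# rho = rho0 * (d/d0)^(-2.5) with Rayleigh fading g ~ CN(0, rho).
# The distance d and the number of samples are illustrative.
import math
import random

random.seed(0)
rho0, d0 = 1e-3, 1.0

def pathloss(d):
    return rho0 * (d / d0) ** (-2.5)

def rayleigh(var):
    # complex Gaussian CN(0, var): i.i.d. real/imag parts, var/2 each
    s = math.sqrt(var / 2.0)
    return complex(random.gauss(0.0, s), random.gauss(0.0, s))

d = 5.0
samples = [rayleigh(pathloss(d)) for _ in range(20000)]
avg_power = sum(abs(g) ** 2 for g in samples) / len(samples)
# avg_power should be close to pathloss(5.0), i.e. about 1.79e-5
```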
It can be seen that if the noise power is large, by allowing the UGV to visit all the vertices, it is possible to achieve a significantly lower energy consumption compared to the case of no UGV movement. However, this conclusion does not hold in the small noise power regime, which indicates that moving is not always beneficial. Fortunately, the proposed Algorithm 1 can automatically determine whether to move and how far to move. For example, if the noise power is extremely small (e.g., $-120~\mathrm{dBm}$), the UGV could easily collect the data from IoT users at the starting point. In such a case, the proposed Algorithm 1 would fix the UGV at the starting point. This can be seen from Fig. 2b at $N_0=-120~\mathrm{dBm}$, in which Algorithm 1 leads to the same performance as the case of no UGV movement. However, if the noise power is increased to a medium value (e.g., $-90$ $\mathrm{dBm}$), the total energy is reduced by allowing the UGV to move (with the moving path shown in Fig. 3a). On the other hand, if the noise power is large (e.g., $-60$ $\mathrm{dBm}$), the energy for data collection would be high for far-away users. Therefore, the UGV should spend more motion energy to get closer to IoT users. This is the case shown in Fig. 3b. But no matter which case happens, the proposed algorithm adaptively finds the best trade-off between spending energy on moving versus on communication, and therefore achieves the minimum energy consumption for all the simulated values of $N_0$ as shown in Fig. 2b. Conclusions =========== This paper studied a UGV-based backscatter data collection system, with an integrated graph mobility model and backscatter communication model. The joint mobility management and power allocation problem was formulated with the aim of energy minimization subject to communication QoS constraints and mobility graph structure constraints. 
An algorithm that automatically balances the trade-off between spending energy on moving and on communication was proposed. Simulation results showed that the proposed algorithm could significantly reduce energy consumption compared to the scheme with no UGV movement and the scheme with a fixed moving path. [50]{} J. Gubbi, R. Buyya, S. Marusic, and M. Palaniswami, “Internet of Things (IoT): A vision, architectural elements, and future directions,” *Future Gen. Comput. Syst.*, vol. 29, no. 7, pp. 1645-1660, 2013. M. Xia and S. Aïssa, “On the efficiency of far-field wireless power transfer,” [*IEEE Trans. Signal Process.*]{}, vol. 63, no. 11, pp. 2835-2847, Jun. 2015. V. Liu, A. Parks, V. Talla, S. Gollakota, D. Wetherall, and J. R. Smith, “Ambient backscatter: Wireless communication out of thin air,” in *Proc. ACM SIGCOMM*, 2013, pp. 39-50. V. Liu, V. Talla, and S. Gollakota, “Enabling instantaneous feedback with full-duplex backscatter,” in *Proc. ACM MobiCom*, 2014, pp. 67-78. A. Alma’aitah, H. S. Hassanein, and M. Ibnkahla, “Tag modulation silencing: Design and application in RFID anti-collision protocols,” *IEEE Trans. Commun.*, vol. 62, no. 11, pp. 4068-4079, Nov. 2014. P. N. Alevizos, K. Tountas, and A. Bletsas, “Multistatic scatter radio sensor networks for extended coverage,” *IEEE Trans. Wireless Commun.*, vol. 17, no. 7, pp. 4522-4535, Jul. 2018. B. Lyu, H. Guo, Z. Yang, and G. Gui, “Throughput maximization for hybrid backscatter assisted cognitive wireless powered radio networks,” *IEEE IoT Journal*, vol. 5, no. 3, pp. 2015-2024, Jun. 2018. Y. Mei, Y. H. Lu, Y. Hu, and C. Lee, “Deployment of mobile robots with energy and timing constraints,” *IEEE Trans. Robotics*, vol. 22, no. 3, pp. 507-522, Jun. 2006. Y. Shu, H. Yousefi, P. Cheng, J. Chen, Y. Gu, T. He, and K. G. Shin, “Near-optimal velocity control for mobile charging in wireless rechargeable sensor networks,” *IEEE Trans. Mobile Comput.*, vol. 15, no. 7, pp. 1699-1713, Jul. 2016. S. Wang, M. Xia, K. 
Huang, and Y.-C. Wu, “Wirelessly powered two-way communication with nonlinear energy harvesting model: Rate regions under fixed and mobile relay,” *IEEE Trans. Wireless Commun.*, vol. 16, no. 12, pp. 8190-8204, Dec. 2017. G. Laporte, “The vehicle routing problem: An overview of exact and approximate algorithms,” *European Journal of Operational Research*, vol. 59, no. 3, pp. 345-358, 1992. M. Ma, Y. Yang, and M. Zhao, “Tour planning for mobile data gathering mechanisms in wireless sensor networks,” *IEEE Trans. Veh. Technol.*, vol. 62, no. 4, pp. 1472-1483, May 2013. M. Zhao, J. Li, and Y. Yang, “A framework of joint mobile energy replenishment and data gathering in wireless rechargeable sensor networks,” *IEEE Trans. Mobile Comput.*, vol. 13, no. 12, pp. 2689-2705, Dec. 2014. Q. Wu, Y. Zeng, and R. Zhang, “Joint trajectory and communication design for multi-UAV enabled wireless networks,” *IEEE Trans. Wireless Commun.*, vol. 17, no. 3, pp. 2109-2121, Mar. 2018. J. A. Bondy and U. S. R. Murty, *Graph Theory with Applications*. New York: Elsevier, 1976. G. Laporte, “The traveling salesman problem: An overview of exact and approximate algorithms,” *European Journal of Operational Research*, vol. 59, no. 2, pp. 231-247, 1992. S. H. Kim and D. I. Kim, “Hybrid backscatter communication for wireless-powered heterogeneous networks,” *IEEE Trans. Wireless Commun.*, vol. 16, no. 10, pp. 6557-6570, Oct. 2017. L. Wang, F. Tian, T. Svensson, D. Feng, M. Song, and S. Li, “Exploiting full duplex for device-to-device communications in heterogeneous networks,” [*IEEE Commun. Mag.*]{}, vol. 53, no. 5, pp. 146-152, May 2015. Z. Wen, X. Liu, N. C. Beaulieu, R. Wang, and S. Wang, “Joint source and relay beamforming design for full-duplex MIMO AF relay SWIPT systems,” [*IEEE Commun. Lett.*]{}, vol. 20, no. 2, pp. 320-323, Feb. 2016. Z. Wen, S. Wang, X. Liu, and J. Zou, “Joint relay-user beamforming design in full-duplex two-way relay channel,” *IEEE Trans. Veh. Technol.*, vol. 66, no. 
3, pp. 2874-2879, Mar. 2017. G. Zhu, S. W. Ko, and K. Huang, “Inference from randomized transmissions by many backscatter sensors,” *IEEE Trans. Wireless Commun.*, vol. 17, no. 5, pp. 3111-3127, May 2018. J. G. Proakis, *Digital Communications (4th edition)*. New York, NY, USA: McGraw-Hill, 2001. K. A. Remley, H. R. Anderson, and A. Weisshar, “Improving the accuracy of ray-tracing techniques for indoor propagation modeling,” *IEEE Trans. Veh. Technol.*, vol. 49, no. 6, pp. 2350-2358, Nov. 2000. M. Malmirchegini and Y. Mostofi, “On the spatial predictability of communication channels,” *IEEE Trans. Wireless Commun.*, vol. 11, no. 3, pp. 964-978, Mar. 2012. D. P. Bertsekas, *Network Optimization: Continuous and Discrete Models*. Athena Scientific, 1998. M. Gendreau and J.-Y. Potvin, *Handbook of Metaheuristics (2nd edition)*. New York: Springer, 2010. F. Neumann and I. Wegener, “Randomized local search, evolutionary algorithms, and the minimum spanning tree problem,” *Theoretical Computer Science*, vol. 378, no. 1, pp. 32-40, 2007. L. Goldstein and M. Waterman, “Neighborhood size in the simulated annealing algorithm,” *Amer. J. Math. Manage. Sci.*, vol. 8, no. 3-4, pp. 409-423, Jan. 1988. S. Boyd and L. Vandenberghe, *Convex Optimization*. Cambridge, U.K.: Cambridge Univ. Press, 2004. P. Belotti, C. Kirches, S. Leyffer, J. Linderoth, J. Luedtke, and A. Mahajan, “Mixed-integer nonlinear optimization,” *Acta Numerica*, vol. 22, pp. 1-131, 2013. M. Held and R. M. Karp, “A dynamic programming approach to sequencing problems,” *J. of Soc. for Indust. and Appl. Math.*, vol. 10, no. 1, pp. 196-210, Mar. 1962. A. Ben-Tal and A. Nemirovski, *Lectures on Modern Convex Optimization* (MPS-SIAM Series on Optimization). Philadelphia, PA, USA: SIAM, 2013. S. Wang, M. Xia and Y.-C. Wu, “Multi-pair two-way relay network with harvest-then-transmit users: resolving pairwise uplink-downlink coupling,” *IEEE J. Sel. Topics Signal Process.*, vol. 10, no. 8, pp. 1506-1521, Dec. 2016. W. 
Stallings, *Wireless Communications and Networks.* Englewood Cliffs, NJ, USA: Prentice Hall, 2004. [^1]: When user $k$ adapts the variable impedance for modulating the backscattered waveform, other users keep silent to avoid collision [@back4]. [^2]: If the environment is static, ray tracing methods [@E1] could be used to estimate $\{g_{k,m},h_{k,m}\}$. On the other hand, if the channel is varying but with a fixed distribution, we could allow the UGV to collect a small number of measurements at the stopping points before a set of new missions (e.g., three to five missions) [@E2], and then the UGV can predict $\{g_{k,m},h_{k,m}\}$.
--- abstract: 'This paper introduces a new task of politeness transfer which involves converting non-polite sentences to polite sentences while preserving the meaning. We also provide a dataset of more than [1.39 million ]{}instances automatically labeled for politeness to encourage benchmark evaluations on this new task. We design a *tag* and *generate* pipeline that identifies stylistic attributes and subsequently generates a sentence in the target style while preserving most of the source content. For politeness as well as five other transfer tasks, our model outperforms the state-of-the-art methods on automatic metrics for content preservation, with a comparable or better performance on style transfer accuracy. Additionally, our model surpasses existing methods on human evaluations for grammaticality, meaning preservation and transfer accuracy across all the six style transfer tasks. The data and code are available at <https://github.com/tag-and-generate/>' author: - | Aman Madaan [^1], Amrith Setlur, Tanmay Parekh, Barnabas Poczos, Graham Neubig,\ **Yiming Yang, Ruslan Salakhutdinov, Alan W Black, Shrimai Prabhumoye**\ School of Computer Science\ Carnegie Mellon University\ Pittsburgh, PA, USA\ `{amadaan, asetlur, tparekh}@cs.cmu.edu`\ bibliography: - 'acl2020.bib' title: 'Politeness Transfer: A Tag and Generate Approach' --- Acknowledgments {#acknowledgments .unnumbered} =============== This material is based on research sponsored in part by the Air Force Research Laboratory under agreement number FA8750-19-2-0200. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government. 
This work was also supported in part by ONR Grant N000141812861, NSF IIS1763562, and Apple. We would also like to acknowledge NVIDIA’s GPU support. We would like to thank Antonis Anastasopoulos, Ritam Dutt, Sopan Khosla, and Xinyi Wang for the helpful discussions. [^1]: Authors contributed equally to this work.
--- abstract: 'We define két abelian schemes, két 1-motives, and két log 1-motives, and formulate duality theory for these objects. Then we show that tamely ramified strict 1-motives over a complete discrete valuation field can be extended to két log 1-motives over the corresponding discrete valuation ring. As an application, we present a proof of a result of Kato stated in [@kat4 §4.3] without proof.' address: ' Heer Zhao, Fakultät für Mathematik, Universität Duisburg-Essen, Essen 45117, Germany, [email protected]' bibliography: - 'bib.bib' title: 'Extending tamely ramified strict 1-motives into két log 1-motives' --- Notation and conventions {#notation-and-conventions .unnumbered} ======================== Let $S$ be an fs log scheme, we denote by $({\mathrm}{fs}/S)$ the category of fs log schemes over $S$, and denote by $({\mathrm}{fs}/S)_{{\mathrm}{\acute{e}t}}$ (resp. $({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}$, resp. $({\mathrm}{fs}/S)_{{\mathrm}{fl}}$, resp. $({\mathrm}{fs}/S)_{{\mathrm}{kfl}}$) the classical étale site (resp. Kummer étale site, resp. classical flat site, resp. Kummer flat site) on $({\mathrm}{fs}/S)$. In order to shorten formulas, we will mostly abbreviate ${({\mathrm}{fs}/S)_{{\mathrm}{\acute{e}t}}}$ (resp. ${({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}}$, resp. ${({\mathrm}{fs}/S)_{{\mathrm}{fl}}}$, resp. ${({\mathrm}{fs}/S)_{{\mathrm}{kfl}}}$) as ${S_{{\mathrm}{\acute{e}t}}}$ (resp. ${S_{{\mathrm}{k\acute{e}t}}}$, resp. ${S_{{\mathrm}{fl}}}$, resp. ${S_{{\mathrm}{kfl}}}$). We refer to [@ill1 2.5] for the classical étale site and the Kummer étale site, and [@kat2 Def. 2.3] and [@niz1 §2.1] for the Kummer flat site. The definition of the classical flat site is an obvious analogue of that of the classical étale site. 
Then we have two natural “forgetful” maps of sites: $$\label{eq0.1} \varepsilon_{{\mathrm}{\acute{e}t}}:({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}\rightarrow ({\mathrm}{fs}/S)_{{\mathrm}{\acute{e}t}}$$ and $$\label{eq0.2} \varepsilon_{{\mathrm}{fl}}:({\mathrm}{fs}/S)_{{\mathrm}{kfl}}\rightarrow ({\mathrm}{fs}/S)_{{\mathrm}{fl}} .$$ Kato’s multiplicative group (or the log multiplicative group) ${\mathbb{G}_{{\mathrm}{m,log}}}$ is the sheaf on ${S_{{\mathrm}{\acute{e}t}}}$ defined by ${\mathbb{G}_{{\mathrm}{m,log}}}(U)=\Gamma(U,M^{{\mathrm}{gp}}_U)$ for any $U\in{({\mathrm}{fs}/S)}$, where $M_U$ denotes the log structure of $U$ and $M^{{\mathrm}{gp}}_U$ denotes the group envelope of $M_U$. The Kummer étale sheaf ${\mathbb{G}_{{\mathrm}{m,log}}}$ is also a sheaf on ${S_{{\mathrm}{kfl}}}$, see [@niz1 Cor. 2.22] for a proof. By convention, for any sheaf of abelian groups $F$ on ${S_{{\mathrm}{kfl}}}$ and a subgroup sheaf $G$ of $F$ on ${S_{{\mathrm}{kfl}}}$, we denote by $(F/G)_{{S_{{\mathrm}{\acute{e}t}}}}$ (resp. $(F/G)_{{S_{{\mathrm}{fl}}}}$, resp. $(F/G)_{{S_{{\mathrm}{k\acute{e}t}}}}$) the quotient sheaf on ${S_{{\mathrm}{\acute{e}t}}}$ (resp. ${S_{{\mathrm}{fl}}}$, resp. ${S_{{\mathrm}{k\acute{e}t}}}$), while $F/G$ denotes the quotient sheaf on ${S_{{\mathrm}{kfl}}}$. We abbreviate the quotient sheaf ${\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}}$ on ${S_{{\mathrm}{kfl}}}$ as ${\overline{\mathbb{G}}_{{\mathrm}{m,log}}}$. Introduction ============ Let $R$ be a complete discrete valuation ring with fraction field $K$, residue field $k$, and a chosen uniformizer $\pi$, $S={\mathop{{\mathrm}{Spec}}}R$, and we endow $S$ with the log structure associated to ${{\mathbb N}}\rightarrow R,1\mapsto \pi$. Let $s$ (resp. $\eta$) be the closed (resp. generic) point of $S$, we denote by $i:s\hookrightarrow S$ (resp. $j:\eta\hookrightarrow S$) the closed (resp. open) immersion of $s$ (resp. $\eta$) into $S$. We endow $s$ with the induced log structure from $S$. 
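To make the definition of ${\mathbb{G}_{{\mathrm}{m,log}}}$ concrete, here is a small worked computation of its sections over the standard trait just introduced. The example is ours, not the paper's, though the computation is standard for the log structure ${{\mathbb N}}\rightarrow R,1\mapsto\pi$.

```latex
% Sections of G_{m,log} over S = Spec R (standard trait; our example).
% The log structure M_S is associated to N -> R, 1 |-> \pi, so
\Gamma(S,M_S)=\{\,u\pi^{n} : u\in R^{\times},\ n\in\mathbb{N}\,\},
\qquad
\Gamma(S,M_S^{\mathrm{gp}})=R^{\times}\cdot\pi^{\mathbb{Z}}\subset K^{\times}.
% Hence
\mathbb{G}_{\mathrm{m,log}}(S)=R^{\times}\cdot\pi^{\mathbb{Z}},
\qquad
\mathbb{G}_{\mathrm{m,log}}(S)/\mathbb{G}_{\mathrm{m}}(S)
  =\pi^{\mathbb{Z}}\cong\mathbb{Z},
% the quotient being generated by the class of the uniformizer \pi.
```

On global sections over the trait, the quotient by units thus records precisely the $\pi$-adic valuation.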
Let $M_K=[Y_K\xrightarrow{u_K}G_K]$ be a 1-motive over $K$. By [@ray2 Thm. 4.2.2], one can associate to $M_K$ a 1-motive $M'_K=[Y'_K\xrightarrow{u'_K}G'_K]$ over $K$ together with a canonical map $M'_{K,{\mathrm}{rig}}\rightarrow M_{K,{\mathrm}{rig}}$ such that $G'_K$ has potentially good reduction, i.e. $M'_K$ is strict, and the map is a quasi-isomorphism in the derived category $D^{{\mathrm}{b}}_{{\mathrm}{rig}}(K_{{\mathrm}{fppf}})$. Here $M_{K,{\mathrm}{rig}}$ (resp. $M'_{K,{\mathrm}{rig}}$) denotes the rigid analytic 1-motive associated to $M_K$ (resp. $M'_{K}$), and $D^{{\mathrm}{b}}_{{\mathrm}{rig}}(K_{{\mathrm}{fppf}})$ denotes the derived category of bounded complexes of sheaves of abelian groups for the flat topology on the small rigid site of ${\mathop{{\mathrm}{Spec}}}K$. The canonical map $M'_{K,{\mathrm}{rig}}\rightarrow M_{K,{\mathrm}{rig}}$ induces an isomorphism $T_n(M'_K)\rightarrow T_n(M_K)$ for any positive integer $n$. Hence if one is only interested in problems related to $T_n(M_K)$, it is harmless to assume that $M_K$ is strict. For a 1-motive $M_K=[Y_K\xrightarrow{u_K}G_K]$ over $K$ coming from a log 1-motive in the sense of [@k-t1 4.6.1], [@b-c-c1 Thm. 19] extends $T_n(M_K)$ to a log finite group object in $({\mathrm}{fin}/S)_r$ (see Definition \[defn5.1\]) by using Kato’s classification theorem for objects in $({\mathrm}{fin}/S)_{{\mathrm}{r}}$ for an fs log scheme $S$ with its underlying scheme the spectrum of a noetherian strictly henselian local ring. Note that for such a 1-motive, $Y_K$ and $G_K$ automatically have good reduction by the definition of log 1-motives from [@k-t1 4.6.1]. For us, a log 1-motive is as defined in [@k-k-n2 Def. 2.2], which is the more suitable one over a general base. We are going to show that a 1-motive $M_K=[Y_K\xrightarrow{u_K}G_K]$ with both $Y_K$ and $G_K$ having good reduction can be extended to a unique log 1-motive $M=[Y\xrightarrow{u}G_{{\mathrm}{log}}]$ over $S$. Hence a log 1-motive in the sense of [@k-t1 Def. 
4.6.1] is a log 1-motive in our sense. Taking $T_n(M)$, we get an object of $({\mathrm}{fin}/S)_{{\mathrm}{r}}$ with generic fiber $T_n(M_K)$. This gives an alternative proof of [@b-c-c1 Thm. 19]. Moreover, if we replace log 1-motive by két log 1-motive (see Definition \[defn2.6\]), we can generalize the result to tamely ramified strict 1-motives over $K$; see the theorem below. Here a strict 1-motive $M_K=[Y_K\xrightarrow{u_K}G_K]$ is said to be tamely ramified if both $Y_K$ and $G_K$ have good reduction after a tamely ramified extension of $K$. \[thm1.1\] Let $M_K=[Y_K\xrightarrow{u_K}G_K]$ be a tamely ramified strict 1-motive over $K$. Then $M_K$ extends to a két log 1-motive $M^{{\mathrm}{log}}$ over $S$. The main players of this article are of course két log 1-motives, which are defined in Section \[sec2\]. In fact, we define két tori, két lattices, két abelian schemes, két 1-motives, and két log 1-motives, and formulate duality theory for these objects. The highlight is the following special case of Theorem \[thm1.1\], which gives rise to a concrete non-trivial example of a két abelian scheme. \[thm1.2\] Let $K$ be a complete discrete valuation field with ring of integers $R$, and $B_K$ a tamely ramified abelian variety over $K$. We endow $S:={\mathop{{\mathrm}{Spec}}}R$ with the canonical log structure, then $B_K$ extends to a két abelian scheme $B$ over $S$. Section \[sec3\] is devoted to the proof of Theorem \[thm1.1\]. In Section \[sec4\], for a tamely ramified strict 1-motive $M_K$ as in Theorem \[thm1.1\], we associate a logarithmic monodromy pairing to $M$ and compare it with Raynaud’s geometric monodromy for $M_K$. In Section \[sec5\], as an application of Theorem \[thm1.1\], we present a proof of the following theorem (see also Theorem \[thm5.2\]) which is stated in the preprint [@kat4 §4.3] without proof. 
\[thm1.3\] Let $K$ be a complete discrete valuation field with ring of integers $R$, $p$ a prime number, and $A_K$ a tamely ramified abelian variety over $K$. We endow $S:={\mathop{{\mathrm}{Spec}}}R$ with the canonical log structure. Then the $p$-divisible group $A_K[p^{\infty}]$ of $A_K$ extends to a két log $p$-divisible group, i.e. an object of $(\text{$p$-div}/S)^{{\mathrm}{log}}_{{\mathrm}{\acute{e}}}$ (see Definition \[defn5.2\]). It extends to an object of $(\text{$p$-div}/S)^{{\mathrm}{log}}_{{\mathrm}{d}}$ (see Definition \[defn5.2\]) if either of the following two conditions is satisfied. (1) $A_K$ has semi-stable reduction. (2) $p$ is invertible in $R$. Két log 1-motives {#sec2} ================= Két log 1-motives {#két-log-1-motives} ----------------- The following definitions about log 1-motives are taken from [@k-k-n2 §2]. Let $S$ be an fs log scheme, $T$ a torus over the underlying scheme of $S$ with its character group $X$. The **logarithmic augmentation of $T$**, denoted as $T_{{\mathrm}{log}}$, is the sheaf of abelian groups $${\mathcal}{H}om_{S_{{\mathrm}{\acute{e}t}}}(X,{\mathbb{G}_{{\mathrm}{m,log}}})$$ on $({\mathrm}{fs}/S)_{{\mathrm}{\acute{e}t}}$. Let $G$ be an extension of an abelian scheme $B$ by $T$ over the underlying scheme of $S$. The **logarithmic augmentation of $G$**, denoted as $G_{{\mathrm}{log}}$, is the push-out of $G$ along the inclusion $T\hookrightarrow T_{{\mathrm}{log}}$. A **log 1-motive** over an fs log scheme $S$ is a two-term complex $M=[Y\xrightarrow{u}G_{{\mathrm}{log}}]$ in the category of sheaves of abelian groups on $({\mathrm}{fs}/S)_{{\mathrm}{\acute{e}t}}$, with the degree $-1$ term $Y$ an étale locally constant sheaf of finitely generated free abelian groups and the degree 0 term $G_{{\mathrm}{log}}$ as above. We also call $Y$ the **lattice part** of $M$. By [@zha3 Prop. 2.1], one can replace $({\mathrm}{fs}/S)_{{\mathrm}{\acute{e}t}}$ by $({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}$ in the above definitions. 
In particular, $T_{{\mathrm}{log}}$ and $G_{{\mathrm}{log}}$ are sheaves on $({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}$. Now we define két 1-motives and két log 1-motives, and we work with $({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}$. A **két (Kummer étale) lattice** (resp. **két torus**, resp. **két abelian scheme**) over an fs log scheme $S$ is a sheaf $F$ of abelian groups on $({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}$ such that the pull-back of $F$ to $S'$ is a lattice (resp. torus, resp. abelian scheme) over $S'$ for some Kummer étale cover $S'$ of $S$. Here by a lattice, we mean a group scheme which is étale locally representable by a finite rank free abelian group. Let $S$ be an fs log scheme. A **két 1-motive** over $S$ is a two-term complex $M=[Y\xrightarrow{u}G]$ in the category of sheaves of abelian groups on $({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}$, such that the degree $-1$ term $Y$ is a két lattice and the degree 0 term $G$ is an extension of a két abelian scheme $B$ by a két torus $T$. Let $S$ be an fs log scheme. Then the associations $$T\mapsto{\mathcal}{H}om_{S_{{\mathrm}{k\acute{e}t}}}(T,{\mathbb{G}_{{\mathrm}{m}}}),\quad X\mapsto {\mathcal}{H}om_{S_{{\mathrm}{k\acute{e}t}}}(X,{\mathbb{G}_{{\mathrm}{m}}})$$ define an equivalence between the category of két tori over $S$ and the category of két locally constant sheaves of finitely generated free abelian groups over $S$. We still call the két lattice ${\mathcal}{H}om_{S_{{\mathrm}{k\acute{e}t}}}(T,{\mathbb{G}_{{\mathrm}{m}}})$ the **character group** of the két torus $T$. This follows from the classical equivalence between the category of tori and the category of étale locally constant sheaves of finitely generated free abelian groups. \[defn2.5\] Given a két torus $T$ over $S$, let $X:={\mathcal}{H}om_{S_{{\mathrm}{k\acute{e}t}}}(T,{\mathbb{G}_{{\mathrm}{m}}})$ be the character group of $T$. 
The **logarithmic augmentation of $T$**, denoted as $T_{{\mathrm}{log}}$, is the sheaf of abelian groups $${\mathcal}{H}om_{S_{{\mathrm}{k\acute{e}t}}}(X,{\mathbb{G}_{{\mathrm}{m,log}}})$$ on $({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}$. Let $G$ be an extension of a két abelian scheme $B$ by $T$ over $S$. The **logarithmic augmentation of $G$**, denoted as $G_{{\mathrm}{log}}$, is the push-out of $G$ along the inclusion $T\hookrightarrow T_{{\mathrm}{log}}$. Note that the quotient $(G_{{\mathrm}{log}}/G)_{S_{{\mathrm}{k\acute{e}t}}}$ is canonically identified with the quotient $(T_{{\mathrm}{log}}/T)_{S_{{\mathrm}{k\acute{e}t}}}$, which can be further identified with ${\mathcal}{H}om_{S_{{\mathrm}{k\acute{e}t}}}(X,({\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}})_{S_{{\mathrm}{k\acute{e}t}}})$. \[defn2.6\] A **két log 1-motive** over an fs log scheme $S$ is a 2-term complex $M=[Y\xrightarrow{u} G_{{\mathrm}{log}}]$ of sheaves of abelian groups on $({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}$ such that $Y$ is a két lattice over $S$ and $G$ is an extension of a két abelian scheme $B$ by a két torus over $S$. The composition $$Y\xrightarrow{u} G_{{\mathrm}{log}}\rightarrow (G_{{\mathrm}{log}}/G)_{S_{{\mathrm}{k\acute{e}t}}}=(T_{{\mathrm}{log}}/T)_{S_{{\mathrm}{k\acute{e}t}}}={\mathcal}{H}om_{S_{{\mathrm}{k\acute{e}t}}}(X,({\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}})_{S_{{\mathrm}{k\acute{e}t}}})$$ corresponds to a pairing $$Y\times X\rightarrow ({\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}})_{S_{{\mathrm}{k\acute{e}t}}}.$$ We call this pairing the **monodromy pairing** of $M$. \[prop2.1\] Let $G$ be an extension of a két abelian scheme $B$ by a két torus $T$ over an fs log scheme $S$. Then $G$ is Kummer étale locally an extension of an abelian scheme by a torus. Without loss of generality, we may assume that $B$ (resp. $T$) is an abelian scheme (resp. torus) over $S$. 
Let $\varepsilon:({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}\rightarrow ({\mathrm}{fs}/S)_{{\mathrm}{\acute{e}t}}$ be the forgetful map between these two sites. The spectral sequence $$E_2^{i,j}={\mathrm}{Ext}^{i}_{S_{{\mathrm}{\acute{e}t}}}(B,R^j\varepsilon_* T)\Rightarrow {\mathrm}{Ext}^{i+j}_{S_{{\mathrm}{k\acute{e}t}}}(B,T)$$ gives rise to an exact sequence $$0\rightarrow {\mathrm}{Ext}^{1}_{S_{{\mathrm}{\acute{e}t}}}(B,T)\rightarrow {\mathrm}{Ext}^{1}_{S_{{\mathrm}{k\acute{e}t}}}(B,T)\rightarrow {\mathrm}{Hom}_{S_{{\mathrm}{\acute{e}t}}}(B,R^1\varepsilon_* T).$$ By this exact sequence, it suffices to show that ${\mathrm}{Hom}_{S_{{\mathrm}{\acute{e}t}}}(B,R^1\varepsilon_* T)=0$. We may assume that $T=({\mathbb{G}_{{\mathrm}{m}}})^r$. Then we get $$\begin{aligned} {\mathrm}{Hom}_{S_{{\mathrm}{\acute{e}t}}}(B,R^1\varepsilon_* T)=&{\mathrm}{Hom}_{S_{{\mathrm}{\acute{e}t}}}(B,R^1\varepsilon_* {\mathbb{G}_{{\mathrm}{m}}})^r \\ =&{\mathrm}{Hom}_{S_{{\mathrm}{\acute{e}t}}}(B,({\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}})_{S_{{\mathrm}{\acute{e}t}}}\otimes_{{{\mathbb Z}}}({{\mathbb Q}}/{{\mathbb Z}})')^r \\ =&0\end{aligned}$$ by an argument similar to the one in the proof of [@k-k-n2 Lem. 6.1.1]. This finishes the proof. \[rmk2.1\] For an abelian scheme $B$ and a torus $T$ over $S$, the same argument as in the proof of Proposition \[prop2.1\] shows that ${\mathrm}{Ext}^{1}_{S_{{\mathrm}{fl}}}(B,T)\xrightarrow{\cong} {\mathrm}{Ext}^{1}_{S_{{\mathrm}{kfl}}}(B,T)$. Furthermore, we have $${\mathrm}{Ext}^{1}_{S_{{\mathrm}{k\acute{e}t}}}(B,T)\cong {\mathrm}{Ext}^{1}_{S_{{\mathrm}{\acute{e}t}}}(B,T)\cong{\mathrm}{Ext}^{1}_{S_{{\mathrm}{fl}}}(B,T)\cong {\mathrm}{Ext}^{1}_{S_{{\mathrm}{kfl}}}(B,T).$$ Két log 1-motives in the Kummer flat topology --------------------------------------------- In this subsection, we assume that the underlying scheme of the base $S$ is locally noetherian. 
We show that a két log 1-motive can be regarded as a 2-term complex in the category of sheaves for the Kummer flat topology. \[lem2.2\] Let $S$ be an fs log scheme, and let $F$ be a sheaf of abelian groups on $({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}$ such that $F\times_SS'$ is representable by an fs log scheme for some Kummer étale cover $S'$ of $S$. Then $F$ is also a sheaf for the Kummer flat topology. In particular, két lattices, két tori, and két abelian schemes over $S$ are sheaves for the Kummer flat topology. It suffices to prove that, for any $U\in ({\mathrm}{fs}/S)$ and any Kummer flat cover $\{U_i\}_{i\in I}$ of $U$, the canonical sequence $$0\rightarrow F(U)\rightarrow\prod_{i\in I}F(U_i)\rightarrow \prod_{i,j\in I}F(U_i\times_UU_j)$$ is exact. Let $S'':=S'\times_SS'$, consider the following commutative diagram $$\xymatrix{ &0\ar[d] &0\ar[d] &0\ar[d] \\ 0\ar[r] &F(U)\ar[r]\ar[d] &\prod_{i\in I}F(U_i)\ar[r]\ar[d] &\prod_{i,j\in I}F(U_i\times_UU_j)\ar[d] \\ 0\ar[r] &F(U\times_SS')\ar[r]\ar[d] &\prod_{i\in I}F(U_i\times_SS')\ar[r]\ar[d] &\prod_{i,j\in I}F(U_i\times_UU_j\times_SS')\ar[d] \\ 0\ar[r] &F(U\times_SS'')\ar[r] &\prod_{i\in I}F(U_i\times_SS'')\ar[r] &\prod_{i,j\in I}F(U_i\times_UU_j\times_SS'') \\ }$$ with exact columns. Since $F\times_SS'$ is representable by an fs log scheme, so is $F\times_SS''$. By [@k-k-n4 Thm. 5.2], both $F\times_SS'$ and $F\times_SS''$ are sheaves for the Kummer flat topology. It follows that the second row and the third row are both exact. Therefore the first row is also exact. This finishes the proof. \[cor2.1\] Let $S$ be an fs log scheme, and let $G$ be an extension of a két abelian scheme $B$ by a két torus $T$ over $S$. Then the logarithmic augmentation $G_{{\mathrm}{log}}$ of $G$ defined in Definition \[defn2.5\] is a sheaf for the Kummer flat topology. Since ${\mathbb{G}_{{\mathrm}{m,log}}}$ is a sheaf for the Kummer flat topology by [@kat2 Thm. 
3.2] and $X$ is a sheaf for the Kummer flat topology by Lemma \[lem2.2\], so is $T_{{\mathrm}{log}}={\mathcal}{H}om_{S_{{\mathrm}{k\acute{e}t}}}(X,{\mathbb{G}_{{\mathrm}{m,log}}})$. Let $\delta:({\mathrm}{fs}/S)_{{\mathrm}{kfl}}\rightarrow ({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}$ be the forgetful map between these two sites. The adjunction $(\delta^*,\delta_*)$ gives rise to the following commutative diagram $$\xymatrix{ 0\ar[r] &T_{{\mathrm}{log}}\ar[r]\ar[d]^{=} &G_{{\mathrm}{log}}\ar[r]\ar[d] &B\ar[r]\ar[d]^{=} &0 \\ 0\ar[r] &T_{{\mathrm}{log}}\ar[r] &\delta_*\delta^*G_{{\mathrm}{log}}\ar[r] &B \ar[r] &R^1\delta_*T_{{\mathrm}{log}} }$$ with exact rows. The left vertical identification comes from $T_{{\mathrm}{log}}$ being a sheaf for the Kummer flat topology. The right vertical identification follows from Lemma \[lem2.2\]. Since $T_{{\mathrm}{log}}$ is Kummer étale locally of the form ${\mathbb{G}_{{\mathrm}{m,log}}}^{r}$, we get $R^1\delta_*T_{{\mathrm}{log}}=0$ by Kato’s logarithmic Hilbert 90, see [@kat2 §5]. Therefore the canonical map $G_{{\mathrm}{log}}\rightarrow \delta_*\delta^*G_{{\mathrm}{log}}$ is an isomorphism, i.e. $G_{{\mathrm}{log}}$ is also a sheaf for the Kummer flat topology. Duality ------- In this subsection, we assume that the underlying scheme of the base $S$ is locally noetherian. We formulate the duality theory for két abelian schemes, két 1-motives, and két log 1-motives respectively. Let $B$ be an abelian scheme over a base scheme $S$, the dual abelian scheme $B^{\vee}$ can be described as ${\mathcal}{E}xt^1_{S_{{\mathrm}{fl}}}(B,{\mathbb{G}_{{\mathrm}{m}}})$ by the Weil-Barsotti formula. We are going to use this description to define the dual of a given két abelian scheme. \[thm2.1\] Let $S$ be an fs log scheme. For any két abelian scheme $B$ over $S$, we denote $B^{\vee}:={\mathcal}{E}xt^1_{S_{{\mathrm}{kfl}}}(B,{\mathbb{G}_{{\mathrm}{m}}})$. Then we have the following. (1) The sheaf $B^{\vee}$ is a két abelian scheme over $S$. 
(2) There exists a functorial isomorphism $\iota:B\xrightarrow{\cong} (B^{\vee})^{\vee}$. For part (1), we may assume that $B$ is actually an abelian scheme. Let $\varepsilon_{{\mathrm}{fl}}:({\mathrm}{fs}/S)_{{\mathrm}{kfl}}\rightarrow ({\mathrm}{fs}/S)_{{\mathrm}{fl}}$ be the forgetful map between these two sites. Let $F_1$ (resp. $F_2$) be a sheaf on $({\mathrm}{fs}/S)_{{\mathrm}{fl}}$ (resp. $({\mathrm}{fs}/S)_{{\mathrm}{kfl}}$), then we have $$\varepsilon_{{\mathrm}{fl}*}{\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(\varepsilon_{{\mathrm}{fl}}^*F_1,F_2)={\mathcal}{H}om_{S_{{\mathrm}{fl}}}(F_1,\varepsilon_{{\mathrm}{fl}*} F_2).$$ Let $\theta$ be the functor sending $F_2$ to $\varepsilon_{{\mathrm}{fl}*}{\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(\varepsilon_{{\mathrm}{fl}}^*F_1,F_2)={\mathcal}{H}om_{S_{{\mathrm}{fl}}}(F_1,\varepsilon_{{\mathrm}{fl}*} F_2)$, then we get two Grothendieck spectral sequences $$E_2^{p,q}=R^p\varepsilon_{{\mathrm}{fl}*}\circ R^q{\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(\varepsilon_{{\mathrm}{fl}}^*F_1,-)\Rightarrow R^{p+q}\theta$$ and $$E_2^{p,q}=R^p{\mathcal}{H}om_{S_{{\mathrm}{fl}}}(F_1,-)\circ R^q\varepsilon_{{\mathrm}{fl}*} \Rightarrow R^{p+q}\theta .$$ These two spectral sequences give two exact sequences $$\begin{split} 0&\rightarrow R^1\varepsilon_{{\mathrm}{fl}*}{\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(\varepsilon_{{\mathrm}{fl}}^*F_1,F_2)\rightarrow R^1\theta(F_2)\rightarrow \varepsilon_{{\mathrm}{fl}*}{\mathcal}{E}xt^1_{S_{{\mathrm}{kfl}}}(\varepsilon_{{\mathrm}{fl}}^*F_1,F_2) \\ &\rightarrow R^2\varepsilon_{{\mathrm}{fl}*}{\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(\varepsilon_{{\mathrm}{fl}}^*F_1,F_2) \end{split}$$ and $$0\rightarrow {\mathcal}{E}xt^1_{S_{{\mathrm}{fl}}}(F_1,\varepsilon_{{\mathrm}{fl}*} F_2)\rightarrow R^1\theta (F_2)\rightarrow {\mathcal}{H}om_{S_{{\mathrm}{fl}}}(F_1,R^1\varepsilon_{{\mathrm}{fl}*} F_2).$$ Let $F_1=B$ and $F_2={\mathbb{G}_{{\mathrm}{m}}}$. 
Since ${\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(B,{\mathbb{G}_{{\mathrm}{m}}})=0$ by [@sga7-1 Exp. VIII, 3.2.1], we get $$R^1\theta ({\mathbb{G}_{{\mathrm}{m}}})\cong \varepsilon_{{\mathrm}{fl}*}{\mathcal}{E}xt_{S_{{\mathrm}{kfl}}}^1(B,{\mathbb{G}_{{\mathrm}{m}}}),$$ therefore we get an exact sequence $$0\rightarrow {\mathcal}{E}xt^1_{S_{{\mathrm}{fl}}}(B,{\mathbb{G}_{{\mathrm}{m}}})\rightarrow \varepsilon_{{\mathrm}{fl}*}{\mathcal}{E}xt^1_{S_{{\mathrm}{kfl}}}(B,{\mathbb{G}_{{\mathrm}{m}}})\rightarrow {\mathcal}{H}om_{S_{{\mathrm}{fl}}}(B,R^1\varepsilon_{{\mathrm}{fl}*} {\mathbb{G}_{{\mathrm}{m}}}).$$ We also have $${\mathcal}{H}om_{S_{{\mathrm}{fl}}}(B,R^1\varepsilon_{{\mathrm}{fl}*} {\mathbb{G}_{{\mathrm}{m}}})={\mathcal}{H}om_{S_{{\mathrm}{fl}}}(B,({\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}})_{S_{{\mathrm}{fl}}}\otimes_{{{\mathbb Z}}}({{\mathbb Q}}/{{\mathbb Z}}))=0$$ by an argument similar to the one in the proof of [@k-k-n2 Lem. 6.1.1]; it follows that $$\label{eq2.1} {\mathcal}{E}xt^1_{S_{{\mathrm}{fl}}}(B,{\mathbb{G}_{{\mathrm}{m}}})\xrightarrow{\cong} \varepsilon_{{\mathrm}{fl}*}{\mathcal}{E}xt^1_{S_{{\mathrm}{kfl}}}(B,{\mathbb{G}_{{\mathrm}{m}}}).$$ By the Weil-Barsotti formula, the sheaf ${\mathcal}{E}xt^1_{S_{{\mathrm}{fl}}}(B,{\mathbb{G}_{{\mathrm}{m}}})$ is representable by the dual abelian scheme of $B$. This finishes the proof of part (1). Now we prove part (2). By [@sga7-1 Exp. VIII, 3.2.1], we have $${\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(B,{\mathbb{G}_{{\mathrm}{m}}})={\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(B^{\vee},{\mathbb{G}_{{\mathrm}{m}}})=0.$$ By [@sga7-1 Exp. VIII, 1.1.1, 1.1.4], we get $${\mathrm}{Hom}_{S_{{\mathrm}{kfl}}}(B,(B^{\vee})^{\vee})\xleftarrow{\cong} {\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(B,B^{\vee};{\mathbb{G}_{{\mathrm}{m}}})\xrightarrow{\cong} {\mathrm}{Hom}_{S_{{\mathrm}{kfl}}}(B^{\vee},B^{\vee}).$$ Let $\iota:B\rightarrow(B^{\vee})^{\vee}$ be the homomorphism corresponding to $1_{B^{\vee}}$ under the above identification. 
Note that $\iota$ is the isomorphism giving the duality in the case that $B$ is actually an abelian scheme. Since $B$ is Kummer étale locally an abelian scheme, $\iota$ is Kummer étale locally an isomorphism. It follows that $\iota$ is also an isomorphism over $S$. Let $S$ be an fs log scheme, and $B$ a két abelian scheme over $S$. In view of Theorem \[thm2.1\], we call $B^{\vee}:={\mathcal}{E}xt^1_{S_{{\mathrm}{kfl}}}(B,{\mathbb{G}_{{\mathrm}{m}}})$ the **dual két abelian scheme** of $B$. The biextension $P\in {\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(B,B^{\vee};{\mathbb{G}_{{\mathrm}{m}}})$ corresponding to $\iota$ is called the **Weil biextension of $(B,B^{\vee})$ by ${\mathbb{G}_{{\mathrm}{m}}}$**. In view of (\[eq2.1\]), one can also define the dual of $B$ in the flat topology. Now let $S$ be an fs log scheme, and let $M=[Y\xrightarrow{u}G]$ be a két 1-motive over $S$, where $G$ is an extension $0\rightarrow T\rightarrow G\rightarrow B\rightarrow 0$ of a két abelian scheme $B$ by a két torus $T$ on $({\mathrm}{fs}/S)_{{\mathrm}{kfl}}$. For any element $\chi\in X:={\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(T,{\mathbb{G}_{{\mathrm}{m}}})$, the push-out of the short exact sequence $0\rightarrow T\rightarrow G\rightarrow B\rightarrow 0$ along $\chi$ gives rise to an element of $B^{\vee}={\mathcal}{E}xt^1_{S_{{\mathrm}{kfl}}}(B,{\mathbb{G}_{{\mathrm}{m}}})$, whence a homomorphism $v^{\vee}:X\rightarrow B^{\vee}$. Let $v$ be the composition $Y\xrightarrow{u}G\rightarrow B$, then $u$ corresponds to a unique section $s:Y\rightarrow v^*G$ of the extension $v^*G\in{\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(Y,T)$. 
Consider the following commutative diagram $$\label{eq2.2} \xymatrix{ &{\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(B,B^{\vee};{\mathbb{G}_{{\mathrm}{m}}})\ar[d]^{(1_B,v^{\vee})^*} \ar[r]^-{\cong} &{\mathrm}{Hom}_{S_{{\mathrm}{kfl}}}(B,B)\\ {\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(B,T)\ar[d]_{v^*}\ar[r]^-{\cong} &{\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(B,X;{\mathbb{G}_{{\mathrm}{m}}})\ar[d]^{(v,1_X)^*} \\ {\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(Y,T)\ar[r]^-{\cong} &{\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(Y,X;{\mathbb{G}_{{\mathrm}{m}}}) },$$ where the horizontal isomorphisms come from $${\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(B,{\mathbb{G}_{{\mathrm}{m}}})={\mathcal}{E}xt^1_{S_{{\mathrm}{kfl}}}(X,{\mathbb{G}_{{\mathrm}{m}}})=0$$ and ${\mathcal}{E}xt^1_{S_{{\mathrm}{kfl}}}(B^{\vee},{\mathbb{G}_{{\mathrm}{m}}})=B$ with the help of [@sga7-1 Exp. VIII, 1.1.4]. Since $G$ gives rise to $v^{\vee}$, the biextension corresponding to $G$ must be $(1_B,v^{\vee})^*P$ and we have the following mapping diagram $$\xymatrix{ &P\ar@{<->}[r]\ar@{|->}[d] &1_B \\ G\ar@{<->}[r]\ar@{|->}[d] &(1_B,v^{\vee})^*P\ar@{|->}[d] \\ v^*G\ar@{<->}[r] &(v,v^{\vee})^*P }$$ with respect to the commutative diagram (\[eq2.2\]). The section $s$ of $v^*G$ corresponds to a section of the biextension $(v,v^{\vee})^*P$ of $(Y,X)$ by ${\mathbb{G}_{{\mathrm}{m}}}$, which we still denote by $s$ by abuse of notation. Therefore we get an equivalent description of the két 1-motive $M=[Y\xrightarrow{u}G]$ of the form $$\label{eq2.3} \xymatrix{ (v\times v^{\vee})^*P\ar[r]\ar[d] &P\ar[d] \\ Y\times X\ar[r]^{v\times v^{\vee}}\ar@/^/[u]^s &B\times B^{\vee} },$$ where $(v\times v^{\vee})^*P$ denotes the pull-back of the Weil biextension $P$. The description (\[eq2.3\]) is symmetric. 
If we switch the roles of $Y$ and $X$, $v$ and $v^{\vee}$, $B$ and $B^{\vee}$, we get another két 1-motive $M^{\vee}=[X\xrightarrow{u^{\vee}}G^{\vee}]$, where $$\label{eq2.4} G^{\vee}\in{\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(B^{\vee},T^{\vee})$$ corresponds to $(v,1_{B^{\vee}})^*P\in{\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(Y,B^{\vee};{\mathbb{G}_{{\mathrm}{m}}})$ with $T^{\vee}:={\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(Y,{\mathbb{G}_{{\mathrm}{m}}})$. The association of $M^{\vee}$ to $M$ is clearly a duality. We call the két 1-motive $M^{\vee}=[X\xrightarrow{u^{\vee}}G^{\vee}]$ the **dual két 1-motive** of the két 1-motive $M=[Y\xrightarrow{u}G]$. Now we formulate the duality theory for két log 1-motives, which is analogous to the case of két 1-motives. Let $M=[Y\xrightarrow{u}G_{{\mathrm}{log}}]$ be a két log 1-motive over $S$, where $G$ is an extension $0\rightarrow T\rightarrow G\rightarrow B\rightarrow 0$ of a két abelian scheme $B$ by a két torus $T$ on $({\mathrm}{fs}/S)_{{\mathrm}{kfl}}$. For any element $\chi\in X:={\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(T,{\mathbb{G}_{{\mathrm}{m}}})$, the push-out of the short exact sequence $0\rightarrow T\rightarrow G\rightarrow B\rightarrow 0$ along $\chi$ gives rise to an element of $B^{\vee}={\mathcal}{E}xt^1_{S_{{\mathrm}{kfl}}}(B,{\mathbb{G}_{{\mathrm}{m}}})$, whence a homomorphism $v^{\vee}:X\rightarrow B^{\vee}$. Let $v$ be the composition $Y\xrightarrow{u}G_{{\mathrm}{log}}\rightarrow B$, then $u$ corresponds to a unique section $s:Y\rightarrow v^*G_{{\mathrm}{log}}$ of the extension $v^*G_{{\mathrm}{log}}\in{\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(Y,T_{{\mathrm}{log}})$. 
Consider the following commutative diagram $$\label{eq2.5} \xymatrix{ &{\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(B,B^{\vee};{\mathbb{G}_{{\mathrm}{m,log}}})\ar[d]^{(1_B,v^{\vee})^*} \\ {\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(B,T_{{\mathrm}{log}})\ar[d]_{v^*}\ar[r]^-{\cong} &{\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(B,X;{\mathbb{G}_{{\mathrm}{m,log}}})\ar[d]^{(v,1_X)^*} \\ {\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(Y,T_{{\mathrm}{log}})\ar[r]^-{\cong} &{\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(Y,X;{\mathbb{G}_{{\mathrm}{m,log}}}) },$$ where the horizontal isomorphisms come from $${\mathcal}{E}xt^1_{S_{{\mathrm}{kfl}}}(X,{\mathbb{G}_{{\mathrm}{m}}})=0$$ with the help of [@sga7-1 Exp. VIII, 1.1.4]. There is an obvious map from the diagram (\[eq2.2\]) to the diagram (\[eq2.5\]). Let $P^{{\mathrm}{log}}$ be the push-out of $P$ along ${\mathbb{G}_{{\mathrm}{m}}}\hookrightarrow{\mathbb{G}_{{\mathrm}{m,log}}}$. Since $G\in {\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(B,T)$ corresponds to the biextension $$(1_B,v^{\vee})^*P\in {\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(B,X;{\mathbb{G}_{{\mathrm}{m}}}),$$ we see that $G_{{\mathrm}{log}}\in{\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(B,T_{{\mathrm}{log}})$ corresponds to $$(1_B,v^{\vee})^*P^{{\mathrm}{log}}\in {\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(B,X;{\mathbb{G}_{{\mathrm}{m,log}}}).$$ We have the following mapping diagram $$\xymatrix{ &P^{{\mathrm}{log}}\ar@{|->}[d] \\ G_{{\mathrm}{log}}\ar@{<->}[r]\ar@{|->}[d] &(1_B,v^{\vee})^*P^{{\mathrm}{log}}\ar@{|->}[d] \\ v^*G_{{\mathrm}{log}}\ar@{<->}[r] &(v,v^{\vee})^*P^{{\mathrm}{log}} }$$ with respect to the commutative diagram (\[eq2.5\]). The section $s$ of $v^*G_{{\mathrm}{log}}$ corresponds to a section of the biextension $(v,v^{\vee})^*P^{{\mathrm}{log}}$ of $(Y,X)$ by ${\mathbb{G}_{{\mathrm}{m,log}}}$, which we still denote by $s$ by abuse of notation. 
Therefore we get an equivalent description of the két log 1-motive $M=[Y\xrightarrow{u}G_{{\mathrm}{log}}]$ of the form $$\label{eq2.6} \xymatrix{ (v\times v^{\vee})^*P^{{\mathrm}{log}}\ar[r]\ar[d] &P^{{\mathrm}{log}}\ar[d] \\ Y\times X\ar[r]^{v\times v^{\vee}}\ar@/^/[u]^s &B\times B^{\vee} },$$ where $(v\times v^{\vee})^*P^{{\mathrm}{log}}$ denotes the pull-back of $P^{{\mathrm}{log}}$ along $v\times v^{\vee}$. The description (\[eq2.6\]) is symmetric. If we switch the roles of $Y$ and $X$, $v$ and $v^{\vee}$, $B$ and $B^{\vee}$, we get another két log 1-motive $M^{\vee}=[X\xrightarrow{u^{\vee}}G_{{\mathrm}{log}}^{\vee}]$, where $G_{{\mathrm}{log}}^{\vee}$ is the log augmentation of $G^{\vee}$ (see (\[eq2.4\])). The association of $M^{\vee}$ to $M$ is clearly a duality. We call the két log 1-motive $M^{\vee}=[X\xrightarrow{u^{\vee}}G_{{\mathrm}{log}}^{\vee}]$ the **dual két log 1-motive** of the két log 1-motive $M=[Y\xrightarrow{u}G_{{\mathrm}{log}}]$. Extending tamely ramified strict 1-motives into két log 1-motives {#sec3} ================================================================= From now on, $R$ is a complete discrete valuation ring with fraction field $K$, residue field $k$, and a chosen uniformizer $\pi$, $S={\mathop{{\mathrm}{Spec}}}R$, and we endow $S$ with the log structure associated to ${{\mathbb N}}\rightarrow R,1\mapsto \pi$. Let $s$ (resp. $\eta$) be the closed (resp. generic) point of $S$, we denote by $i:s\hookrightarrow S$ (resp. $j:\eta\hookrightarrow S$) the closed (resp. open) immersion of $s$ (resp. $\eta$) into $S$. We endow $s$ with the induced log structure from $S$. Following [@ray2 Def. 4.2.3], a 1-motive $M_K=[Y_K\xrightarrow{u_K}G_K]$ over $K$ is called **strict** if $G_K$ has potentially good reduction. We call a 1-motive $M_K=[Y_K\xrightarrow{u_K}G_K]$ over $K$ **tamely ramified** if there exists a tamely ramified finite field extension $K'$ of $K$ such that both $Y_K\times_KK'$ and $G_K\times_KK'$ have good reduction. 
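Two toy illustrations of these notions may help; both are ours rather than the paper's (notation as above, and in (ii) the integer $n$ is assumed invertible in the residue field $k$, which is what makes the extension tame).

```latex
% (i) Strict, unramified: the 1-motive of a Tate curve with parameter
%     q in K^x, v(q) = d > 0.  Here Y_K = Z and G_K = G_m has good
%     reduction, so M_K = [Z --u_K--> G_m], u_K(1) = q, is strict and
%     (trivially) tamely ramified; it extends over S to the log 1-motive
M=[\,\mathbb{Z}\xrightarrow{\;u\;}\mathbb{G}_{\mathrm{m,log}}\,],
\qquad
u(1)=q\in R^{\times}\cdot\pi^{\mathbb{Z}}=\mathbb{G}_{\mathrm{m,log}}(S).
% (ii) Genuinely tame ramification: for n prime to char(k), set
K'=K(\pi^{1/n}),\qquad R'=R[x]/(x^{n}-\pi),\qquad S'=\mathrm{Spec}\,R',
% with the log structure N -> R', 1 |-> \pi^{1/n}.  Then S' -> S is a
% Kummer etale cover, given on characteristic monoids by
\mathbb{N}\xrightarrow{\;\cdot\, n\;}\mathbb{N}.
```

A lattice or torus over $K$ split by such a $K'$ is tamely ramified in the sense just defined, and covers of the shape $S'$ are exactly the kind used in the extension arguments below.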
The main goal is to prove the following theorem. \[thm3.1\] Let $M_K=[Y_K\xrightarrow{u_K}G_K]$ be a tamely ramified strict 1-motive over $K$ with $G_K$ an extension of an abelian variety $B_K$ by a torus $T_K$. Then $M_K$ extends to a két log 1-motive $M^{{\mathrm}{log}}$ over $S$. Before giving the proof of the above theorem, we treat some special cases in the next few subsections. Extending tamely ramified lattices into két lattices {#subsec3.1} ---------------------------------------------------- Let $Y_K$ be a lattice over $K$, i.e. a group scheme over $K$ which is étale locally representable by a free abelian group of finite rank. Assume that $Y_K$ is tamely ramified; then $Y_K$ extends to a két lattice $Y$ over $S$. Let $K'$ be a tamely ramified finite Galois field extension of $K$ such that $Y_K\times_KK'$ is unramified. If necessary, by enlarging $K'$ by a further unramified extension, we may assume that $Y_K\times_KK'$ is constant. Let $R'$ be the integral closure of $R$ in $K'$ and $\pi'$ a uniformizer of $R'$. We endow $S':={\mathop{{\mathrm}{Spec}}}R'$ with the log structure associated to ${{\mathbb N}}\rightarrow R',1\mapsto\pi'$. Then $S'$ is a finite Kummer étale Galois cover of $S$ with Galois group ${\mathrm}{Gal}(K'/K)$. Therefore $Y_K$ extends to a Kummer étale locally constant sheaf $Y$ on $S$. This finishes the proof. Extending tamely ramified tori into két tori {#subsec3.2} -------------------------------------------- Let $T_K$ be a torus over $K$. Assume that $T_K$ is tamely ramified, i.e. there exists a tamely ramified finite field extension $K'$ of $K$ such that $T_K\times_KK'$ has good reduction. Then $T_K$ extends to a két torus $T$ over $S$. Let $X_K:={\mathcal}{H}om_{({\mathop{{\mathrm}{Spec}}}K)_{{\mathrm}{\acute{e}t}}}(T_K,{\mathbb{G}_{{\mathrm}{m}}})$ be the character group of $T_K$. Then $X_K$ is tamely ramified. Hence $X_K$ extends to a két lattice over $S$. 
It follows that $T:={\mathcal}{H}om_{S_{{\mathrm}{k\acute{e}t}}}(X,{\mathbb{G}_{{\mathrm}{m}}})$ is a két torus over $S$ which extends $T_K$. Extending tamely ramified abelian varieties into két abelian schemes {#subsec3.3} -------------------------------------------------------------------- Let $B_K$ be a tamely ramified abelian variety over $K$, and let $K'$ be a tamely ramified finite Galois field extension of $K$ such that $B_{K'}:=B_K\times_KK'$ has good reduction. Let $R'$ be the integral closure of $R$ in $K'$; then $B_{K'}$ extends to an abelian scheme $B'$ over $S':={\mathop{{\mathrm}{Spec}}}R'$. Let $\pi'$ be a uniformizer of $R'$, and endow $S'$ with the log structure associated to ${{\mathbb N}}\rightarrow R',1\mapsto\pi'$. Then $S'$ is a finite Galois Kummer étale cover of $S$ with Galois group $\Gamma:={\mathrm}{Gal}(K'/K)$. Let $\rho:\Gamma\times S'\rightarrow S'$ be the canonical action of $\Gamma$ on $S'$; then the morphism $(\rho,{\mathrm}{pr}_2):\Gamma\times S'\rightarrow S'\times_SS'$ is an isomorphism. By [@b-l-r1 §1.2, Prop. 8], $B'$ is the Néron model of $B_{K'}$. By the universal property of the Néron model, the $\Gamma$-action on $B_{K'}$ extends to a unique $\Gamma$-action $$\label{eq3.1} \tilde{\rho}:\Gamma\times B'\rightarrow B'$$ on $B'$ which is compatible with the $\Gamma$-action $\rho$ on $S'$ and the group structure of $B'$. We endow $B'$ with the induced log structure from $S'$. Let $p'$ denote the structure morphism $B'\rightarrow S'$, $\alpha$ denote the morphism $S'\rightarrow S$, and $p:=\alpha\circ p'$. For any $U\in({\mathrm}{fs}/S)$ and any $(a,b)\in (B'\times_SB')(U)$, we have $\alpha(p'(a))=\alpha(p'(b))$. Hence there exists a unique $\gamma\in\Gamma$ such that $p'(a)=\rho(\gamma,p'(b))$. Since $p'(\tilde{\rho}(\gamma,b))=\rho(\gamma,p'(b))=p'(a)$, we get $(a,\tilde{\rho}(\gamma,b))\in (B'\times_{S'}B')(U)$. 
We define a morphism $$\Phi: B'\times_SB'\rightarrow \Gamma\times (B'\times_{S'}B')$$ by sending $(a,b)$ to $(\gamma^{-1},(a,\tilde{\rho}(\gamma,b)))$. The morphism $\Phi$ is an isomorphism with inverse $$\Psi:\Gamma\times (B'\times_{S'}B')\rightarrow B'\times_SB',\quad (\gamma,(a,b))\mapsto (a,\tilde{\rho}(\gamma,b))$$ for any $U\in({\mathrm}{fs}/S)$, any $(a,b)\in (B'\times_{S'}B')(U)$, and any $\gamma\in\Gamma$. One checks directly that $\Phi$ and $\Psi$ are inverse to each other. \[lem3.2\] The canonical morphism $$(\tilde{\rho},{\mathrm}{pr}_2):\Gamma\times B'\rightarrow B'\times_SB'$$ is a monomorphism of sheaves on $({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}$. The composition $$\Gamma\times B'\xrightarrow{(\tilde{\rho},{\mathrm}{pr}_2)} B'\times_SB'\xrightarrow {\iota}B'\times_SB'\xrightarrow {\Phi}\Gamma\times (B'\times_{S'}B')$$ is identified with the morphism $1_{\Gamma}\times\Delta_{B'/S'}$, where $\iota$ denotes the morphism switching the two factors. Therefore the result follows. By [@stacks-project [Tag 0234](https://stacks.math.columbia.edu/tag/0234)], the action $\tilde{\rho}$ defines a groupoid scheme over $S$, hence by [@stacks-project [Tag 0232](https://stacks.math.columbia.edu/tag/0232)] the morphism $$(\tilde{\rho},{\mathrm}{pr}_2):\Gamma\times B'\rightarrow B'\times_SB'$$ is a pre-equivalence relation. Moreover, $(\tilde{\rho},{\mathrm}{pr}_2)$ is an equivalence relation by Lemma \[lem3.2\]. The morphism $$(\rho,{\mathrm}{pr}_2):\Gamma\times S'\rightarrow S'\times_SS'$$ is an isomorphism, hence clearly an equivalence relation. We now follow [@stacks-project [Tag 02VE](https://stacks.math.columbia.edu/tag/02VE)] to construct a két abelian scheme over $S$. We remark that although the set-up there does not agree with ours, the proofs there work verbatim in our case. 
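For the reader's convenience, the identification in the proof of Lemma \[lem3.2\] can be spelled out on points: for $(\gamma,b)\in(\Gamma\times B')(U)$ with $U\in({\mathrm}{fs}/S)$, the composition sends $$(\gamma,b)\mapsto(\tilde{\rho}(\gamma,b),b)\mapsto(b,\tilde{\rho}(\gamma,b))\mapsto(\gamma,(b,\tilde{\rho}(\gamma^{-1},\tilde{\rho}(\gamma,b))))=(\gamma,(b,b)),$$ since the unique element $\gamma'\in\Gamma$ with $p'(b)=\rho(\gamma',p'(\tilde{\rho}(\gamma,b)))=\rho(\gamma'\gamma,p'(b))$ is $\gamma'=\gamma^{-1}$. Hence the composition is exactly $1_{\Gamma}\times\Delta_{B'/S'}$, which is a monomorphism. 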
Following the approach of [@stacks-project [Tag 02VG](https://stacks.math.columbia.edu/tag/02VG)], we take the quotient sheaves for the equivalence relations $(\tilde{\rho},{\mathrm}{pr}_2)$ and $(\rho,{\mathrm}{pr}_2)$ on the site $({\mathrm}{fs}/S)_{{\mathrm}{k\acute{e}t}}$. Since $(\rho,{\mathrm}{pr}_2)$ is an isomorphism, the corresponding quotient sheaf is representable by the final object $S$. Let $B$ be the quotient sheaf for the equivalence relation $(\tilde{\rho},{\mathrm}{pr}_2)$. Since the two equivalence relations are compatible, we get a morphism $B\rightarrow S$. Since the equivalence relation $(\tilde{\rho},{\mathrm}{pr}_2)$ is compatible with the group structure of $B'$, the quotient sheaf $B$ carries the structure of a sheaf of abelian groups. The verbatim translations of the proof of [@stacks-project [Tag 045Y](https://stacks.math.columbia.edu/tag/045Y)] and the proof of [@stacks-project [Tag 07S3](https://stacks.math.columbia.edu/tag/07S3)] show that $$\label{eq3.3} \Gamma\times B'\xrightarrow{\cong}B'\times_BB'$$ and $$\label{eq3.4} B'\xrightarrow{\cong}B\times_SS'$$ respectively; hence $B$ is a két abelian scheme over $S$. To conclude, we get the following theorem. \[thm3.2\] Let $B_K$ be a tamely ramified abelian variety over $K$. Then $B_K$ extends to a két abelian scheme $B$ over $S$. The association gives rise to a functor $$K\acute{e}t:{\mathrm}{TameAb}_K\rightarrow{\mathrm}{K\acute{e}tAb}_S,\quad B_K\mapsto B$$ from the category of tamely ramified abelian varieties over $K$ to the category of két abelian schemes over $S$. It is natural to investigate whether the functor Két is compatible with the dualities on both sides. The functor $K\acute{e}t:{\mathrm}{TameAb}_K\rightarrow{\mathrm}{K\acute{e}tAb}_S$ is compatible with the dualities, i.e. we have a canonical identification $$K\acute{e}t(B_K^{\vee})\cong K\acute{e}t(B_K)^{\vee}.$$ Let $S'$, $\Gamma$, $B'$, and $B$ be as in (\[eq3.3\]) and (\[eq3.4\]); then $B=K\acute{e}t(B_K)$. 
By (\[eq3.4\]), we have $$\begin{aligned} B^{\vee}\times_SS'=&{\mathcal}{E}xt^1_{S_{{\mathrm}{kfl}}}(B,{\mathbb{G}_{{\mathrm}{m}}})\times_SS'={\mathcal}{E}xt^1_{S'_{{\mathrm}{kfl}}}(B\times_SS',{\mathbb{G}_{{\mathrm}{m}}}) \\ =&{\mathcal}{E}xt^1_{S'_{{\mathrm}{kfl}}}(B',{\mathbb{G}_{{\mathrm}{m}}})=B^{'\vee}.\end{aligned}$$ It follows that $B^{\vee}={\mathcal}{E}xt^1_{S_{{\mathrm}{k\acute{e}t}}}(B,{\mathbb{G}_{{\mathrm}{m}}})$ is the quotient sheaf for a descent datum with respect to the Galois Kummer étale cover $S'/S$. Such a descent datum is given by a group action $\tau:\Gamma\times B^{'\vee}\rightarrow B^{'\vee}$. In order to have the identification $K\acute{e}t(B_K^{\vee})\cong K\acute{e}t(B_K)^{\vee}=B^{\vee}$, we are reduced to identifying the action $\tau$ with the action $\tilde{\rho}^{\vee}:\Gamma\times B^{'\vee}\rightarrow B^{'\vee}$ for $B_K^{\vee}$ which corresponds to the action (\[eq3.1\]) for $B_K$. But this is clear, since $\Gamma\times B'=\sqcup_{\gamma\in\Gamma}B'$ and these two actions agree over the generic fiber. Proof of Theorem \[thm3.1\] --------------------------- In this subsection, we prove Theorem \[thm3.1\]. Let $v_K$ be the composition $Y_K\xrightarrow{u_K}G_K\rightarrow B_K$, $X_K$ the character group of the torus $T_K$, and $v_K^{\vee}:X_K\rightarrow B_K^{\vee}$ the homomorphism corresponding to the semi-abelian variety $G_K$. By [@ray2 2.4.1], the 1-motive $M_K$ is uniquely determined by a commutative diagram of the form $$\label{eq3.7} \xymatrix{ &P_K\ar[d] \\ Y_K\times_{{\mathop{{\mathrm}{Spec}}}K}X_K\ar[r]^-{v_K\times v_K^{\vee}}\ar[ru]^{s_K} &B_K\times_{{\mathop{{\mathrm}{Spec}}}K} B_K^{\vee} },$$ where $s_K$ is a bilinear map. Note that $s_K$ corresponds to a unique section $$\label{eq3.8} t_K:Y_K\times_{{\mathop{{\mathrm}{Spec}}}K}X_K\rightarrow E_K,$$ where $E_K$ denotes the pull-back of the Weil biextension $P_K$ of $B_K$ and its dual $B_K^{\vee}$ along $v_K\times v_K^{\vee}$. 
Let $K'$ be a finite tamely ramified Galois extension of $K$ such that $B_K$ extends to an abelian scheme $B'$ over $S':={\mathop{{\mathrm}{Spec}}}R'$, $Y_K$ extends to a constant group scheme over $S'$, and $T_K$ extends to a split torus over $S'$, where $R'$ denotes the integral closure of $R$ in $K'$. Let $\pi'$ be a uniformizer of $R'$ such that $\pi'=\pi^{\frac{1}{e}}$ with $e$ the ramification index of the extension $K'/K$, and we endow $S'$ with the log structure associated to ${{\mathbb N}}\rightarrow R',1\mapsto\pi'$. Then $S'$ is a finite Galois Kummer étale cover of $S$ with Galois group $\Gamma:={\mathrm}{Gal}(K'/K)$. Let $Y$ (resp. $X$) be the két lattice over $S$ extending $Y_K$ (resp. $X_K$) as constructed in Subsection \[subsec3.1\]; then $Y$ (resp. $X$) can be regarded as a $\Gamma$-module. Let $T$ be the két torus over $S$ extending $T_K$ as constructed in Subsection \[subsec3.2\]. Note that $T$ is nothing but ${\mathcal}{H}om_{S_{{\mathrm}{k\acute{e}t}}}(X,{\mathbb{G}_{{\mathrm}{m}}})$. Let $B$ (resp. $B^{\vee}$) be the két abelian scheme extending $B_K$ (resp. $B_K^{\vee}$) as constructed in Subsection \[subsec3.3\], and let $P$ be the Weil biextension of $(B,B^{\vee})$ by ${\mathbb{G}_{{\mathrm}{m}}}$. \[lem3.3\] The homomorphism $v_K$ (resp. $v_K^{\vee}$) extends to a unique homomorphism $v:Y\rightarrow B$ (resp. $v^{\vee}:X\rightarrow B^{\vee}$). We only treat the case of $v_K$; the other one can be done in the same way. We have $B\times_SS'=B'$ by (\[eq3.4\]). Therefore $$B(S')=B'(S')=B'({\mathop{{\mathrm}{Spec}}}K')=B_K({\mathop{{\mathrm}{Spec}}}K').$$ Since $Y$ corresponds to a $\Gamma$-module, we get $${\mathrm}{Hom}_S(Y,B)={\mathrm}{Hom}_{{{\mathbb Z}}-{\mathrm}{Mod}}(Y,B(S'))^{\Gamma}={\mathrm}{Hom}_{{{\mathbb Z}}-{\mathrm}{Mod}}(Y_K,B_K({\mathop{{\mathrm}{Spec}}}K'))^{\Gamma}.$$ It follows that $v_K$ extends to a unique homomorphism $v:Y\rightarrow B$. 
By Lemma \[lem3.3\], we get a map $v\times v^{\vee}:Y\times_SX\rightarrow B\times_S B^{\vee}$. Let $P^{{\mathrm}{log}}$ be the push-out of $P$ along the inclusion ${\mathbb{G}_{{\mathrm}{m}}}\hookrightarrow{\mathbb{G}_{{\mathrm}{m,log}}}$; we then get the following diagram $$\label{eq3.9} \xymatrix{ &P^{{\mathrm}{log}}\ar[d] \\ Y\times_SX\ar[r]^-{v\times v^{\vee}}\ar@{..>}[ru]^{s_K} &B\times_S B^{\vee} }$$ over $S$. The dotted arrow in (\[eq3.9\]) indicates that $s_K$ is only a map over ${\mathop{{\mathrm}{Spec}}}K$. The restriction of (\[eq3.9\]) to ${\mathop{{\mathrm}{Spec}}}K$ is clearly just the diagram (\[eq3.7\]). The bilinear map $s_K$ from (\[eq3.7\]) extends uniquely to a bilinear map $s^{{\mathrm}{log}}:Y\times_SX\rightarrow P^{{\mathrm}{log}}$ making the diagram (\[eq3.9\]) commutative. Let $E$ be the pull-back of $P$ along the map $v\times v^{\vee}$ on $({\mathrm}{fs}/S)_{{\mathrm}{kfl}}$, and let $E^{{\mathrm}{log}}$ be the push-out of $E$ along the canonical map ${\mathbb{G}_{{\mathrm}{m}}}\hookrightarrow{\mathbb{G}_{{\mathrm}{m,log}}}$ on $({\mathrm}{fs}/S)_{{\mathrm}{kfl}}$. Since both $Y$ and $X$ are Kummer étale locally representable by a finitely generated free abelian group, we have $${\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(Y,X;-)={\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(Y\otimes^{\mathbb{L}} X,-)={\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(Y\otimes X,-)$$ by [@sga7-1 Exp. VII, 3.6.5]. Therefore $E$ (resp. $E^{{\mathrm}{log}}$) can be regarded as an element of ${\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(Y\otimes X,{\mathbb{G}_{{\mathrm}{m}}})$ (resp. ${\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(Y\otimes X,{\mathbb{G}_{{\mathrm}{m,log}}})$), and $E^{{\mathrm}{log}}$ is still the push-out of $E$ under these identifications. Similarly, $E_K:=E\times_S{\mathop{{\mathrm}{Spec}}}K$ can be regarded as an element of ${\mathrm}{Ext}^1_{({\mathop{{\mathrm}{Spec}}}K)_{{\mathrm}{fl}}}(Y_K\otimes X_K,{\mathbb{G}_{{\mathrm}{m}}})$. 
Note that both $E$ and $E^{{\mathrm}{log}}$ over $S$ restrict to $E_K$ over $K$. The extensions $E$, $E^{{\mathrm}{log}}$, and $E_K$ give rise to exact sequences $$\label{eq3.10} 0\rightarrow {\mathbb{G}_{{\mathrm}{m}}}(S')\rightarrow E(S')\rightarrow Y\otimes X(S')\rightarrow H^1_{{\mathrm}{kfl}}(S',{\mathbb{G}_{{\mathrm}{m}}}),$$ $$\label{eq3.11} 0\rightarrow {\mathbb{G}_{{\mathrm}{m,log}}}(S')\rightarrow E^{{\mathrm}{log}}(S')\rightarrow Y\otimes X(S')\rightarrow H^1_{{\mathrm}{kfl}}(S',{\mathbb{G}_{{\mathrm}{m,log}}}),$$ and $$\label{eq3.12} 0\rightarrow {\mathbb{G}_{{\mathrm}{m}}}(K')\rightarrow E_K(K')\rightarrow Y_K\otimes X_K(K')\rightarrow H^1_{{\mathrm}{fl}}({\mathop{{\mathrm}{Spec}}}K',{\mathbb{G}_{{\mathrm}{m}}})$$ respectively. Clearly we have $$H^1_{{\mathrm}{fl}}(S',{\mathbb{G}_{{\mathrm}{m}}})=H^1_{{\mathrm}{\acute{e}t}}(S',{\mathbb{G}_{{\mathrm}{m}}})=0$$ and $$H^1_{{\mathrm}{fl}}({\mathop{{\mathrm}{Spec}}}K',{\mathbb{G}_{{\mathrm}{m}}})=H^1_{{\mathrm}{\acute{e}t}}({\mathop{{\mathrm}{Spec}}}K',{\mathbb{G}_{{\mathrm}{m}}})=0.$$ The short exact sequence $0\rightarrow{\mathbb{G}_{{\mathrm}{m}}}\rightarrow{\mathbb{G}_{{\mathrm}{m,log}}}\rightarrow ({\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}})_{S_{{\mathrm}{fl}}}\rightarrow0$ gives rise to an exact sequence $$\rightarrow H^1_{{\mathrm}{fl}}(S',{\mathbb{G}_{{\mathrm}{m}}})\rightarrow H^1_{{\mathrm}{fl}}(S',{\mathbb{G}_{{\mathrm}{m,log}}})\rightarrow H^1_{{\mathrm}{fl}}(S',({\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}})_{S_{{\mathrm}{fl}}})\rightarrow.$$ Since $H^1_{{\mathrm}{fl}}(S',({\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}})_{S_{{\mathrm}{fl}}})=H^1_{{\mathrm}{fl}}(S',i'_*{{\mathbb Z}})=H^1_{{\mathrm}{fl}}(s',{{\mathbb Z}})=H^1_{{\mathrm}{\acute{e}t}}(s',{{\mathbb Z}})=0$, where $i'$ denotes the inclusion of the closed point $s'$ of $S'$ into $S'$, we get $H^1_{{\mathrm}{fl}}(S',{\mathbb{G}_{{\mathrm}{m,log}}})=0$. 
By Kato’s logarithmic Hilbert 90, see [@niz1 Thm. 3.20], we get $$H^1_{{\mathrm}{kfl}}(S',{\mathbb{G}_{{\mathrm}{m,log}}})=H^1_{{\mathrm}{fl}}(S',{\mathbb{G}_{{\mathrm}{m,log}}})=0.$$ The exact sequences (\[eq3.10\]), (\[eq3.11\]), and (\[eq3.12\]) fit into the following commutative diagram $$\label{eq3.13} \xymatrix{ 0\ar[r] &{\mathbb{G}_{{\mathrm}{m}}}(S')\ar[r]\ar[d] &E(S')\ar[r]\ar[d] &Z(S')\ar@{=}[d]\ar[r]^-{\delta} &H^1_{{\mathrm}{kfl}}(S',{\mathbb{G}_{{\mathrm}{m}}})\ar[d] \\ 0\ar[r] &{\mathbb{G}_{{\mathrm}{m,log}}}(S')\ar[r]\ar[d] &E^{{\mathrm}{log}}(S')\ar[r]\ar[d] &Z(S')\ar[r]\ar[d] &0 \\ 0\ar[r] &{\mathbb{G}_{{\mathrm}{m}}}({\mathop{{\mathrm}{Spec}}}K')\ar[r] &E_K({\mathop{{\mathrm}{Spec}}}K')\ar[r] &Z_K({\mathop{{\mathrm}{Spec}}}K')\ar[r] &0 }$$ with exact rows, where $Z$ and $Z_K$ denote $Y\otimes X$ and $Y_K\otimes X_K$ respectively. Since $Y$ and $X$ become constant over $S'$, the map $$Z(S')\rightarrow Z_K({\mathop{{\mathrm}{Spec}}}K')$$ is an isomorphism. The map ${\mathbb{G}_{{\mathrm}{m,log}}}(S')\rightarrow {\mathbb{G}_{{\mathrm}{m}}}({\mathop{{\mathrm}{Spec}}}K')$ is also an isomorphism. Therefore the restriction map $$E^{{\mathrm}{log}}(S')\rightarrow E_K({\mathop{{\mathrm}{Spec}}}K')=E^{{\mathrm}{log}}({\mathop{{\mathrm}{Spec}}}K')$$ is an isomorphism. We regard $E_K$ as an extension of $Y_K\otimes X_K$ by ${\mathbb{G}_{{\mathrm}{m}}}$; then the section $t_K$ (see (\[eq3.8\])) of $E_K$ induces a section of the surjection $E^{{\mathrm}{log}}(S')\rightarrow Y\otimes X(S')$. This induced section is clearly ${\mathrm}{Gal}(S'/S)$-equivariant, and therefore gives rise to a section $$\label{eq3.14} t^{{\mathrm}{log}}:Y\otimes X\rightarrow E^{{\mathrm}{log}}$$ of the extension $E^{{\mathrm}{log}}$ of $Y\otimes X$ by ${\mathbb{G}_{{\mathrm}{m,log}}}$. The homomorphism $t^{{\mathrm}{log}}$ is automatically also a section of the corresponding biextension $E^{{\mathrm}{log}}$ of $(Y,X)$ by ${\mathbb{G}_{{\mathrm}{m,log}}}$. 
Note that $E^{{\mathrm}{log}}$ is also the pull-back of $P^{{\mathrm}{log}}$ along $v\times v^{\vee}$, and $t^{{\mathrm}{log}}$ gives rise to a bilinear map $s^{{\mathrm}{log}}:Y\times_{S}X\rightarrow P^{{\mathrm}{log}}$ which extends $s_K$. Clearly we have the following commutative diagram $$\label{eq3.15} \xymatrix{ &P^{{\mathrm}{log}}\ar[d] \\ Y\times_{S}X\ar[r]^-{v\times v^{\vee}}\ar[ru]^{s^{{\mathrm}{log}}} &B\times B^{\vee} }.$$ This finishes the proof. Now we are ready to prove Theorem \[thm3.1\]. Recall that $T={\mathcal}{H}om_S(X,{\mathbb{G}_{{\mathrm}{m}}})$, and let $T_{{\mathrm}{log}}:={\mathcal}{H}om_S(X,{\mathbb{G}_{{\mathrm}{m,log}}})$. We have the following two commutative diagrams $$\xymatrix{ {\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(B,T)\ar[d]_{v^*}\ar[r]^-{\cong} &{\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(B,X;{\mathbb{G}_{{\mathrm}{m}}})\ar[d]^{(v,1_X)^*} \\ {\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(Y,T)\ar[r]^-{\cong} &{\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(Y,X;{\mathbb{G}_{{\mathrm}{m}}}) }$$ and $$\xymatrix{ {\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(B,T_{{\mathrm}{log}})\ar[d]_{v^*}\ar[r]^-{\cong} &{\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(B,X;{\mathbb{G}_{{\mathrm}{m,log}}})\ar[d]^{(v,1_X)^*} \\ {\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(Y,T_{{\mathrm}{log}})\ar[r]^-{\cong} &{\mathrm}{Biext}^1_{S_{{\mathrm}{kfl}}}(Y,X;{\mathbb{G}_{{\mathrm}{m,log}}}) },$$ where the fact that the horizontal maps are isomorphisms comes from $${\mathcal}{E}xt^1_{S_{{\mathrm}{kfl}}}(X,{\mathbb{G}_{{\mathrm}{m}}})={\mathcal}{E}xt^1_{S_{{\mathrm}{kfl}}}(X,{\mathbb{G}_{{\mathrm}{m,log}}})=0$$ with the help of [@sga7-1 Exp. VIII, 1.1.4]. Let $G\in {\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(B,T)$ (resp. $G_{{\mathrm}{log}}\in {\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(B,T_{{\mathrm}{log}})$) be the extension corresponding to the biextension $(1_B,v^{\vee})^*P$ (resp. 
$(1_B,v^{\vee})^*P^{{\mathrm}{log}}$), then the section $s^{{\mathrm}{log}}$ of $E^{{\mathrm}{log}}$ gives rise to a homomorphism $u^{{\mathrm}{log}}:Y\rightarrow G_{{\mathrm}{log}}$ fitting into the following commutative diagram $$\label{eq3.16} \xymatrix{ &&&Y\ar[d]^v\ar[ld]_{u^{{\mathrm}{log}}} \\ 0\ar[r] &T_{{\mathrm}{log}}\ar[r] &G_{{\mathrm}{log}}\ar[r] &B\ar[r] &0 \\ 0\ar[r] &T\ar[r]\ar[u] &G\ar[r]\ar[u] &B\ar[r]\ar@{=}[u] &0 }$$ of sheaves of abelian groups on $({\mathrm}{fs}/S)_{{\mathrm}{kfl}}$. This gives a two-term complex $$Y\xrightarrow{u^{{\mathrm}{log}}}G_{{\mathrm}{log}}.$$ Since both $X$ and $Y$ are representable by a finitely generated free abelian group over $S'$, we have that $G\times_SS'$ is an extension of the abelian scheme $B\times_SS'$ by the torus $T\times_SS'$ on $({\mathrm}{fs}/S')_{{\mathrm}{\acute{e}t}}$ by Remark \[rmk2.1\] and $u^{{\mathrm}{log}}\times_SS':Y\times_SS'\rightarrow G_{{\mathrm}{log}}\times_SS'$ is a log 1-motive over $S'$. Therefore $Y\xrightarrow{u^{{\mathrm}{log}}}G_{{\mathrm}{log}}$ is a két log 1-motive over $S$. Clearly the két log 1-motive $Y\xrightarrow{u^{{\mathrm}{log}}}G_{{\mathrm}{log}}$ extends $M_K$. \[cor3.1\] Let the notation and the assumptions be as in Theorem \[thm3.1\]. We further assume that both $Y_K$ and $G_K$ have good reduction. Then the két log 1-motive $M^{{\mathrm}{log}}=[Y\xrightarrow{u^{{\mathrm}{log}}}G_{{\mathrm}{log}}]$ associated to $M_K$ is a log 1-motive. Since both $Y_K$ and $G_K$ have good reduction, both $X$ and $Y$ are étale locally representable by a finitely generated free abelian group over $S$. Therefore $G$ is an extension of the abelian scheme $B$ by the torus $T$ on $({\mathrm}{fs}/S)_{{\mathrm}{kfl}}$. By Remark \[rmk2.1\], $G$ comes from an extension on $({\mathrm}{fs}/S)_{{\mathrm}{\acute{e}t}}$. It follows that $Y\xrightarrow{u^{{\mathrm}{log}}}G_{{\mathrm}{log}}$ is a log 1-motive over $S$. 
Corollary \[cor3.1\] shows that a log 1-motive in the sense of [@k-t1 4.6.1] extends uniquely to a log 1-motive in our sense (i.e. in the sense of [@k-k-n2 Defn. 2.2]). Monodromy {#sec4} ========= In this section, we construct a pairing for a tamely ramified strict 1-motive $M_K$ over a complete discrete valuation field via the két log 1-motive $M^{{\mathrm}{log}}$ associated to $M_K$. We compare it with the geometric monodromy pairing from [@ray2 4.3]. Logarithmic monodromy pairing ----------------------------- We adopt the notation from the last section. Consider the following push-out diagram $$\label{eq4.1} \xymatrix{ 0\ar[r] &{\mathbb{G}_{{\mathrm}{m}}}\ar[r]\ar@{^(->}[d] &E\ar[r]\ar@{^(->}[d] &Y\otimes_{{{\mathbb Z}}}X\ar[r]\ar@{=}[d] &0 \\ 0\ar[r] &{\mathbb{G}_{{\mathrm}{m,log}}}\ar[r] &E^{{\mathrm}{log}}\ar[r] &Y\otimes_{{{\mathbb Z}}}X\ar[r]\ar@/^1pc/[l]^{t^{{\mathrm}{log}}} &0 },$$ where $t^{{\mathrm}{log}}$ is the section (\[eq3.14\]). Then the section $t^{{\mathrm}{log}}$ induces a linear map $$Y\otimes_{{{\mathbb Z}}}X\rightarrow E^{{\mathrm}{log}}/E\cong ({\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}})_{S_{{\mathrm}{kfl}}},$$ which corresponds to a bilinear map $$\label{eq4.2} \langle-,-\rangle:Y\times X\rightarrow ({\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}})_{S_{{\mathrm}{kfl}}}.$$ This pairing is nothing but the monodromy pairing (\[defn2.6\]) for the két log 1-motive $M^{{\mathrm}{log}}$. We call the pairing (\[eq4.2\]) the **logarithmic monodromy pairing** of the tamely ramified strict 1-motive $M_K$. \[prop4.1\] Let the assumptions and the notation be as in Theorem \[thm3.1\] and its proof. The monodromy pairing (\[eq4.2\]) vanishes if and only if the section $t^{{\mathrm}{log}}$ is induced from a section $t:Y\otimes_SX\rightarrow E$ of $E$. When such a section $t$ exists, it corresponds to a section $s:Y\times_SX\rightarrow P$ which further corresponds to a map $u:Y\rightarrow G$. 
The maps $s$ and $u$ extend the diagrams (\[eq3.15\]) and (\[eq3.16\]) to the commutative diagrams $$\xymatrix{ &P\ar[r]\ar[d] &P^{{\mathrm}{log}}\ar[d] \\ Y\times_{S}X\ar[r]_-{v\times v^{\vee}}\ar[ru]^s\ar@{-->}[rru]^{s^{{\mathrm}{log}}} &B\times_S B^{\vee}\ar@{=}[r] &B\times_S B^{\vee} }.$$ and $$\xymatrix{ &&&Y\ar[d]^v\ar[ld]_{u^{{\mathrm}{log}}}\ar@{-->}[ldd]_u \\ 0\ar[r] &T_{{\mathrm}{log}}\ar[r] &G_{{\mathrm}{log}}\ar[r] &B\ar[r] &0 \\ 0\ar[r] &T\ar[r]\ar[u] &G\ar[r]\ar[u] &B\ar[r]\ar@{=}[u] &0 }$$ respectively. Therefore the given 1-motive $M_K$ extends to a unique két 1-motive $M=[Y\xrightarrow{u}G]$ such that the két log 1-motive $M^{{\mathrm}{log}}$ associated to $M_K$ is induced from $M$. By the construction of the monodromy pairing, its vanishing is clearly equivalent to $t^{{\mathrm}{log}}$ being induced from a section $t:Y\otimes_SX\rightarrow E$ of $E$. The proof of the rest is similar to the proof of Theorem \[thm3.1\]. \[prop4.2\] Let $M_K$ be a tamely ramified strict 1-motive over $K$, and $M^{{\mathrm}{log}}=[Y\xrightarrow{u^{{\mathrm}{log}}}G_{{\mathrm}{log}}]$ the két log 1-motive associated to $M_K$. Assume that the logarithmic monodromy pairing of $M_K$ is induced by a pairing $\mu_{\pi}:Y\times X\rightarrow\pi^{{{\mathbb Z}}}$. Let $$u_{2,\pi}^{{\mathrm}{log}}:Y\rightarrow T_{{\mathrm}{log}}={\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(X,{\mathbb{G}_{{\mathrm}{m,log}}})\subset G_{{\mathrm}{log}}$$ be the map induced by $\mu_{\pi}$, and $u_{1,\pi}^{{\mathrm}{log}}:=u^{{\mathrm}{log}}-u_{2,\pi}^{{\mathrm}{log}}$. Then $u_{1,\pi}^{{\mathrm}{log}}$ factors as $$Y\xrightarrow{u_{1,\pi}}G\hookrightarrow G_{{\mathrm}{log}},$$ i.e. the két log 1-motive $[Y\xrightarrow{u_{1,\pi}^{{\mathrm}{log}}}G_{{\mathrm}{log}}]$ is induced from the két 1-motive $[Y\xrightarrow{u_{1,\pi}}G]$. It suffices to prove that $u_{1,\pi}^{{\mathrm}{log}}$ factors through $G\hookrightarrow G_{{\mathrm}{log}}$; the rest is clear. 
The monodromy pairing of the két log 1-motive $[Y\xrightarrow{u_{1,\pi}^{{\mathrm}{log}}}G_{{\mathrm}{log}}]$ is the difference of the monodromy pairings of $[Y\xrightarrow{u^{{\mathrm}{log}}}G_{{\mathrm}{log}}]$ and $[Y\xrightarrow{u_{2,\pi}^{{\mathrm}{log}}}G_{{\mathrm}{log}}]$. Since the two monodromy pairings agree, we have that the monodromy pairing of $[Y\xrightarrow{u_{1,\pi}^{{\mathrm}{log}}}G_{{\mathrm}{log}}]$ vanishes. By Proposition \[prop4.1\], we are done. Let $M_K=[Y_K\xrightarrow{u_K} G_K]$ be a tamely ramified strict 1-motive over $K$. Assume that both $Y_K$ and $G_K$ have good reduction. Then both $Y$ and $X$ are étale locally constant. Therefore the monodromy pairing $\langle-,-\rangle:Y\times X\rightarrow ({\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}})_{S_{{\mathrm}{kfl}}}$ factors through the canonical homomorphism $$\pi^{{{\mathbb Z}}}\cong M^{{\mathrm}{gp}}_S/{\mathcal}{O}_S^{\times}\rightarrow ({\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}})_{S_{{\mathrm}{kfl}}}.$$ In other words, the monodromy pairing of $M_K$ satisfies the assumption of Proposition \[prop4.2\] in this case. The construction of the decomposition $u^{{\mathrm}{log}}=u_{1,\pi}^{{\mathrm}{log}}+u_{2,\pi}^{{\mathrm}{log}}$ involves the chosen uniformizer $\pi$. Next we look for a decomposition $u^{{\mathrm}{log}}=u_{1}^{{\mathrm}{log}}+u_{2}^{{\mathrm}{log}}$ independent of the choice of a uniformizer, such that $$\label{eq4.5} \text{$u_{1}^{{\mathrm}{log}}$ is induced by some map $u_{1}:Y\rightarrow G$ and $u_{2}^{{\mathrm}{log}}$ factors through $T_{{\mathrm}{log}}\hookrightarrow G_{{\mathrm}{log}}$.}$$ Let $M_K$ be a tamely ramified strict 1-motive over $K$, and $M^{{\mathrm}{log}}=[Y\xrightarrow{u^{{\mathrm}{log}}}G_{{\mathrm}{log}}]$ the két log 1-motive associated to $M_K$. 
The decompositions $u^{{\mathrm}{log}}=u_{1}^{{\mathrm}{log}}+u_{2}^{{\mathrm}{log}}$ satisfying the condition (\[eq4.5\]) correspond canonically to the trivializations $t:Y\otimes X\rightarrow E$ of the extension $E$ from (\[eq4.1\]). Under this correspondence, the homomorphism $u_2^{{\mathrm}{log}}$ corresponds to the difference homomorphism $t^{{\mathrm}{log}}-t$, where $t^{{\mathrm}{log}}$ is as in (\[eq4.1\]). Given a decomposition $u^{{\mathrm}{log}}=u_{1}^{{\mathrm}{log}}+u_{2}^{{\mathrm}{log}}$ satisfying the condition (\[eq4.5\]), the map $u_1$ associated to $u_{1}^{{\mathrm}{log}}$ gives rise to a section $t:Y\otimes X\rightarrow E$ of $E$. Conversely, given a section $t:Y\otimes X\rightarrow E$ of $E$, the decomposition $t^{{\mathrm}{log}}=t+(t^{{\mathrm}{log}}-t):=t_1+t_2$ gives rise to a decomposition $u^{{\mathrm}{log}}=u_{1}^{{\mathrm}{log}}+u_{2}^{{\mathrm}{log}}$ with $u_i^{{\mathrm}{log}}$ induced by $t_i$. It is clear that $u_1^{{\mathrm}{log}}$ factors through $G\hookrightarrow G_{{\mathrm}{log}}$. By an easy calculation, $t^{{\mathrm}{log}}-t$ factors through ${\mathbb{G}_{{\mathrm}{m,log}}}\hookrightarrow E^{{\mathrm}{log}}$, therefore $u_2^{{\mathrm}{log}}$ factors as $Y\rightarrow T_{{\mathrm}{log}}\rightarrow G_{{\mathrm}{log}}$. Hence the decomposition $u^{{\mathrm}{log}}=u_{1}^{{\mathrm}{log}}+u_{2}^{{\mathrm}{log}}$ satisfies the condition (\[eq4.5\]). As before, let $Z:=Y\otimes X$. We abbreviate $({\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}})_{S_{{\mathrm}{kfl}}}$ as ${\overline{\mathbb{G}}_{{\mathrm}{m,log}}}$. 
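The easy calculation in the proof above can be spelled out. Writing (in this sketch) ${\mathrm}{pr}:E^{{\mathrm}{log}}\rightarrow Y\otimes X$ for the projection, both $t^{{\mathrm}{log}}$ and $t$ (the latter after composing with the canonical map $E\rightarrow E^{{\mathrm}{log}}$) are sections of ${\mathrm}{pr}$, so $${\mathrm}{pr}\circ(t^{{\mathrm}{log}}-t)={\mathrm}{pr}\circ t^{{\mathrm}{log}}-{\mathrm}{pr}\circ t=1_{Y\otimes X}-1_{Y\otimes X}=0,$$ and hence $t^{{\mathrm}{log}}-t$ lands in $\ker({\mathrm}{pr})={\mathbb{G}_{{\mathrm}{m,log}}}$. 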
Applying the functor ${\mathrm}{Hom}_{S_{{\mathrm}{kfl}}}(Z,-)$ to the short exact sequence $$0\rightarrow{\mathbb{G}_{{\mathrm}{m}}}\rightarrow{\mathbb{G}_{{\mathrm}{m,log}}}\rightarrow{\overline{\mathbb{G}}_{{\mathrm}{m,log}}}\rightarrow0,$$ we get an exact sequence $${\mathrm}{Hom}_{S_{{\mathrm}{kfl}}}(Z,{\mathbb{G}_{{\mathrm}{m,log}}})\xrightarrow{\alpha}{\mathrm}{Hom}_{S_{{\mathrm}{kfl}}}(Z,{\overline{\mathbb{G}}_{{\mathrm}{m,log}}})\rightarrow{\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(Z,{\mathbb{G}_{{\mathrm}{m}}})\rightarrow{\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(Z,{\mathbb{G}_{{\mathrm}{m,log}}}).$$ Let $\mu^{{\mathrm}{log}}\in{\mathrm}{Hom}_{S_{{\mathrm}{kfl}}}(Z,{\overline{\mathbb{G}}_{{\mathrm}{m,log}}})$ be the element corresponding to the logarithmic monodromy pairing $\langle-,-\rangle$ of $M_K$. Then the element $E$ of ${\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(Z,{\mathbb{G}_{{\mathrm}{m}}})$ is the image of $\mu^{{\mathrm}{log}}$ under the map ${\mathrm}{Hom}_{S_{{\mathrm}{kfl}}}(Z,{\overline{\mathbb{G}}_{{\mathrm}{m,log}}})\rightarrow{\mathrm}{Ext}^1_{S_{{\mathrm}{kfl}}}(Z,{\mathbb{G}_{{\mathrm}{m}}})$. If $E$ is trivial, then the subset $$\alpha^{-1}(\mu^{{\mathrm}{log}})\subset {\mathrm}{Hom}_{S_{{\mathrm}{kfl}}}(Z,{\mathbb{G}_{{\mathrm}{m,log}}})={\mathrm}{Hom}_{S_{{\mathrm}{kfl}}}(Y,T_{{\mathrm}{log}})$$ is nonempty, and its elements correspond to the choices of $u_2^{{\mathrm}{log}}$. 
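Concretely, when $\alpha^{-1}(\mu^{{\mathrm}{log}})$ is nonempty, exactness shows that it is a coset of $\ker(\alpha)={\mathrm}{Hom}_{S_{{\mathrm}{kfl}}}(Z,{\mathbb{G}_{{\mathrm}{m}}})$: fixing one element $u_2^{{\mathrm}{log}}\in\alpha^{-1}(\mu^{{\mathrm}{log}})$, we have $$\alpha^{-1}(\mu^{{\mathrm}{log}})=u_2^{{\mathrm}{log}}+{\mathrm}{Hom}_{S_{{\mathrm}{kfl}}}(Z,{\mathbb{G}_{{\mathrm}{m}}})=u_2^{{\mathrm}{log}}+{\mathrm}{Hom}_{S_{{\mathrm}{kfl}}}(Y,T),$$ using the same tensor-hom adjunction as above; in other words, any two choices of $u_2^{{\mathrm}{log}}$ differ by a homomorphism $Y\rightarrow T$. 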
Comparison with Raynaud’s geometric monodromy --------------------------------------------- Since $B$ and $B^{\vee}$ become abelian schemes after base change to $S'$, $P\times_SS'$ is the Weil biextension of the abelian schemes $B\times_SS'$ and $B^{\vee}\times_SS'$, in particular $$P\times_SS'\in{\mathrm}{Biext}^1_{S'_{{\mathrm}{fl}}}(B\times_SS',B^{\vee}\times_SS';{\mathbb{G}_{{\mathrm}{m}}}).$$ It follows that the extension $E\times_SS'$ lies in the subgroup ${\mathrm}{Ext}^1_{S'_{{\mathrm}{fl}}}((Y\otimes X)\times_SS',{\mathbb{G}_{{\mathrm}{m}}})$ of the group ${\mathrm}{Ext}^1_{S'_{{\mathrm}{kfl}}}((Y\otimes X)\times_SS',{\mathbb{G}_{{\mathrm}{m}}})$. Therefore the image of the map $\delta$ from (\[eq3.13\]) lands in the subgroup $H^1_{{\mathrm}{fl}}(S',{\mathbb{G}_{{\mathrm}{m}}})$ of $H^1_{{\mathrm}{kfl}}(S',{\mathbb{G}_{{\mathrm}{m}}})$. Since $H^1_{{\mathrm}{fl}}(S',{\mathbb{G}_{{\mathrm}{m}}})=0$, the diagram (\[eq3.13\]) gives rise to the following commutative diagram $$\label{eq4.6} \xymatrix{ 0\ar[r] &{\mathbb{G}_{{\mathrm}{m}}}(S')\ar[r]\ar[d] &E(S')\ar[r]\ar[d] &Y\otimes X(S')\ar@{=}[d]\ar[r] &0 \\ 0\ar[r] &{\mathbb{G}_{{\mathrm}{m,log}}}(S')\ar[r]\ar[d]^{\cong} &E^{{\mathrm}{log}}(S')\ar[r]\ar[d]^{\cong} &Y\otimes X(S')\ar[r]\ar[d]^{\cong}\ar@/^1pc/[l]^{t^{{\mathrm}{log}}} &0 \\ 0\ar[r] &{\mathbb{G}_{{\mathrm}{m}}}({\mathop{{\mathrm}{Spec}}}K')\ar[r] &E_K({\mathop{{\mathrm}{Spec}}}K')\ar[r] &Y_K\otimes X_K({\mathop{{\mathrm}{Spec}}}K')\ar[r]\ar@/^1pc/[l]^{t_K} &0 }$$ with exact rows. 
Then the pairing (\[eq4.2\]) induces a pairing $$\langle-,-\rangle:Y(S')\times X(S')\rightarrow {\mathbb{G}_{{\mathrm}{m,log}}}(S')/{\mathbb{G}_{{\mathrm}{m}}}(S').$$ Since ${\mathbb{G}_{{\mathrm}{m}}}(S')=R^{'\times}$ and ${\mathbb{G}_{{\mathrm}{m,log}}}(S')=R^{'\times}\times\pi^{'{{\mathbb Z}}}$, we get a ${\mathrm}{Gal}(S'/S)$-equivariant pairing $$\label{eq4.7} \langle-,-\rangle:Y(S')\times X(S')\rightarrow \pi^{'{{\mathbb Z}}}.$$ The pairing (\[eq4.7\]) coincides with the geometric monodromy pairing $\mu:Y_K\times X_K\rightarrow\pi^{{{\mathbb Q}}}$ from [@ray2 4.3]. The map $t_K$ in the diagram (\[eq4.6\]) induces a homomorphism $$Y_K\otimes X_K({\mathop{{\mathrm}{Spec}}}K')\rightarrow E_K({\mathop{{\mathrm}{Spec}}}K')/E(S')$$ which gives rise to exactly the monodromy pairing from [@ray2 4.3]. Since the second row and the third row are isomorphic in the diagram (\[eq4.6\]), we are done. If Raynaud’s geometric monodromy pairing $\mu$ factors through $\pi^{{{\mathbb Z}}}$, [@ray2 Prop. 4.5.1] gives a decomposition $u_K=u^1_{K,\pi}+u^2_{K,\pi}$ such that $$\label{eq4.8} \begin{split} &\text{the $K$-1-motive $M^1_{K,\pi}=[Y_K\xrightarrow{u^1_{K,\pi}}G_K]$ has potentially good reduction;} \\ &\text{and $u^2_{K,\pi}$ factors through the torus part $T_K$ of $G_K$.} \end{split}$$ Moreover such a decomposition is made independent of the choice of the uniformizer $\pi$ in [@ray2 Prop. 4.5.3], namely a decomposition $u_K=u^1_K+u^2_K$ satisfying the condition analogous to (\[eq4.8\]) corresponds to a trivialization of the extension $\tau:Z_K:=Y_K\otimes X_K\rightarrow {\mathcal}{E}_{{\mathrm}{rig}}$, where ${\mathcal}{E}_{{\mathrm}{rig}}$ is as defined in [@ray2 Rmk. 4.5.2 (iii)]. Our decompositions $u^{{\mathrm}{log}}=u_{1,\pi}^{{\mathrm}{log}}+u_{2,\pi}^{{\mathrm}{log}}$ and $u^{{\mathrm}{log}}=u_{1}^{{\mathrm}{log}}+u_{2}^{{\mathrm}{log}}$ are compatible with Raynaud’s decompositions $u_K=u^1_{K,\pi}+u^2_{K,\pi}$ and $u_K=u^1_K+u^2_K$. 
More precisely, we have the following. The restrictions of the decompositions $u^{{\mathrm}{log}}=u_{1,\pi}^{{\mathrm}{log}}+u_{2,\pi}^{{\mathrm}{log}}$ and $u^{{\mathrm}{log}}=u_{1}^{{\mathrm}{log}}+u_{2}^{{\mathrm}{log}}$ give rise to Raynaud’s decompositions $u_K=u^1_{K,\pi}+u^2_{K,\pi}$ and $u_K=u^1_K+u^2_K$ respectively.

Log finite group objects associated to két log 1-motives {#sec5}
========================================================

Log finite group objects
------------------------

Let $S$ be a locally noetherian fs log scheme. Kato has developed a theory of log finite group objects, which is parallel to the theory of finite flat group schemes in the non-log world. The main references are [@kat4] and [@mad2]. \[defn5.1\] The category $({\mathrm}{fin}/S)_{{\mathrm}{c}}$ is the full subcategory of the category of sheaves of finite abelian groups over $({\mathrm}{fs}/S)_{{\mathrm}{kfl}}$ consisting of objects which are representable by a classical finite flat group scheme over $S$. Here classical means that the log structure of the representing log scheme is the one induced from $S$. The category $({\mathrm}{fin}/S)_{{\mathrm}{f}}$ is the full subcategory of the category of sheaves of finite abelian groups over $({\mathrm}{fs}/S)_{{\mathrm}{kfl}}$ consisting of objects which are representable by a classical finite flat group scheme over a Kummer flat cover of $S$. For $F\in ({\mathrm}{fin}/S)_{{\mathrm}{f}}$, let $U\rightarrow S$ be a Kummer flat cover of $S$ such that $F_U:=F\times_S U\in ({\mathrm}{fin}/U)_{{\mathrm}{c}}$; then the rank of $F$ is defined to be the rank of $F_U$ over $U$. The category $({\mathrm}{fin}/S)_{{\mathrm}{\acute{e}}}$ is the full subcategory of $({\mathrm}{fin}/S)_{{\mathrm}{f}}$ consisting of objects which are representable by a classical finite flat group scheme over a Kummer étale cover of $S$.
The category $({\mathrm}{fin}/S)_{{\mathrm}{r}}$ is the full subcategory of $({\mathrm}{fin}/S)_{{\mathrm}{f}}$ consisting of objects which are representable by a log scheme over $S$. For $F\in ({\mathrm}{fin}/S)_{{\mathrm}{f}}$, the Cartier dual of $F$ is the sheaf $F^*:={\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(F,{\mathbb{G}_{{\mathrm}{m}}})$. By the definition of $({\mathrm}{fin}/S)_{{\mathrm}{f}}$, it is clear that $F^*\in ({\mathrm}{fin}/S)_{{\mathrm}{f}}$. The category $({\mathrm}{fin}/S)_{{\mathrm}{d}}$ is the full subcategory of $({\mathrm}{fin}/S)_{{\mathrm}{r}}$ consisting of objects whose Cartier dual also lies in $({\mathrm}{fin}/S)_{{\mathrm}{r}}$. \[prop5.1\] The categories $({\mathrm}{fin}/S)_{{\mathrm}{f}}$, $({\mathrm}{fin}/S)_{{\mathrm}{\acute{e}}}$, $({\mathrm}{fin}/S)_{{\mathrm}{r}}$, and $({\mathrm}{fin}/S)_{{\mathrm}{d}}$ are closed under extensions in the category of sheaves of abelian groups on $({\mathrm}{fs}/S)_{{\mathrm}{kfl}}$. See [@kat4 Prop. 2.3]. \[defn5.2\] Let $p$ be a prime number. A **log $p$-divisible group** (resp. **két log $p$-divisible group**, resp. **kfl log $p$-divisible group**) over $S$ is a sheaf of abelian groups $G$ on $({\mathrm}{fs}/S)_{{\mathrm}{kfl}}$ satisfying:

(1) $G=\bigcup_{n\geq 0}G_n$ with $G_n:={\mathrm}{ker}(p^n:G\rightarrow G)$;

(2) $p:G\rightarrow G$ is surjective;

(3) $G_n\in ({\mathrm}{fin}/S)_{{\mathrm}{r}}$ (resp. $G_n\in ({\mathrm}{fin}/S)_{{\mathrm}{\acute{e}}}$, resp. $G_n\in ({\mathrm}{fin}/S)_{{\mathrm}{f}}$) for any $n>0$.

We denote the category of log $p$-divisible groups (resp. két log $p$-divisible groups, resp. kfl log $p$-divisible groups) over $S$ by $(\text{$p$-div}/S)^{{\mathrm}{log}}$ (resp. $(\text{$p$-div}/S)^{{\mathrm}{log}}_{{\mathrm}{\acute{e}}}$, resp. $(\text{$p$-div}/S)^{{\mathrm}{log}}_{{\mathrm}{f}}$).
The full subcategory of $(\text{$p$-div}/S)^{{\mathrm}{log}}$ consisting of objects $G$ with $G_n\in ({\mathrm}{fin}/S)_{{\mathrm}{d}}$ for all $n>0$ will be denoted by $(\text{$p$-div}/S)^{{\mathrm}{log}}_{{\mathrm}{d}}$. A log $p$-divisible group $G$ with $G_n\in ({\mathrm}{fin}/S)_{{\mathrm}{c}}$ for all $n>0$ is clearly just a classical $p$-divisible group, and we denote the full subcategory of $(\text{$p$-div}/S)^{{\mathrm}{log}}_{{\mathrm}{d}}$ consisting of classical $p$-divisible groups by $(\text{$p$-div}/S)$.

Log finite group objects associated to két log 1-motives {#log-finite-group-objects-associated-to-két-log-1-motives}
--------------------------------------------------------

Let $S$ be an fs log scheme, $M^{{\mathrm}{log}}=[Y\xrightarrow{u}G_{{\mathrm}{log}}]$ a két log 1-motive over $S$, and $n$ a positive integer. By Lemma \[lem2.2\] and Corollary \[cor2.1\], we can regard $M^{{\mathrm}{log}}$ as a complex of sheaves on $({\mathrm}{fs}/S)_{{\mathrm}{kfl}}$, and define $$T_n(M^{{\mathrm}{log}}):=H^{-1}(M^{{\mathrm}{log}}\otimes_{{{\mathbb Z}}}^{{\mathrm}{L}}{{\mathbb Z}}/n{{\mathbb Z}}).$$ Let $S$ be a locally noetherian fs log scheme, $$M^{{\mathrm}{log}}=[Y\xrightarrow{u}G_{{\mathrm}{log}}]$$ a két log 1-motive over $S$, and $n$ a positive integer. Then we have the following.

(1) $T_n(M^{{\mathrm}{log}})$ fits into the following exact sequence $$0\rightarrow G_{{\mathrm}{log}}[n]\rightarrow T_n(M^{{\mathrm}{log}})\rightarrow Y/nY\rightarrow0$$ of sheaves of abelian groups on $({\mathrm}{fs}/S)_{{\mathrm}{kfl}}$.

(2) $T_n(M^{{\mathrm}{log}})\in({\mathrm}{fin}/S)_{{\mathrm}{\acute{e}}}$.

(3) Let $m$ be another positive integer; then the map $T_{mn}(M^{{\mathrm}{log}})\rightarrow T_{n}(M^{{\mathrm}{log}})$ induced by ${{\mathbb Z}}/mn{{\mathbb Z}}\xrightarrow{m}{{\mathbb Z}}/n{{\mathbb Z}}$ is surjective.

(4) If $M^{{\mathrm}{log}}$ is a log 1-motive, then $T_n(M^{{\mathrm}{log}})\in({\mathrm}{fin}/S)_{{\mathrm}{d}}$.
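As an aside (not part of the original argument), the definition of $T_n(M^{{\mathrm}{log}})$ can be unwound explicitly. Computing $H^{-1}$ of the total complex of $[Y\xrightarrow{u}G_{{\mathrm}{log}}]\otimes[{{\mathbb Z}}\xrightarrow{n}{{\mathbb Z}}]$, with both two-term complexes placed in degrees $-1$ and $0$, gives, up to the standard sign conventions, the following sheaf-theoretic description:

```latex
% Explicit description of T_n(M^log), obtained by taking H^{-1}
% of the total complex of [Y -> G_log] (x) [Z --n--> Z], both
% complexes placed in degrees -1 and 0 (signs depend on the
% convention chosen for the tensor product of complexes):
T_n(M^{\mathrm{log}})
  \;\cong\;
  \bigl\{(y,g)\in Y\oplus G_{\mathrm{log}} \;:\; u(y)=ng\bigr\}
  \,\big/\,
  \bigl\{(ny,\,u(y)) \;:\; y\in Y\bigr\}.
```

Under this identification, $g\mapsto(0,g)$ and $(y,g)\mapsto y\bmod nY$ give the two maps in the exact sequence of part (1) above; in particular the surjectivity onto $Y/nY$ reflects the surjectivity of $G_{{\mathrm}{log}}\xrightarrow{n}G_{{\mathrm}{log}}$ as a Kummer flat sheaf.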
For part (1), by [@ray2 §3.1], it suffices to show that the multiplication by $n$ is injective on $Y$ and surjective on $G_{{\mathrm}{log}}$ for the Kummer flat topology. The injectivity of the map $Y\xrightarrow{n}Y$ is trivial. We are reduced to showing the surjectivity of the map $G_{{\mathrm}{log}}\xrightarrow{n}G_{{\mathrm}{log}}$. Without loss of generality, we may assume that $M^{{\mathrm}{log}}$ is a log 1-motive. Write $G$ as an extension of an abelian scheme $B$ by a torus $T$ over $S$, and consider the following commutative diagram $$\xymatrix{ 0\ar[r] &T_{{\mathrm}{log}}\ar[r]\ar[d]^n &G_{{\mathrm}{log}}\ar[r]\ar[d]^n &B\ar[r]\ar[d]^n &0 \\ 0\ar[r] &T_{{\mathrm}{log}}\ar[r] &G_{{\mathrm}{log}}\ar[r] &B\ar[r] &0 }$$ with exact rows. The multiplication by $n$ is clearly surjective on $B$, and the surjectivity of the multiplication by $n$ on $T_{{\mathrm}{log}}$ follows from the surjectivity of ${\mathbb{G}_{{\mathrm}{m,log}}}\xrightarrow{n}{\mathbb{G}_{{\mathrm}{m,log}}}$. It follows that $G_{{\mathrm}{log}}\xrightarrow{n}G_{{\mathrm}{log}}$ is surjective. For part (2), we may still assume that $M^{{\mathrm}{log}}$ is a log 1-motive. We have a short exact sequence $0\rightarrow T_{{\mathrm}{log}}[n]\rightarrow G_{{\mathrm}{log}}[n]\rightarrow B[n]\rightarrow0$. Let $X$ be the character group of $T$; then we get an exact sequence $$0\rightarrow T\rightarrow T_{{\mathrm}{log}}\rightarrow{\mathcal}{H}om_{S_{{\mathrm}{kfl}}}(X,{\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}})\rightarrow0.$$ Since ${\mathbb{G}_{{\mathrm}{m,log}}}/{\mathbb{G}_{{\mathrm}{m}}}$ is torsion-free, we get $T[n]=T_{{\mathrm}{log}}[n]$. Then we get a short exact sequence $0\rightarrow T[n]\rightarrow G_{{\mathrm}{log}}[n]\rightarrow B[n]\rightarrow0$. Therefore $G_{{\mathrm}{log}}[n]\in({\mathrm}{fin}/S)_{{\mathrm}{r}}$ by Proposition \[prop5.1\].
Applying Proposition \[prop5.1\] again to the short exact sequence $$0\rightarrow G_{{\mathrm}{log}}[n]\rightarrow T_n(M^{{\mathrm}{log}})\rightarrow Y/nY\rightarrow0,$$ we get $T_n(M^{{\mathrm}{log}})\in({\mathrm}{fin}/S)_{{\mathrm}{r}}$. Part (3) is clearly true for the two két log 1-motives $[Y\rightarrow 0]$ and $[0\rightarrow G_{{\mathrm}{log}}]$. It follows that it also holds for $M^{{\mathrm}{log}}$. Finally, we prove part (4). By the proof of part (2), we get $T_n(M^{{\mathrm}{log}})\in({\mathrm}{fin}/S)_{{\mathrm}{r}}$. Similarly, we have $T_n(M^{{\mathrm}{log}})^{*}=T_n((M^{{\mathrm}{log}})^{\vee})\in({\mathrm}{fin}/S)_{{\mathrm}{r}}$, where $(M^{{\mathrm}{log}})^{\vee}$ denotes the dual of the log 1-motive $M^{{\mathrm}{log}}$. It follows that $T_n(M^{{\mathrm}{log}})\in({\mathrm}{fin}/S)_{{\mathrm}{d}}$. Let $S$ be a locally noetherian fs log scheme, $$M^{{\mathrm}{log}}=[Y\xrightarrow{u}G_{{\mathrm}{log}}]$$ a két log 1-motive over $S$, and $p$ a prime number. The **két log $p$-divisible group of $M^{{\mathrm}{log}}$** is defined to be $M^{{\mathrm}{log}}[p^{\infty}]:=\bigcup_n T_{p^n}(M^{{\mathrm}{log}})$.

Extending finite group schemes associated to tamely ramified strict 1-motives
-----------------------------------------------------------------------------

\[thm5.1\] Let the notation and the assumptions be as in Theorem \[thm3.1\], and let $n$ be a positive integer. Then $T_n(M^{{\mathrm}{log}})$ lies in $({\mathrm}{fin}/S)_{{\mathrm}{\acute{e}}}$, and it extends the finite group scheme $T_n(M_K)$ over $K$ to $S$. Since $T_n(M^{{\mathrm}{log}})\times_SS'=T_n(M^{{\mathrm}{log}}\times_SS')\in({\mathrm}{fin}/S')_{{\mathrm}{r}}$ and $S'$ is a Kummer étale cover of $S$, we get $T_n(M^{{\mathrm}{log}})\in({\mathrm}{fin}/S)_{{\mathrm}{\acute{e}}}$. Since $M^{{\mathrm}{log}}\times_S{\mathop{{\mathrm}{Spec}}}K=M_K$, we get $T_n(M^{{\mathrm}{log}})\times_S{\mathop{{\mathrm}{Spec}}}K=T_n(M_K)$. The following theorem is stated in [@kat4 §4.3] without proof.
Here we present a proof. \[thm5.2\] Let $K$ be a complete discrete valuation field with ring of integers $R$, $p$ a prime number, and $A_K$ a tamely ramified abelian variety over $K$. We endow $S:={\mathop{{\mathrm}{Spec}}}R$ with the canonical log structure. Then the $p$-divisible group $A_K[p^{\infty}]$ of $A_K$ extends to an object of $(\text{$p$-div}/S)^{{\mathrm}{log}}_{{\mathrm}{\acute{e}}}$. It extends to an object of $(\text{$p$-div}/S)^{{\mathrm}{log}}_{{\mathrm}{d}}$ if either of the following two conditions is satisfied.

(1) $A_K$ has semi-stable reduction.

(2) $p$ is invertible in $R$.

By [@ray2 §4.2], there exists a tamely ramified strict 1-motive $M_K=[Y_K\xrightarrow{u_K}G_K]$ such that $M_K[p^{\infty}]=A_K[p^{\infty}]$, and $M_K$ has good reduction if $A_K$ has semi-stable reduction. By Theorem \[thm3.1\], $M_K$ extends to a két log 1-motive $M^{{\mathrm}{log}}=[Y\xrightarrow{u^{{\mathrm}{log}}}G_{{\mathrm}{log}}]$. Then $M_K[p^{\infty}]$ extends to $M^{{\mathrm}{log}}[p^{\infty}]\in (\text{$p$-div}/S)^{{\mathrm}{log}}_{{\mathrm}{\acute{e}}}$ by Theorem \[thm5.1\]. If $A_K$ has semi-stable reduction, then $M_K$ has good reduction; therefore the két log 1-motive $M^{{\mathrm}{log}}$ is actually a log 1-motive over $S$, and it follows that $M^{{\mathrm}{log}}[p^{\infty}]\in (\text{$p$-div}/S)^{{\mathrm}{log}}_{{\mathrm}{d}}$. If $p$ is invertible in $R$, then the object $T_{p^n}(M^{{\mathrm}{log}})\in ({\mathrm}{fin}/S)_{{\mathrm}{\acute{e}}}$ actually lies in $({\mathrm}{fin}/S)_{{\mathrm}{d}}$ by [@kat4 Prop. 2.1]. It follows that $M^{{\mathrm}{log}}[p^{\infty}]\in (\text{$p$-div}/S)^{{\mathrm}{log}}_{{\mathrm}{d}}$.
Acknowledgement {#acknowledgement .unnumbered}
===============

In an email, Professor Chikara Nakayama informed the author that Kazuya Kato thought it plausible that every abelian variety (not necessarily with semi-stable reduction) over a complete discrete valuation field extends uniquely to a Kummer log flat log abelian variety over the corresponding discrete valuation ring. This work is partly motivated by that piece of information. It is also motivated by Theorem \[thm5.2\], which is taken from [@kat4 §4.3]. The author thanks Professor Chikara Nakayama for his generosity. The author would also like to thank Professor Ulrich Görtz for very helpful discussions concerning taking quotients by equivalence relations, as well as for his support during the past few years. This work has been partially supported by SFB/TR 45 “Periods, moduli spaces and arithmetic of algebraic varieties”.